R/6_2_textFineTuneDomain.R
textFineTuneDomain.Rd
Domain Adapted Pre-Training (EXPERIMENTAL - under development)
textFineTuneDomain(
text_data,
model_name_or_path = "bert-base-uncased",
output_dir = "./runs",
validation_proportion = 0.1,
evaluation_proportion = 0.1,
config_name = NULL,
tokenizer_name = NULL,
max_seq_length = 128L,
evaluation_strategy = "epoch",
eval_accumulation_steps = NULL,
num_train_epochs = 3,
past_index = -1,
set_seed = 2022,
...
)
A dataframe, where the first column contains the text data, and the second column the variable to be predicted (numeric or categorical).
(String) Path to a foundation/pretrained model or a model identifier from huggingface.co/models.
(String) Path to the output directory.
(Numeric) Proportion of the text_data to be used for validation.
(Numeric) Proportion of the text_data to be used for evaluation.
(String) Pretrained config name or path if not the same as model_name.
(String) Pretrained tokenizer name or path if not the same as model_name.
(Numeric) The maximum total input sequence length after tokenization. Sequences longer than this will be truncated, and sequences shorter will be padded.
(String or IntervalStrategy) The evaluation strategy to adopt during training. Possible values are: "no" (no evaluation is done during training), "steps" (evaluation is done and logged every eval_steps), and "epoch" (evaluation is done at the end of each epoch).
(Integer) Number of prediction steps for which to accumulate the output tensors before moving the results to the CPU. If left unset, all predictions are accumulated on the GPU/TPU before being moved to the CPU (faster, but requires more memory).
(Numeric) Total number of training epochs to perform (if not an integer, the decimal part determines the fraction of the final epoch to perform before stopping training).
(Numeric, defaults to -1) Some models, such as TransformerXL or XLNet, can make use of past hidden states for their predictions. If this argument is set to a positive integer, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
(Numeric) Set the seed (for reproducibility).
Parameters related to the fine-tuning, which can be seen in the text-package file inst/python/args2.json.
A folder containing the pretrained model and output data. The model can then be used, for example, by textEmbed(), by setting the model parameter to the path of the output folder.
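As an illustration, here is a minimal sketch (not from the package documentation) of passing the output folder to textEmbed(); it assumes the default output_dir "./runs" and a hypothetical object my_texts holding the texts to embed.

# Embed new texts with the domain-adapted model by pointing
# the model argument to the folder produced by textFineTuneDomain().
embeddings <- textEmbed(
  texts = my_texts,   # hypothetical character vector or data frame of texts
  model = "./runs"    # default output_dir; adjust if you changed it
)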
For information about more parameters, see inst/python/args2.json (https://github.com/OscarKjell/text/tree/master/inst/python/args2.json). Descriptions of the settings can be found in inst/python/task_finetune.py under "class ModelArguments" and "class DataTrainingArguments", as well as online at https://huggingface.co/docs/transformers/main_classes/trainer.
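As a hedged sketch, these additional parameters can also be inspected from R; this assumes args2.json is installed under the package's python/ directory and that the jsonlite package is available.

# Locate and inspect args2.json in the installed text package (sketch, not run).
args_path <- system.file("python", "args2.json", package = "text")
if (nzchar(args_path)) {
  extra_args <- jsonlite::fromJSON(args_path)
  str(extra_args)  # shows the additional fine-tuning settings and their defaults
}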
if (FALSE) { # \dontrun{
textFineTuneDomain(text_data)
} # }
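A fuller, hedged sketch of a call that sets several of the arguments documented above; the data frame and all values shown are purely illustrative.

if (FALSE) { # \dontrun{
# Hypothetical data: first column text, second column a numeric variable.
text_data <- data.frame(
  text   = c("I am feeling relatedness with others",
             "I am unhappy with how things are going"),
  rating = c(7, 2)
)

textFineTuneDomain(
  text_data,
  model_name_or_path = "bert-base-uncased",
  output_dir = "./runs",
  validation_proportion = 0.1,
  evaluation_proportion = 0.1,
  max_seq_length = 128L,
  evaluation_strategy = "epoch",
  num_train_epochs = 3,
  set_seed = 2022
)
} # }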