Translation (STILL UNDER DEVELOPMENT)
textTranslate(
x,
source_lang = "",
target_lang = "",
model = "xlm-roberta-base",
device = "cpu",
tokenizer_parallelism = FALSE,
logging_level = "warning",
return_incorrect_results = FALSE,
return_tensors = FALSE,
return_text = TRUE,
clean_up_tokenization_spaces = FALSE
)
x: (string) The text to be translated.
source_lang: (string) The input language. Might be needed for multilingual models (it has no effect on single-pair translation models). Use an ISO 639-1 code, such as "en", "zh", "es", "fr", "de", "it", "sv", "da", "nn".
target_lang: (string) The desired output language. Might be required for multilingual models (it has no effect on single-pair translation models).
model: (string) Specify a pre-trained language model that has been fine-tuned on a translation task (see the sketch after this argument list).
device: (string) Name of the device to use: "cpu", "gpu", or "gpu:k", where k is a specific device number.
tokenizer_parallelism: (boolean) If TRUE, turn on tokenizer parallelism.
logging_level: (string) Set the logging level. Options (ordered from least to most logging): critical, error, warning, info, debug.
return_incorrect_results: (boolean) Many models are not built to provide translation; this setting (FALSE by default) stops them from returning incorrect results.
return_tensors: (boolean) Whether or not to include the predictions' tensors as token indices in the output.
return_text: (boolean) Whether or not to also output the decoded texts.
clean_up_tokenization_spaces: (boolean) Whether or not to clean up potential extra spaces in the output.
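As a minimal sketch of a call (assuming the text package and its Python backend are installed, e.g. via textrpp_install(); the model name "Helsinki-NLP/opus-mt-en-fr" is an assumption here, a commonly used single-pair English-to-French model from Hugging Face, not part of this documentation), a single-pair model can be called without source_lang or target_lang:

# Minimal sketch: a single-pair model, so source_lang/target_lang are not needed.
# "Helsinki-NLP/opus-mt-en-fr" is an assumed example model name; any model
# fine-tuned on a translation task should work here.
library(text)
translated <- textTranslate(
  x = "I am happy to see you.",
  model = "Helsinki-NLP/opus-mt-en-fr",
  device = "cpu",              # "gpu" or "gpu:0" selects a specific GPU
  return_text = TRUE
)
translated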
Returns a tibble with the translated text.
See also: textClassify, textGeneration, textNER, textSum, and textQA.
# \donttest{
translation_example <- text::textTranslate(
  x = text::Language_based_assessment_data_8[1, 1:2],
  source_lang = "en",
  target_lang = "fr",
  model = "t5-base"
)
# }
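As a further hedged sketch under the same assumptions (t5-base is a multilingual model, so source_lang and target_lang are needed; "de" is the ISO 639-1 code for German), the same call can target another language pair with more verbose logging:

# \donttest{
# Sketch: the same multilingual model, now English to German, with
# more verbose logging to follow what the backend is doing.
translation_de <- text::textTranslate(
  x = "I am happy to see you.",
  source_lang = "en",
  target_lang = "de",
  model = "t5-base",
  logging_level = "info"
)
translation_de
# }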