Question Answering (experimental)

textQA(
  question,
  context,
  model = "",
  device = "cpu",
  tokenizer_parallelism = FALSE,
  logging_level = "warning",
  return_incorrect_results = FALSE,
  top_k = 1L,
  doc_stride = 128L,
  max_answer_len = 15L,
  max_seq_len = 384L,
  max_question_len = 64L,
  handle_impossible_answer = FALSE,
  set_seed = 202208L
)

Arguments

question

(string) A question.

context

(string) The context(s) where the model will look for the answer.

model

(string) HuggingFace name of a pre-trained language model that has been fine-tuned on a question-answering task.

device

(string) Device to use: 'cpu', 'gpu', or 'gpu:k', where k is a specific device number.

tokenizer_parallelism

(boolean) If TRUE this will turn on tokenizer parallelism.

logging_level

(string) Set the logging level. Options (ordered from least to most logging): critical, error, warning, info, debug.

return_incorrect_results

(boolean) Whether to return results even when they are incorrectly formatted/structured. This setting CANNOT evaluate the actual results (whether or not they make sense, exist, etc.). All it does is ensure the returned results are formatted correctly (e.g., that the question-answering dictionary contains the key "answer", or that sentiment output from textClassify contains the labels "positive" and "negative").

top_k

(integer) Number of possible answer span(s) to retrieve from the model output.

doc_stride

(integer) If the context is too long to fit, together with the question, within the model's maximum sequence length, it is split into overlapping chunks. This setting controls the size of the overlap.

max_answer_len

(integer) Maximum length of the answer(s) extracted from the model's output.

max_seq_len

(integer) The maximum total length (context + question) in tokens of each chunk passed to the model. If needed, the context is split into chunks (using doc_stride as the overlap).

max_question_len

(integer) The maximum question length after tokenization; longer questions will be truncated.

handle_impossible_answer

(boolean) Whether or not an impossible (empty) answer is accepted as an answer.

set_seed

(integer) Seed for reproducibility.

Value

Answers.

Examples

# \donttest{
#   qa_examples <- textQA(question = "Which colour have trees?",
#     context = "Trees typically have leaves, are mostly green and like water.")
# }
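
A further sketch, following the same # \donttest{} convention, illustrating top_k and the chunking parameters. It assumes the SQuAD-fine-tuned model distilbert-base-cased-distilled-squad is available on HuggingFace, and long_text is a hypothetical long string you would supply yourself:

# \donttest{
#   # Retrieve the three most likely answer spans instead of one.
#   qa_top3 <- textQA(
#     question = "Which colour have trees?",
#     context = "Trees typically have leaves, are mostly green and like water.",
#     model = "distilbert-base-cased-distilled-squad",
#     top_k = 3L)
#
#   # For long contexts, a lower max_seq_len forces splitting into chunks;
#   # doc_stride sets how much neighbouring chunks overlap so that answers
#   # near a chunk boundary are not missed.
#   qa_long <- textQA(
#     question = "What do trees like?",
#     context = long_text,   # a long (multi-paragraph) string
#     max_seq_len = 128L,
#     doc_stride = 64L)
# }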