The text-package allows you to create pre-trained models using the text-train functions (e.g., textTrain()) or the fine-tuning functions (e.g., textFineTuneDomain()). These models can be saved and applied to new data with the text prediction functions (e.g., textPredict()). The L-BAM library below lists the pre-trained models that are available for download. The models can be called with textPredict(), textAssess(), or textClassify() like this:

library(text)

# Example: calling a model using its URL
textPredict(
  model_info = "https://github.com/OscarKjell/text_models/raw/main/valence_models/facebook_model.rds",
  texts = "what is the valence of this text?"
)


# Example: calling a model using its L-BAM abbreviation
textClassify(
  model_info = "implicit_power_roberta_large_L23_v1",
  texts = "It looks like they have problems collaborating."
)

The text prediction functions take a model and one or more texts, automatically transform the texts into word embeddings, and return estimated scores or probabilities.
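For example, several texts can be scored in a single call. A minimal sketch, reusing the valence model URL from the example above (the input texts are illustrative):

library(text)

# Score several texts at once with a pre-trained valence model
valence_scores <- textPredict(
  model_info = "https://github.com/OscarKjell/text_models/raw/main/valence_models/facebook_model.rds",
  texts = c(
    "I feel calm and content today.",
    "Everything keeps going wrong."
  )
)

# The prediction functions return a tibble with one row of estimates per input text
valence_scores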

If you want to add a pre-trained model to the L-BAM library, please fill out the details in this Google sheet and email us (oscar [dot] kjell [at] psy [dot] lu [dot] se) so that we can update the table online.
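A contributed model is typically created with the training functions mentioned above and saved to an .rds file. A minimal sketch, assuming the Language_based_assessment_data_8 example data that ships with the package (the exact way embeddings are accessed may differ across package versions):

library(text)

# Transform the example texts into word embeddings
embeddings <- textEmbed(Language_based_assessment_data_8["harmonytexts"])

# Train a regression model predicting harmony-in-life scores
hil_model <- textTrainRegression(
  x = embeddings$texts$harmonytexts,
  y = Language_based_assessment_data_8$hilstotal
)

# Save the trained model so it can later be used with textPredict()
saveRDS(hil_model, "harmony_model.rds")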


The L-BAM table describes each model with the following columns: Construct, Outcome, Language, Language.type, Level, Name, Path, Model.type, Feature, Validation.metric, CV.accuracy, Held.out.accuracy, SEMP.accuracy, Reference, Original, License, Description, N.training, Label.types, Other, and Command.info.

Constructs currently covered (number of models in parentheses): Depression (8), Anxiety (8), Valence (3), Implicit need for power (1), Implicit need for achievement (1), Implicit need for affiliation (1), Harmony in life (4), and Satisfaction with life (4).
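The library can also be inspected programmatically. A minimal sketch, assuming the textLBAM() helper available in recent versions of the package, which returns the L-BAM table as a data frame:

library(text)

# Retrieve the L-BAM library as a data frame
lbam <- textLBAM()

# List, for instance, the available depression models
# (column names here follow the table above; adjust them if the
# returned data frame uses different names)
subset(lbam, Construct == "Depression", select = c(Name, Path))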

References

Gu, Kjell, Schwartz & Kjell. (2024). Natural Language Response Formats for Assessing Depression and Worry with Large Language Models: A Sequential Evaluation with Model Pre-registration.

Kjell, O. N., Sikström, S., Kjell, K., & Schwartz, H. A. (2022). Natural language analyzed with AI-based transformers predict traditional subjective well-being measures approaching the theoretical upper limits in accuracy. Scientific Reports, 12(1), 3918.

Nilsson, Runge, Ganesan, Lövenstierne, Soni & Kjell. (2024). Automatic Implicit Motives Codings are at Least as Accurate as Humans’ and 99% Faster.