Oscar Kjell

The text-package uses Hugging Face Transformers language models together with natural language processing and machine learning methods to examine relationships between text and numerical variables.

To learn more about the textEmbed() function, see the tutorial called: HuggingFace Transformers in R: Word Embeddings Defaults and Specifications.

This Getting Started tutorial goes through some central functions of the text-package. The data come from Kjell et al. (2019, pre-print), which shows how individuals’ open-ended text answers can be used to measure, describe and differentiate psychological constructs.

In short, the workflow first transforms text variables into text-level and word type-level word embeddings. These word embeddings are then used to predict numerical variables, compute semantic similarity scores, and plot words in the word embedding space.
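The semantic similarity step can be sketched with the textSimilarity() function, which by default computes the cosine similarity between two sets of text-level word embeddings. This is a minimal sketch, assuming embeddings have been computed for the harmonywords and satisfactionwords columns with the same settings; it is illustrative rather than part of the tutorial's main workflow.

```r
library(text)

# Embed two text variables with the same settings (illustrative sketch)
embeddings <- textEmbed(
  texts = Language_based_assessment_data_8[c("harmonywords", "satisfactionwords")],
  model = "bert-base-uncased",
  aggregation_from_tokens_to_texts = "mean")

# One similarity score per participant: how close the two responses are
# in the word embedding space
similarity_scores <- textSimilarity(
  x = embeddings$texts$harmonywords,
  y = embeddings$texts$satisfactionwords)

summary(similarity_scores)
```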

### textEmbed(): mapping text to numbers using HuggingFace language models

The textEmbed() function automatically transforms character variables in a given tibble to word embeddings. The example data used in this tutorial come from participants who described their harmony in life and satisfaction with life with a text response, 10 descriptive words, or rating scales. For a more detailed description, please see the word embedding tutorial.


```r
library(text)

# View example data including both text and numerical variables
Language_based_assessment_data_8

# Transform the text data to BERT word embeddings
word_embeddings <- textEmbed(
  texts = Language_based_assessment_data_8[3],
  model = "bert-base-uncased",
  layers = -2,
  aggregation_from_tokens_to_texts = "mean",
  aggregation_from_tokens_to_word_types = "mean",
  keep_token_embeddings = FALSE)

# See how the word embeddings are structured
word_embeddings

# Save the word embeddings to avoid having to embed the texts every time (i.e., remove the ##)
## saveRDS(word_embeddings, "word_embeddings.rds")

# Get the word embeddings again (i.e., remove the ##)
## word_embeddings <- readRDS("_YOURPATH_/word_embeddings.rds")
```

### textTrain(): Examine the relationship between text and numeric variables

The textTrain() function is used to examine how well word embeddings from a text variable can predict a numeric variable. This is done by training a ridge regression model with 10-fold cross-validation. In the example below we examine how well the harmony word responses can predict rating scale scores on the Harmony in Life Scale.

```r
library(text)

# Examine the relationship between the harmony-word embeddings
# and the Harmony in Life Scale
model_htext_hils <- textTrain(
  x = word_embeddings$texts$harmonywords,
  y = Language_based_assessment_data_8$hilstotal)

# Examine the correlation between predicted and observed
# Harmony in Life Scale scores
model_htext_hils$results
```
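A trained model can also be applied to word embeddings from new responses. The sketch below uses textPredict(); the argument names shown here are assumptions that may differ between package versions, and for simplicity the model is applied back to the training embeddings (in practice you would embed new texts with the same textEmbed() settings).

```r
library(text)

# Apply the trained ridge regression model to word embeddings
# (argument names are illustrative and may vary by package version)
predicted_hils <- textPredict(
  model_info = model_htext_hils,
  word_embeddings = word_embeddings$texts$harmonywords)

# Inspect the predicted Harmony in Life Scale scores
predicted_hils
```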

## Plot statistically significant words

The text-package has several ways to plot words; here we use the Supervised Dimension Projection plot. The plotting is done in two steps: First, the textProjection() function pre-processes the data, including computing statistics for each word type to be plotted. Second, textProjectionPlot() visualizes the words, with many options for setting colors, fonts, etc. Dividing the procedure into two steps makes the process more transparent (the user gets to see the output that the words are plotted according to) and quicker: the heavier computations are done in the first step, so the second step runs fast and different design settings can be tried out easily.

### textProjection(): Pre-process data for plotting

```r
library(text)

# Pre-process data
projection_results <- textProjection(
  words = Language_based_assessment_data_8$harmonywords,
  word_embeddings = word_embeddings$texts,
  word_types_embeddings = word_embeddings$word_types,
  x = Language_based_assessment_data_8$hilstotal,
  y = Language_based_assessment_data_8$age)

# Inspect the word-level statistics used for plotting
projection_results$word_data
```

### textProjectionPlot(): A two-dimensional word plot

```r
library(text)

# To avoid warnings -- and words not getting plotted -- first increase
# max.overlaps for the entire session:
options(ggrepel.max.overlaps = 1000)

# Supervised Dimension Projection Plot
plot_projection_2D <- textProjectionPlot(
  word_data = projection_results,
  min_freq_words_plot = 1,
  plot_n_word_extreme = 10,
  plot_n_word_frequency = 5,
  plot_n_words_middle = 5,
  y_axes = TRUE,
  p_alpha = 0.05,
  title_top = "Harmony Words Responses (Supervised Dimension Projection)",
  x_axes_label = "Low vs. High Harmony in Life Scale Score",
  y_axes_label = "Low vs. High Age",
  bivariate_color_codes = c("#E07f6a", "#60A1F7", "#85DB8E",
                            "#FF0000", "#EAEAEA", "#5dc688",
                            "#E07f6a", "#60A1F7", "#85DB8E"))

# View plot
plot_projection_2D$final_plot
```

### Other relevant references

The list below consists of papers analyzing human language in a fashion similar to what is possible with the text-package.

Methods Articles
Gaining insights from social media language: Methodologies and challenges.
Kern et al., (2016). Psychological Methods.

Semantic measures: Using natural language processing to measure, differentiate, and describe psychological constructs. Pre-print
Kjell et al., (2019). Psychological Methods.

Clinical Psychology
Facebook language predicts depression in medical records
Eichstaedt, J. C., … & Schwartz, H. A. (2018). PNAS.

Social and Personality Psychology
Personality, gender, and age in the language of social media: The open-vocabulary approach
Schwartz, H. A., … & Seligman, M. E. (2013). PLoS ONE.

Automatic Personality Assessment Through Social Media Language
Park, G., Schwartz, H. A., … & Seligman, M. E. P. (2014). Journal of Personality and Social Psychology.

Health Psychology
Psychological language on Twitter predicts county-level heart disease mortality
Eichstaedt, J. C., Schwartz, et al. (2015). Psychological Science.

Positive Psychology
The Harmony in Life Scale Complements the Satisfaction with Life Scale: Expanding the Conceptualization of the Cognitive Component of Subjective Well-Being
Kjell, et al., (2016). Social Indicators Research

Computer Science: Python Software
DLATK: Differential language analysis toolkit
Schwartz, H. A., Giorgi, et al. (2017). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations.
