`R/3_1_textSimilarity.R`

`textSimilarity.Rd`

Compute the semantic similarity between two text variables.

`textSimilarity(x, y, method = "cosine", center = TRUE, scale = FALSE)`

- x
Word embeddings from textEmbed.

- y
Word embeddings from textEmbed.

- method
Character string describing the type of measure to be computed. Default is "cosine" (see also "spearman" and "pearson", as well as the measures from textDistance() — which here are computed as 1 - textDistance — including "euclidean", "maximum", "manhattan", "canberra", "binary" and "minkowski").

- center
(boolean; from base::scale) If center is TRUE, centering is done by subtracting the column means (omitting NAs) of x from their corresponding columns; if center is FALSE, no centering is done.

- scale
(boolean; from base::scale) If scale is TRUE, scaling is done by dividing the (centered) columns of x by their standard deviations if center is TRUE, and by the root mean square otherwise; if scale is FALSE, no scaling is done.

A vector of semantic similarity scores, one per pair of texts.
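The default "cosine" method scores each pair of row vectors by the cosine of the angle between them, optionally after column-mean centering (as with center = TRUE). A minimal sketch of that arithmetic in Python — the function names here are hypothetical illustrations, not the package's implementation:

```python
import math

def center_columns(rows):
    """Subtract each column's mean from its values (analogous to center = TRUE)."""
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    return [[v - m for v, m in zip(row, means)] for row in rows]

def cosine(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Two toy "embedding" variables, one row (vector) per text
x = [[1.0, 2.0], [3.0, 4.0]]
y = [[1.0, 2.0], [4.0, 3.0]]

# Row-wise similarity scores, analogous to textSimilarity's output vector
scores = [cosine(a, b) for a, b in zip(x, y)]
```

Identical vectors score 1; the second pair, [3, 4] vs [4, 3], scores 24/25 = 0.96.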

```
library(dplyr)
similarity_scores <- textSimilarity(
  x = word_embeddings_4$texts$harmonytext,
  y = word_embeddings_4$texts$satisfactiontext
)
comment(similarity_scores)
#> [1] "x embedding = .Information about the embeddings. textEmbedRawLayers: model: bert-base-uncased ; layers: 11 ; word_type_embeddings: TRUE ; max_token_to_sentence: 4 ; text_version: 0.9.99. textEmbedLayerAggregation: layers = 11 aggregation_from_layers_to_tokens = concatenate aggregation_from_tokens_to_texts = mean tokens_select = tokens_deselect = .y embedding = .Information about the embeddings. textEmbedRawLayers: model: bert-base-uncased ; layers: 11 ; word_type_embeddings: TRUE ; max_token_to_sentence: 4 ; text_version: 0.9.99. textEmbedLayerAggregation: layers = 11 aggregation_from_layers_to_tokens = concatenate aggregation_from_tokens_to_texts = mean tokens_select = tokens_deselect = .method = .cosine.center = .TRUE.scale = .FALSE"
```
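For the distance-based methods, the description above notes that similarity is computed as 1 - textDistance. A sketch of that conversion for the "euclidean" case, assuming plain Euclidean distance between row vectors (illustrative only):

```python
import math

def euclidean_similarity(a, b):
    """Similarity as 1 minus the Euclidean distance between two vectors."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 - dist

# Identical vectors give distance 0, hence similarity 1
print(euclidean_similarity([0.3, 0.4], [0.3, 0.4]))  # 1.0
```

Note that, unlike cosine similarity, this value is unbounded below: distant vectors can produce negative scores.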