---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```
# sunflower: A Package to Assess and Categorize Language Production Errors
<!-- badges start -->
![](https://img.shields.io/badge/sunflower-v._1.01-orange?style=flat&link=https%3A%2F%2Fgithub.com%2Fismaelgutier%2Fsunflower) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) ![](https://img.shields.io/badge/Language-grey?style=flat&logo=R&color=grey&link=https%3A%2F%2Fwww.r-project.org%2F)
<!-- badges end -->
<div align="justify">
*sunflower* is a package designed to assist clinicians and researchers in Speech Therapy and the Neuropsychology of Language. Its primary goal is to facilitate the management of multiple-response data and to compute formal similarity indices that assess the quality of oral and written productions in Spanish by patients with aphasia and related disorders, such as apraxia of speech. The package can also classify these productions according to the field's classical typologies before computing formal and semantic similarity measures. For the latter, *sunflower* partially relies on natural language processing models such as word2vec. The outputs provided by this package are designed for statistical analysis in R, a widely used tool in the field for data wrangling, visualization, and analysis.
## Installation
*sunflower* can be installed from GitHub with:
```r
install.packages("devtools")
devtools::install_github("ismaelgutier/sunflower")
```
```{r include=FALSE}
require("sunflower")
require("tidyverse")
require("htmltools")
require("readxl")
```
The *sunflower* package is designed around the pipe operator (`%>%`, re-exported by the [*tidyverse*](https://www.tidyverse.org/) packages), so it works seamlessly with functions from other *tidyverse* packages, such as *dplyr* for data wrangling, *readr* for data reading, and *ggplot2* for data visualization. This can significantly streamline our workflow.
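For instance, a minimal sketch of chaining a *dplyr* verb with a *sunflower* function (assuming the bundled `IGC_long_phon_sample` data carries the `attempt` column used later in this README):

```r
require("sunflower")
require("tidyverse")

# Hypothetical sketch: keep first attempts with dplyr, then score with sunflower
sunflower::IGC_long_phon_sample %>%
  dplyr::filter(attempt == 1) %>%
  get_formal_similarity(item_col = "item",
                        response_col = "response",
                        attempt_col = "attempt",
                        group_cols = c("ID", "item_ID"))
```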
## How to use
### Loading the packages
Once installed, we only need to load the *sunflower* package. However, as previously mentioned, the *tidyverse* package can also be valuable for other complementary tasks.
```r
require("sunflower")
require("tidyverse")
```
### Compute Formal Quality Indices
The package ships with several sample dataframes that anyone interested in testing the functions can use: `IGC_sample`, `IGC_long_sample`, `IGC_long_phon_sample`, and `simulated_sample`. We can load one of them:
```{r}
df_to_formal_metrics <- sunflower::IGC_long_phon_sample
```
However, in this example we will conduct the formal quality analysis using broad phonological transcriptions from a larger dataset.
```{r echo=FALSE}
df_to_formal_metrics = readxl::read_xlsx("manuscript/data/long_with_phon.xlsx") %>%
dplyr::select(ID_general, test, task_type, task_modality, ID,
item_ID = task_item_ID, item, response,
RA = cda_behavior, attempt = Attempt,
item_phon = target_word_transcrito_clean, response_phon = response_word_transcrito_clean) %>%
dplyr::filter(test %in% c("SnodgrassVanderwart", "BETA", "EPLA", "Gutiérrez-Cordero")) %>%
dplyr::filter(!stringr::str_detect(task_type, "nonword")) %>%
dplyr::arrange(ID)
```
```{r}
formal_metrics_computed <- df_to_formal_metrics %>%
  get_formal_similarity(item_col = "item",
                        response_col = "response",
                        attempt_col = "attempt",
                        group_cols = c("ID", "item_ID"))
```
Display some of the results from the formal quality analysis.
```{r echo=FALSE}
formal_metrics_computed %>% head(8) %>% knitr::kable()
```
***Note.*** Scroll the table to the right to see all the columns and metrics.
### Obtain Positional Accuracy Data
Apply `positional_accuracy()` to obtain positional accuracy data:
```{r message=FALSE}
positions_accuracy <- formal_metrics_computed %>%
  positional_accuracy(item_col = "item_phon",
                      response_col = "response_phon",
                      match_col = "adj_strict_match_pos")
```
Display the results of the positional accuracy analysis.
```{r echo=FALSE}
positions_accuracy %>% select(ID:response_phon, RA, attempt, position:element_in_response) %>% head(8) %>% knitr::kable()
```
Plotting this dataframe yields the figure below.
```{r include=FALSE}
# Convert targetL to character to avoid issues when combining dataframes
positions_accuracy <- positions_accuracy %>%
dplyr::mutate(targetL = as.character(targetL))
# Duplicate and relabel the dataframe to create 'positions_general'
positions_general <- positions_accuracy %>%
dplyr::mutate(targetL = "General")
# Combine both dataframes
positions <- dplyr::bind_rows(positions_accuracy, positions_general)
# Manually specify the levels in the desired order
desired_levels <- c("3", "4", "5", "6", "7", "8", "9", "10", "11", "12",
"13", "14", "15", "17", "21", "22", "24", "48", "General")
# Convert correct_pos to numeric and order targetL as a factor following desired_levels
positions <- positions %>%
dplyr::mutate(correct_pos = as.numeric(correct_pos),
targetL = factor(targetL, levels = desired_levels)) %>%
dplyr::arrange(correct_pos, targetL)
# Define a set of linetypes that can be recycled as needed
custom_linetypes <- rep(c("solid", "dashed", "dotted", "longdash", "dotdash"),
length.out = nlevels(positions$targetL))
# Compute accuracy and count the number of observations per group
plot_positions <- positions %>%
group_by(position, targetL) %>%
summarize(acc = mean(correct_pos, na.rm = TRUE),
n = n()) %>%
ggplot(aes(x = as.numeric(position), y = acc, group = targetL,
fill = targetL, color = targetL, lty = targetL)) +
geom_line(size = 0.70, alpha = 0.6) +
geom_point(aes(size = n), shape = 21, color = "black", alpha = 0.6) +
scale_linetype_manual(values = custom_linetypes) +
theme(panel.border = element_rect(colour = "black", fill = NA),
panel.background = element_blank(),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
axis.line = element_blank()) +
scale_x_continuous(breaks = seq(min(positions$position, na.rm = TRUE),
max(positions$position, na.rm = TRUE),
by = 1)) +
ylab("Proportion (%) of correct phonemes") +
xlab("Phoneme position") +
guides(fill = guide_legend(title = "Word Length", ncol = 2), # Set the legend to 2 columns
lty = guide_legend(title = "Word Length", ncol = 2),
color = guide_legend(title = "Word Length", ncol = 2),
size = guide_legend(title = "Datapoints", ncol = 2)) +
papaja::theme_apa() +
theme(legend.position = "right")
```
```{r plot_positions, out.width = "75%", fig.align="center", echo=FALSE}
plot_positions
```
***Note.*** This plot depicts the positional accuracy of `r nrow(positions)` datapoints.
### Classify Errors
```{r include=FALSE}
df_to_classify = sunflower::IGC_long_phon_sample
m_w2v = word2vec::read.word2vec(file = "dependency-bundle/sbw_vectors.bin", normalize = TRUE)
```
The pipeline below chains the steps needed to classify the errors correctly.
```{r}
errors_classified <- df_to_classify %>%
  check_lexicality(item_col = "item", response_col = "response", criterion = "database") %>%
  get_formal_similarity(item_col = "item", response_col = "response",
                        attempt_col = "attempt", group_cols = c("ID", "item_ID")) %>%
  get_semantic_similarity(item_col = "item", response_col = "response", model = m_w2v) %>%
  classify_errors(response_col = "response", item_col = "item",
                  access_col = "accessed", RA_col = "RA", also_classify_RAs = TRUE)
```
Display the classification that was conducted.
```{r echo=FALSE}
errors_classified %>%
dplyr::select(ID, item_ID, item, response, RA, attempt, correct, nonword:check_comment) %>%
  filter(nchar(response) > 3) %>% # keep responses longer than 3 characters
  sample_n(9) %>% # draw 9 random rows
  knitr::kable() # display as a table
```
***Note.*** Scroll the table to the right to see all the columns and errors.
*sunflower* classifies production errors once some response-level indices have been obtained and contextualized according to whether they come from repeated attempts or single productions. This process involves three steps.
First, `check_lexicality()` performs a lexicality check, i.e., it determines whether the response is a real word. With `criterion = "database"`, the package looks the response up in a database such as *BuscaPalabras* ([BPal](https://www.uv.es/~mperea/Davis_Perea_in_press.pdf)) and compares its frequency with that of the target word to decide whether it counts as a real word. Alternatively, with `criterion = "dictionary"`, the response is checked against a dictionary (*sunflower* searches for responses among the entries of the *Real Academia Española*, [RAE](https://www.rae.es/)).
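As a minimal sketch of both criteria (parameter names are those used in the classification pipeline above; the toy dataframe is hypothetical):

```r
require("sunflower")
require("tidyverse")

# Toy data (hypothetical): one real-word response and one neologism
toy <- tibble::tibble(item = c("mesa", "perro"),
                      response = c("pesa", "perrofo"))

# Frequency-based check against the BPal database
toy %>% check_lexicality(item_col = "item", response_col = "response",
                         criterion = "database")

# Dictionary-based check against RAE entries
toy %>% check_lexicality(item_col = "item", response_col = "response",
                         criterion = "dictionary")
```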
Next, `get_formal_similarity()` computes similarity measures between targets and responses using several algorithms. Finally, when possible, `get_semantic_similarity()` computes the cosine similarity between the two productions using an NLP model. In our case, the parameter `model = m_w2v` points to a binary file containing Spanish Billion Words embeddings created with the *word2vec* algorithm. This file is included in the <a href="https://osf.io/mfcvb" style="color: purple;">dependency-bundle zip</a> (for more information, see the markdown in the vignettes), which can be found in our supplementary [OSF mirror repository](https://osf.io/akuxv/).
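For reference, a standalone sketch of the semantic step, assuming the model binary from the dependency-bundle has been unzipped to `dependency-bundle/sbw_vectors.bin` and that the dataframe carries `item` and `response` columns:

```r
require("sunflower")

# Load the Spanish Billion Words embeddings (path is an assumption)
m_w2v <- word2vec::read.word2vec(file = "dependency-bundle/sbw_vectors.bin",
                                 normalize = TRUE)

# Cosine similarity between each target and its response
df_to_classify %>%
  get_semantic_similarity(item_col = "item", response_col = "response",
                          model = m_w2v)
```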
----
#### Making it faster - A guided usage tutorial
Before continuing, it is worth mentioning that a sample script that runs all the functions relatively quickly can be downloaded from <a href="https://osf.io/urz4y">its link in our OSF</a>. This can be helpful both for novice users and for those who want to explore the package's functionalities in a more straightforward and/or faster way. Users only need to run the code in the linked script, plus the *word2vec* model made available in the <a href="https://osf.io/mfcvb" style="color: purple;">dependency-bundle zip</a>.
----
Thanks to Cristian Cardellino for making his work on the [Spanish Billion Word Corpus and Embeddings](https://crscardellino.github.io/SBWCE/) publicly available.
----
Any suggestions, comments, or questions about the package's functionality are warmly welcomed. If you’d like to contribute to the project, please feel free to get in touch. 🌻