This library computes the Cohen's $d_p$ and its confidence interval.
The measure that J. Cohen created (Cohen, 1969) is obtained from the
mean difference standardized using the pooled standard deviation. Hence,
measures standardized with other standardizers, although often also called
Cohen's $d$, are not the same statistic (Lakens, 2013; Westfall, 2016).
This whole mess implies a lack of comparability and confusion as to which
statistic was actually reported. For that reason, I chose to call the
true Cohen's measure $d_p$, where the subscript p stands for the pooled
standard deviation.
MBESS is an excellent package which already computes standardized mean
differences and returns confidence intervals (Kelley, 2022). However, it
does not compute confidence intervals in within-subject designs directly;
the Algina and Keselman approximate method can be implemented within
MBESS with some programming (Cousineau & Goulet-Pelletier, 2021). This
package, on the other hand, can be used with any experimental design: it
only requires an argument design which specifies the type of
experimental design.
The confidence interval in within-subject designs was unknown until recently. In recent work (Cousineau, 2022; Cousineau, submitted), its exact expression was found when the population correlation is known, and an approximation was proposed for when only the sample correlation is known.
You can install this library on your computer from CRAN (note the uppercase C and uppercase L):
install.packages("CohensdpLibrary")
or, if the library devtools is installed, from GitHub with:
devtools::install_github("dcousin3/CohensdpLibrary")
and load it before using it:
library(CohensdpLibrary)
The main function is Cohensdp, which returns the Cohen's $d_p$ and its confidence interval:
Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2),
design = "within",
method = "adjustedlambdaprime"
)
## [1] -0.3422415 0.2364258 0.8025925
You get a more readable output with summarize, e.g.,
summarize(Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2),
design = "within",
method = "adjustedlambdaprime"
))
## Cohen's dp = 0.236
## 95.0% Confidence interval = [-0.342, 0.803]
The design can be replaced with between
for a between-subject design:
summarize(Cohensdp( statistics = list(m1=76, m2=72, n1=10, n2=10, s1=14.8, s2=18.8),
design = "between")
)
## Cohen's dp = 0.236
## 95.0% Confidence interval = [-0.647, 1.113]
(the statistic r is removed as there is no correlation in a
between-group design, and n is provided separately for each group,
as n1 and n2).
Finally, it is also possible to get a Cohen's $d_p$ in a single-group
design; it requires a reference mean m0 to which the sample mean is
compared, e.g.,
summarize(Cohensdp( statistics = list(m=76, m0=72, n=20, s=14.8),
design = "single")
)
## Cohen's dp = 0.270
## 95.0% Confidence interval = [-0.180, 0.713]
Replace summarize with explain for additional information on the result.
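For instance, reusing the within-subject statistics from above (output omitted here; the exact text returned depends on the package version):

```r
# Load the package, then ask for a detailed explanation of the result
library(CohensdpLibrary)

explain(Cohensdp( statistics = list(m1=76, m2=72, n=20, s1=14.8, s2=18.8, r=0.2),
                  design = "within",
                  method = "adjustedlambdaprime"
))
```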
Check the web site https://github.com/dcousin3/CohensdpLibrary for
more; also, help(CohensdpLibrary) will get you started.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. Academic Press.
Cousineau, D. (2022). The exact distribution of the Cohen’s $d_p$ in repeated-measure designs. PsyArXiv. https://doi.org/10.31234/osf.io/akcnd
Cousineau, D. (submitted). The exact confidence interval of the Cohen's $d_p$.
Cousineau, D., & Goulet-Pelletier, J.-C. (2021). A study of confidence intervals for Cohen’s dp in within-subject designs with new proposals. The Quantitative Methods for Psychology, 17, 51–75. https://doi.org/10.20982/tqmp.17.1.p051
Kelley, K. (2022). MBESS: The MBESS R package. Retrieved from https://CRAN.R-project.org/package=MBESS
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1–12. https://doi.org/10.3389/fpsyg.2013.00863
Westfall, J. (2016). Five different “Cohen’s $d$” statistics for within-subject designs. Blog post.