The `get_p_value()` documentation notes that the function may return a p-value of 0. This can happen because the functions `left_p_value()` and `right_p_value()` (inside `simulation_based_p_value()` in the get_p_value.R file) compute the estimate of the p-value as a standard Monte Carlo average, and the test statistic may be at least as extreme as the observed statistic in none of the resamples.

Besides the practical nuisance, there is theoretical support for not using the raw Monte Carlo average as an estimate of the p-value. See Section 17.2 of Lehmann and Romano, Testing Statistical Hypotheses, 2022, or https://doi.org/10.2202/1544-6115.1585, which also discusses the consequences of p-values of 0 when computing corrections for multiple testing. A fix is simply to replace `right_p_value()` (and similarly `left_p_value()`) by something equivalent to formula (17.7) in Lehmann and Romano, i.e. counting the observed statistic among the resamples: `(1 + k) / (B + 1)`, where `k` is the number of resampled statistics at least as extreme as the observed one and `B` is the number of resamples. I suggest that this is a better default.
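For concreteness, a minimal sketch of what such replacements could look like (the argument names `stats` and `obs_stat` are illustrative, not necessarily those used in get_p_value.R):

```r
# (1 + #{stats >= obs_stat}) / (B + 1): strictly positive, and a valid
# p-value in its own right (cf. formula (17.7) in Lehmann and Romano).
# `stats` is the vector of resampled statistics, `obs_stat` the observed one;
# both names are illustrative and differ from the actual helpers.
right_p_value <- function(stats, obs_stat) {
  (1 + sum(stats >= obs_stat)) / (length(stats) + 1)
}

left_p_value <- function(stats, obs_stat) {
  (1 + sum(stats <= obs_stat)) / (length(stats) + 1)
}

# Example: even when no resampled statistic is as extreme as the observed
# one, the estimate stays positive instead of returning 0.
set.seed(1)
right_p_value(rnorm(1000), obs_stat = 5)  # 1/1001, not 0
```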
Thanks for the issue! This is related to discussions in #205, #206, #257, and #458. We recognize that the current approach isn't a silver bullet but feel that it works well for our purposes, so I'm going to go ahead and close.
This issue has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.