Spelling changes in vignettes
rvlenth committed Dec 18, 2024
1 parent 495ed3a commit c954f72
Showing 8 changed files with 15 additions and 15 deletions.
6 changes: 3 additions & 3 deletions vignettes/AQuickStart.Rmd
@@ -50,7 +50,7 @@ when there is more than one factor. You are better off keeping steps 1 and 2 sep
What you do in step 2 depends on how many factors you have, and how they relate.

### One-factor model {#one-factor}
- If one-factor model fits well and the factor is named `treatment`, do
+ If a one-factor model fits well and the factor is named `treatment`, do
```r
EMM <- emmeans(model, "treatment") # or emmeans(model, ~ treatment)
EMM # display the means
@@ -139,7 +139,7 @@ return `summary_emm` objects (or lists thereof, class `summary_eml`):
```
SEMM <- summary(EMM)
```
- If you display `EMM` and `SEMM`, they *look* identical; that's because `emmGrid` objects are displayed using `summary()`. But they are not identical. `EMM` has all the ingredients needed to do further analysis, e.g. `contrast(EMM, "consec")` will estimate comparisons between consecutive `Treatment` means. But `SEMM` is just an annotated data frame and we can do no further analysis with it. Similarly, we can change how `EMM` is displayed via arguments to `summary()` or relatives, whil;e in `SEMM`, everything has been computed and those results are locked-in.
+ If you display `EMM` and `SEMM`, they *look* identical; that's because `emmGrid` objects are displayed using `summary()`. But they are not identical. `EMM` has all the ingredients needed to do further analysis, e.g. `contrast(EMM, "consec")` will estimate comparisons between consecutive `Treatment` means. But `SEMM` is just an annotated data frame and we can do no further analysis with it. Similarly, we can change how `EMM` is displayed via arguments to `summary()` or relatives, while in `SEMM`, everything has been computed and those results are locked-in.


## Common things that can go wrong {#problems}
@@ -178,7 +178,7 @@ The `pairwise ~` construct is generally useful if you have only one factor;
otherwise, it likely gives you results you don't want.

## Further reading {#more}
- There are several of these vignettes that offser more details and
+ There are several of these vignettes that offer more details and
more advanced topics. [An index of all these vignette topics is available here](vignette-topics.html).

The strings linked below are the names of the vignettes; i.e., they can
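To illustrate the `EMM`-vs-`SEMM` distinction discussed in this vignette's diff, a minimal sketch using the `pigs` dataset shipped with **emmeans** (the model is the package's own standard example):

```r
library(emmeans)

# Standard example model from the emmeans package
pigs.lm <- lm(log(conc) ~ source + factor(percent), data = pigs)

EMM  <- emmeans(pigs.lm, "source")   # an emmGrid: supports further analysis
SEMM <- summary(EMM)                 # an annotated data frame: results locked in

contrast(EMM, "consec")              # works: consecutive comparisons
# contrast(SEMM, "consec")           # would fail: SEMM is no longer an emmGrid
```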
2 changes: 1 addition & 1 deletion vignettes/basics.Rmd
@@ -240,7 +240,7 @@ ref_grid(mod5)
The reference grid for `mod5` is different from that for `mod4` because in those models, `percent` is a factor in `mod4` and a covariate in `mod5`.

It is possible to modify the reference grid. In the context of the present example,
- it might be inetersting to compare EMMs based on `mod4` and `mod5`, and we can put
+ it might be interesting to compare EMMs based on `mod4` and `mod5`, and we can put
them on an equal footing by using the same `percent` values as reference levels:
```{r}
(RG5 <- ref_grid(mod5, at = list(percent = c(9, 12, 15, 18))))
4 changes: 2 additions & 2 deletions vignettes/comparisons.Rmd
@@ -65,14 +65,14 @@ and the differences in the lower triangle. Options exist to switch off any one o
and to switch which triangle is used for the latter two. Also, optional
arguments are passed. For instance, we can reverse the direction of the comparisons,
suppress the display of EMMs, swap where the $P$ values go,
- and perform noninferiority tests with a threshold of 0.05 as follows:
+ and perform non-inferiority tests with a threshold of 0.05 as follows:
```{r}
pwpm(pigs.emm.s, means = FALSE, flip = TRUE, # args for pwpm()
reverse = TRUE, # args for pairs()
side = ">", delta = 0.05, adjust = "none") # args for test()
```
With all three *P* values so small, we have fish, soy, and skim in increasing order of
- noninferiority based on the given threshold.
+ non-inferiority based on the given threshold.

When more than one factor is present, an existing or newly specified `by` variable
can split the results into a list of matrices.
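A sketch of the `by` splitting described at the end of this hunk, using a standard two-factor example from the **emmeans** documentation (`warpbreaks` is a base R dataset):

```r
library(emmeans)

# Two-factor model: tension means within each wool type
warp.lm  <- lm(breaks ~ wool * tension, data = warpbreaks)
warp.emm <- emmeans(warp.lm, ~ tension | wool)

# One comparison matrix per level of the `by` variable `wool`
pwpm(warp.emm, by = "wool")
```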
6 changes: 3 additions & 3 deletions vignettes/confidence-intervals.Rmd
@@ -281,10 +281,10 @@ something akin to a Type II analysis of variance. See the [messy-data vignette](


## Testing equivalence, noninferiority, and nonsuperiority {#equiv}
- <!-- @index Tests!Equivalence; Tests!Noninferiority; `test()`!`delta` -->
+ <!-- @index Tests!Equivalence; Tests!Non-inferiority; `test()`!`delta` -->
The `delta` argument in `summary()` or `test()` allows the user to
- specify a threshold value to use in a test of equivalence, noninferiority,
- or nonsuperiority. An equivalence test is kind of a backwards significance
+ specify a threshold value to use in a test of equivalence, non-inferiority,
+ or non-superiority. An equivalence test is kind of a backwards significance
test, where small *P* values are associated with small differences relative
to a specified threshold value `delta`.
The help page for `summary.emmGrid` gives the details of
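A sketch of the `delta` usage this hunk describes, again on the package's standard `pigs` example (the threshold 0.05 is illustrative):

```r
library(emmeans)
pigs.lm  <- lm(log(conc) ~ source + factor(percent), data = pigs)
pigs.emm <- emmeans(pigs.lm, "source")

# Equivalence tests: small P values indicate differences within +/- 0.05
test(pairs(pigs.emm), delta = 0.05)

# Non-inferiority tests against the same threshold
test(pairs(pigs.emm), delta = 0.05, side = ">")
```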
2 changes: 1 addition & 1 deletion vignettes/models.Rmd
@@ -382,7 +382,7 @@ should match `"prob"` or `"latent"`. With `mode = "prob"`, the
reference-grid predictions consist of the estimated multinomial
probabilities -- and this implies a re-gridding so no
link functions are passed on. The `"latent"` mode returns the linear predictor,
- recentered so that it averages to zero over the levels of the response
+ re-centered so that it averages to zero over the levels of the response
variable (similar to sum-to-zero contrasts). Thus each latent variable can be
regarded as the log probability at that level minus the average log
probability over all levels.
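A sketch of the two `mode` choices described in this hunk, assuming a multinomial fit via `nnet::multinom()` (the model and dataset here are illustrative, not from the vignette):

```r
library(emmeans)
library(nnet)

# Illustrative multinomial model on a built-in dataset
fit <- multinom(Species ~ Sepal.Length, data = iris, trace = FALSE)

emmeans(fit, ~ Species, mode = "prob")    # estimated multinomial probabilities
emmeans(fit, ~ Species, mode = "latent")  # re-centered linear predictor
```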
4 changes: 2 additions & 2 deletions vignettes/transformations.Rmd
@@ -434,7 +434,7 @@ The test statistics and P values differ somewhat from those for the odds ratios

###### {#not-logit}
<!-- @index Logistic-like regression!Non-logit links;
- `regrid()`!probit or other lon-logit models;
+ `regrid()`!probit or other non-logit models;
Probit regression!Odds or risk ratios -->
We were able to obtain both odds ratios and risk ratios for `neuralgia.glm`.
But what if we had not used the logit link? Then the odds ratios would not
@@ -643,7 +643,7 @@ emmeans(ismod, "spray", type = "response", bias.adj = TRUE)
you will get exactly the same results, plus a warning message that says bias adjustment was disabled.
Why? Because in an ordinary GLM like this, we are *already* modeling the mean counts,
and the link function is not a response transformation as such, just a part of the relationship
- we are specifying between the linear predictor and the mean. Given the simple structure of this dataset, we can verify this by noting that the estimates we have correspond examply to the simple observed mean counts:
+ we are specifying between the linear predictor and the mean. Given the simple structure of this dataset, we can verify this by noting that the estimates we have correspond exactly to the simple observed mean counts:
```{r}
with(InsectSprays, tapply(count, spray, mean))
```
4 changes: 2 additions & 2 deletions vignettes/vignette-topics.Rmd
@@ -606,7 +606,7 @@ vignette: >
* [Registering `recover_data` and `emm_basis` methods](xtending.html#exporting)
* [`regrid` argument](transformations.html#stdize)
* [`regrid()`](transformations.html#regrid)
- * [probit or other lon-logit models](transformations.html#not-logit)
+ * [probit or other non-logit models](transformations.html#not-logit)
* [`regrid` vs. `type`](transformations.html#regrid2)
* [to obtain risk ratios](transformations.html#riskrats)
* [`transform = "log"`](transformations.html#logs)
@@ -690,7 +690,7 @@ vignette: >
* [`joint = TRUE`](confidence-intervals.html#joint)
* Tests
* [Equivalence](confidence-intervals.html#equiv)
- * [Noninferiority](confidence-intervals.html#equiv)
+ * [Non-inferiority](confidence-intervals.html#equiv)
* [Nonzero null](confidence-intervals.html#summary)
* [One- and two-sided](confidence-intervals.html#summary)
* [Too few means](AQuickStart.html#covar)
2 changes: 1 addition & 1 deletion vignettes/xtending.Rmd
@@ -244,7 +244,7 @@ and `V` is the covariance matrix of those predictions. In those cases, we recomm
setting `misc$regrid.flag = TRUE`. Currently, this flag is used only for checking
whether the `nuisance` argument can be used in `ref_grid()`, and it is not
absolutely necessary because we also check to see if `X` is the identity. But
- it provides a more efficient and reliable check. The code for nuisamce factors relies
+ it provides a more efficient and reliable check. The code for nuisance factors relies
on the structure of model matrices where columns are associated with model terms.
So it is not possible to process nuisance factors with a re-gridded basis.

