```{r, echo = F}
knitr::opts_chunk$set(fig.retina = 2.5)
knitr::opts_chunk$set(fig.align = "center")
```
# Metric Predicted Variable with Multiple Metric Predictors
> We will consider models in which the predicted variable is an additive combination of predictors, all of which have proportional influence on the prediction. This kind of model is called *multiple linear regression*. We will also consider nonadditive combinations of predictors, which are called *interactions*. [@kruschkeDoingBayesianData2015, p. 509, *emphasis* in the original]
## Multiple linear regression
Say we have one criterion $y$ and two predictors, $x_1$ and $x_2$. If $y \sim \operatorname{Normal}(\mu, \sigma)$ and $\mu = \beta_0 + \beta_1 x_1 + \beta_2 x_2$, then it's also the case that we can rewrite the formula for $y$ as
$$y \sim \operatorname{Normal}(\beta_0 + \beta_1 x_1 + \beta_2 x_2, \sigma).$$
As Kruschke pointed out, the basic model "assumes homogeneity of variance, which means that at all values of $x_1$ and $x_2$, the variance $\sigma^2$ of $y$ is the same" (p. 510).
If we presume the data for the two $x$ variables are uniformly distributed between 0 and 10, we can make the data for Figure 18.1 like this.
```{r, message = F, warning = F}
library(tidyverse)
n <- 300
set.seed(18)
d <-
tibble(x_1 = runif(n = n, min = 0, max = 10),
x_2 = runif(n = n, min = 0, max = 10)) %>%
mutate(y = rnorm(n = n, mean = 10 + x_1 + 2 * x_2, sd = 2))
head(d)
```
Before we plot those `d` data, we'll want to make a data object containing the information necessary to make the grid lines for Kruschke's 3D regression plane. To my mind, this will be easier to do in stages. If you look at the upper left panel of Figure 18.1 as a reference, our first step will be to make the vertical lines. Save them as `d1`.
```{r, fig.width = 3.5, fig.height = 3.25, warning = F, message = F}
theme_set(
theme_linedraw() +
theme(panel.grid = element_blank())
)
d1 <-
tibble(index = 1:21,
x_1 = seq(from = 0, to = 10, length.out = 21)) %>%
expand_grid(x_2 = c(0, 10)) %>%
mutate(y = 10 + 1 * x_1 + 2 * x_2)
d1 %>%
ggplot(aes(x = x_1, y = y, group = index)) +
geom_path(color = "grey85") +
ylim(0, 50)
```
You may have noticed our `theme_set()` lines at the top. Though we'll be using a different default theme later in the project, this is the best theme to use for these initial few plots. Okay, now let's make the more horizontally-oriented grid lines and save them as `d2`.
```{r, fig.width = 3.5, fig.height = 3.25, warning = F, message = F}
d2 <-
tibble(index = 1:21 + 21,
x_2 = seq(from = 0, to = 10, length.out = 21)) %>%
expand_grid(x_1 = c(0, 10)) %>%
mutate(y = 10 + 1 * x_1 + 2 * x_2)
d2 %>%
ggplot(aes(x = x_1, y = y, group = index)) +
geom_path(color = "grey85") +
ylim(0, 50)
```
Now combine the two and save them as `grid`.
```{r, fig.width = 3.5, fig.height = 3.25, warning = F, message = F}
grid <-
bind_rows(d1, d2)
grid %>%
ggplot(aes(x = x_1, y = y, group = index)) +
geom_path(color = "grey85") +
ylim(0, 50)
grid %>%
ggplot(aes(x = x_2, y = y, group = index)) +
geom_path(color = "grey85") +
ylim(0, 50)
grid %>%
ggplot(aes(x = x_1, y = x_2, group = index)) +
geom_path(color = "grey85")
```
We're finally ready to combine `d` and `grid` to make the three 2D scatter plots from Figure 18.1.
```{r, fig.width = 3.5, fig.height = 3.25, warning = F, message = F}
d %>%
ggplot(aes(x = x_1, y = y)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = x_1,
yend = 10 + x_1 + 2 * x_2),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 50), expand = c(0, 0))
d %>%
ggplot(aes(x = x_2, y = y)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = x_2,
yend = 10 + x_1 + 2 * x_2),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 50), expand = c(0, 0))
d %>%
ggplot(aes(x = x_1, y = x_2)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2)
```
As in previous chapters, I'm not aware that **ggplot2** allows for three-dimensional wireframe plots of the kind in the upper left panel. If you'd like to make one in base **R**, have at it.
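If you do want a rough stand-in, here's a minimal sketch (my addition, not Kruschke's) using base-**R** `persp()`; the viewing angles and grid resolution are arbitrary choices.
```{r, fig.width = 3.5, fig.height = 3.5}
# a rough base-R wireframe of the regression plane, in the spirit of the
# upper left panel of Figure 18.1
x1_seq <- seq(from = 0, to = 10, length.out = 21)
x2_seq <- seq(from = 0, to = 10, length.out = 21)
persp(x = x1_seq, y = x2_seq,
      z = outer(x1_seq, x2_seq, function(x1, x2) 10 + 1 * x1 + 2 * x2),
      theta = -40, phi = 25, expand = 0.75,
      xlab = "x1", ylab = "x2", zlab = "y",
      border = "grey50", ticktype = "detailed")
```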
For Figure 18.2, the $x$ variables look to be multivariate normal with a correlation of about -.95. We can simulate such data with help from the [**MASS** package](https://CRAN.R-project.org/package=MASS) [@R-MASS; @MASS2002].
Sven Hohenstein's [answer to this stats.stackexchange.com question](https://stats.stackexchange.com/questions/164471/generating-a-simulated-dataset-from-a-correlation-matrix-with-means-and-standard) provides the steps for simulating the data. First, we'll need to specify the desired means and standard deviations for our variables. Then we'll make a correlation matrix with 1s on the diagonal and the desired correlation coefficient, $\rho$, on the off-diagonal. Since the correlation matrix is symmetric, both off-diagonal positions are the same. Then we convert the correlation matrix to a covariance matrix.
```{r}
mus <- c(5, 5)
sds <- c(2, 2)
cors <- matrix(c(1, -.95,
-.95, 1),
ncol = 2)
cors
covs <- sds %*% t(sds) * cors
covs
```
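As a quick sanity check (my addition), `cov2cor()` should take us right back to the correlation matrix we started with.
```{r}
# converting the covariance matrix back to a correlation matrix
cov2cor(covs)
```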
Now that we've defined our means, standard deviations, and covariance matrix, we're ready to simulate the data with the `MASS::mvrnorm()` function.
```{r, warning = F, message = F}
# how many data points would you like to simulate?
n <- 300
set.seed(18.2)
d <-
MASS::mvrnorm(n = n,
mu = mus,
Sigma = covs,
empirical = T) %>%
as_tibble() %>%
set_names("x_1", "x_2") %>%
mutate(y = rnorm(n = n, mean = 10 + x_1 + 2 * x_2, sd = 2))
```
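Because we set `empirical = T`, the sample correlation between the two predictors should be exactly $-.95$. Here's a quick check (my addition, not in the text).
```{r}
# the observed correlation between the simulated predictors
cor(d$x_1, d$x_2)
```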
Now that we have our simulated data in hand, we're ready for three of the four panels of Figure 18.2.
```{r, fig.width = 6, fig.height = 5.5, warning = F, message = F}
p1 <-
d %>%
ggplot(aes(x = x_1, y = y)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = x_1,
yend = 10 + x_1 + 2 * x_2),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 50), expand = c(0, 0))
p2 <-
d %>%
ggplot(aes(x = x_2, y = y)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = x_2,
yend = 10 + x_1 + 2 * x_2),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 50), expand = c(0, 0))
p3 <-
d %>%
ggplot(aes(x = x_1, y = x_2)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2) +
scale_y_continuous(limits = c(0, 10), expand = c(0, 0), breaks = 0:5 * 2)
# bind them together with patchwork
library(patchwork)
plot_spacer() + p1 + p2 + p3
```
We came pretty close.
### The perils of correlated predictors.
> Figures 18.1 and 18.2 show data generated from the same model. In both figures, $\sigma = 2$, $\beta_0 = 10$, $\beta_1 = 1$, $\beta_2 = 2$. All that differs between the two figures is the distribution of the $\langle x_1, x_2 \rangle$ values, which is not specified by the model. In Figure 18.1, the $\langle x_1, x_2 \rangle$ values are distributed independently. In Figure 18.2, the $\langle x_1, x_2 \rangle$ values are negatively correlated: When $x_1$ is small, $x_2$ tends to be large, and when $x_1$ is large, $x_2$ tends to be small. (p. 510)
If you look closely at our simulation code from above, you'll see we have done so, too.
> Real data often have correlated predictors. For example, consider trying to predict a state's average high-school SAT score on the basis of the amount of money the state spends per pupil. If you plot only mean SAT against money spent, there is actually a *decreasing* trend... (p. 513, *emphasis* in the original)
Before we remake Figure 18.3 to examine that decreasing trend, we'll need to load the data from [@guber1999getting].
```{r, message = F}
my_data <- read_csv("data.R/Guber1999data.csv")
glimpse(my_data)
```
Before we get all excited and try to plot those data as in Figure 18.3, we'll need to redefine the 3D grid of our regression plane, this time based on the equation at the top of Figure 18.3.
```{r}
d1 <-
tibble(index = 1:21,
Spend = seq(from = 3.4, to = 10.1, length.out = 21)) %>%
expand_grid(PrcntTake = c(0, 85))
d2 <-
tibble(index = 1:21 + 21,
PrcntTake = seq(from = 0, to = 85, length.out = 21)) %>%
expand_grid(Spend = c(3.4, 10.1))
grid <-
bind_rows(d1, d2) %>%
mutate(SATT = 993.8 + -2.9 * PrcntTake + 12.3 * Spend)
grid %>% glimpse()
```
Now that we have our updated `grid` object, we're ready to plot the data in our version of Figure 18.3.
```{r, fig.width = 6, fig.height = 6, warning = F, message = F}
p1 <-
my_data %>%
ggplot(aes(x = Spend, y = SATT)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = Spend,
yend = 993.8 + -2.9 * PrcntTake + 12.3 * Spend),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous(limits = c(3.4, 10.1), expand = c(0, 0), breaks = 2:5 * 2) +
scale_y_continuous(limits = c(785, 1120))
p2 <-
my_data %>%
ggplot(aes(x = PrcntTake, y = SATT)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_segment(aes(xend = PrcntTake,
yend = 993.8 + -2.9 * PrcntTake + 12.3 * Spend),
linewidth = 1/4, linetype = 3) +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous("% Take", limits = c(0, 85), expand = c(0, 0)) +
scale_y_continuous(limits = c(785, 1120))
p3 <-
my_data %>%
ggplot(aes(x = PrcntTake, y = Spend)) +
geom_path(data = grid,
aes(group = index),
color = "grey85") +
geom_point(shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
scale_x_continuous("% Take", limits = c(0, 85), expand = c(0, 0)) +
scale_y_continuous(limits = c(3.4, 10.1), expand = c(0, 0))
# bind them together and add a title
wrap_elements(grid::textGrob('No 3D wireframe plots for us')) +
p1 + p2 + p3 +
plot_annotation(title = "SATT ~ N(m,sd=31.5), m = 993.8 + −2.9 %Take + 12.3 Spend")
```
You can learn more about how we added that title to our plot ensemble from Pedersen's [-@Pedersen2020AddingAnnotation] vignette, [*Adding annotation and style*](https://patchwork.data-imaginist.com/articles/guides/annotation.html), and more about how we added that text in place of a wireframe plot from another of his [-@Pedersen2020PlotAssembly] vignettes, [*Plot assembly*](https://patchwork.data-imaginist.com/articles/guides/assembly.html).
> The separate influences of the two predictors could be assessed in this example because the predictors had only mild correlation with each other. There was enough independent variation of the two predictors that their distinct relationships to the outcome variable could be detected. In some situations, however, the predictors are so tightly correlated that their distinct effects are difficult to tease apart. Correlation of predictors causes the estimates of their regression coefficients to trade-off, as we will see when we examine the posterior distribution. (p. 514)
### The model and implementation.
Let's make our version of the model diagram in Figure 18.4 to get a sense of where we're going. If you look back to [Section 17.2][Robust linear regression], you'll see this is just a minor reworking of the code from Figure 17.2.
```{r, fig.width = 6.75, fig.height = 5, message = F}
# normal density
p1 <-
tibble(x = seq(from = -3, to = 3, by = .1)) %>%
ggplot(aes(x = x, y = (dnorm(x)) / max(dnorm(x)))) +
geom_area(fill = "steelblue4", color = "steelblue4", alpha = .6) +
annotate(geom = "text",
x = 0, y = .2,
label = "normal",
size = 7) +
annotate(geom = "text",
x = c(0, 1.5), y = .6,
label = c("italic(M)[0]", "italic(S)[0]"),
size = 7, family = "Times", parse = T) +
scale_x_continuous(expand = c(0, 0)) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5))
# a second normal density
p2 <-
tibble(x = seq(from = -3, to = 3, by = .1)) %>%
ggplot(aes(x = x, y = (dnorm(x)) / max(dnorm(x)))) +
geom_area(fill = "steelblue4", color = "steelblue4", alpha = .6) +
annotate(geom = "text",
x = 0, y = .2,
label = "normal",
size = 7) +
annotate(geom = "text",
x = c(0, 1.5), y = .6,
label = c("italic(M)[italic(j)]", "italic(S)[italic(j)]"),
size = 7, family = "Times", parse = T) +
scale_x_continuous(expand = c(0, 0)) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5))
## two annotated arrows
# save our custom arrow settings
my_arrow <- arrow(angle = 20, length = unit(0.35, "cm"), type = "closed")
p3 <-
tibble(x = c(.33, 1.67),
y = c(1, 1),
xend = c(.67, 1.2),
yend = c(0, 0)) %>%
ggplot(aes(x = x, xend = xend,
y = y, yend = yend)) +
geom_segment(arrow = my_arrow) +
annotate(geom = "text",
x = c(.35, 1.3), y = .5,
label = "'~'",
size = 10, family = "Times", parse = T) +
xlim(0, 2) +
theme_void()
# exponential density
p4 <-
tibble(x = seq(from = 0, to = 1, by = .01)) %>%
ggplot(aes(x = x, y = (dexp(x, 2) / max(dexp(x, 2))))) +
geom_area(fill = "steelblue4", color = "steelblue4", alpha = .6) +
annotate(geom = "text",
x = .5, y = .2,
label = "exp",
size = 7) +
annotate(geom = "text",
x = .5, y = .6,
label = "italic(K)",
size = 7, family = "Times", parse = T) +
scale_x_continuous(expand = c(0, 0)) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5))
# likelihood formula
p5 <-
tibble(x = .5,
y = .25,
label = "beta[0]+sum()[italic(j)]*beta[italic(j)]*italic(x)[italic(ji)]") %>%
ggplot(aes(x = x, y = y, label = label)) +
geom_text(size = 7, parse = T, family = "Times") +
scale_x_continuous(expand = c(0, 0), limits = c(0, 1)) +
ylim(0, 1) +
theme_void()
# half-normal density
p6 <-
tibble(x = seq(from = 0, to = 3, by = .01)) %>%
ggplot(aes(x = x, y = (dnorm(x)) / max(dnorm(x)))) +
geom_area(fill = "steelblue4", color = "steelblue4", alpha = .6) +
annotate(geom = "text",
x = 1.5, y = .2,
label = "half-normal",
size = 7) +
annotate(geom = "text",
x = 1.5, y = .6,
label = "0*','*~italic(S)[sigma]",
size = 7, family = "Times", parse = T) +
scale_x_continuous(expand = c(0, 0)) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5))
# four annotated arrows
p7 <-
tibble(x = c(.43, .43, 1.5, 2.5),
y = c(1, .55, 1, 1),
xend = c(.43, 1.225, 1.5, 1.75),
yend = c(.8, .15, .2, .2)) %>%
ggplot(aes(x = x, xend = xend,
y = y, yend = yend)) +
geom_segment(arrow = my_arrow) +
annotate(geom = "text",
x = c(.3, .7, 1.38, 2), y = c(.92, .22, .65, .6),
label = c("'~'", "'='", "'='", "'~'"),
size = 10, family = "Times", parse = T) +
annotate(geom = "text",
x = .43, y = .7,
label = "nu*minute+1",
size = 7, family = "Times", parse = T) +
xlim(0, 3) +
theme_void()
# student-t density
p8 <-
tibble(x = seq(from = -3, to = 3, by = .1)) %>%
ggplot(aes(x = x, y = (dt(x, 3) / max(dt(x, 3))))) +
geom_area(fill = "steelblue4", color = "steelblue4", alpha = .6) +
annotate(geom = "text",
x = 0, y = .2,
label = "student t",
size = 7) +
annotate(geom = "text",
x = 0, y = .6,
label = "nu~~~mu[italic(i)]~~~sigma",
size = 7, family = "Times", parse = T) +
scale_x_continuous(expand = c(0, 0)) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5))
# the final annotated arrow
p9 <-
tibble(x = c(.375, .625),
y = c(1/3, 1/3),
label = c("'~'", "italic(i)")) %>%
ggplot(aes(x = x, y = y, label = label)) +
geom_text(size = c(10, 7), parse = T, family = "Times") +
geom_segment(x = .5, xend = .5,
y = 1, yend = 0,
arrow = my_arrow) +
xlim(0, 1) +
theme_void()
# some text
p10 <-
tibble(x = .5,
y = .5,
label = "italic(y[i])") %>%
ggplot(aes(x = x, y = y, label = label)) +
geom_text(size = 7, parse = T, family = "Times") +
xlim(0, 1) +
theme_void()
# define the layout
layout <- c(
area(t = 1, b = 2, l = 3, r = 5),
area(t = 1, b = 2, l = 7, r = 9),
area(t = 4, b = 5, l = 1, r = 3),
area(t = 4, b = 5, l = 5, r = 7),
area(t = 4, b = 5, l = 9, r = 11),
area(t = 3, b = 4, l = 3, r = 9),
area(t = 7, b = 8, l = 5, r = 7),
area(t = 6, b = 7, l = 1, r = 11),
area(t = 9, b = 9, l = 5, r = 7),
area(t = 10, b = 10, l = 5, r = 7)
)
# combine and plot!
(p1 + p2 + p4 + p5 + p6 + p3 + p8 + p7 + p9 + p10) +
plot_layout(design = layout) &
ylim(0, 1) &
theme(plot.margin = margin(0, 5.5, 0, 5.5))
```
"As with the model for simple linear regression, the Markov Chain Monte Carlo (MCMC) sampling can be more efficient if the data are mean-centered or standardized" (p. 515). We'll make a custom function to standardize the criterion and predictor values.
```{r}
standardize <- function(x) {
(x - mean(x)) / sd(x)
}
my_data <-
my_data %>%
mutate(prcnt_take_z = standardize(PrcntTake),
spend_z = standardize(Spend),
satt_z = standardize(SATT))
```
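As a quick sanity check (my addition), the standardized variables should now have means of 0 and standard deviations of 1.
```{r}
# means and SDs of the newly standardized columns
my_data %>%
  summarise(across(ends_with("_z"), list(m = mean, s = sd)))
```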
Let's open **brms**.
```{r, message = F, warning = F}
library(brms)
```
Now we're ready to fit the model. As Kruschke pointed out, the priors on the standardized predictors are set with
> an arbitrary standard deviation of $2.0$. This value was chosen because standardized regression coefficients are algebraically constrained to fall between $−1$ and $+1$ in least-squares regression[^6], and therefore, the regression coefficients will not exceed those limits by much. A normal distribution with standard deviation of $2.0$ is reasonably flat over the range from $−1$ to $+1$. (p. 516)
With data like this, even a `prior(normal(0, 1), class = b)` would be only mildly regularizing.
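If you'd like to see that flatness for yourself, here's a quick sketch (my addition) of the $\operatorname{Normal}(0, 2)$ density with the $\pm 1$ bounds marked.
```{r, fig.width = 3.5, fig.height = 2.5}
# the normal(0, 2) prior is nearly flat over the -1 to +1 range
tibble(x = seq(from = -3, to = 3, by = .01)) %>%
  ggplot(aes(x = x, y = dnorm(x, mean = 0, sd = 2))) +
  geom_area(fill = "steelblue4", alpha = .6) +
  geom_vline(xintercept = c(-1, 1), linetype = 2) +
  ylab("prior density")
```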
This is a good place to emphasize how priors in **brms** are given classes. If you'd like all parameters within a given class to have the same prior, you can just specify one prior argument for that class. For our `fit18.1`, both parameters of `class = b` have a `normal(0, 2)` prior. So we can just include one statement to handle both. Had we wanted different priors for the coefficients for `spend_z` and `prcnt_take_z`, we'd need to include two `prior()` arguments with at least one including a `coef` argument.
```{r fit18.1}
fit18.1 <-
brm(data = my_data,
family = student,
satt_z ~ 1 + spend_z + prcnt_take_z,
prior = c(prior(normal(0, 2), class = Intercept),
prior(normal(0, 2), class = b),
prior(normal(0, 1), class = sigma),
prior(exponential(one_over_twentynine), class = nu)),
chains = 4, cores = 4,
stanvars = stanvar(1/29, name = "one_over_twentynine"),
seed = 18,
file = "fits/fit18.01")
```
Check the model summary.
```{r}
print(fit18.1)
```
So when we use a multivariable model, increases in spending now appear associated with *increases* in SAT scores.
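For a quick point of comparison, here's a simple OLS version of the same phenomenon (my addition, not in the text): the sign on `Spend` flips once `PrcntTake` enters the model.
```{r}
# simple versus multiple regression with ordinary least squares
lm(SATT ~ Spend, data = my_data) %>% coef()
lm(SATT ~ Spend + PrcntTake, data = my_data) %>% coef()
```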
### The posterior distribution.
Based on Equation 18.1, we can convert the standardized coefficients from our multivariable model back to their original metric as follows:
\begin{align*}
\beta_0 & = \operatorname{SD}_y \zeta_0 + M_y - \operatorname{SD}_y \sum_j \frac{\zeta_j M_{x_j}}{\operatorname{SD}_{x_j}} \;\;\; \text{and} \\
\beta_j & = \frac{\operatorname{SD}_y \zeta_j}{\operatorname{SD}_{x_j}}.
\end{align*}
To use them, we'll first extract the posterior draws.
```{r}
draws <- as_draws_df(fit18.1)
head(draws)
```
Like we did in [Chapter 17][Metric Predicted Variable with One Metric Predictor], let's wrap the consequences of Equation 18.1 into two functions.
```{r}
make_beta_0 <- function(zeta_0, zeta_1, zeta_2, sd_x_1, sd_x_2, sd_y, m_x_1, m_x_2, m_y) {
sd_y * zeta_0 + m_y - sd_y * ((zeta_1 * m_x_1 / sd_x_1) + (zeta_2 * m_x_2 / sd_x_2))
}
make_beta_j <- function(zeta_j, sd_j, sd_y) {
sd_y * zeta_j / sd_j
}
```
After saving a few values, we're ready to use our custom functions.
```{r}
sd_x_1 <- sd(my_data$Spend)
sd_x_2 <- sd(my_data$PrcntTake)
sd_y <- sd(my_data$SATT)
m_x_1 <- mean(my_data$Spend)
m_x_2 <- mean(my_data$PrcntTake)
m_y <- mean(my_data$SATT)
draws <-
draws %>%
mutate(b_0 = make_beta_0(zeta_0 = b_Intercept,
zeta_1 = b_spend_z,
zeta_2 = b_prcnt_take_z,
sd_x_1 = sd_x_1,
sd_x_2 = sd_x_2,
sd_y = sd_y,
m_x_1 = m_x_1,
m_x_2 = m_x_2,
m_y = m_y),
b_1 = make_beta_j(zeta_j = b_spend_z,
sd_j = sd_x_1,
sd_y = sd_y),
b_2 = make_beta_j(zeta_j = b_prcnt_take_z,
sd_j = sd_x_2,
sd_y = sd_y))
glimpse(draws)
```
Before we make the figure, we'll update our overall plot theme to `cowplot::theme_minimal_grid()`. Our overall color scheme and plot aesthetic will be based on some of the plots in [Chapter 16, *Visualizing uncertainty*](https://clauswilke.com/dataviz/visualizing-uncertainty.html), of @wilkeFundamentalsDataVisualization2019. As we'll be making a lot of customized density plots in this chapter, we may as well save those settings here. We'll call the function with those settings `stat_wilke()`.
```{r, warning = F, message = F}
library(tidybayes)
library(ggdist)
library(cowplot)
# update the default theme setting
theme_set(theme_minimal_grid())
# define the function
stat_wilke <- function(height = 1.25, point_size = 5, ...) {
list(
# for the graded fill
stat_slab(aes(fill_ramp = stat(
cut_cdf_qi(cdf,
.width = c(.8, .95, .99),
labels = scales::percent_format(accuracy = 1)))),
height = height, slab_alpha = .75, fill = "steelblue4",
...),
# for the top outline and the mode dot
stat_halfeye(.width = 0, point_interval = mode_qi,
height = height, size = point_size, slab_size = 1/3,
slab_color = "steelblue4", fill = NA, color = "chocolate3",
...),
# fill settings
scale_fill_ramp_discrete(range = c(1, .4), na.translate = F),
# adjust the guide_legend() settings
guides(fill_ramp =
guide_legend(
direction = "horizontal",
keywidth = unit(0.925, "cm"),
label.hjust = 0.5,
label.position = "bottom",
title = "posterior prob.",
title.hjust = 0.5,
title.position = "top")),
# ensure we're using `cowplot::theme_minimal_hgrid()` as a base theme
theme_minimal_hgrid(),
# adjust the legend settings
theme(legend.background = element_rect(fill = "white"),
legend.text = element_text(margin = margin(-0.2, 0, -0.2, 0, "cm")),
legend.title = element_text(margin = margin(-0.2, 0, -0.2, 0, "cm")))
)
}
```
```{r, warning = F, message = F, eval = F, echo = F}
library(tidybayes)
library(cowplot)
# update the default theme setting
theme_set(theme_minimal_grid())
# keep a lookout at https://github.com/mjskay/ggdist/issues/11
stat_wilke <- function(height = 1.25, ...) {
list(
stat_halfeye(.width = 0, height = height, fill = "transparent", slab_color = "steelblue4", slab_size = 1/2, ...),
stat_halfeye(.width = 0, height = height, fill = "steelblue4", alpha = .6, ...),
stat_pointinterval(.width = .95, color = "chocolate3", size = 2, point_size = 4),
theme_minimal_hgrid()
)
}
```
Here's the top panel of Figure 18.5.
```{r, fig.width = 6, fig.height = 4, warning = F, message = F}
# here are the primary data
draws %>%
transmute(Intercept = b_0,
Spend = b_1,
`Percent Take` = b_2,
Scale = sigma * sd_y,
Normality = nu %>% log10()) %>%
pivot_longer(everything()) %>%
# the plot
ggplot(aes(x = value)) +
stat_wilke(normalize = "panels") +
scale_y_continuous(NULL, breaks = NULL) +
xlab(NULL) +
coord_cartesian(ylim = c(-0.01, NA)) +
panel_border() +
theme(legend.position = c(.72, .2)) +
facet_wrap(~ name, scales = "free", ncol = 3)
```
> The slope on spending has a mode of about $13$, which suggests that SAT scores rise by about $13$ points for every extra $\$1000$ spent per pupil. The slope on percentage taking the exam (PrcntTake) is also credibly non-zero, with a mode around $−2.8$, which suggests that SAT scores fall by about $2.8$ points for every additional $1\%$ of students who take the test. (p. 517)
If you want those exact modes and, say, 50% intervals around them, you can just use `tidybayes::mode_hdi()`.
```{r, warning = F}
draws %>%
transmute(Spend = b_1,
`Percent Take` = b_2) %>%
pivot_longer(everything()) %>%
group_by(name) %>%
mode_hdi(value, .width = .5)
```
The `brms::bayes_R2()` function makes it easy to compute a Bayesian $R^2$. Simply feed a `brm()` fit object into `bayes_R2()` and you'll get back the posterior mean, $\textit{SD}$, and 95% intervals.
```{r}
bayes_R2(fit18.1)
```
I'm not going to go into the technical details here, but you should be aware that the Bayesian $R^2$ returned from the `bayes_R2()` function is not calculated the same way as it is with OLS. If you want to dive in, check out the paper by @gelmanRsquaredBayesianRegression2019, [*R-squared for Bayesian regression models*](https://stat.columbia.edu/~gelman/research/published/bayes_R2_v3.pdf). Anyway, if you'd like to view the Bayesian $R^2$ distribution rather than just get the summaries, specify `summary = F`, convert the output to a tibble, and plot as usual.
```{r, fig.width = 3.5, fig.height = 2.25}
bayes_R2(fit18.1, summary = F) %>%
as_tibble() %>%
ggplot(aes(x = R2, y = 0)) +
stat_wilke() +
scale_y_continuous(NULL, breaks = NULL) +
labs(subtitle = expression(paste("Bayesian ", italic(R)^2)),
x = NULL) +
coord_cartesian(xlim = c(0, 1),
ylim = c(-0.01, NA)) +
theme(legend.position = c(.01, .8))
```
Since the `brms::bayes_R2()` function is not identical to Kruschke's method in the text, the results might differ a bit.
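If you're curious what `bayes_R2()` is doing under the hood, here's a rough hand-computed sketch (my addition) of the definition in @gelmanRsquaredBayesianRegression2019: within each posterior draw, take the variance of the fitted values over the variance of the fitted values plus the variance of the residuals. The official function may handle some details differently.
```{r}
# one R^2 value per posterior draw, computed by hand
f_draws  <- fitted(fit18.1, summary = F)       # draws x cases matrix of fitted values
r_draws  <- sweep(f_draws, 2, my_data$satt_z)  # fitted minus observed, per draw
r2_draws <- apply(f_draws, 1, var) / (apply(f_draws, 1, var) + apply(r_draws, 1, var))
quantile(r2_draws, probs = c(.025, .5, .975))
```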
We can get a sense of the scatter plots with `bayesplot::mcmc_pairs()`.
```{r, fig.width = 6, fig.height = 5.5, warning = F, message = F}
library(bayesplot)
color_scheme_set(c("steelblue4", "steelblue4", "steelblue4", "steelblue4", "steelblue4", "steelblue4"))
draws %>%
transmute(Intercept = b_0,
Spend = b_1,
`Percent Take` = b_2,
Scale = sigma * sd_y,
Normality = nu %>% log10()) %>%
mcmc_pairs(diag_fun = "dens",
off_diag_args = list(size = 1/8, alpha = 1/8))
```
One way to get the Pearson's correlation coefficients among the parameters is with `psych::lowerCor()`.
```{r, warning = F}
draws %>%
transmute(Intercept = b_0,
Spend = b_1,
`Percent Take` = b_2,
Scale = sigma * sd_y,
Normality = nu %>% log10()) %>%
psych::lowerCor(digits = 3)
```
If you like more control for customizing your pairs plots, you'll find a friend in the `ggpairs()` function from the [**GGally** package](https://cran.r-project.org/package=GGally) [@R-GGally]. We're going to blow past the default settings and customize the format for the plots in the upper triangle, the diagonal, and the lower triangle.
```{r, warning = F, message = F}
library(GGally)
my_upper <- function(data, mapping, ...) {
ggplot(data = data, mapping = mapping) +
geom_point(size = 1/2, shape = 21, stroke = 1/10,
color = "white", fill = "steelblue4") +
panel_border()
}
my_diag <- function(data, mapping, ...) {
ggplot(data = data, mapping = mapping) +
stat_wilke(point_size = 2) +
scale_x_continuous(NULL, breaks = NULL) +
scale_y_continuous(NULL, breaks = NULL) +
coord_cartesian(ylim = c(-0.01, NA)) +
panel_border()
}
my_lower <- function(data, mapping, ...) {
# get the x and y data to use the other code
x <- eval_data_col(data, mapping$x)
y <- eval_data_col(data, mapping$y)
# compute the correlations
corr <- cor(x, y, method = "p", use = "pairwise")
# plot the cor value
ggally_text(
label = formatC(corr, digits = 2, format = "f") %>% str_replace(., "0\\.", "."),
mapping = aes(),
color = "black",
size = 4) +
scale_x_continuous(NULL, breaks = NULL) +
scale_y_continuous(NULL, breaks = NULL) +
panel_border()
}
```
Let's see what we've done.
```{r, fig.width = 6, fig.height = 5.5, warning = F, message = F}
draws %>%
transmute(`Intercept~(beta[0])` = b_0,
`Spend~(beta[1])` = b_1,
`Percent~Take~(beta[2])` = b_2,
sigma = sigma * sd_y,
`log10(nu)` = nu %>% log10()) %>%
ggpairs(upper = list(continuous = my_upper),
diag = list(continuous = my_diag),
lower = list(continuous = my_lower),
labeller = label_parsed) +
theme(strip.text = element_text(size = 8))
```
For more ideas on customizing a `ggpairs()` plot, go [here](https://ggobi.github.io/ggally/articles/ggpairs.html) or [here](https://stackoverflow.com/questions/30858337/how-to-customize-lines-in-ggpairs-ggally) or [here](https://stackoverflow.com/questions/45873483/ggpairs-plot-with-heatmap-of-correlation-values).
Kruschke finished the subsection with the observation: "Sometimes we are interested in using the linear model to predict $y$ values for $x$ values of interest. It is straight forward to generate a large sample of credible $y$ values for specified $x$ values" (p. 519).
Like we practiced in the last chapter, the simplest way to do so in **brms** is with the `fitted()` function. For a quick example, say we wanted to know what the model would predict if we were to have a standard-score increase in spending and a simultaneous standard-score decrease in the percent taking the exam. We'd just specify those values in a tibble and feed that tibble into `fitted()` along with the model.
```{r}
nd <-
tibble(prcnt_take_z = -1,
spend_z = 1)
fitted(fit18.1,
newdata = nd)
```
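Note that `fitted()` returns draws of the conditional mean $\mu$. If you literally want credible $y$ values, ones that also incorporate $\sigma$ and $\nu$, you can swap in `predict()`. Here's a minimal sketch (my addition).
```{r}
# posterior-predictive draws of y itself for the same newdata
predict(fit18.1,
        newdata = nd)
```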
### Redundant predictors.
> As a simplified example of correlated predictors, think of just two data points: Suppose $y = 1$ for $\langle x_1, x_2 \rangle = \langle 1, 1 \rangle$ and $y = 2$ for $\langle x_1, x_2 \rangle = \langle 2, 2 \rangle$. The linear model, $y = \beta_1 x_1 + \beta_2 x_2$ is supposed to satisfy both data points, and in this case both are satisfied by $1 = \beta_1 + \beta_2$. Therefore, many different combinations of $\beta_1$ and $\beta_2$ satisfy the data. For example, it could be that $\beta_1 = 2$ and $\beta_2 = -1$, or $\beta_1 = 0.5$ and $\beta_2 = 0.5$, or $\beta_1 = 0$ and $\beta_2 = 1$. In other words, the credible values of $\beta_1$ and $\beta_2$ are anticorrelated and trade-off to fit the data. (p. 519)
Here's what those data look like. You would not want to fit a regression model with these data.
```{r}
tibble(x_1 = 1:2,
x_2 = 1:2,
y = 1:2)
```
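To see why, here's a quick illustration (my addition, not in the text): ordinary least squares simply cannot separate the two coefficients and returns `NA` for one of them.
```{r}
# with perfectly redundant predictors, lm() drops one coefficient
tibble(x_1 = 1:2,
       x_2 = 1:2,
       y = 1:2) %>%
  lm(y ~ 0 + x_1 + x_2, data = .)
```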
We can take a percentage and turn it into its complement, re-expressed as a proportion.
```{r}
percent_take <- 37
(100 - percent_take) / 100
```
Let's make a redundant predictor and then `standardize()` it.
```{r}
my_data <-
my_data %>%
mutate(prop_not_take = (100 - PrcntTake) / 100) %>%
mutate(prop_not_take_z = standardize(prop_not_take))
glimpse(my_data)
```
Here's the correlation matrix for `Spend`, `PrcntTake`, and `prop_not_take`, as seen on page 520.
```{r}
my_data %>%
select(Spend, PrcntTake, prop_not_take) %>%
cor()
```
We're ready to fit the redundant-predictor model.
```{r fit18.2}
fit18.2 <-
brm(data = my_data,
family = student,
satt_z ~ 0 + Intercept + spend_z + prcnt_take_z + prop_not_take_z,
prior = c(prior(normal(0, 2), class = b, coef = "Intercept"),
prior(normal(0, 2), class = b, coef = "spend_z"),
prior(normal(0, 2), class = b, coef = "prcnt_take_z"),
prior(normal(0, 2), class = b, coef = "prop_not_take_z"),
prior(normal(0, 1), class = sigma),
prior(exponential(one_over_twentynine), class = nu)),
chains = 4, cores = 4,
stanvars = stanvar(1/29, name = "one_over_twentynine"),
seed = 18,
# this will let us use `prior_samples()` later on
sample_prior = "yes",
file = "fits/fit18.02")
```
You might notice a few things about the `brm()` code. First, we have used the `~ 0 + Intercept + ...` syntax instead of the default syntax for intercepts. In normal situations, we would have been in good shape using the typical `~ 1 + ...` syntax for the intercept, especially given our use of standardized data. However, since **brms** version 2.5.0, using the `sample_prior` argument to draw samples from the prior distribution will no longer allow us to return samples from the typical **brms** intercept. Bürkner addressed the issue on the [Stan forums](https://discourse.mc-stan.org/t/prior-intercept-samples-no-longer-saved-in-brms-2-5-0/6107). As he pointed out, if you want to get prior samples from an intercept, you'll have to use the alternative syntax. The other thing to point out is that even though we used the same prior on all the predictors, including the intercept, we still explicitly spelled each out with the `coef` argument. If we hadn't been explicit like this, we would only get a single `b` vector from the `prior_samples()` function. But since we want separate vectors for each of our predictors, we used the verbose code. If you're having a difficult time understanding these two points, experiment. Fit the model in a few different ways with either the typical or the alternative intercept syntax and with either the verbose prior code or the simplified `prior(normal(0, 2), class = b)` code. And after each, execute `prior_samples(fit18.2)`. You'll see.
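If you'd rather just peek, here's a minimal sketch (my addition); in more recent **brms** versions, the same draws are also available via `prior_draws()`.
```{r}
# because we spelled out the priors with `coef`, each predictor gets its own
# column of prior samples rather than a single shared `b` column
prior_samples(fit18.2) %>%
  glimpse()
```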
Let's move on. Kruschke mentioned high autocorrelations in the prose. Here are the autocorrelation plots for our $\beta$'s.
```{r, fig.width = 6.5, fig.height = 4}
color_scheme_set(c("steelblue4", "steelblue4", "chocolate3", "steelblue4", "steelblue4", "steelblue4"))
draws <- as_draws_df(fit18.2)
draws %>%
mutate(chain = .chain) %>%
mcmc_acf(pars = vars(b_Intercept:b_prop_not_take_z),
lags = 10)
```
Looks like HMC made a big difference. The $N_\textit{eff}/N$ ratios weren't terrible, either.
```{r, fig.width = 6, fig.height = 1.75}
color_scheme_set(c("steelblue4", "steelblue4", "chocolate3", "steelblue4", "chocolate3", "chocolate3"))
neff_ratio(fit18.2)[1:6] %>%
mcmc_neff() +
yaxis_text(hjust = 0)
```
Earlier we computed the correlation matrix for the predictors, as Kruschke displayed on page 520. Here we'll compute the correlations among their coefficients in the model. The `brms::vcov()` function returns a variance/covariance matrix--or a correlation matrix when you set `correlation = T`--of the population-level parameters (i.e., the fixed effects). It returns the values to a decadent level of precision, so we'll simplify the output with `round()`.
```{r}
vcov(fit18.2, correlation = T) %>%
round(digits = 3)
```
The correlations among the coefficients for the redundant predictors were still very high.
> If any of the nondiagonal correlations are high (i.e., close to $+1$ or close to $−1$), be careful when interpreting the posterior distribution. Here, we can see that the correlation of PrcntTake and PropNotTake is $−1.0$, which is an immediate sign of redundant predictors. (p. 520)
You can really get a sense of the silliness of the parameters if you plot them. We'll use `stat_wilke()` to get a sense of densities and summaries of the $\beta$'s.
```{r, fig.width = 8, fig.height = 1.75, warning = F}
draws %>%
pivot_longer(b_Intercept:b_prop_not_take_z) %>%
# this line isn't necessary, but it does allow us to arrange the parameters on the y-axis
mutate(name = factor(name,
levels = c("b_prop_not_take_z", "b_prcnt_take_z", "b_spend_z", "b_Intercept"))) %>%
ggplot(aes(x = value, y = name)) +
geom_vline(xintercept = 0, color = "white") +
stat_wilke(normalize = "xy", point_size = 3) +
labs(x = NULL,
y = NULL) +
coord_cartesian(xlim = c(-5, 5),
ylim = c(1.4, NA)) +
theme(axis.text.y = element_text(hjust = 0),
legend.position = c(.76, .8))
```
Yeah, on the standardized scale those are some ridiculous estimates. Let's update our `make_beta_0()` function.
```{r}
make_beta_0 <- function(zeta_0, zeta_1, zeta_2, zeta_3, sd_x_1, sd_x_2, sd_x_3, sd_y, m_x_1, m_x_2, m_x_3, m_y) {
sd_y * zeta_0 + m_y - sd_y * ((zeta_1 * m_x_1 / sd_x_1) + (zeta_2 * m_x_2 / sd_x_2) + (zeta_3 * m_x_3 / sd_x_3))
}
```
```{r, warning = F}
sd_x_1 <- sd(my_data$Spend)
sd_x_2 <- sd(my_data$PrcntTake)
sd_x_3 <- sd(my_data$prop_not_take)
sd_y <- sd(my_data$SATT)
m_x_1 <- mean(my_data$Spend)
m_x_2 <- mean(my_data$PrcntTake)
m_x_3 <- mean(my_data$prop_not_take)
m_y <- mean(my_data$SATT)
draws <-
draws %>%
transmute(Intercept = make_beta_0(zeta_0 = b_Intercept,
zeta_1 = b_spend_z,
zeta_2 = b_prcnt_take_z,
zeta_3 = b_prop_not_take_z,
sd_x_1 = sd_x_1,
sd_x_2 = sd_x_2,
sd_x_3 = sd_x_3,
sd_y = sd_y,
m_x_1 = m_x_1,
m_x_2 = m_x_2,
m_x_3 = m_x_3,
m_y = m_y),
Spend = make_beta_j(zeta_j = b_spend_z,
sd_j = sd_x_1,
sd_y = sd_y),
`Percent Take` = make_beta_j(zeta_j = b_prcnt_take_z,
sd_j = sd_x_2,
sd_y = sd_y),
`Proportion not Take` = make_beta_j(zeta_j = b_prop_not_take_z,
sd_j = sd_x_3,
sd_y = sd_y),
Scale = sigma * sd_y,
Normality = nu %>% log10())
glimpse(draws)
```