# Building up to GAMs
```{r setupBuild, include=FALSE}
# knitr::opts_chunk$set(cache.rebuild=F, cache = T)
```
```{r polyreg_repeat, echo=FALSE}
plot_ly(data = d) %>%
  add_markers(
    ~ x,
    ~ y,
    marker = list(color = '#D55E00', opacity = .1),
    showlegend = F
  ) %>%
  add_lines(~ x, ~ fits, color = ~ polynomial, data = fits) %>%
  config(displayModeBar = F) %>%
  theme_plotly()
```
## Piecewise polynomial
So how might we solve the problem we saw with polynomial regression? One way would be to divide the data into chunks at various points (*knots*), and fit a linear regression or polynomial model within each subset of data. The following fits a cubic polynomial to each of 10 equal-width subsets of x.
```{r piecewisePoly, echo=FALSE}
knots = seq(0, 1, by = .1)

d$xcut = cut(x, knots, right = F)
d$xcut = factor(d$xcut, levels = c('Int', levels(d$xcut))) # add int for later

fits = d %>%
  group_by(xcut) %>%
  nest() %>%
  mutate(
    res = purrr::map(data, \(df) tibble(
      x   = df$x,
      y   = df$y,
      fit = fitted(lm(y ~ poly(x, 3), data = df))
    ))
  ) %>%
  unnest(cols = res) %>%
  select(-data)

# plot_ly(fits) %>%
#   add_markers(~x, ~y, marker = list(color = '#D55E00', opacity = .5), showlegend = F) %>%
#   add_lines(~x, ~fit, color = I('#56B4E9'), showlegend = F) %>%
#   config(displayModeBar = F) %>%
#   theme_plotly()

fits %>%
  ggplot(aes(x, y)) +
  geom_point() +
  geom_line(aes(y = fit, group = xcut), alpha = 1, size = 1)
```
<br>
While this is a notably better fit than, say, a single cubic polynomial, it is still unsatisfactory. The separate fits are disconnected, leading to sometimes notably different predictions for values that are close together. And because each model is fit to only a small amount of data, it tends to *overfit* that region, producing a fairly 'wiggly' result most of the time.
## What is a GAM?
In essence, a GAM is a GLM. What distinguishes it from the ones you know is that it is composed of a sum of smooth functions of the features, instead of or in addition to the standard linear feature contributions. Consider the standard (g)lm[^subscripts]:
$$y = b_0 + b_1\cdot x + \epsilon$$
In the above, $y$ is our target, $x$ the feature variable, the $b$ are the coefficients, and $\epsilon$ is the error.
For the GAM, we can specify it generally as follows:
$$y = f(x) + \epsilon$$
Now we are dealing with some specific (additive) function of the inputs, which does not require the (possibly transformed) $y$ to be a linear function of $x$. It involves choosing a *basis*, which in technical terms means choosing a space of functions of which $f$ is an element. On the practical side, it is a means to capture nonlinear relationships. As we'll see later, an example would be choosing a *cubic spline* for the basis. Choosing a basis also means choosing the basis functions, which are what will actually go into the analysis. We can add more detail as follows:
$$y = f(x) + \epsilon = \sum_{j=1}^{d}F_j(x)\beta_j + \epsilon$$
Above, each $F_j$ is a [basis function](https://en.wikipedia.org/wiki/Basis_function), i.e. a transformation of $x$ that depends on the type of basis chosen, and the $\beta_j$ are the corresponding regression coefficients. This might sound complicated, until you realize you've done this before! Let's go back to the quadratic polynomial, which we can now say uses the polynomial basis.
$$f(x) = b_0 + b_1\cdot x^1 + \ldots + b_d\cdot x^d$$
In our previous case, $d=2$ and we have our standard regression with a quadratic term, but in fact, we can use this approach to produce the bases for any polynomial.
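To make the 'you've done this before' point concrete, here is a minimal sketch (assuming the `d` data frame with `x` and `y` from the earlier chunks) showing that the quadratic model is just a linear model on two basis columns.
```{r poly_basis_columns, eval=FALSE}
# the quadratic 'basis' is just two columns of transformed x
B = cbind(x1 = d$x, x2 = d$x^2)
head(B)

# a linear model on those columns is the familiar quadratic regression
fit_raw  = lm(d$y ~ B)

# poly() builds an orthogonalized version of the same basis;
# the coefficients differ, but the fitted values are identical
fit_poly = lm(y ~ poly(x, 2), data = d)

all.equal(fitted(fit_raw), fitted(fit_poly))
```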
As far as mechanics go, these basis functions become extra columns in the data, just like your $x^2$ etc. from the polynomial approach, and then you just run a GLM! However, an additional aspect is that we will use *penalized estimation*, something that is quite common in some modeling contexts like machine learning applications (but not common enough in other areas).
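As a sketch of those mechanics (again leaning on the `d` data from earlier, and using the base splines package to build the basis rather than mgcv's internal machinery), we can generate spline basis columns ourselves and hand them to `lm`. The penalization described next is what a GAM adds on top of this unpenalized fit.
```{r spline_basis_columns, eval=FALSE}
library(splines)

# a cubic B-spline basis evaluated at x: each column is one basis function F_j(x)
B = bs(d$x, df = 10)
dim(B)   # n rows, 10 basis columns

# an unpenalized spline fit is 'just a GLM' with those extra columns as features
fit_spline = lm(d$y ~ B)
head(fitted(fit_spline))
```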
For those new to penalized regression, again consider a standard GLM, which we usually estimate with maximum likelihood. Denote the likelihood for the estimated coefficients $\beta$ as $l(\beta)$. Conceptually, we can write the *penalized likelihood* as follows:
$$l_p(\beta)= l(\beta) - \color{darkred}{\mathcal{penalty}}$$
If you prefer least squares as the loss function, we can put it as:
$$\mathcal{Loss} = \sum (y-X\beta)^2 + \color{darkred}{\mathcal{penalty}}$$
The penalty reflects the complexity of the model, specifically the size of the coefficients for the smooth terms. The practical upshot is that it helps keep us from overfitting the data, where our smooth function might otherwise get too wiggly[^wiggly].
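To see what the penalty does mechanically, here is a rough sketch rather than mgcv's actual estimation: a simple ridge-type penalty applied to the spline coefficients from the previous sketch. mgcv penalizes a basis-specific measure of wiggliness and chooses the penalty strength for you, but the shrinkage idea is the same.
```{r penalty_sketch, eval=FALSE}
library(splines)

# design matrix: intercept plus cubic B-spline basis columns
X = cbind(1, bs(d$x, df = 10))
y = d$y

# ordinary least squares: solve (X'X) beta = X'y
beta_ols = solve(crossprod(X), crossprod(X, y))

# penalized least squares with a ridge-type penalty, lambda * sum(beta^2)
# (illustrative only; mgcv penalizes wiggliness rather than raw coefficient size)
lambda  = 10
P       = lambda * diag(ncol(X))
P[1, 1] = 0                      # don't penalize the intercept
beta_pen = solve(crossprod(X) + P, crossprod(X, y))

# the penalized coefficients are shrunk, which translates into a smoother fit
round(cbind(unpenalized = c(beta_ols), penalized = c(beta_pen)), 3)
```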
In summary, you're doing a GLM, just a slightly modified one. You could have been using penalized GLMs all along (e.g., lasso or ridge regression), and you'd likely have slightly better predictive capability if you had. We can use different loss functions, different penalties, etc., but the concepts are the main thing to note here. For a little more detail on this section, visit the [technical section][GAM].
## Polynomial spline
Let's get things started by demonstrating the results from a GAM that uses a *polynomial spline* for the basis. The brave may refer to the [technical details][a detailed example] section for additional insights. Conceptually, it is useful to continue to think of the piecewise approach we talked about before. However, we'll end up with a smoother, connected result when all is said and done. Very nice!
```{r polysplinedemo, echo=FALSE}
# d %>%
#   add_predictions(model = gam(y ~ s(x, bs = 'cr', k = 12))) %>%
#   arrange(x) %>%
#   plot_ly() %>%
#   add_markers(~x, ~y, marker = list(color = '#D55E00', opacity = .5), showlegend = F) %>%
#   add_lines(~x, ~pred, color = I('#56B4E9'), showlegend = F) %>%
#   config(displayModeBar = F) %>%
#   theme_plotly()

d %>%
  add_predictions(model = gam(y ~ s(x, bs = 'cr', k = 12))) %>%
  ggplot(aes(x, y)) +
  geom_point() +
  geom_line(aes(y = pred), alpha = 1, size = 1)
```
<br>
This is much better. We now have a connected result that isn't overly wiggly, and more importantly, actually fits the data quite well.
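For reference, the plot above comes from a fit along these lines (the plotting code is hidden): a smooth of x using a cubic regression spline basis (`bs = 'cr'`) with up to 12 basis functions. Inspecting the summary shows the effective degrees of freedom the penalty settled on, which will typically be well below the maximum allowed by `k`.
```{r gam_fit_summary, eval=FALSE}
library(mgcv)

# same specification as the (hidden) plotting chunk, with the data made explicit
mod_gam = gam(y ~ s(x, bs = 'cr', k = 12), data = d)

# the edf value reports how wiggly the penalized smooth ended up being
summary(mod_gam)
```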
[^subscripts]: I am leaving out subscripts where I don't think it helps, and as these are conceptual depictions, they usually don't.
[^wiggly]: Wiggly is a highly technical term I will use throughout the presentation.