Commit

Deploying to gh-pages from @ f8761d9 🚀
danhalligan committed Dec 31, 2024
1 parent 7ea5709 commit a20e386
Showing 14 changed files with 102 additions and 99 deletions.
4 changes: 2 additions & 2 deletions 03-linear-regression.md

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions 04-classification.md
@@ -679,16 +679,16 @@ fit <- knn(
```
##
## fit Down Up
## Down 21 30
## Up 22 31
## Down 21 29
## Up 22 32
```

``` r
sum(diag(t)) / sum(t)
```

```
## [1] 0.5
## [1] 0.5096154
```
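
As a quick check of the updated output (not part of the commit), the confusion
matrix above gives an overall accuracy of $(21 + 32)/104 \approx 0.51$; a
minimal sketch assuming that table:

``` r
# Rebuild the updated confusion matrix (rows = predicted, columns = observed)
# and compute the overall accuracy, as in the chunk above.
t <- matrix(c(21, 22, 29, 32), nrow = 2,
            dimnames = list(fit = c("Down", "Up"), obs = c("Down", "Up")))
sum(diag(t)) / sum(t)  # (21 + 32) / 104 = 0.5096154
```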

> h. Repeat (d) using naive Bayes.
2 changes: 1 addition & 1 deletion 05-resampling-methods.md
@@ -170,7 +170,7 @@ mean(store)
```
```
## [1] 0.6317
## [1] 0.638
```
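
For comparison (not part of the commit), the simulated value can be checked
against the closed-form probability that a given observation appears in a
bootstrap sample of size $100$:

``` r
# Probability that observation 4 is included when sampling 100 values
# with replacement from 1..100.
1 - (1 - 1 / 100)^100  # 0.6339677, close to 1 - exp(-1)
```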

The probability of including $4$ when resampling numbers $1...100$ is close to
66 changes: 33 additions & 33 deletions 10-deep-learning.md
@@ -393,15 +393,15 @@ npred <- predict(nn, x[testid, ])
```

```
## 6/6 - 0s - 55ms/epoch - 9ms/step
## 6/6 - 0s - 56ms/epoch - 9ms/step
```

``` r
mean(abs(y[testid] - npred))
```

```
## [1] 2.264605
## [1] 2.230619
```

In this case, the neural network outperforms logistic regression, having a lower
@@ -433,7 +433,7 @@ model <- application_resnet50(weights = "imagenet")

```
## Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
## 102967424/102967424 [==============================] - 1s 0us/step
```

``` r
@@ -721,15 +721,15 @@ kpred <- predict(model, xrnn[!istrain,, ])
```

```
## 56/56 - 0s - 59ms/epoch - 1ms/step
## 56/56 - 0s - 58ms/epoch - 1ms/step
```

``` r
1 - mean((kpred - arframe[!istrain, "log_volume"])^2) / V0
```

```
## [1] 0.414211
## [1] 0.412651
```
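
The reported quantity is a test-set $R^2$: `V0` is assumed (as in the ISLR lab
setup, not shown in this diff) to be the variance of `log_volume` over the test
period, i.e. the error of a constant prediction:

``` r
# Assumed definition of the baseline V0 (from the lab setup): the variance of
# the response over the test period, so 1 - MSE / V0 behaves like a test R^2.
V0 <- var(arframe[!istrain, "log_volume"])
```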

Both models estimate the same number of coefficients/weights (16):
@@ -762,25 +762,25 @@ model$get_weights()

```
## [[1]]
## [,1]
## [1,] -0.028455125
## [2,] 0.096374370
## [3,] 0.170092896
## [4,] -0.009119725
## [5,] 0.120137088
## [6,] -0.087098733
## [7,] 0.034915596
## [8,] 0.072963834
## [9,] 0.206371158
## [10,] -0.030112710
## [11,] 0.032973956
## [12,] -0.861049652
## [13,] 0.094083741
## [14,] 0.508985162
## [15,] 0.523820460
## [,1]
## [1,] -0.02934243
## [2,] 0.09882952
## [3,] 0.15910484
## [4,] -0.00699739
## [5,] 0.11945468
## [6,] -0.01814298
## [7,] 0.03936224
## [8,] 0.08019262
## [9,] 0.14023538
## [10,] -0.02768653
## [11,] 0.03386866
## [12,] -0.80795944
## [13,] 0.09746164
## [14,] 0.50933617
## [15,] 0.52033675
##
## [[2]]
## [1] -0.006536028
## [1] -0.007940574
```
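
As a quick check (not part of the commit), the weights above consist of a
$15 \times 1$ kernel plus a single bias, i.e. 16 parameters, matching the count
stated earlier:

``` r
# Total number of parameters in the weights list: 15 slope weights + 1 bias = 16.
sum(sapply(model$get_weights(), length))
```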

The flattened RNN has a lower $R^2$ on the test data than our `lm` model
@@ -833,11 +833,11 @@ xfun::cache_rds({
```

```
## 56/56 - 0s - 65ms/epoch - 1ms/step
## 56/56 - 0s - 63ms/epoch - 1ms/step
```

```
## [1] 0.426638
## [1] 0.4259731
```

This approach improves our $R^2$ over the linear model above.
@@ -906,11 +906,11 @@ xfun::cache_rds({
```

```
## 56/56 - 0s - 137ms/epoch - 2ms/step
## 56/56 - 0s - 134ms/epoch - 2ms/step
```

```
## [1] 0.4488983
## [1] 0.452399
```

### Question 13
@@ -966,21 +966,21 @@ xfun::cache_rds({

```
## Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
## 17464789/17464789 [==============================] - 0s 0us/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 15s - 15s/epoch - 20ms/step
## 782/782 - 16s - 16s/epoch - 20ms/step
## 782/782 - 16s - 16s/epoch - 20ms/step
```



| Max Features| Accuracy|
|------------:|--------:|
| 1000| 0.85884|
| 3000| 0.85708|
| 5000| 0.86468|
| 10000| 0.75284|
| 1000| 0.85964|
| 3000| 0.87856|
| 5000| 0.86376|
| 10000| 0.87292|
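
For context (not part of the commit), a minimal sketch of how the dictionary
size in the table is varied, assuming the standard `keras` IMDB loader used in
the lab, where `num_words` keeps only the most frequent words:

``` r
library(keras)
# Hypothetical example: restrict the dictionary to the 1000 most frequent words.
max_features <- 1000
imdb <- dataset_imdb(num_words = max_features)
c(c(x_train, y_train), c(x_test, y_test)) %<-% imdb
```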

Varying the dictionary size does not have a substantial impact on our estimates
of accuracy. However, the models do take a considerable amount of time to fit and
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-12-1.png
Binary file modified 10-deep-learning_files/figure-html/unnamed-chunk-21-1.png
9 changes: 4 additions & 5 deletions 13-multiple-testing.md
@@ -254,16 +254,15 @@ sum(p.adjust(pvals, method = "holm") < 0.1)
> c. What is the false discovery rate associated with using the Bonferroni
> procedure to control the FWER at level $\alpha = 0.05$?
False discovery rate is the expected ratio of false positives to total
positives. There are never any false positives (black points below the line).
There are always the same number of total positives (8).
The false discovery rate is the expected ratio of false positives (FP) to
total positives (FP + TP).
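
In symbols,

$$\text{FDR} = E\left[\frac{\text{FP}}{\text{FP} + \text{TP}}\right],$$

where FP and TP are counted among the rejected (positive) hypotheses.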

For panels 1, 2, 3 this would be 0/8, 0/8 and 0/8 respectively.
For panels 1, 2, 3 this would be 0/7, 0/7 and 0/3 respectively.

> d. What is the false discovery rate associated with using the Holm procedure
> to control the FWER at level $\alpha = 0.05$?
For panels 1, 2, 3 this would be 0/8, 0/8 and 0/8 respectively.
For panels 1, 2, 3 this would be 0/7, 0/8 and 0/8 respectively.

> e. How would the answers to (a) and (c) change if we instead used the
> Bonferroni procedure to control the FWER at level $\alpha = 0.001$?
6 changes: 3 additions & 3 deletions classification.html
@@ -1016,10 +1016,10 @@ <h3><span class="header-section-number">4.2.1</span> Question 13<a href="classif
<span id="cb199-6"><a href="classification.html#cb199-6" aria-hidden="true"></a>(t &lt;-<span class="st"> </span><span class="kw">table</span>(fit, Weekly[<span class="op">!</span>train, ]<span class="op">$</span>Direction))</span></code></pre></div>
<pre><code>##
## fit Down Up
## Down 21 30
## Up 22 31</code></pre>
## Down 21 29
## Up 22 32</code></pre>
<div class="sourceCode" id="cb201"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb201-1"><a href="classification.html#cb201-1" aria-hidden="true"></a><span class="kw">sum</span>(<span class="kw">diag</span>(t)) <span class="op">/</span><span class="st"> </span><span class="kw">sum</span>(t)</span></code></pre></div>
<pre><code>## [1] 0.5</code></pre>
<pre><code>## [1] 0.5096154</code></pre>
<blockquote>
<ol start="8" style="list-style-type: lower-alpha">
<li>Repeat (d) using naive Bayes.</li>