diff --git a/source/content/posts/Lottery ticket hypothesis.md b/source/content/posts/Lottery ticket hypothesis.md
index f8a1ce6..a24fa80 100644
--- a/source/content/posts/Lottery ticket hypothesis.md
+++ b/source/content/posts/Lottery ticket hypothesis.md
@@ -9,7 +9,7 @@ weight: 100
 ---
 # The Lottery Ticket Hypothesis
-![Pruned Network (Simplified Landscape)](../../assets/images/pruned_landscape.png)
+![Pruned Network (Simplified Landscape)](/images/pruned_landscape.png)
 This chart shows a simplified 3D landscape illustrating the loss function of the neural network.
 - Left: The original "full" network, which is overparameterized.
 - Right: The pruned network, which has less bumps and is cheaper to optimize. However, in some points, it can be less "certain" than the original network, which explains the higher training loss despite the higher accuracy on the test set. You can see it interpolating approximately rather than partially overfitting.
diff --git a/source/assets/images/pruned_landscape.png b/source/static/images/pruned_landscape.png
similarity index 100%
rename from source/assets/images/pruned_landscape.png
rename to source/static/images/pruned_landscape.png
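
For context on the pruned-vs-full comparison the chart describes, the lottery ticket hypothesis is usually demonstrated with iterative magnitude pruning: train, prune the smallest-magnitude weights, rewind the surviving weights to their initial values, and repeat. The sketch below illustrates that loop in PyTorch; the toy model, synthetic data, and hyperparameters are illustrative assumptions and not the setup behind the chart in the post.

```python
# Hedged sketch: lottery-ticket-style iterative magnitude pruning
# (train -> prune smallest-magnitude weights -> rewind survivors to
# their initial values). Model, data, and hyperparameters are
# illustrative assumptions, not the post's actual experiment.
import copy

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
initial_state = copy.deepcopy(model.state_dict())  # weights at initialization

# Toy data standing in for a real training set.
x = torch.randn(256, 20)
y = (x.sum(dim=1) > 0).long()


def train(model, epochs=50):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()


linear_layers = {name: m for name, m in model.named_children()
                 if isinstance(m, nn.Linear)}

for _ in range(3):
    train(model)
    # Prune 20% of the *remaining* weights in each linear layer by magnitude.
    for layer in linear_layers.values():
        prune.l1_unstructured(layer, name="weight", amount=0.2)
    # Rewind surviving weights to their original initialization; the masks
    # registered by prune keep the removed connections at zero.
    with torch.no_grad():
        for name, layer in linear_layers.items():
            layer.weight_orig.copy_(initial_state[f"{name}.weight"])
            layer.bias.copy_(initial_state[f"{name}.bias"])

train(model)  # train the pruned "winning ticket" from its original init
```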