Double descent experiments

In this repo I'm trying to reproduce some double descent results from several papers.

Nothing particularly useful here (unless you're interested in double descent).

Example

Reproducing polynomial regression results from the Double Descent Demystified paper:

(Figure panels: underparameterized regime, interpolation threshold, overparameterized regime.)
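Below is a minimal, self-contained sketch of this kind of experiment (an assumed setup, not necessarily the exact script behind the plots above): least-squares polynomial regression on Legendre features, solved with the pseudoinverse so that overparameterized fits are minimum-norm interpolants. The true function, noise level, sample sizes, and degree range are all illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Assumed toy setup: n_train noisy samples of a smooth 1-D function.
n_train, n_test, noise = 20, 1000, 0.5
f = lambda x: x * np.cos(2 * np.pi * x)

x_train = rng.uniform(-1, 1, n_train)
y_train = f(x_train) + noise * rng.standard_normal(n_train)
x_test = rng.uniform(-1, 1, n_test)
y_test = f(x_test)

def features(x, degree):
    # Legendre polynomial features up to `degree` (degree + 1 columns).
    return np.polynomial.legendre.legvander(x, degree)

degrees = list(range(1, 100))
test_mse = []
for d in degrees:
    X = features(x_train, d)
    # The pseudoinverse gives the least-squares fit below the interpolation
    # threshold and the minimum-norm interpolant above it.
    w = np.linalg.pinv(X) @ y_train
    pred = features(x_test, d) @ w
    test_mse.append(np.mean((pred - y_test) ** 2))

plt.semilogy(degrees, test_mse)
plt.axvline(n_train - 1, linestyle="--", label="interpolation threshold")
plt.xlabel("polynomial degree")
plt.ylabel("test MSE")
plt.legend()
plt.show()
```

With this setup the test error typically falls, spikes near degree $n-1$, and falls again at much higher degrees, matching the three regimes in the figure.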

Background

One of the most fervent claims made by modern-day DL researchers has long been that "bigger models work better!". This conflicts with standard statistical learning theory, which predicts that bigger models will overfit the training data, interpolate the noise, and fail to generalize.

Who's right? Enter double descent.

Double descent describes the phenomenon where a model's test error, as a function of model complexity or size, doesn't follow the traditional U-shaped bias-variance tradeoff curve. Instead, after an initial descent (error decreases) and a subsequent ascent (error increases due to overfitting), there is a second descent in error even as model complexity continues to grow beyond the interpolation threshold.

One point for modern DL folks (although this doesn't necessarily contradict classic bias-variance).

Linear regression example

With model complexity measured by the number of parameters $p$ and with $n$ training samples, the interpolation threshold is reached when $p$ equals $n$. Traditionally, models with $p$ greater than $n$ are expected to generalize poorly on new data due to overfitting; under double descent, however, the generalization error decreases again once $p$ grows well past the threshold.

Assume a scenario where you fit a polynomial regression model:

$$y_i=f\left(x_i\right)+\epsilon_i$$

where $f(x)$ is the true function, $x_i$ are the data points, $y_i$ are the observed values, and $\epsilon_i$ represents noise.

As the degree of the polynomial (a proxy for model complexity) increases, the fit to the training data becomes perfect once the degree $d$ reaches $n-1$. If $d$ surpasses $n-1$, classical theory predicts a blowup in generalization error due to high variance. However, as observed in the double descent phenomenon, if $d$ continues to increase well beyond $n$, the test error decreases again.
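A quick numeric check of this claim, under the same kind of assumed setup as the sketch above (Legendre features, minimum-norm fit via the pseudoinverse):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15  # number of training samples
x_train = np.sort(rng.uniform(-1, 1, n))
y_train = np.sin(np.pi * x_train) + 0.3 * rng.standard_normal(n)
x_test = np.linspace(-1, 1, 500)
y_test = np.sin(np.pi * x_test)  # noiseless targets for test error

for d in (3, n - 1, 10 * n):
    Xtr = np.polynomial.legendre.legvander(x_train, d)
    Xte = np.polynomial.legendre.legvander(x_test, d)
    w = np.linalg.pinv(Xtr) @ y_train  # minimum-norm solution when d >= n - 1
    train_mse = np.mean((Xtr @ w - y_train) ** 2)
    test_mse = np.mean((Xte @ w - y_test) ** 2)
    print(f"degree {d:4d}: train MSE {train_mse:.2e}, test MSE {test_mse:.2e}")
```

In typical runs the training error hits (numerical) zero at degree $n-1$ and stays there, while the test error peaks near that threshold and comes back down at much larger degrees.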
