You should always try to give normalized data to LassoNet as it uses neural networks under the hood.

You can read the full [documentation](https://lasso-net.github.io//lassonet/api/) or browse the [examples](https://github.com/lasso-net/lassonet/tree/master/examples) that cover most features. We also provide a Quickstart section below.

## Quickstart

Here we guide you through the features of LassoNet and how you typically use them.

### Task

LassoNet is based on neural networks and can be used for any kind of data. Currently, we have implemented losses for the following tasks:

- regression: `LassoNetRegressor`
- classification: `LassoNetClassifier`
- Cox regression: `LassoNetCoxRegressor`
- interval-censored Cox regression: `LassoNetIntervalRegressor`

If features naturally belong to groups, you can use the `groups` parameter to specify them. This will allow the model to put a penalty on groups of features instead of each feature individually.
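
For instance, a minimal sketch with six features split into two groups (here `groups` is given as lists of feature indices; check the documentation for the exact accepted formats):

```python
from lassonet import LassoNetRegressor

# Penalize features 0-2 and 3-5 as two groups rather than individually.
model = LassoNetRegressor(groups=[[0, 1, 2], [3, 4, 5]])
```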

### Data preparation

You should always normalize your data before passing it to the model, to avoid feeding it values that are too large or too small.
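
For example, with scikit-learn's `StandardScaler` (scikit-learn is just one convenient option, not a LassoNet requirement):

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)  # reuse the statistics fitted on the training set
```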

### What do you want to do?

The LassoNet family of models supports several workflows.

Here are some examples of what you can do with LassoNet. Note that you can swap `LassoNetRegressor` for any of the other models to perform the same operations.

#### Using the base interface

The base interface implements a `.fit()` method, but it is of limited use: it computes a regularization path without storing any intermediate result.

Usually, you want to store the intermediate results (with `return_state_dicts=True`) and then load one of the models from the path back into the estimator to inspect it.

```python
from lassonet import LassoNetRegressor, plot_path

model = LassoNetRegressor()
path = model.path(X_train, y_train, return_state_dicts=True)
plot_path(model, X_test, y_test)

# choose `best_id` based on the plot
model.load(path[best_id].state_dict)
print(model.score(X_test, y_test))
```

You can also retrieve the mask of the selected features and train a dense model on them.

```python
selected = path[best_id].selected
model.fit(X_train[:, selected], y_train, dense_only=True)
print(model.score(X_test[:, selected], y_test))
```

The fitted model exposes a `feature_importances_` attribute: for each feature, the value of the L1 regularization parameter at which that feature is removed from the path. This can give you an idea of the most important features but is very unstable across runs. You should use stability selection to select the most stable features.
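
For instance, to rank features by this importance (a sketch; depending on the version, `feature_importances_` may be a torch tensor, hence the conversion):

```python
import numpy as np

importances = np.asarray(model.feature_importances_)
# Features removed at larger values of the regularization parameter are more important.
ranking = np.argsort(importances)[::-1]
print(ranking[:10])  # indices of the ten most important features
```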

#### Using the cross-validation interface

The cross-validation interface computes validation scores on multiple folds before running a final path on the whole training dataset with the best regularization parameter.

```python
from lassonet import LassoNetRegressorCV

model = LassoNetRegressorCV()
model.fit(X_train, y_train)
model.score(X_test, y_test)
```

You can also use the `plot_cv` method to get more information.

Some attributes give you more information about the best model, like `best_lambda_`, `best_selected_` or `best_cv_score_`.

This information is useful to pass to a base model to train it from scratch with the best regularization parameter or the best subset of features.
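
A sketch of how you might use these attributes (assuming `best_selected_` is a boolean mask over features, like `selected` in the base interface):

```python
print("Best lambda:", model.best_lambda_)
print("Best CV score:", model.best_cv_score_)

# Retrain a dense model from scratch on the selected features.
dense = LassoNetRegressor()
dense.fit(X_train[:, model.best_selected_], y_train, dense_only=True)
print(dense.score(X_test[:, model.best_selected_], y_test))
```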

#### Using the stability selection interface

[Stability selection](https://arxiv.org/abs/0809.2932) is a method to select the most stable features by running the model multiple times on different random subsamples of the data. It is probably the best way to select the most important features.

```python
model = LassoNetRegressor()
oracle, order, wrong, paths, prob = model.stability_selection(X_train, y_train)
```

- `oracle` is a heuristic that can detect the most stable features when introducing noise.
- `order` sorts the features by their decreasing importance.
- `wrong[k]` is a measure of error when selecting the first k+1 features (read [this paper](https://arxiv.org/pdf/2206.06885) for more details). Plotting it shows the error as a function of the number of selected features, as sketched after this list.
- `paths` stores all the computed paths.
- `prob` is the probability that a feature is selected at each value of the regularization parameter.
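
For example, with matplotlib:

```python
import matplotlib.pyplot as plt

plt.plot(wrong)
plt.xlabel("number of selected features")
plt.ylabel("error measure")
plt.show()
```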

In practice, you might want to train multiple dense models on different subsets of features to get a better understanding of the importance of each feature.

For example:

```python
# Evaluate dense models on the k most important features, for k = 1 to 10.
for k in range(1, 11):
    selected = order[:k]
    model.fit(X_train[:, selected], y_train, dense_only=True)
    print(model.score(X_test[:, selected], y_test))
```

### Important parameters

Here are the most important parameters you should be aware of (a configuration sketch follows the list):

- `hidden_dims`: the number of neurons in each hidden layer. The default value is `(100,)` but you might want to try smaller and deeper networks like `(10, 10)`.
- `path_multiplier`: the multiplicative factor between successive values of the regularization parameter along the path. The closer it is to `1`, the finer the path and the longer training takes. The default value is a trade-off for fast training, but you might want to try smaller values like `1.01` or `1.005` to get a better model.
- `lambda_start`: the starting value of the regularization parameter. The default value is `"auto"`, in which case the model tries to select a good starting value according to an unpublished heuristic (read the code to know more). You can identify a bad `lambda_start` by plotting the path. If `lambda_start` is too small, the model stays dense for a long time, which does not hurt performance but takes longer. If `lambda_start` is too large, the number of selected features decreases very quickly and the path is not accurate; in that case, decrease `lambda_start`.
- `gamma`: adds an L2 penalty on the network weights. The default is `0.0` (no penalty), but a small value can improve performance, especially on small datasets.
- more standard MLP training parameters are accessible: `dropout`, `batch_size`, `optim`, `n_iters`, `patience`, `tol`, `backtrack`, `val_size`. In particular, `batch_size` is useful to run stochastic gradient descent instead of full-batch gradient descent, and to avoid memory issues on large datasets.
- `M`: this parameter has almost no effect on the model.
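
For illustration, here is a configuration sketch combining some of the parameters above (the values are illustrative starting points, not recommendations):

```python
model = LassoNetRegressor(
    hidden_dims=(10, 10),  # smaller and deeper network
    path_multiplier=1.01,  # finer (and slower) regularization path
    gamma=0.01,            # small L2 penalty
    batch_size=256,        # stochastic instead of full-batch gradient descent
)
```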

## Features
