Update Book
actions-user committed Jul 17, 2023
1 parent 46ceb1a commit 8a426cf
Showing 5 changed files with 6 additions and 6 deletions.
Binary file modified images/ale-bike-cat-1.jpeg
Binary file modified interpretable-ml.epub
Binary file not shown.
4 changes: 2 additions & 2 deletions intro.html
@@ -493,12 +493,12 @@ <h1><span class="header-section-number">Chapter 2</span> Introduction<a href="in
Internalizing the basic concepts also empowers you to better understand and evaluate any new paper on interpretability published on <a href="https://arxiv.org/">arxiv.org</a> in the last 5 minutes since you began reading this book (I might be exaggerating the publication rate).</p>
<p>This book starts with some (dystopian) <a href="storytime.html#storytime">short stories</a> that are not needed to understand the book, but hopefully will entertain and make you think.
Then the book explores the concepts of <a href="interpretability.html#interpretability">machine learning interpretability</a>.
-We will discuss when interpretability is important and what different types of explanations there are.
+We will discuss when interpretability is important and the different types of explanations that exist.
Terms used throughout the book can be looked up in the <a href="terminology.html#terminology">Terminology chapter</a>.
Most of the models and methods explained are presented using real data examples which are described in the <a href="data.html#data">Data chapter</a>.
One way to make machine learning interpretable is to use <a href="simple.html#simple">interpretable models</a>, such as linear models or decision trees.
The other option is the use of <a href="agnostic.html#agnostic">model-agnostic interpretation tools</a> that can be applied to any supervised machine learning model.
-Model-agnostic methods can be divided <a href="global-methods.html#global-methods">global methods</a> that describe the average behavior of the model and <a href="local-methods.html#local-methods">local methods</a> that explain individual predictions.
+Model-agnostic methods can be divided into <a href="global-methods.html#global-methods">global methods</a> that describe the average behavior of the model, and <a href="local-methods.html#local-methods">local methods</a> that explain individual predictions.
The Model-Agnostic Methods chapter deals with methods such as <a href="pdp.html#pdp">partial dependence plots</a> and <a href="feature-importance.html#feature-importance">feature importance</a>.
Model-agnostic methods work by changing the input of the machine learning model and measuring changes in the prediction output.
The book ends with an optimistic outlook on what <a href="future.html#future">the future of interpretable machine learning</a> might look like.</p>
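
The paragraph above describes model-agnostic methods as working by changing the input of the machine learning model and measuring changes in the prediction output. Below is a minimal sketch of that idea, using permutation feature importance (one of the methods linked above) on a toy scikit-learn model; the dataset, model, and variable names are illustrative and not the book's own code.

    # Minimal sketch of the perturbation idea: change an input feature,
    # measure how much the model's prediction error changes.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X, y)
    baseline = mean_squared_error(y, model.predict(X))

    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to y
        delta = mean_squared_error(y, model.predict(X_perm)) - baseline
        print(f"feature {j}: error increase {delta:.1f}")
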
6 changes: 3 additions & 3 deletions lime.html
@@ -845,7 +845,7 @@ <h4><span class="header-section-number">9.2.2.1</span> Example<a href="lime.html
0.1701170
</td>
<td style="text-align:left;">
-PSY
+is
</td>
<td style="text-align:right;">
0.000000
@@ -859,7 +859,7 @@ <h4><span class="header-section-number">9.2.2.1</span> Example<a href="lime.html
0.1701170
</td>
<td style="text-align:left;">
-guy
+good
</td>
<td style="text-align:right;">
0.000000
@@ -873,7 +873,7 @@ <h4><span class="header-section-number">9.2.2.1</span> Example<a href="lime.html
0.1701170
</td>
<td style="text-align:left;">
-good
+a
</td>
<td style="text-align:right;">
0.000000
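
The cells changed above belong to a LIME table of per-word weights from the chapter's text-classification example. Below is a rough sketch of how such word weights can be produced with the Python lime package and a toy scikit-learn text classifier; the example comment, training texts, and class names are assumptions for illustration, not the book's data or code.

    # Rough sketch: explain a single text prediction as per-word LIME weights.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-in for the book's comment dataset (assumed, not the real data).
    texts = ["PSY is a good guy", "check out my channel", "nice song", "subscribe to me please"]
    labels = [0, 1, 0, 1]  # 0 = ham, 1 = spam

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["ham", "spam"])
    exp = explainer.explain_instance("PSY is a good guy", clf.predict_proba, num_features=5)
    print(exp.as_list())  # [(word, weight), ...], analogous to the table rows above
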
2 changes: 1 addition & 1 deletion search_index.json

Large diffs are not rendered by default.
