Merge pull request #377 from Etrama/intro_lang
01-introdruction.Rmd language fix
christophM committed Jul 17, 2023
2 parents ba768ab + c906259, commit e6f261a
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions manuscript/01-introduction.Rmd
@@ -18,12 +18,12 @@ Internalizing the basic concepts also empowers you to better understand and eval

 This book starts with some (dystopian) [short stories](#storytime) that are not needed to understand the book, but hopefully will entertain and make you think.
 Then the book explores the concepts of [machine learning interpretability](#interpretability).
-We will discuss when interpretability is important and what different types of explanations there are.
+We will discuss when interpretability is important and the different types of explanations that exist.
 Terms used throughout the book can be looked up in the [Terminology chapter](#terminology).
 Most of the models and methods explained are presented using real data examples which are described in the [Data chapter](#data).
 One way to make machine learning interpretable is to use [interpretable models](#simple), such as linear models or decision trees.
 The other option is the use of [model-agnostic interpretation tools](#agnostic) that can be applied to any supervised machine learning model.
-Model-agnostic methods can be divided [global methods](#global-methods) that describe the average behavior of the model and [local methods](#local-methods) that explain individual predictions.
+Model-agnostic methods can be divided into [global methods](#global-methods) that describe the average behavior of the model, and [local methods](#local-methods) that explain individual predictions.
 The Model-Agnostic Methods chapter deals with methods such as [partial dependence plots](#pdp) and [feature importance](#feature-importance).
 Model-agnostic methods work by changing the input of the machine learning model and measuring changes in the prediction output.
 The book ends with an optimistic outlook on what [the future of interpretable machine learning](#future) might look like.
