diff --git a/manuscript/01-introduction.Rmd b/manuscript/01-introduction.Rmd
index 4cbefe9f..e888af86 100644
--- a/manuscript/01-introduction.Rmd
+++ b/manuscript/01-introduction.Rmd
@@ -18,12 +18,12 @@ Internalizing the basic concepts also empowers you to better understand and eval
 This book starts with some (dystopian) [short stories](#storytime) that are not needed to understand the book, but hopefully will entertain and make you think.
 Then the book explores the concepts of [machine learning interpretability](#interpretability).
-We will discuss when interpretability is important and what different types of explanations there are.
+We will discuss when interpretability is important and the different types of explanations that exist.
 Terms used throughout the book can be looked up in the [Terminology chapter](#terminology).
 Most of the models and methods explained are presented using real data examples which are described in the [Data chapter](#data).
 One way to make machine learning interpretable is to use [interpretable models](#simple), such as linear models or decision trees.
 The other option is the use of [model-agnostic interpretation tools](#agnostic) that can be applied to any supervised machine learning model.
-Model-agnostic methods can be divided [global methods](#global-methods) that describe the average behavior of the model and [local methods](#local-methods) that explain individual predictions.
+Model-agnostic methods can be divided into [global methods](#global-methods) that describe the average behavior of the model, and [local methods](#local-methods) that explain individual predictions.
 The Model-Agnostic Methods chapter deals with methods such as [partial dependence plots](#pdp) and [feature importance](#feature-importance).
 Model-agnostic methods work by changing the input of the machine learning model and measuring changes in the prediction output.
 The book ends with an optimistic outlook on what [the future of interpretable machine learning](#future) might look like.
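
The context line "Model-agnostic methods work by changing the input of the machine learning model and measuring changes in the prediction output" summarizes the core mechanic behind methods like the partial dependence plots mentioned above. The sketch below, which is not part of the diff, illustrates that perturb-and-measure loop in base R. The dataset (`mtcars`), the fitted model (`lm`), and the variable names are illustrative choices for this sketch only, not drawn from the book:

```r
# A minimal sketch of the model-agnostic idea: perturb one input feature,
# hold the rest fixed, and observe how the model's predictions change.
# The model here (lm on mtcars) is a stand-in -- the same probing loop
# works for any fitted model that exposes a predict() method.

model <- lm(mpg ~ wt + hp, data = mtcars)

# Grid of values for the feature we perturb (car weight)
wt_grid <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 20)

# For each grid value, overwrite 'wt' in every row and average the
# predictions -- essentially a crude partial dependence curve for 'wt'.
avg_pred <- sapply(wt_grid, function(w) {
  perturbed <- mtcars
  perturbed$wt <- w
  mean(predict(model, newdata = perturbed))
})

plot(wt_grid, avg_pred, type = "l",
     xlab = "wt (perturbed)", ylab = "average predicted mpg")
```

Averaging the perturbed predictions over the whole dataset gives a global view of the model's behavior; predicting for a single perturbed row instead would give the kind of local, per-prediction explanation the global/local split in the changed line refers to.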