diff --git a/DESCRIPTION b/DESCRIPTION index 2f8fcd070..8aa15c861 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -1,13 +1,13 @@ Package: parsnip Title: A Common API to Modeling and Analysis Functions -Version: 1.0.3.9003 +Version: 1.0.4 Authors@R: c( - person("Max", "Kuhn", , "max@rstudio.com", role = c("aut", "cre")), - person("Davis", "Vaughan", , "davis@rstudio.com", role = "aut"), + person("Max", "Kuhn", , "max@posit.co", role = c("aut", "cre")), + person("Davis", "Vaughan", , "davis@posit.co", role = "aut"), person("Emil", "Hvitfeldt", , "emilhhvitfeldt@gmail.com", role = "ctb"), - person("RStudio", role = c("cph", "fnd")) + person("Posit Software PBC", role = c("cph", "fnd")) ) -Maintainer: Max Kuhn <max@rstudio.com> +Maintainer: Max Kuhn <max@posit.co> Description: A common interface is provided to allow users to specify a model without having to remember the different argument names across different functions or computational engines (e.g. 'R', 'Spark', @@ -76,4 +76,4 @@ Config/testthat/edition: 3 Encoding: UTF-8 LazyData: true Roxygen: list(markdown = TRUE) -RoxygenNote: 7.2.3 +RoxygenNote: 7.2.3.9000 diff --git a/NEWS.md b/NEWS.md index 2df940e88..495de3ca9 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,4 +1,4 @@ -# parsnip (development version) +# parsnip 1.0.4 * For censored regression models, a "reverse Kaplan-Meier" curve is computed for the censoring distribution. This can be used when evaluating this type of model (#855). diff --git a/README.md b/README.md index 6f3cd5f20..dad552de8 100644 --- a/README.md +++ b/README.md @@ -77,22 +77,22 @@ between implementations. In this example: -- the **type** of model is “random forest”, -- the **mode** of the model is “regression” (as opposed to - classification, etc), and -- the computational **engine** is the name of the R package. +- the **type** of model is “random forest”, +- the **mode** of the model is “regression” (as opposed to + classification, etc), and +- the computational **engine** is the name of the R package. The goals of parsnip are to: -- Separate the definition of a model from its evaluation. -- Decouple the model specification from the implementation (whether - the implementation is in R, spark, or something else). For example, - the user would call `rand_forest` instead of `ranger::ranger` or - other specific packages. -- Harmonize argument names (e.g. `n.trees`, `ntrees`, `trees`) so that - users only need to remember a single name. This will help *across* - model types too so that `trees` will be the same argument across - random forest as well as boosting or bagging. +- Separate the definition of a model from its evaluation. +- Decouple the model specification from the implementation (whether the + implementation is in R, spark, or something else). For example, the + user would call `rand_forest` instead of `ranger::ranger` or other + specific packages. +- Harmonize argument names (e.g. `n.trees`, `ntrees`, `trees`) so that + users only need to remember a single name. This will help *across* + model types too so that `trees` will be the same argument across + random forest as well as boosting or bagging. Using the example above, the parsnip approach would be: @@ -166,18 +166,18 @@ This project is released with a [Contributor Code of Conduct](https://contributor-covenant.org/version/2/0/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.
-- For questions and discussions about tidymodels packages, modeling, - and machine learning, please [post on RStudio - Community](https://community.rstudio.com/new-topic?category_id=15&tags=tidymodels,question). +- For questions and discussions about tidymodels packages, modeling, and + machine learning, please [post on RStudio + Community](https://community.rstudio.com/new-topic?category_id=15&tags=tidymodels,question). -- If you think you have encountered a bug, please [submit an - issue](https://github.com/tidymodels/parsnip/issues). +- If you think you have encountered a bug, please [submit an + issue](https://github.com/tidymodels/parsnip/issues). -- Either way, learn how to create and share a - [reprex](https://reprex.tidyverse.org/articles/articles/learn-reprex.html) - (a minimal, reproducible example), to clearly communicate about your - code. +- Either way, learn how to create and share a + [reprex](https://reprex.tidyverse.org/articles/articles/learn-reprex.html) + (a minimal, reproducible example), to clearly communicate about your + code. -- Check out further details on [contributing guidelines for tidymodels - packages](https://www.tidymodels.org/contribute/) and [how to get - help](https://www.tidymodels.org/help/). +- Check out further details on [contributing guidelines for tidymodels + packages](https://www.tidymodels.org/contribute/) and [how to get + help](https://www.tidymodels.org/help/). diff --git a/man/details_boost_tree_h2o.Rd b/man/details_boost_tree_h2o.Rd index 81c5a1974..9957dfa35 100644 --- a/man/details_boost_tree_h2o.Rd +++ b/man/details_boost_tree_h2o.Rd @@ -135,7 +135,7 @@ their analogue to the \code{mtry} argument as the \emph{proportion} of predictor that will be randomly sampled at each split rather than the \emph{count}. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite -helpful—interpreting \code{mtry} as a proportion means that $\link{0, 1}$ is +helpful—interpreting \code{mtry} as a proportion means that \verb{[0, 1]} is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the @@ -153,7 +153,7 @@ to \code{TRUE}. For engines that support the proportion interpretation (currently \code{"xgboost"} and \code{"xrf"}, via the rules package, and \code{"lightgbm"} via the bonsai package) the user can pass the \code{counts = FALSE} argument to \code{set_engine()} to supply \code{mtry} values -within $\link{0, 1}$. +within \verb{[0, 1]}. } \subsection{Initializing h2o}{ diff --git a/man/details_boost_tree_lightgbm.Rd b/man/details_boost_tree_lightgbm.Rd index bf7296493..a2c1ecabd 100644 --- a/man/details_boost_tree_lightgbm.Rd +++ b/man/details_boost_tree_lightgbm.Rd @@ -137,7 +137,7 @@ their analogue to the \code{mtry} argument as the \emph{proportion} of predictor that will be randomly sampled at each split rather than the \emph{count}. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite -helpful—interpreting \code{mtry} as a proportion means that $\link{0, 1}$ is +helpful—interpreting \code{mtry} as a proportion means that \verb{[0, 1]} is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the @@ -155,7 +155,7 @@ to \code{TRUE}. 
For engines that support the proportion interpretation (currently \code{"xgboost"} and \code{"xrf"}, via the rules package, and \code{"lightgbm"} via the bonsai package) the user can pass the \code{counts = FALSE} argument to \code{set_engine()} to supply \code{mtry} values -within $\link{0, 1}$. +within \verb{[0, 1]}. } \subsection{Saving fitted model objects}{ diff --git a/man/details_boost_tree_xgboost.Rd b/man/details_boost_tree_xgboost.Rd index 588c7fe8f..7c220533b 100644 --- a/man/details_boost_tree_xgboost.Rd +++ b/man/details_boost_tree_xgboost.Rd @@ -135,7 +135,7 @@ boost_tree() \%>\% set_engine("xgboost", eval_metric = "mae") }\if{html}{\out{}} -\if{html}{\out{
}}\preformatted{## Boosted Tree Model Specification (unknown) +\if{html}{\out{
}}\preformatted{## Boosted Tree Model Specification (unknown mode) ## ## Engine-Specific Arguments: ## eval_metric = mae @@ -150,7 +150,7 @@ boost_tree() \%>\% set_engine("xgboost", params = list(eval_metric = "mae")) }\if{html}{\out{
}} -\if{html}{\out{
}}\preformatted{## Boosted Tree Model Specification (unknown) +\if{html}{\out{
}}\preformatted{## Boosted Tree Model Specification (unknown mode) ## ## Engine-Specific Arguments: ## params = list(eval_metric = "mae") @@ -190,7 +190,7 @@ their analogue to the \code{mtry} argument as the \emph{proportion} of predictor that will be randomly sampled at each split rather than the \emph{count}. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite -helpful—interpreting \code{mtry} as a proportion means that $\link{0, 1}$ is +helpful—interpreting \code{mtry} as a proportion means that \verb{[0, 1]} is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the @@ -208,7 +208,7 @@ to \code{TRUE}. For engines that support the proportion interpretation (currently \code{"xgboost"} and \code{"xrf"}, via the rules package, and \code{"lightgbm"} via the bonsai package) the user can pass the \code{counts = FALSE} argument to \code{set_engine()} to supply \code{mtry} values -within $\link{0, 1}$. +within \verb{[0, 1]}. } \subsection{Early stopping}{ diff --git a/man/details_rule_fit_xrf.Rd b/man/details_rule_fit_xrf.Rd index ee12446f8..a2ca913e2 100644 --- a/man/details_rule_fit_xrf.Rd +++ b/man/details_rule_fit_xrf.Rd @@ -155,7 +155,7 @@ their analogue to the \code{mtry} argument as the \emph{proportion} of predictor that will be randomly sampled at each split rather than the \emph{count}. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite -helpful—interpreting \code{mtry} as a proportion means that $\link{0, 1}$ is +helpful—interpreting \code{mtry} as a proportion means that \verb{[0, 1]} is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the @@ -173,7 +173,7 @@ to \code{TRUE}. For engines that support the proportion interpretation (currently \code{"xgboost"} and \code{"xrf"}, via the rules package, and \code{"lightgbm"} via the bonsai package) the user can pass the \code{counts = FALSE} argument to \code{set_engine()} to supply \code{mtry} values -within $\link{0, 1}$. +within \verb{[0, 1]}. } \subsection{Early stopping}{ diff --git a/man/parsnip-package.Rd b/man/parsnip-package.Rd index e7bbe5a2b..aade90cdc 100644 --- a/man/parsnip-package.Rd +++ b/man/parsnip-package.Rd @@ -20,17 +20,17 @@ Useful links: } \author{ -\strong{Maintainer}: Max Kuhn \email{max@rstudio.com} +\strong{Maintainer}: Max Kuhn \email{max@posit.co} Authors: \itemize{ - \item Davis Vaughan \email{davis@rstudio.com} + \item Davis Vaughan \email{davis@posit.co} } Other contributors: \itemize{ \item Emil Hvitfeldt \email{emilhhvitfeldt@gmail.com} [contributor] - \item RStudio [copyright holder, funder] + \item Posit Software PBC [copyright holder, funder] } } diff --git a/man/rmd/boost_tree_h2o.md b/man/rmd/boost_tree_h2o.md index 5a90de7e3..7b12a78f1 100644 --- a/man/rmd/boost_tree_h2o.md +++ b/man/rmd/boost_tree_h2o.md @@ -118,11 +118,11 @@ Non-numeric predictors (i.e., factors) are internally converted to numeric. In t The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models. -Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. 
In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data. +Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted in its sense as a count, `FALSE` indicates that the argument will be interpreted in its sense as a proportion. -`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$. +`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`. ## Initializing h2o diff --git a/man/rmd/boost_tree_lightgbm.md b/man/rmd/boost_tree_lightgbm.md index 3f6844f12..678b5ad90 100644 --- a/man/rmd/boost_tree_lightgbm.md +++ b/man/rmd/boost_tree_lightgbm.md @@ -115,11 +115,11 @@ Non-numeric predictors (i.e., factors) are internally converted to numeric. In t The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models. -Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data. +Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data. 
parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted in its sense as a count, `FALSE` indicates that the argument will be interpreted in its sense as a proportion. -`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$. +`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`. ### Saving fitted model objects diff --git a/man/rmd/boost_tree_xgboost.md b/man/rmd/boost_tree_xgboost.md index cd7da965b..8a1dbf48b 100644 --- a/man/rmd/boost_tree_xgboost.md +++ b/man/rmd/boost_tree_xgboost.md @@ -121,7 +121,7 @@ boost_tree() %>% ``` ``` -## Boosted Tree Model Specification (unknown) +## Boosted Tree Model Specification (unknown mode) ## ## Engine-Specific Arguments: ## eval_metric = mae @@ -139,7 +139,7 @@ boost_tree() %>% ``` ``` -## Boosted Tree Model Specification (unknown) +## Boosted Tree Model Specification (unknown mode) ## ## Engine-Specific Arguments: ## params = list(eval_metric = "mae") @@ -162,11 +162,11 @@ By default, the model is trained without parallel processing. This can be change The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models. -Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data. +Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. 
`TRUE` indicates that `mtry` will be interpreted in its sense as a count, `FALSE` indicates that the argument will be interpreted in its sense as a proportion. -`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$. +`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`. ### Early stopping diff --git a/man/rmd/discrim_regularized_klaR.md b/man/rmd/discrim_regularized_klaR.md index 2d6641fcc..e5fcc0d3e 100644 --- a/man/rmd/discrim_regularized_klaR.md +++ b/man/rmd/discrim_regularized_klaR.md @@ -6,6 +6,8 @@ For this engine, there is a single mode: classification ## Tuning Parameters + + This model has 2 tuning parameter: - `frac_common_cov`: Fraction of the Common Covariance Matrix (type: double, default: (see below)) diff --git a/man/rmd/mlp_h2o.md b/man/rmd/mlp_h2o.md index 3dbd1b58c..469fb4119 100644 --- a/man/rmd/mlp_h2o.md +++ b/man/rmd/mlp_h2o.md @@ -5,6 +5,8 @@ For this engine, there are multiple modes: classification and regression ## Tuning Parameters + + This model has 6 tuning parameters: - `hidden_units`: # Hidden Units (type: integer, default: 200L) diff --git a/man/rmd/rule_fit_xrf.md b/man/rmd/rule_fit_xrf.md index 41910a966..60f7caa35 100644 --- a/man/rmd/rule_fit_xrf.md +++ b/man/rmd/rule_fit_xrf.md @@ -146,11 +146,11 @@ Factor/categorical predictors need to be converted to numeric values (e.g., dumm The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models. -Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data. +Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. 
`TRUE` indicates that `mtry` will be interpreted in its sense as a count, `FALSE` indicates that the argument will be interpreted in its sense as a proportion. -`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$. +`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`. ### Early stopping diff --git a/man/rmd/template-mtry-prop.Rmd b/man/rmd/template-mtry-prop.Rmd index 3ebb57f58..b5b67d771 100644 --- a/man/rmd/template-mtry-prop.Rmd +++ b/man/rmd/template-mtry-prop.Rmd @@ -1,7 +1,7 @@ The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models. -Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data. +Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data. parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted in its sense as a count, `FALSE` indicates that the argument will be interpreted in its sense as a proportion. -`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$. +`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. 
So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`.
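
For reference, here is a minimal R sketch of the `counts` interface that the templates above describe. It is illustrative only and assumes parsnip and the xgboost engine are installed; the specific values (`mtry = 3`, `mtry = 0.5`, `trees = 500`, regression mode) are arbitrary choices, not taken from the changes above.

```r
library(parsnip)

# Default behavior (counts = TRUE): mtry is the number of predictors
# sampled at each split.
count_spec <- boost_tree(mtry = 3, trees = 500) %>%
  set_engine("xgboost") %>%
  set_mode("regression")

# With counts = FALSE, mtry is read as a proportion of predictors, so any
# value in [0, 1] is valid regardless of how many predictors the
# preprocessing ends up producing.
prop_spec <- boost_tree(mtry = 0.5, trees = 500) %>%
  set_engine("xgboost", counts = FALSE) %>%
  set_mode("regression")

prop_spec
```

Printing `prop_spec` should list `counts = FALSE` under the engine-specific arguments, alongside the main `mtry` and `trees` arguments.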