Interpreting Linear Beta Coefficients Alongside Feature Importances in Machine Learning
9 Pages Posted: 29 Mar 2021
Date Written: March 1, 2021
Machine-learning regression models lack the interpretability of their conventional linear counterparts. Tree- and forest-based models offer feature importances: a vector of nonnegative scores, summing to one, that indicate how much each predictive variable contributes to the model's predictions. This brief note describes how to rescale the beta coefficients of the corresponding linear model so that they may be compared directly to feature importances in machine learning.
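One plausible rescaling in the spirit described above (a sketch, not necessarily the paper's exact method): take the standardized betas, |β_j|·σ(x_j)/σ(y), and normalize them to sum to one so they live on the same scale as tree-based importances. All variable names below are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic regression data for illustration only.
X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)

# Tree-based feature importances: nonnegative scores summing to one.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_

# Linear betas, standardized (beta scaled by predictor/response spread)
# and normalized to the same unit-sum scale as the importances.
linear = LinearRegression().fit(X, y)
std_betas = np.abs(linear.coef_) * X.std(axis=0) / y.std()
beta_importances = std_betas / std_betas.sum()

print("forest importances:", importances.round(3))
print("rescaled betas:   ", beta_importances.round(3))
```

With both vectors nonnegative and summing to one, the two models' rankings of predictor influence can be compared side by side.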
Keywords: machine learning, feature importances, linear regression, beta coefficients
JEL Classification: C18, C33