
R-squared can be a deceiving metric

crossxwill opened this issue 6 years ago · 0 comments

In section 3.2.1, the book says:

Unfortunately, R^2 can be a deceiving metric. The main problem is that it is a measure of correlation and not accuracy.

Yes, this is true for one of the seven formulas for R-squared:

There are several formulas for computing this value (Kvalseth 1985), but the most conceptually simple one finds the standard correlation between the observed and predicted values (a.k.a. R ) and squares it.
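The quote's point can be shown directly: the squared-correlation definition measures association, not accuracy, so predictions that are a badly biased linear distortion of the truth still score a perfect R-squared. A minimal illustrative sketch using numpy (the data is made up, not from the book):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = 2.0 * y + 10.0   # grossly biased, yet perfectly correlated with y

# Squared correlation between observed and predicted values
r = np.corrcoef(y, pred)[0, 1]
r_squared = r ** 2
print(r_squared)        # 1.0, despite every prediction being far off
```

The errors here are enormous (the smallest is 7 units), but because `pred` is an exact linear function of `y`, the correlation is 1 and so is this version of R-squared.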

Another definition of R-squared is one minus the ratio of the model's MSE to the null model's MSE (a.k.a. "variance explained"). For in-sample fit, the two definitions give you the same results (between 0% and 100%). For out-of-sample data, the second definition could lead to a negative value (negative R-squared).
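A minimal sketch of the second definition, with invented numbers chosen so the out-of-sample case goes negative: when the model's test-set errors exceed those of the null (here, the training-set mean), the ratio exceeds 1 and R-squared drops below zero.

```python
import numpy as np

def r2_variance_explained(y, pred, null_pred):
    """R^2 = 1 - MSE(model) / MSE(null)."""
    mse_model = np.mean((y - pred) ** 2)
    mse_null = np.mean((y - null_pred) ** 2)
    return 1.0 - mse_model / mse_null

# Hypothetical out-of-sample evaluation: the null prediction is the
# mean of a training set whose distribution has shifted away from test.
y_test = np.array([10.0, 11.0, 12.0])
train_mean = 5.0
pred = np.array([1.0, 2.0, 3.0])   # model that misses badly on test data

r2 = r2_variance_explained(y_test, pred, np.full_like(y_test, train_mean))
print(r2)   # negative: the null beats the model out of sample
```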

One appeal of R-squared is that it generally falls between 0% and 100%, although the second definition allows negative values (i.e., the null fits the data better than the model). This is often easier to interpret than RMSE, which depends on the scale of the units.

A drawback of the second definition is that the choice of null is subjective: (1) the mean response; (2) the last response, for time series; (3) the most popular response.
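The subjectivity matters because the same model can score very differently against different nulls. A sketch with invented data: on a trending series, a lag-1 ("last response") null is a much stronger baseline than the mean null, so the model's R-squared shrinks even though its predictions are unchanged.

```python
import numpy as np

def r2(y, pred, null_pred):
    # R^2 = 1 - MSE(model) / MSE(null)
    return 1.0 - np.mean((y - pred) ** 2) / np.mean((y - null_pred) ** 2)

y = np.arange(1.0, 11.0)      # trending series 1, 2, ..., 10
pred = y + 0.5                # model off by a constant 0.5

mean_null = np.full_like(y, y.mean())
lag_null = np.concatenate(([y[0]], y[:-1]))   # previous value as prediction

r2_mean = r2(y, pred, mean_null)
r2_lag = r2(y, pred, lag_null)
print(r2_mean)   # high: the mean is a weak baseline on a trend
print(r2_lag)    # lower: lag-1 is a strong baseline on a trend
```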

crossxwill · Jan 24 '20 01:01