When Cross-Validation is More Powerful than Regularization

Win-Vector Blog 2019-11-12

Summary:

Regularization is a way of avoiding overfit by restricting the magnitude of model coefficients (or in deep learning, node weights). A simple example of regularization is the use of ridge or lasso regression to fit linear models in the presence of collinear variables or (quasi-)separation. The intuition is that smaller coefficients are less sensitive to …
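
The idea in the summary can be illustrated with a minimal sketch (not from the post itself, which may use different tools; scikit-learn and the toy data here are assumptions for illustration): on nearly collinear predictors, an unregularized linear fit typically produces large, offsetting coefficients, while ridge and lasso shrink them.

# Minimal sketch (assumed setup, not the post's code): ridge and lasso
# shrinking coefficients on collinear data, via scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # nearly collinear with x1
y = 2 * x1 + rng.normal(size=n)       # true signal depends only on x1
X = np.column_stack([x1, x2])

# Unregularized coefficients can split the true effect into large,
# offsetting values; ridge/lasso pull both back toward zero.
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    print(type(model).__name__, np.round(model.coef_, 2))

Running this typically shows the plain least-squares fit assigning large positive and negative coefficients to x1 and x2, while the ridge and lasso fits keep both coefficients small and stable.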

Link:

http://www.win-vector.com/blog/2019/11/when-cross-validation-is-more-powerful-than-regularization/

From feeds:

Statistics and Visualization » Win-Vector Blog

Tags:

coding

Authors:

Nina Zumel

Date tagged:

11/12/2019, 21:50

Date published:

11/12/2019, 14:45