When Cross-Validation is More Powerful than Regularization
Win-Vector Blog 2019-11-12
Summary:
Regularization is a way of avoiding overfit by restricting the magnitude of model coefficients (or in deep learning, node weights). A simple example of regularization is the use of ridge or lasso regression to fit linear models in the presence of collinear variables or (quasi-)separation. The intuition is that smaller coefficients are less sensitive to idiosyncrasies in the training data, and hence less likely to overfit.
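To make the intuition concrete, here is a minimal sketch (not from the post itself) using scikit-learn on deliberately collinear predictors; the data and variable names are invented for illustration. With near-collinear columns, ordinary least squares coefficients can grow large and unstable in opposite directions, while a ridge (L2) penalty keeps them small.

```python
# A minimal sketch, assuming scikit-learn: ridge regression shrinking
# coefficients on deliberately collinear predictors (illustrative data).
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # x2 is nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.5, size=n)

# Ordinary least squares: near-collinearity lets the two coefficients
# blow up in opposite directions while still fitting the training data.
ols = LinearRegression().fit(X, y)
print("OLS coefficients:  ", ols.coef_)

# Ridge regression: penalizing coefficient magnitude keeps the
# estimates small and stable across the collinear pair.
ridge = Ridge(alpha=1.0).fit(X, y)
print("Ridge coefficients:", ridge.coef_)
```

Running this, the OLS coefficients split the effect of the shared signal erratically between the two correlated columns, while the ridge coefficients stay modest and roughly equal, which is the shrinkage behavior the summary describes.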