  1. What is regularization in plain english? - Cross Validated

    Is regularization really ever used to reduce underfitting? In my experience, regularization is applied to a complex/sensitive model to reduce complexity/sensitivity, but never to a simple/insensitive model to …

  2. Boosting: why is the learning rate called a regularization parameter?

    The learning rate parameter ($\nu \in [0,1]$) in Gradient Boosting shrinks the contribution of each new base model (typically a shallow tree) that is added to the series. It was shown to dramatically … [worked note below]

  3. Why do we only see $L_1$ and $L_2$ regularization but not other norms?

    Mar 27, 2017 · The intuition behind regularization is that I have some vector, and I would like that vector to be "small" in some sense. How do you describe a vector's size? Well, you have choices: Do you … [worked note below]

  4. What are Regularities and Regularization? - Cross Validated

    Is regularization a way to ensure regularity, i.e. to capture regularities? Why do ensembling methods like dropout and normalization methods all claim to be doing regularization?

  5. L1 & L2 double role in Regularization and Cost functions?

    Mar 19, 2023 · [1] Regularization: a penalty added to the cost function, with L1 as Lasso and L2 as Ridge. [2] Cost/loss function: L1 as MAE (Mean Absolute Error) and L2 as MSE (Mean Square Error). Are [1] and [2] the … [worked note below]

  6. neural networks - Why would regularization reduce training error ...

    Feb 11, 2026 · An answer on this very site states that "regularization (including L2) will increase the error on training set", so observing the opposite is certainly noteworthy. [worked note below]

  7. How does regularization reduce overfitting? - Cross Validated

    Mar 13, 2015 · A common way to reduce overfitting in a machine learning algorithm is to use a regularization term that penalizes large weights (L2) or non-sparse weights (L1) etc. How can such … [worked note below]

  8. Difference between weight decay and L2 regularization

    Apr 6, 2025 · I'm reading Ilya Loshchilov's work on decoupled weight decay and regularization. The big takeaway seems to be that weight decay and $L_2$ norm regularization are the same for SGD … [worked note below]

  9. When will L1 regularization work better than L2 and vice versa?

    Nov 29, 2015 · Note: I know that L1 has a feature-selection property. I am trying to understand which one to choose when feature selection is completely irrelevant. How to decide which regularization (L1 or …

  10. Is Tikhonov regularization the same as Ridge Regression?

    Sep 10, 2016 · Tikhonov regularization and ridge regression are terms often used as if they were identical. Is it possible to specify exactly what the difference is? [worked note below]
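
Worked notes on selected results (editorial sketches; the notation in each note is assumed for illustration, not quoted from the linked threads).

Result 2 (boosting learning rate): the shrinkage step the snippet describes, writing $F_m$ for the ensemble after $m$ rounds and $h_m$ for the newly fitted base model, is

$$F_m(x) = F_{m-1}(x) + \nu\, h_m(x), \qquad \nu \in [0, 1].$$

A small $\nu$ damps each tree's contribution, so more boosting rounds are needed but each one changes the model less; that is the sense in which $\nu$ acts as a regularization parameter.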
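
Result 3 (other norms): the "choices" of vector size are the $p$-norms of the weight vector $w$,

$$\|w\|_p = \Big( \sum_i |w_i|^p \Big)^{1/p},$$

with $p = 1$ giving the $L_1$ penalty $\sum_i |w_i|$ and $p = 2$ giving the Euclidean length used by the $L_2$ penalty.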
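
Result 5 (double role of L1/L2): the same norm can be applied either to the residuals (a loss) or to the weights (a penalty). With targets $y_i$, predictions $\hat y_i$, and weights $w_j$:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat y_i|, \qquad \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat y_i)^2,$$

$$\text{Lasso penalty} = \lambda \sum_j |w_j|, \qquad \text{Ridge penalty} = \lambda \sum_j w_j^2.$$

So [1] and [2] in the snippet use the same norms, just applied to different vectors.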
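
Result 6 (training error): the quoted claim follows from definitions when both problems are solved exactly: if $w_0$ minimizes the training loss $L$ and $w_\lambda$ minimizes $L(w) + \lambda R(w)$, then $L(w_\lambda) \ge L(w_0)$, since nothing beats the unpenalized minimizer on $L$ itself. The inequality assumes global minima, so with non-convex models an optimizer can in practice end up with lower training error when regularized, which is what makes the observation in the question noteworthy.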
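
Result 7 (shrinkage vs. sparsity): a standard closed-form illustration, assuming an orthonormal design and writing $\hat w_j$ for the unpenalized least-squares coefficient (one common parameterization; the constants vary by convention):

$$\hat w_j^{\text{ridge}} = \frac{\hat w_j}{1 + \lambda}, \qquad \hat w_j^{\text{lasso}} = \operatorname{sign}(\hat w_j)\, \max\Big( |\hat w_j| - \frac{\lambda}{2},\, 0 \Big).$$

Ridge shrinks every coefficient toward zero; lasso soft-thresholds, zeroing out small coefficients entirely, which is also the feature-selection property mentioned in result 9.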
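
Result 8 (weight decay vs. $L_2$): for plain SGD with step size $\eta$, the equivalence can be checked in one line. Adding the penalty $\frac{\lambda}{2} \|w\|_2^2$ to the loss changes the gradient to $\nabla L(w) + \lambda w$, so the update becomes

$$w \leftarrow w - \eta \big( \nabla L(w) + \lambda w \big) = (1 - \eta \lambda)\, w - \eta \nabla L(w),$$

which is exactly a decoupled weight-decay step. Under adaptive optimizers such as Adam, the penalty gradient gets rescaled by the per-parameter adaptive terms, so the two schemes diverge; that is the motivation for the decoupling in the cited work.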
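
Result 10 (Tikhonov vs. ridge): Tikhonov regularization solves, for a general matrix $\Gamma$,

$$\min_x \|Ax - b\|_2^2 + \|\Gamma x\|_2^2, \qquad \hat x = (A^\top A + \Gamma^\top \Gamma)^{-1} A^\top b,$$

and ridge regression is the special case $\Gamma = \sqrt{\lambda}\, I$. The general $\Gamma$ is what lets Tikhonov penalize, e.g., differences between adjacent coefficients rather than their magnitudes.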