Regularization is any modification made to a learning algorithm that is intended to reduce its generalization error but not its training error; it may reduce generalization error even at the expense of increasing training error.
E.g., limiting model capacity is a regularization method.
It changes the error function to penalize the hypothesis complexity that leads to overfitting; this typically lowers training accuracy slightly in exchange for better generalization, rather than increasing it.
Regularization methods encode prior knowledge, express a preference for simpler models, and can make an underdetermined problem determined.
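A common way to penalize hypothesis complexity is an L2 (weight-decay) term added to the error function. The NumPy sketch below illustrates this with ridge regression; the toy data and penalty strength are illustrative assumptions, not from the source. It shows that the penalty shrinks the weights (a preference for simpler models) while the training error can only stay the same or rise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy linear relationship y = 2x + noise.
X = rng.normal(size=(50, 1))
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=50)

# Design matrix with a bias column.
A = np.hstack([X, np.ones((50, 1))])

def fit(lam):
    """Ridge regression: minimize ||A w - y||^2 + lam * ||w||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def train_mse(w):
    """Mean squared error on the training data."""
    return np.mean((A @ w - y) ** 2)

w_plain = fit(0.0)    # ordinary least squares: no penalty
w_ridge = fit(10.0)   # penalized: smaller weights, training error at least as high

print(f"training MSE: plain={train_mse(w_plain):.3f}, ridge={train_mse(w_ridge):.3f}")
print(f"weight norm:  plain={np.linalg.norm(w_plain):.3f}, ridge={np.linalg.norm(w_ridge):.3f}")
```

Because the unregularized solution already minimizes training error, the penalized solution can never beat it on the training set; the hope is that the smaller weights generalize better to unseen data.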
A. It is a technique that is used to increase the training accuracy.
B. It is a technique that is used to increase the model's complexity.
C. It is a technique that is used to strike a balance between model complexity and model accuracy on training data.
What is the answer? Answer: C. By the definition above, regularization trades some accuracy on the training data for lower generalization error, so it strikes a balance between model complexity and training accuracy. A is wrong because regularization typically lowers, not raises, training accuracy; B is wrong because it reduces, not increases, effective model complexity.