The package scikit-learn is a widely used open-source Python library for machine learning, built on top of NumPy and SciPy. It provides tools for preprocessing data, reducing dimensionality, and implementing regression, classification, clustering, and more.
If you want to implement linear regression and need functionality beyond the scope of scikit-learn, you should consider statsmodels. It's a powerful open-source Python package for estimating statistical models, performing statistical tests, and more.
With scikit-learn, we turn off regularization by setting `penalty='none'` (or `penalty=None` in scikit-learn 1.2 and later), whereas with statsmodels regularization is turned off by default.
scikit-learn makes it easy to tune the model's hyperparameters to optimize predictive power.
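For example, `GridSearchCV` can search over the inverse regularization strength `C` with cross-validation (a sketch on synthetic data; the grid values shown are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(int)

# Search over C, the inverse regularization strength
# (smaller C means a stronger L2 penalty).
grid = GridSearchCV(
    LogisticRegression(),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```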
Statsmodels provides a summary of statistical measures that will be familiar to anyone who has used SAS or R.
You should use scikit-learn for logistic regression unless you need the statistical results provided by statsmodels.
| | scikit-learn | statsmodels |
|---|---|---|
| Regularization | Uses L2 regularization by default, but regularization can be turned off using `penalty='none'` | Does not use regularization by default |
| Hyperparameter tuning | `GridSearchCV` allows for easy tuning of the regularization parameter | User will need to write their own code to tune the regularization parameter |
| Intercept | Includes an intercept by default | Use the `add_constant` method to include an intercept |
| Model evaluation | The `score` method reports prediction accuracy | The `summary` method shows p-values, confidence intervals, and other statistical measures |
| When should you use it? | For accurate predictions | For statistical inference |
| Comparison with R and SAS | Different | Similar |