Last Updated on August 16, 2020
If you are a Python programmer, or you are looking for a robust library you can use to bring machine learning into a production system, then a library you will want to seriously consider is scikit-learn.
In this post you will get an overview of the scikit-learn library and useful references for learning more.
Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Where did it come from?
Scikit-learn was initially developed by David Cournapeau as a Google summer of code project in 2007.
Later Matthieu Brucher joined the project and started to use it as part of his thesis work. In 2010 INRIA got involved and the first public release (v0.1 beta) was published in late January 2010.
What is scikit-learn?
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python.
It is licensed under a permissive simplified BSD license and is distributed with many Linux distributions, encouraging academic and commercial use.
The library is built upon the SciPy (Scientific Python) stack, which must be installed before you can use scikit-learn. This stack includes:
- NumPy: Base n-dimensional array package
- SciPy: Fundamental library for scientific computing
- Matplotlib: Comprehensive 2D/3D plotting
- IPython: Enhanced interactive console
- Sympy: Symbolic mathematics
- Pandas: Data structures and analysis
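If you are unsure which of these packages you have installed, a quick sanity check is to print their versions (the import names below are the standard ones for each package):

```python
# Print the versions of scikit-learn and its core dependencies
# as a quick sanity check before running the examples.
import numpy
import scipy
import matplotlib
import pandas
import sklearn

print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)
print("matplotlib:", matplotlib.__version__)
print("pandas:", pandas.__version__)
print("scikit-learn:", sklearn.__version__)
```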
Extensions or modules for SciPy are conventionally named SciKits. As such, the module provides learning algorithms and is named scikit-learn.
The vision for the library is a level of robustness and support required for use in production systems. This means a deep focus on concerns such as ease of use, code quality, collaboration, documentation and performance.
What are the features?
The library is focused on modeling data. It is not focused on loading, manipulating and summarizing data. For these features, refer to NumPy and Pandas.
Some popular groups of models provided by scikit-learn include:
- Clustering: for grouping unlabeled data such as KMeans.
- Cross Validation: for estimating the performance of supervised models on unseen data.
- Datasets: for test datasets and for generating datasets with specific properties for investigating model behavior.
- Dimensionality Reduction: for reducing the number of attributes in data for summarization, visualization and feature selection such as Principal component analysis.
- Ensemble methods: for combining the predictions of multiple supervised models.
- Feature extraction: for defining attributes in image and text data.
- Feature selection: for identifying meaningful attributes from which to create supervised models.
- Parameter Tuning: for getting the most out of supervised models.
- Manifold Learning: For summarizing and depicting complex multi-dimensional data.
- Supervised Models: a vast array not limited to generalized linear models, discriminant analysis, naive Bayes, lazy methods, neural networks, support vector machines and decision trees.
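To make one of these groups concrete, here is a minimal sketch of cross validation, estimating the out-of-sample accuracy of a k-nearest neighbors classifier on the Iris dataset. It assumes a recent version of the library, where cross_val_score lives in the sklearn.model_selection module:

```python
# Estimate model skill on unseen data with 5-fold cross validation.
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

dataset = datasets.load_iris()
model = KNeighborsClassifier(n_neighbors=5)
# For each of 5 folds: fit on the other 4, score on the held-out fold.
scores = cross_val_score(model, dataset.data, dataset.target, cv=5)
print("fold accuracies:", scores)
print("mean accuracy: %.3f" % scores.mean())
```

Every estimator in the library follows the same fit/predict interface, so swapping KNeighborsClassifier for any other supervised model requires changing only one line.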
Example: Classification and Regression Trees
I want to give you an example to show you how easy it is to use the library.
In this example, we use the Classification and Regression Trees (CART) decision tree algorithm to model the Iris flower dataset.
This dataset is provided with the library as an example dataset and is simply loaded. The classifier is fit on the data and then predictions are made on the training data.
Finally, the classification accuracy and a confusion matrix are printed.
# Sample Decision Tree Classifier
from sklearn import datasets
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
# load the iris dataset
dataset = datasets.load_iris()
# fit a CART model to the data
model = DecisionTreeClassifier()
model.fit(dataset.data, dataset.target)
print(model)
# make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# summarize the fit of the model
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
Running this example produces the following output, showing you the details of the trained model, the skill of the model according to some common metrics and a confusion matrix.
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
            max_features=None, max_leaf_nodes=None, min_samples_leaf=1,
            presort=False, random_state=None, splitter='best')
             precision    recall  f1-score   support

          0       1.00      1.00      1.00        50
          1       1.00      1.00      1.00        50
          2       1.00      1.00      1.00        50

avg / total       1.00      1.00      1.00       150

[[50  0  0]
 [ 0 50  0]
 [ 0  0 50]]
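Note that the example above scores the model on the same data it was trained on, which gives an optimistic estimate of skill. For a fairer estimate you would hold out a test set. A minimal sketch (assuming a recent scikit-learn, where train_test_split lives in sklearn.model_selection):

```python
# Fit on a training split and evaluate on a held-out test split.
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

dataset = datasets.load_iris()
# Hold out 30% of the rows for testing; fix the seed for repeatability.
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.3, random_state=1)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)
predicted = model.predict(X_test)
accuracy = metrics.accuracy_score(y_test, predicted)
print("held-out accuracy: %.3f" % accuracy)
```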
Who is using it?
The scikit-learn testimonials page lists Inria, Mendeley, wise.io, Evernote, Telecom ParisTech and AWeber as users of the library.
If these are a small indication of the companies that have presented on their use, then there are very likely tens to hundreds of larger organizations using the library.
It has good test coverage and managed releases and is suitable for prototype and production projects alike.
If you are interested in learning more, check out the Scikit-Learn homepage that includes documentation and related resources.
I recommend starting out with the quick-start tutorial and flicking through the user guide and example gallery for algorithms that interest you.
Ultimately, scikit-learn is a library and the API reference will be the best documentation for getting things done.
- Quick Start Tutorial http://scikit-learn.org/stable/tutorial/basic/tutorial.html
- User Guide http://scikit-learn.org/stable/user_guide.html
- API Reference http://scikit-learn.org/stable/modules/classes.html
- Example Gallery http://scikit-learn.org/stable/auto_examples/index.html
If you are interested in more information about how the project started and its vision, there are some papers you may want to check out.
- Scikit-learn: Machine Learning in Python (2011)
- API design for machine learning software: experiences from the scikit-learn project (2013)
If you are looking for a good book, I recommend “Building Machine Learning Systems with Python”. It’s well written and the examples are interesting.