Archive | Data Preparation

How to Use Power Transforms for Machine Learning

Machine learning algorithms like Linear Regression and Gaussian Naive Bayes assume that the numerical input variables have a Gaussian probability distribution. Your data may not have a Gaussian distribution and instead may have a Gaussian-like distribution (e.g. nearly Gaussian but with outliers or a skew) or a totally different distribution (e.g. exponential). As such, you may be […]
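
A minimal sketch of the idea (not the article's code): scikit-learn's PowerTransformer applied to synthetic, right-skewed data, checking the skew before and after.

```python
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(1)
# exponential data is strongly right-skewed
skewed = rng.exponential(scale=2.0, size=(1000, 1))

# Yeo-Johnson power transform, standardized to zero mean / unit variance
pt = PowerTransformer(method="yeo-johnson", standardize=True)
transformed = pt.fit_transform(skewed)

print("skew before: %.2f" % skew(skewed.ravel()))
print("skew after:  %.2f" % skew(transformed.ravel()))
```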

Linear Discriminant Analysis for Dimensionality Reduction in Python

Reducing the number of input variables for a predictive model is referred to as dimensionality reduction. Fewer input variables can result in a simpler predictive model that may have better performance when making predictions on new data. Linear Discriminant Analysis, or LDA for short, is a predictive modeling algorithm for multi-class classification. It can also […]
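
A hedged sketch of LDA used as a dimensionality reduction step (synthetic data; the article itself compares the number of components with box plots):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, random_state=7)
# LDA can keep at most n_classes - 1 components, so 4 is the ceiling here
model = Pipeline([("lda", LinearDiscriminantAnalysis(n_components=4)),
                  ("nb", GaussianNB())])
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("mean accuracy: %.3f" % scores.mean())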

Singular Value Decomposition for Dimensionality Reduction in Python

Reducing the number of input variables for a predictive model is referred to as dimensionality reduction. Fewer input variables can result in a simpler predictive model that may have better performance when making predictions on new data. One of the more popular techniques for dimensionality reduction in machine learning is Singular Value Decomposition, or SVD for […]
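
As a minimal sketch (not the article's code), TruncatedSVD can project the input columns onto a smaller set of components before a classifier is fit:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           random_state=7)
# project the 20 original columns onto 10 SVD components, then classify
model = Pipeline([("svd", TruncatedSVD(n_components=10)),
                  ("lr", LogisticRegression())])
print("mean accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```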

Principal Component Analysis for Dimensionality Reduction in Python

Reducing the number of input variables for a predictive model is referred to as dimensionality reduction. Fewer input variables can result in a simpler predictive model that may have better performance when making predictions on new data. Perhaps the most popular technique for dimensionality reduction in machine learning is Principal Component Analysis, or PCA for […]
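
A minimal sketch of PCA as a dimensionality reduction step (synthetic data, scikit-learn's PCA class; the article tunes the number of components with box plots):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           random_state=7)
pca = PCA(n_components=10).fit(X)
X_reduced = pca.transform(X)
print(X_reduced.shape)                                  # (1000, 10)
# fraction of the original variance retained by the 10 components
print("variance kept: %.2f" % pca.explained_variance_ratio_.sum())
```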

Introduction to Dimensionality Reduction for Machine Learning

The number of input variables or features for a dataset is referred to as its dimensionality. Dimensionality reduction refers to techniques that reduce the number of input variables in a dataset. More input features often make a predictive modeling task more challenging, a problem more generally referred to as the curse of dimensionality. High-dimensionality statistics […]
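
A tiny sketch of the definition (univariate feature selection stands in here for any of the many reduction techniques):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=3)
print("dimensionality before:", X.shape[1])   # 30 input variables
# keep only the 5 columns with the strongest univariate relationship to y
X_small = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)
print("dimensionality after: ", X_small.shape[1])   # 5 input variables
```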

How to Calculate Feature Importance With Python

Feature importance refers to techniques that assign a score to input features based on how useful they are at predicting a target variable. There are many types and sources of feature importance scores, although popular examples include statistical correlation scores, coefficients calculated as part of linear models, importances derived from decision trees, and permutation importance scores. Feature importance […]
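
A hedged sketch of two of the score types mentioned, tree-based importances and permutation importance (synthetic data, scikit-learn assumed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=2)
model = RandomForestClassifier(random_state=2).fit(X, y)
# impurity-based importances come for free with tree ensembles
print("tree importances:", model.feature_importances_.round(3))
# permutation importance: the drop in score when each column is shuffled
result = permutation_importance(model, X, y, n_repeats=10, random_state=2)
print("permutation:     ", result.importances_mean.round(3))
```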

How to Transform Target Variables for Regression in Python

Data preparation is a big part of applied machine learning. Correctly preparing your training data can mean the difference between mediocre and extraordinary results, even with very simple linear algorithms. Performing data preparation operations, such as scaling, is relatively straightforward for input variables and has been made routine in Python via the scikit-learn Pipeline class. […]
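
A minimal sketch of the target-side counterpart, scikit-learn's TransformedTargetRegressor (synthetic data; not the article's code):

```python
from sklearn.compose import TransformedTargetRegressor
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=5, noise=10, random_state=4)
# scales y before fitting and inverts the scaling at prediction time
model = TransformedTargetRegressor(regressor=LinearRegression(),
                                   transformer=StandardScaler())
print("mean R^2: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```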
