Last Updated on August 16, 2020
Machine learning algorithms learn from data. It is critical that you feed them the right data for the problem you want to solve.
Even if you have good data, you need to make sure that it is in a useful scale, format and even that meaningful features are included.
In this post you will learn how to prepare data for a machine learning algorithm. This is a big topic, so we will cover the essentials.
Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.

Lots of Data
Photo attributed to cibomahto, some rights reserved
Data Preparation Process
The more disciplined you are in your handling of data, the more consistent and better results you are likely to achieve. The process for getting data ready for a machine learning algorithm can be summarized in three steps:
- Step 1: Select Data
- Step 2: Preprocess Data
- Step 3: Transform Data
You can follow this process in a linear manner, but it is very likely to be iterative with many loops.
Want to Get Started With Data Preparation?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Step 1: Select Data
This step is concerned with selecting the subset of all available data that you will be working with. There is always a strong desire to include all available data, in the hope that the maxim “more is better” will hold. This may or may not be true.
You need to consider what data you actually need to address the question or problem you are working on. Make some assumptions about the data you require and be careful to record those assumptions so that you can test them later if needed.
Below are some questions to help you think through this process:
- What is the extent of the data you have available? For example through time, database tables, connected systems. Ensure you have a clear picture of everything that you can use.
- What data is not available that you wish you had available? For example data that is not recorded or cannot be recorded. You may be able to derive or simulate this data.
- What data don’t you need to address the problem? Excluding data is almost always easier than including data. Note down which data you excluded and why.
It is only in small problems, like competition or toy datasets, that the data has already been selected for you.
Step 2: Preprocess Data
After you have selected the data, you need to consider how you are going to use the data. This preprocessing step is about getting the selected data into a form that you can work with.
Three common data preprocessing steps are formatting, cleaning and sampling:
- Formatting: The data you have selected may not be in a format that is suitable for you to work with. The data may be in a relational database and you would like it in a flat file, or the data may be in a proprietary file format and you would like it in a relational database or a text file.
- Cleaning: Cleaning data is the removal or fixing of missing data. There may be data instances that are incomplete and do not carry the data you believe you need to address the problem. These instances may need to be removed. Additionally, there may be sensitive information in some of the attributes and these attributes may need to be anonymized or removed from the data entirely.
- Sampling: There may be far more selected data available than you need to work with. More data can result in much longer running times for algorithms and larger computational and memory requirements. You can take a smaller representative sample of the selected data that may be much faster for exploring and prototyping solutions before considering the whole dataset.
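As a rough sketch of the cleaning and sampling steps above with pandas (the file name, the sensitive email column and the 10% sample size are all hypothetical):

```python
import pandas as pd

# Load the selected data (hypothetical file name and columns)
df = pd.read_csv("selected_data.csv")

# Cleaning: remove incomplete instances and a sensitive attribute
df = df.dropna()
df = df.drop(columns=["email"])  # anonymize by dropping the sensitive column

# Sampling: take a smaller representative sample for fast exploration
sample = df.sample(frac=0.1, random_state=1)
print(sample.shape)
```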
It is very likely that the machine learning tools you use on the data will influence the preprocessing you will be required to perform. You will likely revisit this step.

So much data
Photo attributed to Marc_Smith, some rights reserved
Step 3: Transform Data
The final step is to transform the preprocessed data. The specific algorithm you are working with and the knowledge of the problem domain will influence this step, and you will very likely have to revisit different transformations of your preprocessed data as you work on your problem.
Three common data transformations are scaling, attribute decompositions and attribute aggregations. This step is also referred to as feature engineering.
- Scaling: The preprocessed data may contain attributes with a mixture of scales for various quantities such as dollars, kilograms and sales volume. Many machine learning methods prefer data attributes to have the same scale, such as between 0 and 1 for the smallest and largest values of a given feature. Consider any feature scaling you may need to perform (see the sketch after this list).
- Decomposition: There may be features that represent a complex concept that may be more useful to a machine learning method when split into their constituent parts. An example is a date that may have day and time components that in turn could be split out further. Perhaps only the hour of day is relevant to the problem being solved. Consider what feature decompositions you can perform.
- Aggregation: There may be features that can be aggregated into a single feature that would be more meaningful to the problem you are trying to solve. For example, there may be a data instance for each time a customer logged into a system, which could be aggregated into a count of the number of logins, allowing the additional instances to be discarded. Consider what feature aggregations you could perform.
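As a rough sketch of the scaling transform, using scikit-learn's MinMaxScaler on made-up numeric attributes with very different scales:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical preprocessed data with a mixture of scales
df = pd.DataFrame({
    "dollars": [100.0, 2500.0, 40.0],
    "kilograms": [1.2, 0.3, 5.5],
})

# Rescale each attribute to the range [0, 1]
scaler = MinMaxScaler()
scaled = scaler.fit_transform(df)
print(scaled)
```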
You can spend a lot of time engineering features from your data and it can be very beneficial to the performance of an algorithm. Start small and build on the skills you learn.
Summary
In this post you learned the essence of data preparation for machine learning. You discovered a three-step framework for data preparation and tactics for each step:
- Step 1: Data Selection: Consider what data is available, what data is missing and what data can be removed.
- Step 2: Data Preprocessing: Organize your selected data by formatting, cleaning and sampling from it.
- Step 3: Data Transformation: Transform preprocessed data ready for machine learning by engineering features using scaling, attribute decomposition and attribute aggregation.
Data preparation is a large subject that can involve a lot of iterations, exploration and analysis. Getting good at data preparation will make you a master at machine learning. For now, just consider the questions raised in this post when preparing data and always be looking for clearer ways of representing the problem you are trying to solve.
Resources
If you are looking to dive deeper into this subject, you can learn more in the resources below.
- From Data Mining to Knowledge Discovery in Databases, 1996
- Data Analysis with Open Source Tools, Part 1
- Machine Learning for Hackers, Chapter 2: Data Exploration
- Data Mining: Practical Machine Learning Tools and Techniques, Chapter 7: Transformations: Engineering the input and output
Do you have some data preparation process tips and tricks? Please leave a comment and share your experiences.
I enjoyed your concise overview, Jason.
Perhaps you can delve a little into the dangers/opportunities in your Step 2: Cleaning stage.
It has been my experience that those data you may want to remove contain the more interesting data to the client (perhaps only after the requested client questions are addressed).
Fraser
Hi Fraser, good question.
Indeed, it can be difficult to know if data is bad, and you may not always have a domain expert at hand to comment. Sometimes it is obvious though, like 0 values that are impossible in the domain, such as a blood pressure of 0. I’ve also seen -999 used to signal “not provided”. In these cases we can mark attributes as missing and think about possible rules for imputing if we so desire.
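For instance, a small sketch of that idea, assuming a pandas column where 0 and -999 really mean "not provided" and the median is an acceptable imputation rule:

```python
import numpy as np
import pandas as pd

# Hypothetical column where 0 and -999 encode "not provided"
df = pd.DataFrame({"blood_pressure": [120.0, 0.0, -999.0, 135.0]})

# Mark impossible values as missing, then impute with the median of observed values
df["blood_pressure"] = df["blood_pressure"].replace([0.0, -999.0], np.nan)
df["blood_pressure"] = df["blood_pressure"].fillna(df["blood_pressure"].median())
print(df)
```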
Where do you draw the line though? Should severe outliers be marked as missing? Sometimes. I like to try a lot of stuff; for example, I would try removing instances with large outliers in one dimension and see what that did to my models, and I’d also try removing instances with missing values and try models on variations of the data with imputed values. Almost always, modeling ground truth is not the goal; there are performance metrics like classification accuracy or AUC that are being optimized.
You’re right though, sometimes the broken data can represent something very interesting – anomalies that signal something useful in and of themselves in the domain.
Yes, indeed. Is it an outlier, or a poorly encoded result, or a result with atypical calibration, or does it represent a distinct and real combination of natural conditions …
I work a lot with chemical concentration data in water and sediment and I run into censored data routinely. Mostly of the type “less than” .01 pg/L, but occasionally the other side, say “greater than” 1000 mg/L. Censored data of this particular type is handled differently by different people and, as you suggest, values need to be imputed (with an appropriate sampling distribution) if the rest of a multi-parameter time-sample result is to remain in the analysis.
For me this is what makes data analysis fun.
I just arrived at your site, and I see so many articles of interest. Thank you for making this available.
Fraser
Insightful comments Fraser, thanks. Reach out any time if you want kick around some ideas on a tough problem.
Thanks, Jason. I will do that. Fraser
I like “Getting good at data preparation will make you a master at machine learning”. This is indeed a good post.
Thanks Dr Jason.
Can you please share the link of this article
I believe Surajit was quoting from this article.
Great set of articles!
One issue that I run into is that the data sometimes lacks semantic integrity. This is not an issue of missing values, but just having improper values. When values are of different data types within a column, it is easy to detect and fix.
However, when the data type is the same but the meaning changes, then it’s much more difficult. For example, I’ve seen sales data where a column named ‘marketing plan code’ would have string data type denoting marketing plan codes, except in a few cases where the users put in vendor codes because they didn’t have any other field to record that information.
Any insights and anecdotes about this issue?
Hey, can you send me data for CFST columns under axial load in machine learning?
Jason, does it affect an algorithm if, during the preparation process, I transform the list of rows (like tables, where the key column repeats) into a pivot table, where the key column shows once and a lot of columns (say hundreds) have partial sums or counts for the different conditions (let's say sales of January in one column, sales of February in a second column and so on)?
Would it cause multicollinearity, as some columns could be aggregated into one?
Thank you for your valuable information for this important area Machine learning, started with data structure and going further to build it complete.
How can I make one attribute the decision attribute in the dataset, so that the classification model depends on the selected attribute?
Hi Ali,
Different algorithms will choose which variables to use and how to use them. You can force a model to use one variable by deleting all of the other variables.
Hi Jason,
Appreciate the effort you put into the great article.
I am currently working on a project on a government data set to find if an entity (person or an individual) was involved in a positive or a negative way. I took a flat file containing some test data and prepared the code to perform sentiment analysis using the Naive Bayes algorithm with the NLTK Python modules.
– In most cases we have a defined trained data set tagged as ‘positive’ or ‘negative’ (e.g movie reviews, twitter data set). In my case there is no existing trained government data set.
– The training data is available but I need to categorize the training data set as ‘positive’ or ‘negative’.
– My question here is, how do we go about classifying my government data as ‘positive’ or ‘negative’.
I’m looking forward on your advice on how to categorize my government training data as positive or negative. This is very important for me to get my sentiment analysis with best possible accuracy.
Hi Avin, I would advise you locate a subject matter expert to prepare a high-quality training dataset for you (manual classifications).
What is the best way to process large amount of data for machine learning?
Hi Mayur,
That depends on the problem and how the data is currently represented and stored. No silver bullets, sorry.
My current and first ML project has natural language as its input and I spent a huge chunk of time on preparing it.
I stopped once the data reached a “reasonable” level so that I could continue with the project, i.e. I’m dropping the hard to parse cases and might return to them later once the whole pipeline is ready for testing.
Keeping the 80/20 rule in mind.
Nice work Ivan.
Thank you for your valuables posts, my question is how to apply machine learning to Cancer Registry data set?
I have two datasets:
1. Dataset1 :
About 18K observation and 22 variables: the five years data set includes
Demographic, Diagnoses, and treatments,
2. Dataset2:
aggregate vitals based on race grouping on: regions, stages, vitals
Thank you for your help
Ted
Great question Ted, see this process to help you work through your problem:
https://machinelearningmastery.com/start-here/#process
Hi Jason, thank you for the great effort and knowledge put into all these posts!
My question will probably be silly, but since I’m a complete n00b I’ll do it just the same.
Data prep, feature analysis and engineering will get you a set of data in a format completely different from the original data. These data transformation steps may be very hard to do automatically. My problem is related to classification; I am using a NN which may not be the best choice, but hey, humor me 😉
So, cutting short. Originally, I get raw data, I prep and transform it. The transformed data will train and test “my” NN. Now, the “real world” will challenge my model with raw data, presumably with the same format as my original training set, minus the classification ( of course…). Now, I suppose I’ll have to go through the same data transformation of the data before the trained model can be fed with it. Right? Doesn’t this mean extra care must be taken to make the data transformation process (at least ideally) automatic itself?
Sorry for the long question, hope to hear your thoughts on these points. And thank you once again!
Very good question José!
Yes. Any data transformation performed on data used to fit your model must be performed on data when making predictions.
This means we need a very clear recipe for this transform, ideally automatic, and in the case of regression problems it must also be reversible so that we can convert predictions back into their original scale for use.
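One way to keep that recipe automatic is to wrap the input transforms and the model in a pipeline, and to keep the target scaler around so predictions can be converted back; a sketch with hypothetical data:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression

# Hypothetical training data
X_train = np.array([[1.0], [2.0], [3.0]])
y_train = np.array([10.0, 20.0, 30.0])

# Input transforms fit on training data are re-applied automatically at prediction time
model = Pipeline([("scale", MinMaxScaler()), ("lr", LinearRegression())])

# Scale the target too, keeping the scaler so predictions can be inverted
y_scaler = MinMaxScaler()
y_scaled = y_scaler.fit_transform(y_train.reshape(-1, 1)).ravel()
model.fit(X_train, y_scaled)

# New raw data goes through the same pipeline; predictions return to the original scale
yhat_scaled = model.predict(np.array([[2.5]]))
yhat = y_scaler.inverse_transform(yhat_scaled.reshape(-1, 1))
print(yhat)
```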
Thank you very much Jason. And keep up the excellent job you are doing!
Thanks José.
What is the best book to learn how to prepare datasets for machine learning models?
I would like to suggest that within your topic of “Select Data” you offer a bit more explicit guidance on assessing and characterizing data quality. It’s cliché, but garbage-in-garbage-out is a fundamental concept. I so often come across advanced analytic initiatives that have started out with assumptions about the quality of “selected” data and moved on, only to find out months later that everything has to reset to basic principles of data acquisition and management.
What transforms have been applied to the source data by systems that precede the database you are selecting from?
If sensor data is involved, what formatting, precision, transformations, signal processing, etc. have been applied?
If data is being acquired from multiple, disparate systems what formatting, scale, and precision differences are being masked by the database system you are selecting from?
Just a few examples.
Really good points Eric.
It’s hard to give general advice on data prep because of all of the detail in specific data matters.
It’s not like algorithms where you can say “try everything and see what works on your data”.
How do we save our preprocessed data into a database, and how do we train a model using this data?
You could save it to CSV file. Pandas and Python standard libraries offer functions to do this.
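For example, with pandas (the file name is made up):

```python
import pandas as pd

# df is the preprocessed data; write it out, then read it back when training
df = pd.DataFrame({"feature": [1, 2, 3], "target": [0, 1, 0]})
df.to_csv("preprocessed.csv", index=False)
train_df = pd.read_csv("preprocessed.csv")
```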
Is CSV the better way to save sentiment data, or should we use some NoSQL database to store it?
There is no best way. Perhaps choose an approach that will work best for your project.
Sir, please suggest an approach for Twitter sentiment analysis using deep learning.
Perhaps this tutorial will get you started:
https://machinelearningmastery.com/predict-sentiment-movie-reviews-using-deep-learning/
Hi Jason,
when I go through UCI Machine Learning Repository following doubts have occured:
1. In the bike sharing dataset, I saw two .csv files (one is day.csv and another is hour.csv). So, I can't understand how to make this dataset suitable for applying a machine learning algorithm to build a predictive model by splitting the whole dataset into train and test sets.
2. In this repository, I saw dataset characteristics listed as multivariate and univariate; what does this mean?
3. In this repository, whenever I explore any of the datasets, there is no statement present to mention which feature is to be predicted by applying machine learning algorithms.
4. What if both numeric types (float as well as integer) of values exist among the features in a dataset? Should we scale the feature values (integer) to float in order to get a good predictive model?
Please help…….
Each dataset is different. You will need to take care and discover how to prepare each one.
Univariate means one variable, feature or column (all the same thing), multivariate means many.
You might have to check the data or read the associated paper.
Depending on the algorithm used, you might need to convert all features to numeric.
So, this means that we have to convert the integer values of all features in a dataset into float values in order to increase the accuracy of our model? Correct me if I am wrong.
What do you think: if there are two .csv files in a dataset, how should we prepare this type of dataset? Please recommend a way to do this.
Thanks…
Perhaps, it depends on the algorithms being used. I would recommend trying it.
If there are two files, I would recommend combining the data into one file.
Hello Jason,
When I saw the bike sharing dataset in the UCI Machine Learning Repository, UCI mentions its dataset characteristics as univariate despite it having a total of 16 features (columns). Why is this so? Shouldn't it be multivariate instead?
Secondly, as you recommended joining the two .csv files into one: when I use this dataset, I noticed that both of the files have the same features (except hr (hour), which is available only in the hour.csv file, not in day.csv) with different values in each of the shared features. In this kind of situation, if I join both files, values become redundant and even features as well. So what do you recommend, how should I prepare my dataset in this type of situation?
Thanks for your quick response to previous question….
Perhaps they define univariate in terms of output variable only.
Sorry, perhaps I don’t have enough information to give you good advice on how to prepare your data.
Thanks for your help on this topic, but please, whenever you come to know how to prepare this type of dataset, tell me or make a recommendation at that time.
Thank you so much for guiding me how to prepare any dataset by creating this amazing post.
What happens if I use a data that does not have a normal distribution?
Are some ML algorithms only suitable for data that are assumed to be normal?
How can I identify whether an algorithm works with normal/non-normal or just normal data?
In practice, you can often get good results by breaking these rules.
I would recommend testing a suite of algorithms on your data and double down on what seems to be working.
Thanks for your help. Can you please suggest the best way to deal with a dataset that contains a lot of text columns? Also, the values of these columns have a huge set of different values.
I have some ideas here that may help:
https://machinelearningmastery.com/start-here/#nlp
Hi Dr. Jason,
Thank you for your work, I really appreciate your efforts in helping us.
I am a BIG fan.
First of all, I'm planning to use an LSTM-RNN on a multivariate time series problem.
I’m beginning my studies in machine learning and probably my question is very silly, but to me is a big issue.
I have a time series database with 221 features, not yet framed as supervised learning, which I would like to transform into an input with 6 up to 10 features. After this, I would like to frame the output as supervised with up to 10 time steps of 1 feature.
I have preprocessed my database by cleaning, detrending, normalizing, correlating and clustering by affinity. I got 27 clusters from my 221 features.
My question is:
Now I think I can choose my input data, but how? Should I pick features from the same cluster that has affinity with my output, or should I pick from other clusters that don't have affinity with my output?
Thanks for your time, sorry about the big text.
Perhaps try a few methods and see which is easier to model.
Sorry, but I didn’t get the answer.
Good article, Jason.
Another data processing technique that is commonly used today, particularly in computer vision, is data augmentation where basically we introduce small changes such as rotations, coloring, and translations to images in order to emulate different conditions.
Here are some examples:
https://machinelearningmastery.com/image-augmentation-deep-learning-keras/
hello,
Actually, I am new to ML. I want to know, when we apply data preprocessing on a dataset, whether we have to change the existing dataset or create a new dataset for the modified data. That is, after preprocessing is done, will we have two datasets, the actual dataset and the preprocessed dataset, or will there be only one dataset with the preprocessed data?
Create and save a new dataset or views on your raw data.
Hi sir, actually I want to prepare a dataset for a speaker recognition project. For that I would like to prepare recorded audio data. Will you please mention the best procedure for that?
Sorry, I don’t have material on preparing audio data. I hope to cover it in the future.
Hi,
I am Vikash. I want to know about the assumptions, meaning the pre-validation and post-validation of data. For example, for linear regression we have pre-validation or diagnosis like:
1. Normal distribution of data
2. No multicollinearity
3. Linear relationship
4. Missing values
For post-validation or diagnosis, after creating the linear regression model, there are:
1. Normality of errors
2. Homoscedasticity
3. Outliers and leverages
4. Autocorrelation
These are the assumptions for linear regression. What about the assumptions of the rest of the algorithms? Can you guide me on the assumptions for other algorithms?
Thank you.
Often you can get good results or even better results if you ignore these types of assumptions. The reason is that in predictive modeling, model skill is more important than theoretical correctness.
I have a CSV file with timestamp, hostname, metric (CPU, MEM, PAGESCAN) and metric value (e.g. 0.7). I need to find the increase in metric value due to CPU or MEM or PAGESCAN.
If CPU is increased, then which host is utilizing the CPU the most? Like that, finding the root cause.
The dataset contains both categorical values and numerical values. Do I need to convert the categorical data like hostname and metric to numerical?
Do I need to do data transformation?
What machine learning techniques will predict the root cause well? Which algorithm?
I am trying to use spark ML.
Any suggestions.
Thanks
Yes, I would recommend converting categorical data to integer or even one hot encoding prior to modeling.
I would recommend testing a suite of methods on your data to see what works best. Then double down on that.
Sorry, I don’t have examples for Spark.
Hello,
I’m new in machine learning, so I have a question. Does input data have to be the same size?
I mean, I have 10 matrices with data, but the matrices have sizes of, for example, [60, 120], [60, 460], [60, 340] and so on. I want to use the TensorFlow engine.
I would be grateful if you could answer my question.
Regards!
Yes, generally data must have the same shape.
Hello Jason,
I am asking for your documentation support for my project “Techniques for preparing data for data science projects”. I have read the various comments, but my project requires more detail on the different methods. I am asked to:
Produce a state of the art of the techniques and tools for data preparation, and group the approaches according to the methods and techniques used.
Summarize the advantages and disadvantages of the most relevant methods from the state of the art, and propose a process for data preparation.
I am really in need of guidance and documentation. Thank you.
Hi Adama, I think if you’re having trouble with your homework project that my best advice is to talk your professor and teaching staff. You are paying them to teach you.
Data preparation is really specific to a given type of data and predictive modeling problem. Perhaps you can focus your attention more to make the project easier.
Hi Jason,
I really like the site and there are a lot of really useful things here. I'm presented at the moment with a problem.
I'm attempting to classify a number of scanned PDFs based on the machine-read text within them, and I've got to the point where I have a relatively large test set.
The documents themselves have extremely predictable sentences which tie in very closely with the classification; however, all I've managed to really find on this is using the BoW model.
Would using a neural net to achieve this be a viable option? Also, I'm having some problems with the pre-processing of the data. I'm not 100% sure on the best way to remove '\n' characters and other punctuation from the large text strings.
any help or pointers would be greatly appreciated.
Many thanks,
Cheyne
It is hard for me to tell. I would generally recommend testing a suite of methods to see what works best for your specific data.
Let me know how you go.
Hi Jason, I like a lot your way of explaining machine learning.
I am working on combining machine learning techniques, and my question is: are there ML problems where there are enough datasets to validate my work?
Here are some places to get datasets:
https://machinelearningmastery.com/faq/single-faq/where-can-i-get-a-dataset-on-___
Hi DR Jason,
It is a very good guide.
I have a question. I am writing a neural network from scratch (backpropagation algorithm) using the sigmoid function, so I have scaled my data into the range between -1 and 1 (]-1,1[), but the sigmoid function gives results between 0 and 1. So I would like to know if I must scale my data into the range between 0 and 1 ([0,1]) for the sigmoid function? Or would Dr Jason please make clear whether there is a recommended scale of data when using a sigmoid function, or what the recommended scale for the sigmoid function is?
Best Regards.
Perhaps this post will help:
https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/
Thank you.
Best regards.
Hi DR Jason,
I’ve read the recommended post was really helpful thank you so much.
Best regards.
I’m glad it helped.
Hi Dear Jason,
Thanks for this overview. I would like to know in which format I should prepare my data for Non-dominated Sorting Genetic Algorithm 2 in MATLAB. Thanks!
I believe that is an optimization algorithm, not a supervised learning algorithm. I don’t know what you mean exactly?
Hi Jason, I am working with a dataset that has a lot of similar data items (e.g. mobile phone data). So, I would like to do diversity-based sampling. What are the best way and tools to do it?
Perhaps clustering and filter based on distance to cluster centroids?
Perhaps check the literature?
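As a rough sketch of the clustering idea, assuming a numeric feature matrix and an arbitrary choice of 10 clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix with many similar rows
X = np.random.rand(1000, 5)

# Cluster the data, then keep the row closest to each centroid as a diverse subset
kmeans = KMeans(n_clusters=10, n_init=10, random_state=1).fit(X)
distances = kmeans.transform(X)                # distance of each row to each centroid
representative_idx = distances.argmin(axis=0)  # closest row for each cluster
diverse_sample = X[representative_idx]
print(diverse_sample.shape)
```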
Hi Jason,
I’m trying to create a classification LSTM model. I have three categorical variables apart from my predictor variable. I have label-encoded all three variables. Do I need to scale the variables, or can I use them as is?
Try both and see what works best.
Hi Jason
I need to fetch questions from question-and-answer datasets using one of the ML algorithms. Can you tell me which algorithm is best, and the procedure?
This is a common question that I answer here:
https://machinelearningmastery.com/faq/single-faq/what-algorithm-config-should-i-use
Dear Jason,
Thank you first of all for this amazing website.
I am working on a sentiment analysis application for my MSc and I am pretty much a beginner in this field.
I have collected the data from Twitter, but I want to know: shall I clean the data before or after annotation? Will the order make a difference?
A good place to get started is here:
https://machinelearningmastery.com/start-here/#nlp
Dear Jason,
I have some unstructured JSON files that I need to preprocess as input to my machine learning algorithm. Please, any help with how to create feature vectors from unstructured JSON files?
You might have to write some custom code.
Hello Jason, thanks for your tutorial. I have a question about the softmax function: the derivation of this function.
Perhaps this will help:
https://en.wikipedia.org/wiki/Softmax_function
Hi Jason,
Thanks again for a good read. In cases when we don’t have an inherent category/class backed up by literature, do you think it’s okay to use the mean value as a cut-off for classes? For example, say we’re trying to separate high performers and low performers in a workplace based on a survey outcome. That survey doesn’t have an exact cut-off saying anybody who gets above 10 out of 20 is a high performer and below is a low performer. One thing that I guess we could do is use clustering first to divide the dataset into two clusters and use those as classes. Other than that, would it be okay to calculate the mean score among all the participants and then use that as a cut-off to divide the sample into high and low classes, and then use that for train/test? Does that make sense, you think? This assumes that the data is normally distributed; if not, a percentile-based approach might be good. Anyway, do you think it is okay to create classes based on the average score? If not, what might be some other ways to divide the classes based on a numerical value if there’s no inherent category? The reason this comes up is that I’m trying to convert a regression problem to a classification problem, but I am not sure if classifying based on the mean is a good idea.
Thanks!
Your approach sounds like a good start.
I recommend testing a few different framings of the problem and discovering what works well/best for your specific dataset.
Hi Jason,
How do you deal with missing data in your dataset, do you just make them NA? I am using movie data and the variable that has missing data is the actor's name. I put NA in this variable and it made 2829 NAs out of 14800 records. I believe this could be a problem but wasn't sure how to address it.
Thanks
Jennifer
You have many options, such as removing records with missing data (columns or rows), imputing values via the mean/median or via a model, or marking them with a special value and ignoring them during modeling.
Perhaps try modeling with a few variations and see what works best for your specific data.
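A brief sketch of two of those options on made-up movie data:

```python
import pandas as pd

# Hypothetical movie data with missing actor names
movies = pd.DataFrame({"title": ["A", "B", "C"], "actors": ["X", None, "Y"]})

# Option 1: remove records with missing values
dropped = movies.dropna(subset=["actors"])

# Option 2: mark with a special value and treat "unknown" as its own category
filled = movies.fillna({"actors": "unknown"})
```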
Hi Jason, thank you very much for your amazing website, it was so useful for me.
I have a training dataset (labeled) that includes many instances; I'll use it with classification methods. Now I want to know if I can use a test dataset that includes just one instance (of course unlabeled)?
thanks
Halima
A test set with one instance does not make sense.
Alternately, if you just want to make a prediction on new data, then perhaps this will help you:
https://machinelearningmastery.com/make-predictions-scikit-learn/
Thanks a lot. Exactly, I want to make a prediction on new data, but in Weka, not Python.
How can I do this in Weka?
Here is an example:
https://machinelearningmastery.com/save-machine-learning-model-make-predictions-weka/
Hi Jason,
1. I have the application dataset in JSON format. I converted it to a flat CSV to work in R because it has nested fields. Is that a right approach?
2. In the data, some records have more than one applicant, but at most two or three applicants; few have more than 3. In that case, in the flat file, the variable columns that do not apply beyond 3 applicants are mostly NULL, as only a few records have them. May I know how to handle that, please? (Example record information is: Applicant Names, Phone, Company, Salary, Asset Amount, Age, Gender, ....)
Thanks in Advance
Sounds like the right approach.
Perhaps you can mark the values as missing:
https://machinelearningmastery.com/faq/single-faq/how-do-i-handle-missing-data
I am on the right path then, thanks Jason.
Hi Dr.
I am an MSc student working on machine learning, specifically convolutional neural networks, to predict phenotypes using genomic data. My dataset is coded {0 1 2}. My questions are:
1. Is it possible to use the same coding to make predictions, or am I supposed to transform it to {0 1}?
2. If I am to transform it, how can I go about it?
Yes, you can model a problem as multi-class classification, perhaps this will help:
https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/
On the point of Data Cleaning:
“Additionally, there may be sensitive information in some of the attributes and these attributes may need to be anonymized or removed from the data entirely.”
We have ML running on a data lake containing raw system data.
How common is the issue of needing to provide anonymised datasets for building data models, dependent on the business function accessing them and their identified 'Legal Basis for Processing', for example under GDPR?
Case by case really, based on the type of consent users gave.
The world is very different now compared to the 90s/2000s when data was scarce and it was a free for all.
Hi Jason, how do I preprocess a single test record so that it follows the same procedure used while training the model?
If you have a pipeline, it will perform the preparation for you.
Hi Dr.Jason
Can you explain the decomposition and aggregation transformations? I am not getting a clear insight into them.
Thanks
Sure.
Decomposition: a date can be split into day/month/year.
Aggregation: customer transactions can be aggregated to give sums and averages.
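A small pandas sketch of both ideas (the column names and values are made up):

```python
import pandas as pd

# Decomposition: split a date into day, month and year components
df = pd.DataFrame({"date": pd.to_datetime(["2020-01-15", "2020-02-03"])})
df["day"] = df["date"].dt.day
df["month"] = df["date"].dt.month
df["year"] = df["date"].dt.year

# Aggregation: per-transaction records rolled up into sums and averages per customer
tx = pd.DataFrame({"customer": [1, 1, 2], "amount": [10.0, 30.0, 5.0]})
summary = tx.groupby("customer")["amount"].agg(["sum", "mean"])
print(summary)
```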
Hy Jason,
Thank you for this helpful post.
I have a question about data preparation. I work on a text-to-code task and I have a CSV file that contains my dataset; it contains 2 columns: Text and Code.
In order to use the data in training, I encode X-train into integers and pad it to the max length. But for Y-train, how can I process it? In the same way as X-train?
Same idea: encode symbols as integers and zero-pad to a fixed length. Then map the integers back to symbols to give the final output.
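For example, a rough sketch with the Keras pad_sequences utility (the vocabulary and sequences here are hypothetical):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical code-token vocabulary and two target sequences
vocab = {"PAD": 0, "print": 1, "(": 2, ")": 3, "x": 4}
y_train = [["print", "(", "x", ")"], ["x"]]

# Encode symbols as integers, then zero-pad to a fixed length
y_encoded = [[vocab[tok] for tok in seq] for seq in y_train]
y_padded = pad_sequences(y_encoded, maxlen=6, padding="post", value=0)

# At prediction time, map integers back to symbols for the final output
inv_vocab = {i: tok for tok, i in vocab.items()}
decoded = [inv_vocab[i] for i in y_padded[0] if i != 0]
print(y_padded, decoded)
```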
ah okay, thanks again for helping !!
No problem.
Hello Jason, first of all I am really thankful for the information which you share with us. I want to ask you something; I am a beginner in machine learning.
How can we convert the genomes into features in order to feed a Machine Learning algorithm?
And in case of having a heavily unbalanced training set, how would it affect my results? How can I solve it?
Thank you very much
I don’t know about representing genomes, sorry. Perhaps check the literature to see how this is done?
There are many approaches to addressing class imbalance, perhaps start here:
https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/
Thank you very much I will keep searching….
Hi Jason, I'm really enjoying your books and emails about machine learning. I've started a project for my final year at university on face recognition but am really struggling to source a large database of photos to train a network from scratch. How many photos of each person would you recommend for accuracy? And do you know a source where I can access face images?
Kind regards
Michael
Thanks!
Perhaps test the sensitivity of the model to the number of faces for each person?
Also, for faces, use a facenet or vggface2 to get the embedding, then another model to do the actual classification.
Hi Jason,
I have a doubt: if I preprocess my training data, how can I preprocess my test data? For example, if I have different labels to encode in the test data, what should I do?
Test data must be prepared using the same methods as were used to prepare the training data.
Thank you for your reply Jason,
If both train and test have different labels for a common column,
e.g. col1 in train has unique values of a, b, c
and col1 in test has unique values of a, b, d,
how can this be encoded? I receive an unknown label error.
If you know the extent of categorical values beforehand, you can specify them to the OneHotEncoder so that it can handle all possible cases.
Or if you don't, you can set "handle_unknown" to "ignore":
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
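A minimal sketch of both options, using the a/b/c and a/b/d values from the example above:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train_col = np.array([["a"], ["b"], ["c"]])
test_col = np.array([["a"], ["b"], ["d"]])

# Option 1: specify the full set of categories up front if it is known
enc = OneHotEncoder(categories=[["a", "b", "c", "d"]])
enc.fit(train_col)
print(enc.transform(test_col).toarray())

# Option 2: ignore unseen labels so they are encoded as all zeros
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train_col)
print(enc.transform(test_col).toarray())
```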
Thanks for your reply Jason, it was informative!!!
You’re welcome.
What if I have a high-cardinality (many unique values) categorical variable that needs to be encoded?
Which is the best encoding method to use? Can you help with this!
I recommend this tutorial as a first step:
https://machinelearningmastery.com/how-to-prepare-categorical-data-for-deep-learning-in-python/
Then this:
https://machinelearningmastery.com/faq/single-faq/how-do-i-handle-a-large-number-of-categories
But those hashing and other techniques are applicable only to text data. What if I have data for a numerical classification or clustering analysis?
The same methods can be used. Words are just many categories.
Thank you !!!
You’re welcome.
Hello, Dr. Jason,
I have a question: can sampling also be referred to as feature selection/extraction? I have seen some papers where they have taken some characteristics of a feature (like AUC, maximum value, curve-fitting coefficients) instead of the original data.
I have been working on classification of sensor responses from 9 sensors taken for 240 seconds, i.e. for each sample of my experiment I have a data matrix of 240 observations and 9 features (240*9). Thereafter, I selected some representative points (like the maximum value and the 75th, 50th, and 25th percentile values) to make the system fast, while keeping the performance of the classification on par with using the entire dataset.
How do I present this work as a section in a paper (feature extraction or sampling)?
Regards
This sounds like feature engineering:
https://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/
Thank you so much, your explanations are really helpful.
Regards
You’re welcome.
Sir, I still have a doubt with my earlier question in this thread. The original dataset had a dimension of 240*9 for each sample, and for 46 samples the size of the entire dataset was 46*(240*9), i.e. approx. 11000*9. After doing the said feature engineering, my dataset is reduced to 188*9. I have applied a classifier algorithm to the new dataset and achieved good accuracy.
My question is whether this size difference has to be accounted for. The size of the training set is small now and so is the testing set. Since I did use bagging KNN and AdaBoost Decision tree classifier (decision stumps) for the original dataset, the same has been applied here also.
Sir, I have to say that you are the only accessible expert in the field and I am indebted for your guidance.
Regards
Not sure I can help. Sounds like you need to debug your data preparation procedures to understand what they are doing.
Thankyou
You’re welcome.
Hi Jason. Do you have an article that describes how to prepare normalized data for machine learning? If a parent observation has many child values for a given feature, how do we represent that in a single row? If we just assign each value to a different column, the model would consider each column as a different feature, and each observation may have its values in a different order than the other observations. Thanks for your insight.
Yes many, perhaps start here:
https://machinelearningmastery.com/faq/single-faq/when-should-i-standardize-and-normalize-data
Yes, that might be a good first thing to try, other ideas are here:
https://machinelearningmastery.com/how-to-prepare-categorical-data-for-deep-learning-in-python/
Thanks for your response, Jason. I think I was using the wrong terminology in my question. By normalized, I meant in terms of relational database structure. In other words, a categorical feature where an observation can have multiple instances of the different category values. It seems that the solution would still be to use dummy encoding but in this case an observation could have a 1 in more than one column. I just need to find the Pandas method that can take multiple tables into account during dummy encoding.
Perhaps de-normalize the data to one row per example prior to data prep?
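A small sketch of that de-normalization with pandas (table and column names are made up): the child table is dummy-encoded and then collapsed to one row per parent, so an observation can legitimately have a 1 in more than one category column.

```python
import pandas as pd

# Hypothetical child table: one row per (parent, category value) pair
child = pd.DataFrame({"parent_id": [1, 1, 2], "genre": ["action", "comedy", "drama"]})

# Dummy-encode the categorical column, then collapse to one row per parent
dummies = pd.get_dummies(child["genre"])
dummies["parent_id"] = child["parent_id"]
multi_hot = dummies.groupby("parent_id").max()
print(multi_hot)
```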
Sir, when we have to perform outlier detection, please upload a post on how to remove outliers in multivariate classification and regression using Python.
Perhaps start here:
https://machinelearningmastery.com/how-to-use-statistics-to-identify-outliers-in-data/
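As a rough univariate sketch of the interquartile range rule discussed in that post (the same mask can be applied column by column; the data here is made up):

```python
import numpy as np

# Hypothetical numeric column with one extreme value
x = np.array([10.0, 12.0, 11.0, 9.0, 13.0, 95.0])

# Flag values beyond 1.5 * IQR from the quartiles as outliers
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
mask = (x >= lower) & (x <= upper)
print(x[mask])
```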
My dataset contains a derived attribute which is calculated by subtracting two attributes, so is it good, or should I remove this attribute while building the model?
Compare model performance with and without the variable.
Hello, I still wonder how I can work with my data. I prepared a data file in Excel to use in a deep learning model. I don't know how I can label all the features as 'data' and the class feature as 'target'!
please help me with this issue.
Best
This will help:
https://machinelearningmastery.com/how-to-define-your-machine-learning-problem/
Thank you for your response, but it didn't have any notes about my question! I've prepared a database regarding the defined problem.
The linked article will help you to identify the input and output parts of your predictive modeling problem.
Thank you for all your articles, very helpful 🙂
You’re welcome.
Hi Jason,
I don't know if this question has been posed, but when in this process do you recommend splitting the dataset into training and validation sets, in terms of avoiding data leakage?
Good question, split first.
Also, this will help:
https://machinelearningmastery.com/difference-test-validation-datasets/
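A brief sketch of that ordering with made-up data: split first, then fit the transform on the training set only and apply it to both splits.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical data
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, 100)

# Split first so the validation data never influences the transform
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# Fit the scaler on the training data only, then apply it to both sets
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
```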
Hi Jason
I found your article very interesting. I was making a model for the famous 'Titanic' dataset. I found out that even after using XGBoost I wasn't getting accuracy above 77%. I realised that this may be because of poorly prepared data, so I began searching for the best ways to prepare data. As I am new to this field, I didn't know about decomposition and aggregation.
My question is how to tell when to decompose/aggregate a feature? Also do you have any link to articles or tutorials about data preparation, it would be a great help.
Piyush
Good question – try it and see if it improves performance. If it does keep it and continue to try other things.
Hi Jason,
Very clear and nice post! I have two confused points:
1. What is the relationship between data preprocessing and feature engineering? From your post, if I understand correctly, it seems that data preprocessing has a bigger scope than feature engineering, and feature engineering is included in data preprocessing; am I right?
2. I found on some websites that some people first split the data into training and test datasets, and then do data preprocessing (e.g. scaling and centering) separately. I wonder whether this is the correct order. Because I understood that we should first do data preprocessing (including feature engineering) on the entire dataset, and then do the data splitting, right?
Thank you in advance and look forward to your answers and suggestions!
Thanks!
Great questions!
Some refer to data prep as "feature engineering". Some refer to feature engineering as a subset of data prep focused on creating new inputs from existing inputs. I like the latter definition.
It is correct to fit data prep methods on the training set, then apply them to the train, test, validation and any other datasets. This is to avoid data leakage.
Thank you Jason for your reply! So to avoid data leakage, we should first split data to training and test dataset; then do data preprocessing (e.g. scaling, centering) on training, and pass this preprocessing method and apply on test dataset, right?
Yes, or if using cross-validation, do data prep on the train folds and apply to train/test folds.
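A sketch of how a pipeline handles this automatically inside cross-validation (the data is made up):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, 100)

# The scaler is re-fit on the training folds only within each CV split
pipeline = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression())])
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean())
```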
Got it, many thanks!
You’re welcome.
Hi Jason!
Very good article! I am working on a project measuring pressure from a device. If I get the raw data from that device in binary form (a lot of values for one measurement), do you think I should pre-process this data? I mean, like get a number for the total pressure or so, or can I use that raw data as one feature (column) to store in the database to be used as training data?
thank you in advance! 🙂
Thanks!
Perhaps try fitting a model on the raw data and compare results with different data preparation methods to see if you can lift the performance of the model.
Thank you sir for such a nice content
I always wait for your helpful emails, and your content gives me a step-by-step process for machine learning.
Thank you!
I do relate to running machine learning algorithms on subsets of data while building models. I remember my first Machine learning project. I was following a tutorial, and I used a dataset with 100M rows. Took me forever to complete the tutorial 😛
Thanks.