
Feature Selection in Python with Scikit-Learn

Not all data attributes are created equal. More is not always better when it comes to attributes or columns in your dataset.

In this post you will discover how to select attributes in your data before creating a machine learning model using the scikit-learn library.

Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Update: For a more recent tutorial on feature selection in Python see the post:


Cut Down on Your Options with Feature Selection
Photo by Josh Friedman, some rights reserved

Select Features

Feature selection is a process where you automatically select those features in your data that contribute most to the prediction variable or output in which you are interested.

Having too many irrelevant features in your data can decrease the accuracy of the models. Three benefits of performing feature selection before modeling your data are:

  • Reduces Overfitting: Less redundant data means less opportunity to make decisions based on noise.
  • Improves Accuracy: Less misleading data means modeling accuracy improves.
  • Reduces Training Time: Less data means that algorithms train faster.

Two different feature selection methods provided by the scikit-learn Python library are Recursive Feature Elimination and feature importance ranking.

Recursive Feature Elimination

The Recursive Feature Elimination (RFE) method is a feature selection approach. It works by recursively removing attributes and building a model on those attributes that remain. It uses the model accuracy to identify which attributes (and combination of attributes) contribute the most to predicting the target attribute.

This recipe shows the use of RFE on the iris flowers dataset to select 3 attributes.
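
A minimal sketch of such a recipe is shown below, assuming a logistic regression as the base estimator (the choice of estimator here is an assumption; any estimator that exposes coefficients or feature importances can be wrapped by RFE):

# Recursive Feature Elimination on the iris dataset
from sklearn import datasets
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
# load the iris dataset
dataset = datasets.load_iris()
# create a base classifier used to evaluate subsets of attributes
model = LogisticRegression(max_iter=200)
# create the RFE model and select 3 attributes
rfe = RFE(model, n_features_to_select=3)
rfe = rfe.fit(dataset.data, dataset.target)
# summarize the selection of the attributes
print(rfe.support_)
print(rfe.ranking_)

The support_ array marks which attributes were kept, and ranking_ assigns each attribute a rank, with 1 indicating a selected attribute.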

For a more extensive tutorial on RFE for classification and regression, see the tutorial:

Feature Importance

Methods that use ensembles of decision trees (like Random Forest or Extra Trees) can also compute the relative importance of each attribute. These importance values can be used to inform a feature selection process.

This recipe shows the construction of an Extra Trees ensemble on the iris flowers dataset and the display of the relative importance of each attribute.
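
A minimal sketch of such a recipe is shown below (default ExtraTreesClassifier settings are an assumption):

# Feature importance with an Extra Trees ensemble on the iris dataset
from sklearn import datasets
from sklearn.ensemble import ExtraTreesClassifier
# load the iris dataset
dataset = datasets.load_iris()
# fit an Extra Trees model to the data
model = ExtraTreesClassifier()
model.fit(dataset.data, dataset.target)
# display the relative importance of each attribute
print(model.feature_importances_)

Larger scores indicate attributes the ensemble relied on more heavily when making splits.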

For a more extensive tutorial on feature importance with a range of algorithms, see the tutorial:

Summary

Feature selection methods can give you useful information on the relative importance or relevance of features for a given problem. You can use this information to create filtered versions of your dataset and increase the accuracy of your models.

In this post you discovered two feature selection methods you can apply in Python using the scikit-learn library.


115 Responses to Feature Selection in Python with Scikit-Learn

  1. Avatar
    Harsh October 9, 2014 at 4:51 pm #

    Nice post. How are RFE and feature selection methods like chi2 different? I mean, they are ultimately achieving the same goal, right?

    • Avatar
      jasonb October 10, 2014 at 6:52 am #

      Both seek to reduce the number of features, but they do so using different methods. chi squared is a univariate statistical measure that can be used to rank features, whereas RFE tests different subsets of features.
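
      For illustration, a minimal sketch of the univariate approach with SelectKBest and chi2 (an illustrative example, with k=2 chosen arbitrarily) might look like this:

      # univariate chi-squared feature scoring on the iris dataset
      from sklearn.datasets import load_iris
      from sklearn.feature_selection import SelectKBest, chi2
      X, y = load_iris(return_X_y=True)
      # score each feature independently against the target and keep the 2 best
      selector = SelectKBest(chi2, k=2)
      X_new = selector.fit_transform(X, y)
      print(selector.scores_)  # one chi-squared score per feature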

      • Avatar
        Enny November 29, 2018 at 8:04 am #

        Are there any benchmarks, for example, p-value, F-score, or R-squared, that can be used to score the importance of features?

        • Avatar
          Jason Brownlee November 29, 2018 at 2:33 pm #

          No, the scores are relative and specific to a given problem.

    • Avatar
      mitillo September 2, 2017 at 7:27 pm #

      Hello,

      I read and view a lot about machine learning but you are amazing,
      You are able to explain everything in a simple way and write code that everyone can understand and ‘play’ with it. and you give good resource for anyone who wants to deep in the topic

      you are good teacher

      Thank you for your work

  2. Avatar
    Bozhidar June 26, 2015 at 11:04 pm #

    Hello,

    Can you tell me which feature selection methods you suggest for time-series data?

    • Avatar
      Alex January 19, 2017 at 8:55 am #

      Please see tsfresh – it’s a new approach for feature selection designed for TS

  3. Avatar
    Max January 30, 2016 at 7:22 pm #

    Great site Jason!

  4. Avatar
    Alan February 24, 2016 at 9:48 am #

    Thanks for that good post. Just wondering whether RFE is also usable for linear regression? How is the model accuracy measured?

  5. Avatar
    Carmen January 4, 2017 at 1:31 am #

    Jason, quick question that may help someone else stumbling across this post.

    The example above does RFE using an untuned model. When would it (or would it not) make sense to find some optimised hyperparameters of the model using grid search *first*, and THEN do RFE? In your experience, is this a good idea/helpful thing to do? If not, then why?

    • Avatar
      Jason Brownlee January 4, 2017 at 8:58 am #

      Hi Carmen, nice catch.

      Short answer: we are interested in relative difference of feature subsets, not absolute best performance.

      Generally, it is a good idea to use a robust method for feature selection – that is, a method that performs well on most problems with little or no tuning. This provides a baseline and a wrapper method like RFE can focus on the relative difference in the feature subsets rather than on the optimized best performance of each subset.

      There are those cases where your general method (say a random forest) falls down. In those cases, you may want to try RFE with a suite of 3-5 different wrapped methods and see what falls out. I expect that this is overkill on most problems.

      Does that help?

  6. Avatar
    Carmen January 6, 2017 at 7:58 pm #

    Thanks that helps. The only reason I’d mentioned tuning a model first (light tuning) is that as you mentioned in your “spot checking” post, you want to give algorithms a chance to put their best step forward. If that applies there, I don’t see why it shouldn’t apply to RFE.

    So I figured light tuning (only on the most common hyperparameter with the most common grid values) may help here. But I see your point. Once I’ve got my code all sorted out I may try both and report back 🙂

    • Avatar
      Jason Brownlee January 7, 2017 at 8:30 am #

      You’re absolutely right Carmen.

      There is a cost/benefit here and ultimately it will come down to experience and the “taste” of the practitioner.

      In fact, much of industrial machine learning comes down to taste 🙂
      Most top methods perform just as well, say, at the 90-95% effort-result level. The really hard work is trying to get above that; Kaggle comps are a good case in point.

  7. Avatar
    akram June 13, 2017 at 3:38 am #

    thanks so much for your post Jason

    I'm a beginner in scikit-learn and I have a little problem when using the feature selection module VarianceThreshold. The problem is when I set the variance Var[X]=.8*(1-.8)

    it is supposed to remove all features (that have the same value in all samples) which have the probability p>0.8.
    in my case the fifth column should be removed, p=8/10>(threshold=0.7).

    #####################################

    from sklearn.feature_selection import VarianceThreshold
    X=[[0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.01,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,255,1.00,0.00,0.01,0.00,0.00,0.00,0.00,0.00],
    [0,1,2,1,29,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0.00,0.00,0.00,0.00,0.50,1.00,0.00,10,3,0.30,0.30,0.30,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,253,0.99,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,1,1,1,105,146,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,255,254,1.00,0.01,0.00,0.00,0.00,0.00,0.00,0.00],
    [0,2,3,1,223,185,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,4,4,0.00,0.00,0.00,0.00,1.00,0.00,0.00,71,255,1.00,0.00,0.01,0.01,0.00,0.00,0.00,0.00]]
    sel=VarianceThreshold(threshold=(.7*(1-.7)))

    and this is what i get when running the script

    >>> sel.fit_transform(X)

    array([[ 1., 105., 146., 1., 1., 255., 254.],
    [ 1., 105., 146., 1., 1., 255., 254.],
    [ 1., 105., 146., 1., 1., 255., 254.],
    [ 1., 105., 146., 2., 2., 255., 254.],
    [ 1., 105., 146., 2., 2., 255., 254.],
    [ 1., 105., 146., 2., 2., 255., 255.],
    [ 2., 29., 0., 2., 1., 10., 3.],
    [ 1., 105., 146., 1., 1., 255., 253.],
    [ 1., 105., 146., 2., 2., 255., 254.],
    [ 3., 223., 185., 4., 4., 71., 255.]])
    The second column here should not appear.
    thanks;)

    • Avatar
      Jason Brownlee June 13, 2017 at 8:24 am #

      It is not clear to me what the fault could be. Consider posting to stackoverflow or similar?

  8. Avatar
    Ishaan July 4, 2017 at 10:12 pm #

    Hi Jason,

    I am performing feature selection (on a dataset with 100,000 rows and 32 features) using multinomial Logistic Regression in Python. Now, what would be the most efficient way to select features in order to build a model for a multiclass target variable (1,2,3,4,5,6,7,8,9,10)? I have used RFE for feature selection but it gives Rank=1 to all features. Do I consider all features for building the model? Is there any other method for this?
    Thanks in advance.

    • Avatar
      Jason Brownlee July 6, 2017 at 10:15 am #

      Try a suite of methods, build models based on the features and compare the performance of those models.

  9. Avatar
    Hemalatha S November 17, 2017 at 6:50 pm #

    can you tell me how to select features for clinical datasets from a csv file??

    • Avatar
      Jason Brownlee November 18, 2017 at 10:13 am #

      Try a suite of feature selection methods, build models based on selected features, use the set of features + model that results in the best model skill.

  10. Avatar
    Sufian November 26, 2017 at 4:35 am #

    Hi Jason, How can I print the feature name and the importance side by side?

    Thanks,
    Sufian

    • Avatar
      Jason Brownlee November 26, 2017 at 7:35 am #

      Yes, if you have an array of feature or column names you can use the same index into both arrays.

  11. Avatar
    Hemalatha December 1, 2017 at 2:03 am #

    what are the feature selection methods?? and how to build models based on the selected features??
    can you help me in this? because I am new to machine learning and python

  12. Avatar
    Praveen January 2, 2018 at 6:42 pm #

    I want to remove columns which are highly correlated, like the caret package's preprocessing method does in R. How can I remove them using sklearn?

    • Avatar
      Jason Brownlee January 3, 2018 at 5:32 am #

      You might need to implement it yourself – e.g. calculate the correlation matrix and remove selected columns.
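
      As a rough sketch (illustrative only, not part of the original reply), one way to do this with pandas might be:

      import numpy as np
      import pandas as pd

      def drop_correlated(df, threshold=0.9):
          # absolute correlation matrix, upper triangle only so each pair is checked once
          corr = df.corr().abs()
          upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
          to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
          return df.drop(columns=to_drop)

      # toy usage: 'b' is an exact multiple of 'a', so it is removed
      df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [2, 4, 6, 8], 'c': [4, 1, 3, 2]})
      print(drop_correlated(df).columns.tolist())  # ['a', 'c']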

  13. Avatar
    Shabnam January 5, 2018 at 8:15 am #

    Does Keras have similar functionality to RFE that we can use?

    I am using Keras for my models. I created a model. Then, I wanted to use RFE for it. The first line (rfe = RFE(model, 3)) is fine, but as soon as I want to fit the data, I get the following error:

    TypeError: Cannot clone object ” (type ): it does not seem to be a scikit-learn estimator as it does not implement a ‘get_params’ methods.

    • Avatar
      Jason Brownlee January 5, 2018 at 11:37 am #

      You may be able to use the sklearn wrappers in Keras and then put the wrapped model within RFE.

      I have posts on using the wrappers on the blog, for example:
      https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/

      • Avatar
        Shabnam January 6, 2018 at 7:21 am #

        That is awesome! I’ll read it. Thanks a lot for your reply and sharing the link.

        • Avatar
          Jason Brownlee January 7, 2018 at 5:01 am #

          No problem.

          • Avatar
            Deep saxena April 12, 2019 at 8:18 pm #

            After using your suggestion, the Keras model does not support the support_ or ranking_ attribute

          • Avatar
            Jason Brownlee April 13, 2019 at 6:27 am #

            No it does not.

          • Avatar
            Deep saxena April 15, 2019 at 5:01 pm #

            Then how can we run an RFE test on a Keras model?

          • Avatar
            Jason Brownlee April 16, 2019 at 6:46 am #

            Perhaps you can use the Keras wrapper for the model, then use it as part of RFE?

          • Avatar
            Deep saxena April 16, 2019 at 9:42 pm #

            I did that, but no success. I am pasting the code for reference
            def create_model():
            # create model
            model = Sequential()
            model.add(Dense(1000, input_dim=v.shape[1], activation='relu'))
            model.add(Dropout(0.2))
            model.add(Dense(3, activation='softmax'))
            model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
            return model

            by_name=True)
            seed = 7
            np.random.seed(seed)
            keras_model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=10, verbose=1)

            rfe = RFE(keras_model, 3)
            rfe = rfe.fit(v, all_label_encoded)
            print(rfe.support_)
            print(rfe)

            The model does not support support_ and ranking_. Can you tell me exactly how to get the ranking and the support?

          • Avatar
            Jason Brownlee April 17, 2019 at 7:00 am #

            I’m eager to help, but I don’t have the capacity to debug code.

            I have some suggestions here:
            https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code

          • Avatar
            Deep saxena April 17, 2019 at 5:59 pm #

            Your answer justifies the stuff, thanks for the reply.

    • Avatar
      Deep saxena April 23, 2019 at 4:29 pm #

      @Shubham Just to clarify Keras classifier will not work with RFE. Answer mentioned by Jason Brownlee will not work.

      • Avatar
        Jason Brownlee April 24, 2019 at 7:52 am #

        Perhaps you can try running a manual search over subsets of features with the model?

        Perhaps you can run RFE with a sklearn model and use the results to motivate a Keras model?

        • Avatar
          Deep saxena April 25, 2019 at 8:44 pm #

          OK

  14. Avatar
    Smitha January 16, 2018 at 12:33 am #

    Hi Jason,

    Can Random Forest’s feature importance be considered as a wrapper based approach?

  15. Avatar
    Beytullah January 20, 2018 at 9:40 pm #

    Hi Jason,

    Do you know how is feature importance calculated?

  16. Avatar
    Fawad January 26, 2018 at 4:52 pm #

    I feel in recursive feature selection it is more prudent to use cv and let the algo decide how many features to retain

    • Avatar
      Jason Brownlee January 27, 2018 at 5:54 am #

      Yes. I often keep all features and use subspaces or ensembles of feature selection methods.

  17. Avatar
    kumar February 26, 2018 at 4:19 pm #

    i need to select the best features from my own data set…using feature selection wrapper approach the learning algorithm is ant colony optimization and the classifier is svm …any one have any idea…

  18. Avatar
    Kagne March 23, 2018 at 8:30 pm #

    Nice post!

    But I still have a question.

    I entered the Kaggle competition recently, and I evaluated my dataset by using the methods you have posted (the model is RandomForest).

    Then I deleted the worst feature, and my score decreased from 0.79904 to 0.78947. Then I was confused. Should I build more features? And what should I do to get a higher score (change the model? expand the features or more?), or where can I learn those things?

    Thanks a lot.

  19. Avatar
    Rimi March 29, 2018 at 7:38 pm #

    Hi Jason,

    I wanted to know if there are any existing Python libraries that can be used to rank all the features in a specific dataset based on a specific attribute for various methods like Gain Ratio, Information Gain, Chi2, rank correlation, linear correlation, and symmetric uncertainty. If not, can you please provide some steps to proceed with the same?

    Thanks

    • Avatar
      Jason Brownlee March 30, 2018 at 6:35 am #

      Perhaps?

      Each method will have a different “view” on what is important in the data. You can test each view to see what is real/useful to developing a skilful model.

  20. Avatar
    Abbas April 11, 2018 at 11:48 pm #

    What about the feature importance attribute from the decision tree classifier? Could it be used for feature selection?
    http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html

  21. Avatar
    Chris May 13, 2018 at 10:52 pm #

    Could this method be used to perform feature subset selection on groups of subsets that have to be considered together? For instance, after performing a FeatureHasher transformation you have a fixed length hash which takes up say 256 columns which have to be considered as a group. Do you have any resources for this case?

    • Avatar
      Jason Brownlee May 14, 2018 at 6:36 am #

      Perhaps. Try it. Sorry, I don't have material on this topic. Try a search on scholar.google.com

  22. Avatar
    Aman May 18, 2018 at 5:15 am #

    Regarding the ensemble learning model, I used it to reduce the features. But how can I know how many features I need to select?

  23. Avatar
    Jeremy Dohmann July 14, 2018 at 9:12 am #

    How large can your feature set be before the efficacy of this algorithm breaks down?

    Or, because it uses subsets, does it return a reasonable feature ranking even if you fit over a large number of features?

    Thanks!

  24. Avatar
    Junaid July 22, 2018 at 12:50 pm #

    I am using the tree classifier on my dataset and it gives different values each time I run the script. Is this a problem? Or does it differ because of the different ways the features are linked by the tree?

  25. Avatar
    sajid nawaz October 15, 2018 at 2:15 am #

    Does anyone have Python code for feature selection for classification and regression analysis?

  26. Avatar
    hwanhee October 26, 2018 at 6:53 pm #

    Is there a way to find the best number of features for each data set?

    • Avatar
      Jason Brownlee October 27, 2018 at 5:57 am #

      Yes, try a suite of feature selection methods, and a suite of models and use the combination of features and model that give the best performance.

      • Avatar
        hwanhee October 27, 2018 at 12:06 pm #

        For example, which algorithm can find the optimal number of features?

      • Avatar
        hwanhee October 27, 2018 at 12:09 pm #

        For example, there are 500 features. Is there any way to know the number of features that show the highest classification accuracy when performing a feature selection algorithm?

        • Avatar
          Jason Brownlee October 28, 2018 at 6:06 am #

          Test different subsets of features by building a model from them and evaluate the performance of the model. The features that lead to a model with the best performance are the features that you should use.
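
          As a rough sketch (illustrative only), exhaustively scoring subsets with cross-validation could look like this:

          # evaluate every feature subset with cross-validation and keep the best
          from itertools import combinations
          from sklearn.datasets import load_iris
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import cross_val_score

          X, y = load_iris(return_X_y=True)
          best_score, best_subset = 0.0, None
          for k in range(1, X.shape[1] + 1):
              for subset in combinations(range(X.shape[1]), k):
                  score = cross_val_score(LogisticRegression(max_iter=200),
                                          X[:, list(subset)], y, cv=5).mean()
                  if score > best_score:
                      best_score, best_subset = score, subset
          print(best_score, best_subset)  # best mean accuracy and the winning column indexes

          An exhaustive search is only practical for a small number of features; with hundreds of features, something like RFECV, which cross-validates the number of features to keep, is a more practical choice.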

  27. Avatar
    Harshali Patel December 17, 2018 at 9:37 pm #

    Hey Jason,
    Again a great post, I have followed several of your posts.
    I want your opinion on the type of machine learning algorithm that I can use for my project on supervised learning.

  28. Avatar
    Vaibhav January 27, 2019 at 4:28 pm #

    Hello Jason,

    Thank you for all your content. Big fan of all your posts.

    I am now stuck in deciding when to use which feature selection method ( Filter, Wrapper & Embedded ) for my problem.

    Can you please help or provide any reference links where I can get the required info.

    Thanks in advance. !

    Vaibhav

  29. Avatar
    nandini February 6, 2019 at 4:59 pm #

    Hi Jason,

    I have a requirement about model predictions for text classification using keras.
    suppose if i entered any unrelated texts for model prediction,the entered texts which is not trained in model, instantly to give your entered query is invalid .

    Please suggest me any methods are available .
    thanks in advance 🙂

    • Avatar
      Jason Brownlee February 7, 2019 at 6:37 am #

      Sorry, I don’t follow, perhaps you can elaborate?

  30. Avatar
    ofer February 10, 2019 at 9:27 am #

    Hi,
    There are many different methods for feature selection. It depends on the algorithm i use. For example, if i use logistic regression for prediction then i can not use random forest for feature selection (the subset of features from random forest can be non significant in logistic regression model).
    Is the method you suggest suitable for logistic regression?

  31. Avatar
    Shreya April 27, 2019 at 7:45 pm #

    After using logistic regression for feature selection can we apply different models such as knn, decision tree, random forest etc to get the accuracy?

    • Avatar
      Jason Brownlee April 28, 2019 at 6:56 am #

      Perhaps your problem is too easy or too hard and all models find the same solution?

  32. Avatar
    Sydney Wu May 2, 2019 at 1:26 pm #

    hi, Jason,

    Thanks for your post, it’s clear and useful.

    But I still have some questions.

    1. Should I eliminate collinearity of variables before feature selection? Some posts say collinearity is not a problem for nonlinear models, but I am afraid that it will affect the result of feature selection.

    2. There are several feature selection methods in scikit-learn; different methods may select different subsets. How do I know which subset or method is more suitable?

    3. When I build a machine learning model, the performance of the model seems more related to the number of features. No matter what features I use, the accuracy will increase when a certain threshold is reached. How do I explain this?

    Again, thanks a lot for your patient answer.

  33. Avatar
    Ronak May 9, 2019 at 12:29 pm #

    Thanks for the great posts. I have a problem for feature selection and parameter tuning.
    Thanks in advance for the help,

    I would like to do feature selection with recursive feature elimination and cross-validated selection of the best number of features. So I use RFECV:

    But I am passing an untuned model, svm.SVC(kernel='linear'), to RFECV(), to find a subset of best features. So I have not addressed the tuning of hyperparameters within the model.

    Does this make sense: to find some optimised hyperparameters of the model using grid search first, and THEN do RFE? (However, parameter tuning would then have been performed on an un-optimized feature set.)
    How about doing it vice versa, i.e. first feature selection and then parameter tuning? (However, the selected features would have been chosen based on the untuned model.)

    Although both gridsearchCV and RFECV perform feature selection independently in each fold of the cross-validation, and I can use different splitting criteria for RFECV and gridsearchCV,
    I still suspect that, as I have to use the same dataset for parameter tuning as well as for RFECV selection, it may cause overfitting. Does it?

    Do I have to take out a portion of the training set to do feature selection on, and then start model selection on the remaining data in the training set?

    • Avatar
      Jason Brownlee May 9, 2019 at 2:09 pm #

      It might make sense to use standalone rfe within a pipeline with a given algorithm.
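
      A rough sketch of that idea (illustrative only, with a linear SVC assumed as in the question) might be:

      # RFE as a pipeline step, so selection is re-fit inside each cross-validation fold
      from sklearn.datasets import load_iris
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import Pipeline
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      pipe = Pipeline([
          ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=2)),
          ("svc", SVC(kernel="linear")),
      ])
      print(cross_val_score(pipe, X, y, cv=5).mean())

      Because the selection step lives inside the pipeline, it is re-run on each training fold, which avoids leaking information from the held-out data.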

  34. Avatar
    Tarun Gangil May 27, 2019 at 7:25 pm #

    Hi,
    Will Recursive Feature Elimination work well for categorical input datasets too?

  35. Avatar
    Benjamin June 5, 2019 at 1:49 am #

    Hi Jason, thanks for your hard work !

    How do you explain the following behavior ? Feature importance doesn’t tell you to keep the same features as RFE… which one should we trust ?

    The code :

    # Feature Importance
    from sklearn import datasets
    from sklearn import metrics
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE
    # load the iris datasets
    dataset = datasets.load_iris()
    # fit a Random Forest model to the data
    model = RandomForestClassifier()
    model.fit(dataset.data, dataset.target)
    # display the relative importance of each attribute
    print(model.feature_importances_)

    rfe = RFE(model, 1)
    rfe = rfe.fit(dataset.data, dataset.target)
    # summarize the selection of the attributes
    print(rfe.support_)
    print(rfe.ranking_)

    Output:

    [0.02029219 0.01598919 0.57190818 0.39181044]
    [False False False True]
    [3 4 2 1]

  36. Avatar
    Kushal Ghimire June 17, 2019 at 6:34 pm #

    Great explanation but i want to extract feature from videos for human activity recognition (walk,sleep,jump). But i dont know how to load the datasets. Any help will be appreciated.

    • Avatar
      Jason Brownlee June 18, 2019 at 6:36 am #

      Sorry, I don't have a tutorial on loading video.

  37. Avatar
    Suganya July 26, 2019 at 5:21 pm #

    Hello Jason,
    I am trying to select the best features among 80 features in my dataset. My dataset contains integer as well as string values. I got an issue while trying to select the features using the SelectKBest method. Why did such an issue happen? Could you help me understand this?

  38. Avatar
    DHILSATH FATHIMA. M August 6, 2019 at 7:30 pm #

    What is the role of the p-value in a machine learning algorithm? Why use it?

  39. Avatar
    Anushka August 22, 2019 at 9:16 pm #

    Hello Jason,
    Thank you for the descriptive article.
    I am working with microbiome data analysis and would like to use machine learning to pick a set of genera which can classify samples between two categories (for examples, healthy and disease).
    i used the following code:

    from sklearn.feature_selection import SelectKBest
    from sklearn.feature_selection import chi2
    from sklearn.feature_selection import SelectFpr
    from sklearn.feature_selection import GenericUnivariateSelect
    X = df_n #dataset with 131 columns and 51 rows
    y = list(map(lambda x : x[:2], df_n.index))

    bestfeatures = GenericUnivariateSelect(chi2, 'k_best')
    fit = bestfeatures.fit(X,y)
    pvalues = -np.log10(bestfeatures.pvalues_) #convert pvalues into log format

    dfscores = pd.DataFrame(fit.scores_)
    dfcolumns = pd.DataFrame(X.columns)
    dfpvalues = pd.DataFrame(pvalues)

    #concat two dataframes for better visualization
    featureScores = pd.concat([dfcolumns,dfscores,dfpvalues],axis=1)
    featureScores.columns = ['Specs','Score','pvalues'] #naming the dataframe columns
    FS = featureScores.loc[featureScores['pvalues'] < 0.05, :]

    print(FS.nlargest(10, 'pvalues')) #top 10 features
    Specs Score pvalues
    41 a1 0.206076 0.044749
    22 a2 0.193496 0.042017
    11 a3 0.153464 0.033324
    117 a4 0.143448 0.031149
    20 a5 0.143214 0.031099
    45 a6 0.136450 0.029630
    67 a7 0.132488 0.028769
    0 a8 0.122946 0.026697
    80 a9 0.120120 0.026084
    123 a10 0.118977 0.025836

    Now I would like to use these list of features to make a PCoA plot with Bray-curtis because I want to visualize how these features can distinguish the 40 samples into two different categories (already known).

    Can you help me by guiding in this regard?

  40. Avatar
    Prerna April 22, 2020 at 2:56 am #

    Hi,

    After rfe.fit and getting the rankings of the features, how do we get the feature names according to the rankings? Also, which rankings would we choose to go ahead and train the model?

    • Avatar
      Jason Brownlee April 22, 2020 at 6:05 am #

      The ranking has the indexes of each feature, you can use these indexes to access the column names from an array or from your dataframe.
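
      As a rough sketch (illustrative only), with the data in a DataFrame it might look like this:

      # map RFE results back to feature names
      from sklearn.datasets import load_iris
      from sklearn.feature_selection import RFE
      from sklearn.linear_model import LogisticRegression

      data = load_iris(as_frame=True)  # as_frame needs a reasonably recent scikit-learn
      X, y = data.data, data.target
      rfe = RFE(LogisticRegression(max_iter=200), n_features_to_select=2).fit(X, y)
      print(list(X.columns[rfe.support_]))       # names of the selected features
      print(dict(zip(X.columns, rfe.ranking_)))  # feature name -> rank (1 means selected)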

  41. Avatar
    Andrew May 1, 2020 at 5:46 pm #

    Hi Jason,

    RFE selects the feature set based on train data.
    Although in general, fewer features tend to prevent overfitting. So how does it ensure that the best performing features were not due to overfitted training data, since there is no validation set in place?

    Also, how does RFE differ from the importance_plot from XGboost or random forest or Gradient Boosting which shows the list of features based on gain importance?

    • Avatar
      Jason Brownlee May 2, 2020 at 5:40 am #

      RFE cannot help you prevent overfitting.

      They are very different. RFE is calculated using any model you like and selects features based on how it impacts model performance. Feature importance from ensembles of trees is calculated based on how much the features are used in the trees.

  42. Avatar
    Henrique June 3, 2020 at 6:36 pm #

    Hi,

    thank you for the tutorial.

    Something that is not clear for me is if the RFE is only used for classification or if it can be used for regression problems as well.
    When adapting the tutorial above to another dataset, it keeps alerting that the data is continuous. This is normally associated with classifiers, isn’t it?

    Thank you once more.

  43. Avatar
    Jaime Lannister June 11, 2020 at 1:49 am #

    Hey there,

    Can we extract the feature names from the model only?
    Like you just have a fitted model and now you have to calculate its score, but the problem is you don't have the list of features used in it. You just have the model and train dataset.
    If yes, then please help me because I am stuck at this!

    Thanks

    • Avatar
      Jason Brownlee June 11, 2020 at 6:01 am #

      It will suggest feature/column indexes, you can then relate these to the names of the features in the original dataset directly.

  44. Avatar
    umesh kumar baburao sherkhane July 19, 2020 at 3:02 pm #

    hi Jason,
    its a good article.

    I have one doubt: if I don't know the number of features to select, how should I go about selecting the optimum number of features required for RFE?

    Thanks and regards
