How to Run Your First Classifier in Weka

Weka makes learning applied machine learning easy, efficient, and fun. It is a GUI tool that allows you to load datasets, run algorithms and design and run experiments with results statistically robust enough to publish.

I recommend Weka to beginners in machine learning because it lets them focus on learning the process of applied machine learning rather than getting bogged down by the mathematics and the programming — those can come later.

In this post, I want to show you how easy it is to load a dataset, run an advanced classification algorithm and review the results.

If you follow along, you will have machine learning results in under 5 minutes, and the knowledge and confidence to go ahead and try more datasets and more algorithms.

1. Download Weka and Install

Visit the Weka Download page and locate a version of Weka suitable for your computer (Windows, Mac, or Linux).

Weka requires Java. You may already have Java installed; if not, there are versions of Weka listed on the download page (for Windows) that include Java and will install it for you. I’m on a Mac myself, and like everything else on Mac, Weka just works out of the box.

If you are interested in machine learning, then I know you can figure out how to download and install software onto your own computer. If you need help installing Weka, see the following post that provides step-by-step instructions:

Start and Practice Machine Learning With Weka
...without writing a single line of code

2. Start Weka

Start Weka. This may involve finding it in your program launcher or double-clicking on the weka.jar file. This will start the Weka GUI Chooser.

The Weka GUI Chooser lets you choose one of the Explorer, the Experimenter, the KnowledgeFlow and the Simple CLI (command line interface).

Weka GUI Chooser

Click the “Explorer” button to launch the Weka Explorer.

This GUI lets you load datasets and run classification algorithms. It also provides other features, like data filtering, clustering, association rule extraction, and visualization, but we won’t be using these features right now.

3. Open the data/iris.arff Dataset

Click the “Open file…” button to open a dataset and double-click on the “data” directory.

Weka provides a number of small common machine learning datasets that you can use to practice on.

Select the “iris.arff” file to load the Iris dataset.

Weka Explorer Interface with the Iris dataset loaded

The Iris Flower dataset is a famous dataset from statistics that is heavily used by researchers in machine learning. It contains 150 instances (rows), 4 input attributes (columns) and a class attribute for the species of iris flower (one of setosa, versicolor and virginica). You can read more about the Iris flower dataset on Wikipedia.
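
This tutorial stays entirely in the GUI, but for reference the same step can be done with Weka’s Java API. A minimal sketch, assuming weka.jar is on your classpath and the iris.arff path matches your installation:

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadIris {
    public static void main(String[] args) throws Exception {
        // Load the ARFF file that ships with Weka (adjust the path to your installation)
        Instances data = DataSource.read("data/iris.arff");
        // The last attribute is the class (the iris species)
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Instances:  " + data.numInstances());  // 150
        System.out.println("Attributes: " + data.numAttributes()); // 5 (4 inputs + class)
    }
}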

4. Select and Run an Algorithm

Now that you have loaded a dataset, it’s time to choose a machine learning algorithm to model the problem and make predictions.

Click the “Classify” tab. This is the area for running algorithms against a loaded dataset in Weka.

You will note that the “ZeroR” algorithm is selected by default.

Click the “Start” button to run this algorithm.

Weka Results for the ZeroR algorithm on the Iris flower dataset

The ZeroR algorithm predicts the majority class in the dataset (all three species of iris are equally represented in the data, so it picks the first one: setosa) and uses that class for every prediction. This is the baseline for the dataset and the measure against which all other algorithms can be compared. The result is 33%, as expected (3 classes, each equally represented; always assigning one of the three to each prediction results in 33% classification accuracy).

You will also note that the test options select Cross Validation by default, with 10 folds. This means that the dataset is split into 10 parts: the first 9 are used to train the algorithm, and the 10th is used to assess it. This process is repeated so that each of the 10 parts of the split dataset gets a turn as the held-out test set. You can read more about cross validation here.
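
For reference, the same baseline experiment (ZeroR evaluated with 10-fold cross validation) can be reproduced with the Weka Java API. A minimal sketch; the class name and random seed are my own choices:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.ZeroR;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ZeroRBaseline {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // 10-fold cross validation of the ZeroR baseline (seed 1 is an arbitrary choice)
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new ZeroR(), data, 10, new Random(1));
        System.out.println("Baseline accuracy: " + eval.pctCorrect() + "%"); // about 33%
    }
}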

The ZeroR algorithm is important, but boring.

Click the “Choose” button in the “Classifier” section, click on “trees” and then click on the “J48” algorithm.

This is an implementation of the C4.8 algorithm in Java (“J” for Java, 48 for C4.8, hence the J48 name), a minor extension to the famous C4.5 algorithm. You can read more about the C4.5 algorithm here.
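
As a point of reference, choosing J48 in the GUI corresponds to instantiating the weka.classifiers.trees.J48 class in the Java API. A minimal sketch that builds the tree on the full dataset and prints it; the two options shown are simply set to Weka’s defaults:

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BuildJ48 {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        J48 j48 = new J48();
        j48.setConfidenceFactor(0.25f); // pruning confidence, the Weka default
        j48.setMinNumObj(2);            // minimum instances per leaf, the Weka default
        j48.buildClassifier(data);      // learn the decision tree from the full dataset
        System.out.println(j48);        // prints the learned tree, as in the GUI output
    }
}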

Click the “Start” button to run the algorithm.

Weka J48 algorithm results on the Iris flower dataset

5. Review Results

After running the J48 algorithm, you can note the results in the “Classifier output” section.

The algorithm was run with 10-fold cross-validation: this means it was given an opportunity to make a prediction for each instance of the dataset (with different training folds) and the presented result is a summary of those predictions.

Just the results of the J48 algorithm on the Iris flower dataset in Weka

Firstly, note the Classification Accuracy. You can see that the model achieved a result of 144/150 correct, or 96%, which is much better than the baseline of 33%.

Secondly, look at the Confusion Matrix. You can see a table of actual classes compared to predicted classes, and you can see that there was 1 error where an Iris-setosa was classified as an Iris-versicolor, 2 cases where an Iris-virginica was classified as an Iris-versicolor, and 3 cases where an Iris-versicolor was classified as an Iris-virginica (a total of 6 errors). This table can help to explain the accuracy achieved by the algorithm.
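
The accuracy and confusion matrix shown in the “Classifier output” pane can also be produced programmatically. A minimal sketch using the same J48 classifier and 10-fold cross validation (the random seed is my own choice, so the exact counts may differ slightly from the screenshots):

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ReviewJ48Results {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new J48(), data, 10, new Random(1));

        System.out.println(eval.toSummaryString("=== Summary ===", false));
        System.out.println(eval.toMatrixString("=== Confusion Matrix ===")); // actual vs predicted counts
        System.out.println("Correct: " + (int) eval.correct() + "/" + (int) eval.numInstances());
    }
}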

Summary

In this post, you loaded your first dataset and ran your first machine learning algorithm (an implementation of the C4.8 algorithm) in Weka. The ZeroR algorithm doesn’t really count: it’s just a useful baseline.

You now know how to load the datasets that are provided with Weka and how to run algorithms: go forth and try different algorithms and see what you come up with.

Leave a note in the comments if you can achieve better than 96% accuracy on the Iris dataset.

115 Responses to How to Run Your First Classifier in Weka

  1. Sandra March 1, 2014 at 7:55 am #

    Well, just learning the tool etc, but using the above setup, I changed the test option to ‘Use Training Set’ and got 98% accuracy.

    === Detailed Accuracy By Class ===

    TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
    1.000 0.000 1.000 1.000 1.000 1.000 1.000 1.000 Iris-setosa
    0.980 0.020 0.961 0.980 0.970 0.955 0.990 0.969 Iris-versicolor
    0.960 0.010 0.980 0.960 0.970 0.955 0.990 0.970 Iris-virginica
    Weighted Avg. 0.980 0.010 0.980 0.980 0.980 0.970 0.993 0.980

    === Confusion Matrix ===

    a b c <– classified as
    50 0 0 | a = Iris-setosa
    0 49 1 | b = Iris-versicolor
    0 2 48 | c = Iris-virginica

    I also got 97.3% out of Multilayer Perceptron, with the same cross validation setting of 10 folds.

    • jasonb March 1, 2014 at 8:24 am #

      Really nice work Sandra!

      Changing the test option to “use training set” changes the nature of the experiment and the results are not really comparable. This change tells you how well the model performed on the data on which it was trained (it already knows the answers).

      This is good if you are making a descriptive model, but not helpful if you want to use that model to make predictions. To get an idea of how good it is at making predictions, we need to test it on data that it has not “seen” before, where it must make predictions that we can compare to the actual results. Cross validation does this for us (10 times in fact).
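
      For anyone repeating this comparison with the Weka Java API, a minimal sketch of the two test options side by side (the classifier, file path and seed are illustrative choices):

      import java.util.Random;
      import weka.classifiers.Evaluation;
      import weka.classifiers.trees.J48;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      public class TrainVsCrossValidation {
          public static void main(String[] args) throws Exception {
              Instances data = DataSource.read("data/iris.arff");
              data.setClassIndex(data.numAttributes() - 1);

              // "Use training set": evaluate on the data the model was fit to (optimistic)
              J48 j48 = new J48();
              j48.buildClassifier(data);
              Evaluation onTrain = new Evaluation(data);
              onTrain.evaluateModel(j48, data);
              System.out.println("Training-set accuracy: " + onTrain.pctCorrect() + "%");

              // 10-fold cross validation: every prediction is made on unseen data
              Evaluation cv = new Evaluation(data);
              cv.crossValidateModel(new J48(), data, 10, new Random(1));
              System.out.println("Cross-validation accuracy: " + cv.pctCorrect() + "%");
          }
      }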

      Great work on Multilayer Perceptron! That’s a complicated algorithm that has a lot of parameters you can play with.

      Maybe you could try some other datasets from the “data” directory in Weka.

      • riyas June 16, 2016 at 8:18 am #

        Execute Weka Nearest Neighbor with K=1 (only 1 nearest neighbor) on the IRIS data set and answer the following questions:

        (a) look at the confusion matrix. How many total misclassified instances were there for iris sets?

        (b) What is the weighted precision for this classifier?

        (c) what is the weighted recall for this classifier?

        (d) how many incorrectly classified instances were there in total for this classifier?

        (e) which classes had incorrect classifications?

        • riyas June 16, 2016 at 8:55 am #

          can you answer the above question ASAP please

    • SQLMan October 21, 2014 at 3:00 pm #

      Just wondering why my j48 is disabled?

      • jasonb October 21, 2014 at 6:04 pm #

        It may be because it cannot be used on the dataset you have loaded.

        • Oluwole Olorunleke October 24, 2016 at 10:30 pm #

          How do we handle the choice of Classifier to be used for a problem.

          Considering there is an increased accuracy with MultilayerPerceptron as against J48.

          MultilayerPerceptron = 97.33% against J48=96%

          • Jason Brownlee October 25, 2016 at 8:25 am #

            Great question Oluwole.

            We need to evaluate a suite of algorithms and see what works best. This means that we need a robust test harness (so we cannot be fooled by the results). This involves careful selection of a metric and a resampling method like cross validation.

            We can then choose an algorithm that both performs well and has low complexity (easy to understand and explain to others/use in production).
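
            As a rough illustration, a spot-checking harness with the Weka Java API might look like the sketch below; the particular classifiers, dataset and seed are only examples:

            import java.util.Random;
            import weka.classifiers.Classifier;
            import weka.classifiers.Evaluation;
            import weka.classifiers.bayes.NaiveBayes;
            import weka.classifiers.functions.MultilayerPerceptron;
            import weka.classifiers.functions.SMO;
            import weka.classifiers.trees.J48;
            import weka.core.Instances;
            import weka.core.converters.ConverterUtils.DataSource;

            public class SpotCheck {
                public static void main(String[] args) throws Exception {
                    Instances data = DataSource.read("data/iris.arff");
                    data.setClassIndex(data.numAttributes() - 1);

                    // Evaluate each candidate with the same resampling method (10-fold CV, same seed)
                    Classifier[] candidates = { new J48(), new NaiveBayes(), new SMO(), new MultilayerPerceptron() };
                    for (Classifier c : candidates) {
                        Evaluation eval = new Evaluation(data);
                        eval.crossValidateModel(c, data, 10, new Random(1));
                        System.out.printf("%-25s %.2f%%%n", c.getClass().getSimpleName(), eval.pctCorrect());
                    }
                }
            }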

      • RK Khatri May 5, 2016 at 4:34 pm #

        because your class is numerical, not nominal. It works on nominal classes only.

        • Oluwole Olorunleke October 24, 2016 at 10:32 pm #

          Hi, I am new to machine learning and would like to know how to select a model for predicting housing prices. Can I use the weka explorer for this? and if it is possible, Please how do I go about it?

          MY GOAL
          I want to predict housing prices based on some criteria.

    • Dheeraj June 4, 2016 at 12:30 am #

      functions.SMO 96.27 % with default settings

      • Danyal Sandeelo June 4, 2016 at 6:56 pm #

        can you explain what you mean by functions.SMO 96.27 % with default settings?

        • Jason Brownlee June 14, 2016 at 8:27 am #

          SMO is the implementation of Support Vector Machines for classification in Weka.

          Default settings mean that none of the parameters of the algorithm were modified before the algorithm was run on the dataset.

  2. Mark Gems March 9, 2014 at 4:56 am #

    What’s up, I have seen that occasionally this page renders a 403 server error message. I thought that you would be keen to know. Best wishes

    • jasonb March 9, 2014 at 8:18 am #

      Thanks Mark, much appreciated. I have not seen this myself, but I’ll look into setting up some monitoring.

  3. JoshM March 20, 2014 at 3:21 pm #

    Great start to what I hope is a newfound joy in machine learning!

  4. NoelE March 26, 2014 at 6:17 pm #

    An excellent article indeed! Given the classifier (prediction for numeric class) model as well as an instance from a training or test set, do you have any idea about the steps as to how predicted values are calculated specifically under the “Predictions on test data” section of WEKA’s output?

    • jasonb March 27, 2014 at 9:42 am #

      A trained classifier is evaluated by providing it the input attributes (without the numeric output attribute), and the prediction is made. This is repeated for all instances in the test dataset. Weka then provides a summary of the error in the predictions using a number of measures such as mean absolute error and root mean squared error.

      Does that answer your question @NoelE?
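
      For readers who want to see this evaluation in code, a minimal sketch with the Weka Java API, using a hold-out split and one of the numeric-class datasets bundled with Weka (cpu.arff); the 66/34 split and LinearRegression are illustrative choices:

      import java.util.Random;
      import weka.classifiers.Evaluation;
      import weka.classifiers.functions.LinearRegression;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      public class NumericPrediction {
          public static void main(String[] args) throws Exception {
              // cpu.arff ships in Weka's data directory and has a numeric class attribute
              Instances data = DataSource.read("data/cpu.arff");
              data.setClassIndex(data.numAttributes() - 1);
              data.randomize(new Random(1));

              // Simple hold-out split: first 66% train, remaining 34% test
              int trainSize = (int) Math.round(data.numInstances() * 0.66);
              Instances train = new Instances(data, 0, trainSize);
              Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

              LinearRegression model = new LinearRegression();
              model.buildClassifier(train);

              // A prediction is made for every test instance, then the errors are summarized
              Evaluation eval = new Evaluation(train);
              eval.evaluateModel(model, test);
              System.out.println("Mean absolute error:     " + eval.meanAbsoluteError());
              System.out.println("Root mean squared error: " + eval.rootMeanSquaredError());
          }
      }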

  5. Shuai May 27, 2014 at 10:42 pm #

    Really great first lesson, Jason! Salute

    By the way, I used MultilayerPerceptron and achieved a 97.333% accuracy.

  6. Hamideh July 9, 2014 at 11:00 pm #

    I’m new in weka, I want to use the Simple CLI. I want to delete some attributes from an arff file and I enter:
    java weka.filters.unsupervised.attribute.Remove -R 5-34 -i data/kdd_01.arff -o data/kdd_02.arff
    But it doesn’t work!!!
    I hope You can help me…

  7. Tushar August 6, 2014 at 10:20 pm #

    Hi, do your tutorials cover the forecast tab? Also, how do we narrow down which algorithm should be used in a particular scenario?

    • jasonb August 7, 2014 at 7:59 am #

      Hi Tushar, I don’t believe there is a forecast tab. What are you referring to exactly?

      For finding the best algorithm I teach a process of spot checking with follow-up algorithm tuning.

  8. Soto October 6, 2014 at 4:31 am #

    Hallo Jason,

    I am trying an .arff file of my own and I don’t get over 65% correct answers for the model that I built. Is there a way to make it better, or does that mean that the data maybe don’t relate to the outcome class?
    Thanks.

    • jasonb October 6, 2014 at 6:17 am #

      You could try some different algorithms. You could try some further preparation of your data such as normalization, standardization and feature engineering. You could also try tuning the parameters of an algorithm that is doing well.

      Be scientific and methodical in your approach, devise specific questions and investigate them and record what you discover.
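
      For the normalization step mentioned above, the unsupervised Normalize filter can be applied to an ARFF file with the Java API; a minimal sketch (the input and output file names are placeholders):

      import java.io.File;
      import weka.core.Instances;
      import weka.core.converters.ArffSaver;
      import weka.core.converters.ConverterUtils.DataSource;
      import weka.filters.Filter;
      import weka.filters.unsupervised.attribute.Normalize;

      public class NormalizeArff {
          public static void main(String[] args) throws Exception {
              Instances data = DataSource.read("input.arff"); // placeholder file name
              data.setClassIndex(data.numAttributes() - 1);

              // Rescale all numeric input attributes to the range [0, 1]
              Normalize normalize = new Normalize();
              normalize.setInputFormat(data);
              Instances normalized = Filter.useFilter(data, normalize);

              // Save the filtered copy back out as ARFF
              ArffSaver saver = new ArffSaver();
              saver.setInstances(normalized);
              saver.setFile(new File("output-normalized.arff")); // placeholder file name
              saver.writeBatch();
          }
      }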

      • Jyoti Sharma December 28, 2014 at 2:24 am #

        How do I normalize my data of arff extension..??? I want to do Naivebayes on my dataset but before this normalization is needed for better result.

  9. Tobias Mattsson October 10, 2014 at 9:26 pm #

    I tried it out with some tweaking of the MultilayerPerceptron (just set the hidden layers to 10), which gave me a 98% success rate:

    Correctly Classified Instances 147 98 %
    Incorrectly Classified Instances 3 2 %
    Kappa statistic 0.97
    Mean absolute error 0.0304
    Root mean squared error 0.1296
    Relative absolute error 6.8454 %
    Root relative squared error 27.4907 %
    Total Number of Instances 150

    I must say that it is addictive. And quite exhilarating.

    • jasonb October 11, 2014 at 7:52 am #

      Nice work Tobias, try exploring some of the other datasets that come with Weka.

    • Supriya August 17, 2016 at 2:08 am #

      Hi! Where do I modify the hidden layers? Thanks!

      • Jason Brownlee August 17, 2016 at 9:51 am #

        In the MultilayerPerceptron algorithm. Take a look at the algorithm properties.
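
        For reference, the same property can be set in the Java API via setHiddenLayers; a minimal sketch (dataset and seed are illustrative choices):

        import java.util.Random;
        import weka.classifiers.Evaluation;
        import weka.classifiers.functions.MultilayerPerceptron;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class MLPHiddenLayers {
            public static void main(String[] args) throws Exception {
                Instances data = DataSource.read("data/iris.arff");
                data.setClassIndex(data.numAttributes() - 1);

                MultilayerPerceptron mlp = new MultilayerPerceptron();
                mlp.setHiddenLayers("10"); // same value as the GUI's hiddenLayers property

                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(mlp, data, 10, new Random(1));
                System.out.println("Accuracy: " + eval.pctCorrect() + "%");
            }
        }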

    • Minh, Hoang October 31, 2016 at 3:54 am #

      I get the same result (98%) with hidden layers = i (equal number of features)

      • Jason Brownlee October 31, 2016 at 5:33 am #

        Nice one, see if the result translates to other problem types from the UCI ML Repo.

        I often use the heuristic of the number of nodes in the hidden layer equal to the number of input features.

  10. Soto October 11, 2014 at 8:18 pm #

    Another one Jason:

    Can Weka, given that a data instance belongs to a certain class, output a dataset that belongs to that class? Of course, there may be many datasets that satisfy that class, but there are also certain problems that have unique solutions.
    I want to try that with the 8-queens problem.
    Thanks a lot for your answers.

    • jasonb October 12, 2014 at 7:43 am #

      Not in the GUI I think Soto. You might need to write a program using the WEKA API to handle this case. I believe I’ve read examples of neural nets being used to solve 8 queens and TSP problems. It all comes down to representation of the problem.
      Good luck.

  11. vincent November 22, 2014 at 1:56 pm #

    ” 3 cases where a Iris-versicolor was classified as a Iris-setosa” in your explanation of the confusion matrix in the post should be ” 3 cases where a Iris-versicolor was classified as a Iris-virginica”. Great post by the way, cheers.

  12. Holly December 1, 2014 at 7:40 pm #

    Hi a bit off topic for this lesson but im in need of help! Im using J48-cross validation but i want to change the amount of times the model can run and adjust the weights given to each variable- if that makes sense. also known as the number of epochs/iterations. i have done this before and im sure its a simple fix but i cant remember where or what this is called in weka.

    Many thanks
    Holly

  13. shuga December 4, 2014 at 10:10 am #

    hello Everyone,

    hello Jason, I must say this is exciting, i absolutely have no foundation in computer science or programming and neither was i very good at mathematics but somehow i am in love with the idea of machine learning, probably because i have a real life scenario i want to experiment with.

    I have up to 20 weekends and more of historical data of matches played and i would like to see how weka can predict the outcome of matches played within that 20 week period.

    My data is in tabular form and it is stored in microsoft word.
    It is a forecast of football matches played in the past.

    Pattern detection is the key, By poring over historical data of matches played in the past, patterns begin to emerge and i use this to forecast what the outcome of matches will be for the next game.

    I use the following attributes for detecting patterns and making predictions which on paper is always 80-100% accurate but when i make a bet, it fails.
    (results, team names, codes, week’s color, row number)

    Results= Matches that result in DRAWS

    Team names = Believe it or not, teams names are used as parameters to make predictions, HOW? They begin with Alphabets.

    Codes= These are 3-4 strings either digits or a combo of letters and digits, depending on where they are strategically placed in the table, they offer insight into detecting patterns.

    Weeks Color= In the football forecasting world, there are 4 colours used to represent each week in a month. RED, BLUE, BROWN and PURPLE. These also allows the forecaster to see emerging patterns.

    Row Number= Each week, the data is presented in a table form with two competing teams occupying a row and a number is associated with that row. These numbers are used to make preditions.

    So i would like to TEACH WEKA how i detect these patterns so that my task can be automated and tweaked anyhow i like it.

    In plain english, how do i write out my “pattern detecting style” for weka to understand and how do i get this information loaded into weka for processing into my desired results.
    Going by my scenario, What will be my attributes?
    What will be my instances?
    What will be the claasifiers?
    What algorithms do i use to achieve my aim or will i need to write new algorithms?

    I sincerely hope someone will come to my rescue.

    Thanks

  14. Suat ATAN January 19, 2015 at 6:29 am #

    Before reading this post data mining and machine learning was such a celestial intangible things. But this post changed it. Very thanks

  15. sarbu February 10, 2015 at 1:38 pm #

    I am wondering how can i know C4.5 average height and average accuracy. Thanks

    • Jason Brownlee February 19, 2015 at 8:43 am #

      Use cross validation and collect statistics on your tree such as depth and accuracy from each fold, then average the results.
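
      A minimal sketch of collecting per-fold statistics with the Java API; note that J48 exposes tree size and number of leaves rather than depth directly, so tree size is used here as the complexity measure (dataset and seed are illustrative):

      import java.util.Random;
      import weka.classifiers.Evaluation;
      import weka.classifiers.trees.J48;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      public class PerFoldTreeStats {
          public static void main(String[] args) throws Exception {
              Instances data = DataSource.read("data/iris.arff");
              data.setClassIndex(data.numAttributes() - 1);

              int folds = 10;
              Instances randomized = new Instances(data);
              randomized.randomize(new Random(1));

              double sizeSum = 0, accSum = 0;
              for (int i = 0; i < folds; i++) {
                  Instances train = randomized.trainCV(folds, i);
                  Instances test = randomized.testCV(folds, i);

                  J48 tree = new J48();
                  tree.buildClassifier(train);

                  Evaluation eval = new Evaluation(train);
                  eval.evaluateModel(tree, test);

                  sizeSum += tree.measureTreeSize(); // collect a tree statistic from this fold
                  accSum += eval.pctCorrect();
              }
              System.out.println("Average tree size: " + (sizeSum / folds));
              System.out.println("Average accuracy:  " + (accSum / folds) + "%");
          }
      }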

  16. susan abraham February 28, 2015 at 8:58 pm #

    can you please help regarding working with multi labels.

  17. porkodi March 5, 2015 at 12:55 pm #

    Can you pls help me. i actually new to this datamining concepts. i want to know how to extract a features and accuracy of a given url name. for eg: if the url name is http://www.some@url_name.com it will extract the feature is _ and @ in it and i also tells the age of the url and also some feature extraction like ip address, long or short url,httos and ssl ,hsf,redirect page ,anchor tag like that it should extract and it will tell the accuracy too.and then implement using c4.5 classifier algorithm to find whether the given url name is malicious or benign url.

    pls some one help me to do this process.

    • Jason Brownlee March 6, 2015 at 5:48 am #

      Sounds like a great project. As with any project, you need to start by building up a dataset for you to analyse.

      I suspect the words in the URL will be useful, SSL cert or not (https) may be useful, and so on. It is hard to know a priori what will be most useful, I’d recommend brain storming and trying a lot of different features in your model. Also consider using an importance measure or correlation with the output variable to see which features look promising and which appear redundant.
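
      For the importance-measure idea, a minimal sketch with Weka’s attribute selection API, ranking features by information gain (the file name is a placeholder for your own dataset):

      import weka.attributeSelection.AttributeSelection;
      import weka.attributeSelection.InfoGainAttributeEval;
      import weka.attributeSelection.Ranker;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      public class RankFeatures {
          public static void main(String[] args) throws Exception {
              Instances data = DataSource.read("mydata.arff"); // placeholder: your own dataset
              data.setClassIndex(data.numAttributes() - 1);

              // Rank attributes by information gain with respect to the class attribute
              AttributeSelection selector = new AttributeSelection();
              selector.setEvaluator(new InfoGainAttributeEval());
              selector.setSearch(new Ranker());
              selector.SelectAttributes(data); // the Weka method name really does start with a capital S
              System.out.println(selector.toResultsString());
          }
      }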

  18. Chitralekha March 19, 2015 at 7:15 pm #

    Hi!
    I’m trying to use libsvm for classification (2 class) in 10-fold cross-validation mode. The output predictions that I get have an instance#, but I dont know which instances of my dataset do these correspond to. For example, my output predictions looks like this:

    inst#, actual, predicted, error, probability distribution
    1 2:R 2:R 0 *1
    2 2:R 2:R 0 *1
    3 2:R 2:R 0 *1
    4 2:R 2:R 0 *1
    5 1:S 1:S *1 0
    6 1:S 1:S *1 0
    1 2:R 2:R 0 *1
    2 2:R 1:S + *1 0
    3 2:R 2:R 0 *1
    4 2:R 2:R 0 *1
    5 1:S 1:S *1 0
    6 1:S 2:R + 0 *1
    ….

    How does my dataset get divided into 10 parts, which files do these instances correspond to?
    I’m interested in knowing which files get incorrectly classified. Is there some other/better way to do this?

  19. Lawrence March 24, 2015 at 9:28 pm #

    Hi Jason. I’ve been playing around for a while with WEKA, and now I get good prediction results. But I still wonder how to apply the model built further?
    I mean, I train and tune algorithms and get better results, but then?
    When I try to input, say, a set of four attributes corresponding to those of the IRIS set, it doesn’t recognize it as something that it can use in the model.
    If I put these four attributes and an empty column, it accepts this, but I don’t know how to predict the class then? How should I set the parameters in WEKA to do that, please?
    Thanks by advance.

  20. Sweta March 25, 2015 at 3:51 pm #

    I am trying to use multilayer Perceptron in WEKA. I want to understand of the 4 predictors, how do I decide which one is the best?

  21. Azad March 26, 2015 at 12:52 am #

    Hi,
    I am trying to use different types of classification rules with numeric data (both independent and dependent variables). It will be great if you please let me know about the applicable classification rules and results interpretation using numerical data in Weka or R.
    Thanks in advance.

  22. Zamri March 26, 2015 at 8:10 pm #

    Hello Jason. I have arff file with the skin pixels data. The arff file is large (>500MB). Once I load the data in the WEKA, its give me “Not enough memory …” Is there any solution for this matter? Tq

  23. george May 7, 2015 at 1:49 pm #

    hi
    im just downloading the accepted papers dataset in csv format.
    but it cannot be opened through weka
    it says “unable to determine structure as csv. wrong number of values”

    what should i do? it will be a big appreciation if you reply as soon as possible.
    THANK YOU

    link for accepted papers dataset : https://archive.ics.uci.edu/ml/datasets/AAAI+2013+Accepted+Papers#

  24. Ghada Butrus May 15, 2015 at 7:03 pm #

    Hi Jason,
    Great website and great efforts 🙂
    I’ve just would like to know how to understand the confusion matrix please?

  25. vignesh June 18, 2015 at 4:55 pm #

    hi i need to know see the algorithm embedded in the software how can i see it and how can i modify or insert a neww one

  26. mahmood June 30, 2015 at 6:06 pm #

    hi
    i want to use data set from uci repository site but when i run the data on weka all attribute show into one attribute.
    how should i do?

  27. Túlio July 10, 2015 at 8:49 am #

    Once I have trained the model, how do I run it?
    For example, I have trained a model to make predictions on bodyfat % according to age, height, weight and abdominal circumference. Now, I want to input parameters about myself to see what the model predicts.
    Any help would be appreciated.

  28. Lord Carmichael July 27, 2015 at 11:23 am #

    I got ~97.3% accuracy with the multilayer perceptron.

  29. ryan August 18, 2015 at 6:54 am #

    Trees.FT with 20 folds produces 98% accuracy (147 correct, 3 incorrect)

  30. Ziad October 3, 2015 at 4:48 am #

    Hello,I have data set with numeric prediction.I would like to apply rules learning algorithm in Weka. I noticed that Weka does not support rule based algorithm for numerical prediction as it does for nominal prediction. I do not know why. also, why the accuracy is not shown in the same way as nominal prediction please? also I noticed the forms of the rules are also different? How to calculate the accuracy of the correct classification please? Many thanks

  31. Geetika October 15, 2015 at 8:57 pm #

    Hi. Can I apply j48 even if dont have any field like “class” in my dataset

  32. Mia October 24, 2015 at 3:48 am #

    Hi! Thanks for the nice guide! I wanted to know whether Weka normalizes the data while using a classifier, or do we need to input normalized data itself? Thankyou very much.

  33. viraj November 6, 2015 at 11:20 am #

    Hi, this is a great work. Thanks for this. One thing. Could you pls elaborate little more about “test options.” Got to know what is the purpose of the ‘use training set option’. But what about the others. Cross validation is already being used. What about other two. ‘supplied test set and percentage split’??
    Thank you!

  34. Kay November 20, 2015 at 12:37 am #

    and finally someone explains ML in plain english, i’ve been bashing my head off the desk for weeks working through training material on this. Very well written and explained in a few simple steps. Many thanks for this 🙂

  35. Austin Rogers November 24, 2015 at 1:41 am #

    Under Review Results wouldn’t the last comparison be 3 iris-versicolor classified as iris-virginica? I’m having trouble comprehending the pattern in this table, and using the example from Wikipedia it appears to not be the same as you mentioned? Totally new to this so I have no idea what I’m talking about but I’d appreciate some clarification.

  36. azhar December 14, 2015 at 2:06 am #

    Hi. My name is azhar from Peshawar Pakistan..i am phd scholar. I need to work in machine learning in practical manner.
    WEKA is a best software but dont knw how to use.

  37. Devanshi December 16, 2015 at 8:04 am #

    Hi, i have just trained a Random forest classifier for a text classification. I have got the results too. When they say- train your classifier and then test it with different data for evaluation, i want to know what should i do next to evaluate this classifier? Can anyone help me with steps to be followed ? I am not sure of how to save this trained classifier model and upload a test data.

    Thanks!

  38. Asma December 21, 2015 at 11:07 pm #

    Hi , i want to know how to chose a data base to use it in classification , please help me

  39. Prachi January 15, 2016 at 8:13 pm #

    Hi, I have Weka installed on Ubuntu. I am trying to get a dataset in the Explorer but I am not getting the data folder. How can I get the data folder?

    • Dadi October 11, 2016 at 6:19 pm #

      I have same problem… Its my first day in Weka.. and i’m trying to load the German credit data, available as credit-g.arff in the Weka 3.8 distribution. I cant load this dataset because when I open the “open file” it only shows me the folders that are in my computer, and not the weka dataset….. Please if someone has an idea how to solve this thing…

  40. Pankaj January 16, 2016 at 4:53 am #

    got 96% with NaiveBayes without any modification

  41. Mithun Mohan K January 16, 2016 at 5:59 pm #

    Really thankful to u sir. That tutorial was really cool 🙂

  42. Dan Brooks January 21, 2016 at 10:55 am #

    Just a quick note that I love the whole site and have never had such an easy time establishing a new direction of endeavor with a high degree of confidence and understanding! This post had me up, running valid data, and evaluating the output from the classifier in under 10 minutes! Simply amazing!

    One small issue of note, though. The last paragraph of section 5, just before the Summary, covers the Confusion Matrix. In that paragraph, the third case cited refers to the three instances in which “Iris-versicolor was classified as a Iris-setosa.” My understanding of the table, however, is that there were three instances in which the Iris-versicolor was classified as an Iris-verginica, not Iris-setosa as stated. Naturally Mr. Murphy would select the most ironic location for this confusion of interpretation.

  43. chirag January 29, 2016 at 8:20 pm #

    can i use kdd 99 dataset in weka ..? can i detect dos attack using weka ? any suitable classification algorithm ?..reply soon …i dont have enough time…

  44. Pedram February 20, 2016 at 10:53 pm #

    Thanks for your helpful information.
    I have a specific question. Using the steps that you have mentioned we can train a machine learning model in WEKA and test its accuracy. I am wondering how we can classify new instances, with no class labels, using a model that we have trained in WEKA. For example, lets say that we have 1000 instances of positive and negative sentences. We train a machine learning model using an algorithm. Afterwards, we want to label 100 new sentences that have not already been classified with either positive or negative labels. How can we do such a work using WEKA?

  45. Bilge February 23, 2016 at 5:32 am #

    Hi,
    This is great tutorial to start and understand Weka.
    Thanks a lot.

  46. Charles Bauer February 24, 2016 at 10:28 pm #

    Amazinggggg!

    Very useful. My father once tried to define a template to define flowers quality and now I have an idea.
    And now I have run my first Machine Learning experimente!

  47. Martini February 27, 2016 at 11:16 pm #

    Hi,
    I’m new to Weka and I’m trying to figure out how to predict the value of a variable based on the values of other independent variables, in classification. I was wondering how to go about it.

    Thanks!

  48. Arpit Agrawal March 2, 2016 at 4:42 am #

    I am making an application based on handwritten digit recognition, it will be an android app. User will click picture of a digit, it will be send to the server then text will be recognized using machine learning.

    I will write back-end of my application in java language so does WEKA provide interfacing with java.

  49. Ebenezer March 10, 2016 at 6:16 pm #

    Nice post. I got your book on “Master Machine Learning Algorithm”, It is very good but i want to know how to replicate the example on “Logistic Regression” using weka pakage and java but could not. Please assist. Thank you.

  50. Nikhil K S March 30, 2016 at 2:24 am #

    Hey,
    So we provide only the data set and select a classifier and it automatically classifies is it?
    I mean if we provide a new training set?

  52. Carlos May 4, 2016 at 1:38 am #

    A quick question: I ran the SMO classifier, as I needed a Support Vector Machine, and got a set of results that included a list of the features used, under a line that reads, “Machine Linear: showing attribute weights, not support vectors”. Each feature has a value to the left and the label “(normalised)” next to it.

    What does this mean, please? Values for each feature was used in the classification so I assume the numbers refer to some sort of weighting i.e. how heavily each feature impacted on the results. Is this the case?

    Any chance someone can please explain this in simple terms as I am a beginner, or at least point me to a website with a detailed explanation of the SMO classifier and ALL its results section contents.

  53. Tarak June 17, 2016 at 10:24 am #

    I am not seeing the “data” directory when i open Weka console. It only shows me the files on my local drive.

  54. Emmanuel June 24, 2016 at 11:23 am #

    Hi, am quite new to weka. I am faced with a problem of clustering my data. I have a dataset of 13 distinct attributes, though only 12 is significant to the dataset. Am interested in using the kmeans algorithm and the euclidean distance for the distance function.

    The problem faced is to cluster the data belonging to 1 class along among the dataset.
    How do i go about it.
    Please answer is need ASAP.

  55. Rana Sobhy June 30, 2016 at 11:10 pm #

    I’m new in weka. I’ve used my own dataset to run the LibSVM classifier, but LibSVM gives the error message “it cannot handle numeric class”.

    I turned the class into nominal in the header of the .arff file but that made the .arff file unreadable.
    How can I fix this problem?

    • Jason Brownlee July 1, 2016 at 5:40 am #

      Maybe you can use a data filter to convert the numeric class to nominal? Perhaps the Discretize filter?

      Also, consider support vector regression.
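
      A minimal sketch of applying the Discretize filter to a numeric class attribute with the Java API; the file name and number of bins are placeholders, and the class index is set only after filtering so the filter is allowed to modify that attribute (if the numeric values are really just labels, the NumericToNominal filter is an alternative):

      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;
      import weka.filters.Filter;
      import weka.filters.unsupervised.attribute.Discretize;

      public class NumericClassToNominal {
          public static void main(String[] args) throws Exception {
              // Load without setting the class index so the filter is free to modify the last attribute
              Instances data = DataSource.read("mydata.arff"); // placeholder file name

              Discretize discretize = new Discretize();
              discretize.setAttributeIndices("last"); // only the would-be class attribute
              discretize.setBins(3);                  // number of nominal buckets, an arbitrary choice
              discretize.setInputFormat(data);
              Instances discretized = Filter.useFilter(data, discretize);

              // The last attribute is now nominal and can be used as the class for classification
              discretized.setClassIndex(discretized.numAttributes() - 1);
              System.out.println(discretized.classAttribute());
          }
      }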

  56. salma rafat July 10, 2016 at 7:54 pm #

    I am beginner in Weka. i use weka for ECG classification not for a dataset but classify record by record … firstly i convert .mat file to .csv and by weka i convert it to .arff as i can not get relation and header for my file >>>> i have question here in ECG i use .mat file or use annotation files also ????

  57. rafe July 28, 2016 at 7:21 pm #

    Hi am beginning of the weka tool. i used audio classification….but result is always below 50%

  58. Gagan August 6, 2016 at 6:36 pm #

    Great Post!!

  59. Ros August 17, 2016 at 8:02 pm #

    Hi,

    I am training a data set of posts from Facebook on Naive Bayes multinomial. The data gets more correctly classified if I use the “use training set” test option, but if I use cross folds or percentage split the percentage of correctly classified instances drops drastically (I get 40% or below). Why is this happening? I have tried to search for this problem and I cannot find solutions. I repeated with 3 different classifiers
    (SMO, IBk and J45) but the same problem persists. What could be the problem and how do I solve it?

    thank you.

  60. gramcha August 24, 2016 at 3:56 pm #

    Hi Jason,

    I have created the model on train data using weka.

    For Example:

    AGE, SEX, Status
    15, male, Reject
    19, male, Approve

    I tried to run model on test data, it saying the type mismatch for status attribute.

    For Example:

    AGE, SEX, Status
    10, male, ?
    21, male, ?

    I tried to leave that status attribute without ‘?’ mark and just empty. Still it is saying same error

    What I am doing wrong?

  61. tejas zarekar September 12, 2016 at 7:29 am #

    how to apply the model on test data?

  62. Mark September 17, 2016 at 4:58 pm #

    Weka is awesome. I believe every statistician should know this!

  63. Justin September 21, 2016 at 11:13 pm #

    Can you please provide a link to example/demo code to classify unlabeled data. I have a set of unlabeled data, training data and test data. Using weka, I am not able to get the label for unlabeled data.

    Thanks in anticipation

  64. Nono September 23, 2016 at 1:52 am #

    Hi everyone
    Am getting this error class index is negative (not set)! from WEKA, please can you throw more light on how it can be resolve?

  65. Deena October 6, 2016 at 7:57 pm #

    Can you help to do web mining classification using weka tool..

    • Jason Brownlee October 7, 2016 at 7:54 am #

      Sorry, I do not have any examples of web mining Deena.

  66. Rajesh October 7, 2016 at 1:09 am #

    I am dealing with multi class problem weka. Here my doubt is variations in accuracy results with different classifiers with same attributes. EX: With SMO accuracy -84%, Random Forest- 92%. How this much variation comes? and is there any option to enhance smo performance in weka. Let me know as early as possible. thank you.

    • Jason Brownlee October 7, 2016 at 7:57 am #

      Yes, different algorithms will get different performance Rajesh.

      The goal of applied machine learning is to find the models and model parameters that give the best performance.

  67. Nor November 15, 2016 at 2:27 pm #

    Can you guide me how to change the parameters in WEKA? I want to change the classification technique with different parameters values (at least 3 parameters with 3 different value for each), AND their classification results.

    Thanks

  68. Viorel Stolea November 17, 2016 at 8:15 am #

    function.MultilayerPerceptron 98% with momentum of 0.11

  69. mtokhy December 13, 2016 at 8:31 pm #

    I am using the AutoWEKA tab to classify my own data-set. Now I want to run the obtained best conditions over another data-set for a comparison task. How do I do this?

    Best

    • Jason Brownlee December 14, 2016 at 8:25 am #

      Hi mtokhy, sorry I am not familiar with AutoWEKA.

  70. chintan zaveri January 4, 2017 at 4:47 am #

    98% accuracy by choosing training set in test option. Could you please elaborate it ?

    • Jason Brownlee January 4, 2017 at 8:58 am #

      Sorry, I don’t understand. Could you restate your question please?
