Logistic Regression Tutorial for Machine Learning

Logistic regression is one of the most popular machine learning algorithms for binary classification. This is because it is a simple algorithm that performs very well on a wide range of problems.

In this post you are going to discover the logistic regression algorithm for binary classification, step-by-step. After reading this post you will know:

  • How to calculate the logistic function.
  • How to learn the coefficients for a logistic regression model using stochastic gradient descent.
  • How to make predictions using a logistic regression model.

This post was written for developers and does not assume a background in statistics or probability. Open a spreadsheet and follow along. If you have any questions about logistic regression, ask in the comments and I will do my best to answer.

Let’s get started.

Update Nov/2016: Fixed a small typo in the update equation for b0.

Photo by Brian Gratwicke, some rights reserved.

Tutorial Dataset

In this tutorial we will use a contrived dataset.

This dataset has two input variables (X1 and X2) and one output variable (Y). The input variables are real-valued random numbers drawn from a Gaussian distribution. The output variable has two values, making the problem a binary classification problem.

The raw data is listed below.
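
X1           X2             Y
2.7810836    2.550537003    0
1.465489372  2.362125076    0
3.396561688  4.400293529    0
1.38807019   1.850220317    0
3.06407232   3.005305973    0
7.627531214  2.759262235    1
5.332441248  2.088626775    1
6.922596716  1.77106367     1
8.675418651  -0.2420686549  1
7.673756466  3.508563011    1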

Below is a plot of the dataset. You can see that it is completely contrived and that we can easily draw a line to separate the classes.

This is exactly what we are going to do with the logistic regression model.

Logistic Regression Tutorial Dataset

Logistic Function

Before we dive into logistic regression, let’s take a look at the logistic function, the heart of the logistic regression technique.

The logistic function is defined as:

transformed = 1 / (1 + e^-x)

Where e is the numerical constant Euler’s number and x is an input we plug into the function.

Let’s plug in a series of numbers from -5 to +5 and see how the logistic function transforms them:
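
x     1 / (1 + e^-x)
-5    0.0067
-4    0.0180
-3    0.0474
-2    0.1192
-1    0.2689
0     0.5000
1     0.7311
2     0.8808
3     0.9526
4     0.9820
5     0.9933

If you prefer code to a spreadsheet, here is a minimal sketch in Python that generates the table above (the function name logistic is just an illustrative choice):

import math

def logistic(x):
    # transform x into the range between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

for x in range(-5, 6):
    print(x, round(logistic(x), 4))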

You can see that all of the inputs have been transformed into the range between 0 and 1, that the large negative numbers result in values close to zero, and that the large positive numbers result in values close to one. You can also see that 0 is transformed to 0.5, the midpoint of the new range.

From this we can see that, as long as our inputs are centered around zero, we can plug positive and negative values into the function and always get a consistent transform into the new range.

Plot of the Logistic Function

Logistic Regression Model

The logistic regression model takes real-valued inputs and makes a prediction as to the probability of the input belonging to the default class (class 1).

If the probability is > 0.5 we can take the output as a prediction for the default class (class 1), otherwise the prediction is for the other class (class 0).

For this dataset, the logistic regression model has three coefficients, just like linear regression, for example:

output = b0 + b1*x1 + b2*x2

The job of the learning algorithm will be to discover the best values for the coefficients (b0, b1 and b2) based on the training data.

Unlike linear regression, the output is transformed into a probability using the logistic function:

p(class=1) = 1 / (1 + e^(-output))

In your spreadsheet this would be written as:

p(class=1) = 1 / (1 + EXP(-output))

Logistic Regression by Stochastic Gradient Descent

We can estimate the values of the coefficients using stochastic gradient descent.

This is a simple procedure that can be used by many algorithms in machine learning. It works by using the model to calculate a prediction for each instance in the training set and calculating the error for each prediction.

We can apply stochastic gradient descent to the problem of finding the coefficients for the logistic regression model as follows:

Given each training instance:

  1. Calculate a prediction using the current values of the coefficients.
  2. Calculate new coefficient values based on the error in the prediction.

The process is repeated until the model is accurate enough (e.g. the error drops to some desirable level) or for a fixed number of iterations. You continue to update the model for training instances, correcting errors, until the model is accurate enough or cannot be made any more accurate. It is often a good idea to randomize the order of the training instances shown to the model to mix up the corrections made.

Because we update the model for each training pattern, this is called online learning. It is also possible to collect all of the changes to the model over all training instances and make one large update. This variation is called batch learning and might make a nice extension to this tutorial if you’re feeling adventurous.

Calculate Prediction

Let’s start off by assigning 0.0 to each coefficient and calculating the predicted probability for the first training instance, which belongs to class 0.

b0 = 0.0

b1 = 0.0

b2 = 0.0

The first training instance is: x1=2.7810836, x2=2.550537003, Y=0

Using the above equation we can plug in all of these numbers and calculate a prediction:

prediction = 1 / (1 + e^(-(b0 + b1*x1 + b2*x2)))

prediction = 1 / (1 + e^(-(0.0 + 0.0*2.7810836 + 0.0*2.550537003)))

prediction = 0.5
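
The same calculation as a minimal Python sketch:

import math

b0, b1, b2 = 0.0, 0.0, 0.0       # all coefficients start at zero
x1, x2 = 2.7810836, 2.550537003  # first training instance

output = b0 + b1 * x1 + b2 * x2
prediction = 1.0 / (1.0 + math.exp(-output))
print(prediction)  # 0.5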

Calculate New Coefficients

We can calculate the new coefficient values using a simple update equation.

b = b + alpha * (y - prediction) * prediction * (1 - prediction) * x

Where b is the coefficient we are updating and prediction is the output of making a prediction using the model.

Alpha is a parameter that you must specify at the beginning of the training run. This is the learning rate and it controls how much the coefficients (and therefore the model) change each time they are updated. Larger learning rates are used in online learning (when we update the model for each training instance). Good values might be in the range 0.1 to 0.3. Let’s use a value of 0.3.

You will notice that the last term in the equation is x, the input value for the coefficient being updated. Notice that b0 does not have an input; this coefficient is often called the bias or the intercept, and we can assume it always has an input value of 1.0. This assumption can help when implementing the algorithm using vectors or arrays.

Let’s update the coefficients using the prediction (0.5) and coefficient values (0.0) from the previous section.

b0 = b0 + 0.3 * (0 - 0.5) * 0.5 * (1 - 0.5) * 1.0

b1 = b1 + 0.3 * (0 - 0.5) * 0.5 * (1 - 0.5) * 2.7810836

b2 = b2 + 0.3 * (0 - 0.5) * 0.5 * (1 - 0.5) * 2.550537003

or

b0 = -0.0375

b1 = -0.104290635

b2 = -0.09564513761
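
Continuing the Python sketch from the previous section, the same update step looks like this:

alpha = 0.3
y = 0  # class value of the first training instance

# the input value for b0 is assumed to be 1.0
b0 = b0 + alpha * (y - prediction) * prediction * (1 - prediction) * 1.0
b1 = b1 + alpha * (y - prediction) * prediction * (1 - prediction) * x1
b2 = b2 + alpha * (y - prediction) * prediction * (1 - prediction) * x2

print(b0, b1, b2)  # approximately -0.0375, -0.104290635, -0.09564513761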

Repeat the Process

We can repeat this process and update the model for each training instance in the dataset.

A single iteration through the training dataset is called an epoch. It is common to repeat the stochastic gradient descent procedure for a fixed number of epochs.
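
Putting the pieces together, here is a minimal, self-contained Python sketch of the whole training procedure, using the tutorial dataset, alpha = 0.3 and 10 epochs:

import math

# tutorial dataset: (X1, X2, Y)
dataset = [
    (2.7810836, 2.550537003, 0),
    (1.465489372, 2.362125076, 0),
    (3.396561688, 4.400293529, 0),
    (1.38807019, 1.850220317, 0),
    (3.06407232, 3.005305973, 0),
    (7.627531214, 2.759262235, 1),
    (5.332441248, 2.088626775, 1),
    (6.922596716, 1.77106367, 1),
    (8.675418651, -0.2420686549, 1),
    (7.673756466, 3.508563011, 1),
]

alpha = 0.3
b0 = b1 = b2 = 0.0

for epoch in range(10):  # one epoch is one pass through the dataset
    for x1, x2, y in dataset:
        # calculate a prediction using the current coefficients
        output = b0 + b1 * x1 + b2 * x2
        prediction = 1.0 / (1.0 + math.exp(-output))
        # update each coefficient based on the error in the prediction
        error = y - prediction
        b0 = b0 + alpha * error * prediction * (1 - prediction) * 1.0
        b1 = b1 + alpha * error * prediction * (1 - prediction) * x1
        b2 = b2 + alpha * error * prediction * (1 - prediction) * x2

print(b0, b1, b2)  # approximately -0.4066, 0.8526, -1.1047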

At the end of each epoch you can calculate error values for the model. Because this is a classification problem, it is also useful to track how accurate the model is at each iteration.

The graph below shows a plot of the accuracy of the model over 10 epochs.

Logistic Regression with Gradient Descent Accuracy versus Iteration

You can see that the model very quickly achieves 100% accuracy on the training dataset.

The coefficients calculated after 10 epochs of stochastic gradient descent are:

b0 = -0.4066054641

b1 = 0.8525733164

b2 = -1.104746259

Make Predictions

Now that we have trained the model, we can use it to make predictions.

We can make predictions on the training dataset, but this could just as easily be new data.

Using the coefficients learned after 10 epochs, we can calculate an output value for each training instance:
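
Continuing the training sketch above (this assumes the dataset list and the import of math from that sketch):

b0, b1, b2 = -0.4066054641, 0.8525733164, -1.104746259  # learned after 10 epochs

for x1, x2, y in dataset:
    output = b0 + b1 * x1 + b2 * x2
    probability = 1.0 / (1.0 + math.exp(-output))
    print(round(probability, 4), y)  # predicted probability of class 1 next to the actual class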

These are the probabilities of each instance belonging to class=1. We can convert these into crisp class values using:

prediction = IF (output < 0.5) Then 0 Else 1

With this simple procedure we can convert all of the outputs to class values:

Finally, we can calculate the accuracy for the model on the training dataset:

accuracy = (correct predictions / num predictions made) * 100

accuracy = (10 / 10) * 100

accuracy = 100%
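
As a final check, the conversion to crisp class values and the accuracy calculation can be appended to the previous sketch (again assuming the dataset list and the learned coefficients b0, b1 and b2):

correct = 0
for x1, x2, y in dataset:
    output = b0 + b1 * x1 + b2 * x2
    probability = 1.0 / (1.0 + math.exp(-output))
    crisp = 0 if probability < 0.5 else 1  # the IF rule above
    correct += int(crisp == y)

print(correct / len(dataset) * 100.0)  # 100.0 on this contrived dataset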

Summary

In this post you discovered how you can implement logistic regression from scratch, step-by-step. You learned:

  • How to calculate the logistic function.
  • How to learn the coefficients for a logistic regression model using stochastic gradient descent.
  • How to make predictions using a logistic regression model.

Do you have any questions about this post or logistic regression?
Leave a comment and ask your question, I’ll do my best to answer.



39 Responses to Logistic Regression Tutorial for Machine Learning

  1. Sean April 16, 2016 at 4:58 am #

    Hi Jason,

    Great blog! Nice job. Just wondering how to determine the form of the update equation? In this case, it’s in the form of b = b + alpha * (y – prediction) * prediction * (1 – prediction) * x. How do I know it should be like this?

    Thanks.

    • Chris Edlinger May 8, 2016 at 8:30 pm #

      Actually I have the same question. Plus I tried to reproduce your example but I can’t get the same (b0,b1,b2) after 10 iterations. I have instead (-0.02,0.312,-0.095)
      thank you

      • Van Le May 28, 2016 at 5:29 pm #

        I got the wrong value too, but I found my error. I don’t know how to describe it, so I’ll leave my code; it’s written in C++. It may be helpful for you. Here it is:

        int k = 0;
        for (int j = 0; j < 100; j++) { // I forgot this loop, I added it and then I got the right value.
            k = 0;
            while (k < 10) {
                // calculate the prediction with the current coefficients
                output = 0;
                for (int i = 0; i < 3; i++)
                    output += b[i] * x[k][i];
                prediction = 1 / (1 + exp(-output));

                // update each coefficient based on the error
                for (int i = 0; i < 3; i++) {
                    b[i] = b[i] + alpha * (y[k] - prediction) * prediction * (1 - prediction) * x[k][i];
                    cout << b[i] << "\n";
                }
                k++;
            }
        }

      • Dylan November 6, 2016 at 6:13 am #

        Hi I managed to reproduce the same result after 10 epochs. There is a typo in the code provided here. The updating equation for:

        b0 = 0.0 + 0.3 * (0 – 0.5) * 0.5 * (1 – 0.5) * 1.0

        should be changed as:

        b0 = b0 + 0.3 * (0 – 0.5) * 0.5 * (1 – 0.5) * 1.0

        Hope this helps 🙂

    • Atlas August 18, 2016 at 6:59 am #

      I have the same question

  2. EDLINGER May 9, 2016 at 12:44 am #

    Hi,

    I can’t reproduce the values of your 10th iteration of the vector (b0, b1, b2).
    Can you explain the iteration process again?
    Thank you.

  3. Gyanendro Loitongbam August 10, 2016 at 2:48 pm #

    Hi all,
    In this blog it is written that the coefficients calculated after 10
    epochs of stochastic gradient descent are:

    b0 = -0.4066054641

    b1 = 0.8525733164

    b2 = -1.104746259

    From my understanding, I have a matrix of inputs of size (3×10: x1, x2, y1) and a weight matrix (3×10: b0, b1, b2) after 10 epochs.

    My doubt is whether the learned coefficients are only for that particular input vector (say, the first input) or for all of the rest of the inputs.

    1) If it is for a single input vector then, while testing a model built using the method described above, how do we pass in an unseen input? Will every input pass through all of the neurons?

    2) If not, then how do we compute the coefficients?

  4. Tom September 30, 2016 at 12:51 pm #

    This article was very well-written and helped me immensely in understanding logistic regression. Thank you so much.

  5. Karthik October 6, 2016 at 2:53 pm #

    Jason, elucidated it very well! Clean and neat, I have come across many Blogs on ML. But yours always keep me hooked to your Knowledge Materials.

  6. Dylan November 14, 2016 at 11:12 pm #

    Hi Jason, I have some questions for you.

    [1] How can we produce the line separating these two classes? I have tried using ‘B0’ as the y-intercept, and ‘B1’ as the gradient. I am not sure if this is the case as ‘B2’ is left out of the equation.

    Example:
    Assuming the separator line will be from (x0,y0) to (x1,y1)

    **Using the values of B0 and B1 after 10 epochs**

    when x0 = minX1 = 1.38807
    y0 = 1.38807(0.85257334) + (-0.40660548) = 0.77682614

    when x1 = maxX1 = 8.675419
    y1 = 8.675419(0.8527334) + (-0.40660548) = 6.9898252

    This line (1.38807, 0.77682614) to (8.675419,6.9898252) perfectly separates the two classes, however upon increasing the number of epochs, the gradient changes and therefore produces false positives.

    Any ideas please?

    Thanks in advance for your help

    • Jason Brownlee November 15, 2016 at 7:56 am #

      Hi Dylan, I agree with your intuition that the regression line best separates the two classes. Drawing it should draw the class boundary for the classifier I would expect.

      It is possible that additional training is overfitting or that online gradient descent is resulting in noisy changes to the line.

  7. Harish November 16, 2016 at 4:27 pm #

    what’s the main use of logistic regression Algorithm which was explained by you ….?

    • Jason Brownlee November 17, 2016 at 9:50 am #

      Great question Harish,

      Logistic regression is for binary classification problems, ideally where we can separate the 2 classes by a line or hyperplane.

      If logistic regression works well on your problem, use it. If not, you may need to move on to more advanced methods, even non-linear methods.

  8. Harish November 16, 2016 at 4:49 pm #

    1)whats the use of calculating new coefficients i am unable to understand where these coefficients used??
    2)How do you draw the accurate graph and values of MAKE PREDICTION??

    • Jason Brownlee November 17, 2016 at 9:52 am #

      Hi Harish, coefficients are used to make predictions on new data.

      We use the prediction equation with the coefficients and the input values in order to predict the class of a new piece of data.

      If we make predictions on test data and then calculate the error of those predictions, we can make a graph of accuracy each time the coefficients are updated. That is how the graph is created.

  9. Jack Paterson February 17, 2017 at 3:25 am #

    Hi Jason – Thanks for the article. It has been helpful but can you please explain the iteration process for each Epoch?

    I don’t get the same values for the 10th Epoch. It isn’t clear if you are moving to the corresponding x1, x2 and Y values for each training instance in the Prediction and the calculation of the coefficients. For example the 2nd training instance I am assuming:

    prediction = 1 / (1 + e^(-(-0.0375 + -1.043*2.7810836 + -0.0956*2.550537003)))

    =0.3974

    b0 = b0 + 0.3 * (0 – 0.0375) * 0.0375 * (1 – 0.0375) * 1.0

    b1 = b1 + 0.3 * (0 – 0.0375) * 0.0375 * (1 – 0.0375) * 1.4655

    b2 = b2 + 0.3 * (0 – 0.0375) * 0.5 * (1 – 0.0375) * 2.3621

    I repeat this step another 8 times, updating the y, X1 and X2 values, but they don’t match yours. I’d be grateful if you could point out what I’m missing or post your results for each training instance.

    • Jack Paterson February 17, 2017 at 3:29 am #

      Sorry the prediction line should read:

      prediction = 1 / (1 + e^(-(-0.0375 + -1.043*1.4655+ -0.0956*2.3621)))

      • Alexandre CHIROUZE March 9, 2017 at 3:05 am #

        You are right, but you are missing what one epoch means.

        The first time, I did what you did: updated b0, b1 and b2 ten times, and I did not get the expected result.

        You have to update your b0, b1 and b2 10 * 10 times:

        One epoch is one complete pass over your training data.

        Do what you did in your post, but repeat it 10 times and you’ll have the same result as the tutorial.

        I wrote a C program; I’ll post the code here:

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        float x1[10] = {2.7810836, 1.465489372, 3.396561688, 1.38807019, 3.06407232,
                        7.627531214, 5.332441248, 6.922596716, 8.675418651, 7.673756466};
        float x2[10] = {2.550537003, 2.362125076, 4.400293529, 1.850220317, 3.005305973,
                        2.759262235, 2.088626775, 1.77106367, -0.2420686549, 3.508563011};
        int y_1[10] = {0, 0, 0, 0, 0, 1, 1, 1, 1, 1};

        float b0 = 0.00f;
        float b1 = 0.00f;
        float b2 = 0.00f;

        float prediction = 0;
        float output = 0;

        int main()
        {
            int epoch = 0;
            float alpha = 0.3;

            while (epoch < 10)
            {
                int i = 0;

                while (i < 10)
                {
                    // CALCULATE PREDICTION
                    output = b0 + (b1 * x1[i]) + (b2 * x2[i]);
                    prediction = 1 / (1 + exp(-output));
                    printf("Prediction = %lf\n", prediction);

                    // REFINE THE COEFFICIENTS
                    b0 = b0 + alpha * (y_1[i] - prediction) * prediction * (1 - prediction) * 1.00f;
                    b1 = b1 + alpha * (y_1[i] - prediction) * prediction * (1 - prediction) * x1[i];
                    b2 = b2 + alpha * (y_1[i] - prediction) * prediction * (1 - prediction) * x2[i];
                    printf("New: b0 = %lf | b1 = %lf | b2 = %lf\n\n", b0, b1, b2);
                    i++;
                }
                epoch++;
            }

            int i = 0;
            while (i < 10)
            {
                output = b0 + (b1 * x1[i]) + (b2 * x2[i]);
                prediction = 1 / (1 + exp(-output));
                printf("Prediction = %lf\n", prediction);
                i++;
            }

            return 0;
        }

    • Jason Brownlee February 17, 2017 at 9:59 am #

      Hi Jack,

      Yes, the process is repeated, enumerating each training example. The coefficients (b0, b1, b2) output from the previous iteration are used as inputs in the subsequent iteration.

      Does that help? Is there a specific aspect I can make clearer?

      I provide the fully working example and spreadsheet with my book if that helps.

  10. Ntate Ndaba March 10, 2017 at 11:03 pm #

    Hi Jason,

    Thanks for this, it’s really helpful. I just have two questions.

    1. About the meaning of probability,

    In the beginning you said, ‘If the probability is > 0.5 we can take the output as a prediction for the default class’; however, the IF statement states that ‘prediction = IF (output < 0.5) Then 0 Else 1’, which means that a probability > 0.5 means the instance belongs to class 1. May you please correct me :).

    2. ‘At the end of epoch you can calculate error values for the model’

    Do we do this by testing each instance in the training set using the coefficients obtained from the last instance during training, and checking how many are correctly classified vs those incorrectly classified, to get the error of the model?

    Thanks for your help.

    • Jason Brownlee March 11, 2017 at 8:00 am #

      Hi Ntate,

      Yes, here the default class is class 1. If P > 0.5, then predict class 1.

      Models are generally trained on a training dataset and evaluated on a test or validation dataset.

      Does that help?

  11. Ntate Ndaba March 12, 2017 at 11:06 am #

    Hi Jason

    Thank you for the feedback. Yes, it helps. I love your articles.

    Regards,
    Ntate

  12. Simon April 19, 2017 at 12:48 pm #

    Hi Jason,

    I noticed you assigned alpha as 0.3 and stated that a “good” value is between 0.1 – 0.3.

    b = b + alpha * (y – prediction) * prediction * (1 – prediction) * x

    Can you define why the range between 0.1 – 0.3 are good values? also, is there another way to determine alpha?

    regards,

    • Jason Brownlee April 20, 2017 at 9:21 am #

      Trial and error is the best way to configure alpha.

      My suggestion is based on observations.

  13. Rohan July 9, 2017 at 8:43 pm #

    Thank you. Helped me understand it easily.

  14. Monjur July 10, 2017 at 3:47 am #

    I’ve some categorical and ordinal variables as independent variables. Do I need to normalise them for logistic regression? If yes, which normalisation?

    • Jason Brownlee July 11, 2017 at 10:21 am #

      I would recommend converting the categorical variables to integer encoded or one hot encoded first. Then consider standardizing or normalizing the real-valued variables.

  15. Robert July 24, 2017 at 4:42 am #

    Hey Jason,

    I hope that you can help me out. I just loaded the dataset into a pandas dataframe and ran over the values with the functions given in this article. But somehow I do not get the same results. Can you tell me why?

    import math

    b0 = 0.0
    b1 = 0.0
    b2 = 0.0
    alpha = 0.3
    for i in range(10):
        print('epoche ' + str(i))
        for j in range(10):
            prediction = 1/(1+math.exp(-(b0+b1*df.X1[j]+0*df.X2[j])))
            b0 = b0 + alpha*(df.Y[j]-prediction)*prediction*(1-prediction)*1
            b1 = b1 + alpha*(df.Y[j]-prediction)*prediction*(1-prediction)*df.X1[j]
            b2 = b2 + alpha*(df.Y[j]-prediction)*prediction*(1-prediction)*df.X2[j]
            print(j, 'b0:'+str(b0), 'b1:'+str(b1), 'b2:'+str(b2), 'prediction:', str(prediction))

    epoche 0
    0 b0:-0.0375 b1:-0.104290635 b2:-0.0956451376125 prediction: 0.5
    1 b0:-0.0711363539015 b1:-0.153584354155 b2:-0.175098412628 prediction: 0.4525589342855867
    2 b0:-0.0956211308566 b1:-0.2367484095 b2:-0.282838618223 prediction: 0.3559937888383731
    3 b0:-0.12398808955 b1:-0.276123739244 b2:-0.335323741529 prediction: 0.3955015163005356
    4 b0:-0.140423954937 b1:-0.326484419432 b2:-0.384718545948 prediction: 0.27487029790669615
    5 b0:-0.122884891209 b1:-0.192704663377 b2:-0.336323669765 prediction: 0.06718893790753155
    6 b0:-0.0812720241683 b1:0.0291935052754 b2:-0.24940992148 prediction: 0.24040302936619257
    7 b0:-0.0461629882724 b1:0.27223920187 b2:-0.187229583516 prediction: 0.5301690178124752
    8 b0:-0.0439592934139 b1:0.291357177347 b2:-0.187763028966 prediction: 0.9101629406368437
    9 b0:-0.041234497312 b1:0.312266599052 b2:-0.178202910151 prediction: 0.8995147707414556
    epoche 1
    0 b0:-0.0854175615843 b1:0.189389803607 b2:-0.290893450483 prediction: 0.6957636219000934
    1 b0:-0.126132084903 b1:0.129723102398 b2:-0.387066246971 prediction: 0.5478855806515489
    2 b0:-0.168426127803 b1:-0.013931223349 b2:-0.573172450262 prediction: 0.5779785060018552
    3 b0:-0.202118039235 b1:-0.0606979612511 b2:-0.635509909311 prediction: 0.4531965138273989
    4 b0:-0.231317727075 b1:-0.150167916514 b2:-0.723263905585 prediction: 0.40417453404597653
    5 b0:-0.192771357535 b1:0.143845720337 b2:-0.616904363818 prediction: 0.20153497961346012
    6 b0:-0.16786327908 b1:0.2766665853 b2:-0.564880684243 prediction: 0.6397495977782179
    7 b0:-0.162238550103 b1:0.315604315639 b2:-0.554918931099 prediction: 0.8516230388309892
    8 b0:-0.160844460322 b1:0.327698628134 b2:-0.555256396538 prediction: 0.9292852140544192
    9 b0:-0.158782126878 b1:0.34352447273 b2:-0.548020569702 prediction: 0.913238571936509
    epoche 2
    0 b0:-0.203070195453 b1:0.220355651541 b2:-0.660978927394 prediction: 0.6892441807751586
    1 b0:-0.242672456027 b1:0.162318959562 b2:-0.754524420162 prediction: 0.5299288459930025
    2 b0:-0.284900459421 b1:0.0188889410738 b2:-0.940340030239 prediction: 0.5765566602275273
    3 b0:-0.317036444837 b1:-0.0257180623077 b2:-0.999798683361 prediction: 0.43568790553413944
    4 b0:-0.346058130501 b1:-0.114642606031 b2:-1.08701772863 prediction: 0.4023126069119149
    5 b0:-0.305303900258 b1:0.196211557251 b2:-0.974566120208 prediction: 0.22784879089424026
    6 b0:-0.284135730542 b1:0.309089578586 b2:-0.930353714163 prediction: 0.6772107084944106
    7 b0:-0.279392082974 b1:0.341927937662 b2:-0.921952412292 prediction: 0.8647793818147672
    8 b0:-0.278250718608 b1:0.351829751376 b2:-0.922228700829 prediction: 0.9362537346812636
    9 b0:-0.276418730838 b1:0.365887979368 b2:-0.915801056304 prediction: 0.9184600339928451
    epoche 3
    0 b0:-0.320829240308 b1:0.242378639814 b2:-1.02907170403 prediction: 0.6772464757372858
    1 b0:-0.358962425423 b1:0.186494862307 b2:-1.11914705682 prediction: 0.5085926740259167
    2 b0:-0.400784178807 b1:0.0444446970373 b2:-1.30317504761 prediction: 0.5681921310493052
    3 b0:-0.431106414978 b1:0.00235530491457 b2:-1.35927786503 prediction: 0.4160301017601509
    4 b0:-0.459481564543 b1:-0.0845882054421 b2:-1.4445538715 prediction: 0.39558638097213306
    5 b0:-0.417358455079 b1:0.236707126821 b2:-1.32832516633 prediction: 0.24886389211209164
    6 b0:-0.398407914119 b1:0.337759773107 b2:-1.28874455908 prediction: 0.6994895638042464
    7 b0:-0.394265226466 b1:0.36643792905 b2:-1.28140759549 prediction: 0.8743265198068555
    8 b0:-0.393309643975 b1:0.374728007212 b2:-1.28163891205 prediction: 0.9418454578963239
    9 b0:-0.391663361645 b1:0.387361176894 b2:-1.27586282676 prediction: 0.9228889145563345
    epoche 4
    0 b0:-0.436106966113 b1:0.26375979738 b2:-1.38921788451 prediction: 0.6649919679911412
    1 b0:-0.472655267001 b1:0.210198650866 b2:-1.47554954252 prediction: 0.4876100903262518
    2 b0:-0.514052259808 b1:0.0695912110969 b2:-1.65770846209 prediction: 0.5600333525763922
    3 b0:-0.542575863527 b1:0.029998447064 b2:-1.7104834132 prediction: 0.39712596044478815
    4 b0:-0.570332853627 b1:-0.0550509779888 b2:-1.79390166134 prediction: 0.38920422454205744
    5 b0:-0.52713220805 b1:0.274463294611 b2:-1.67469975148 prediction: 0.2708654841988902
    6 b0:-0.510039576896 b1:0.365608746016 b2:-1.63899962439 prediction: 0.7183774008591356
    7 b0:-0.506411991601 b1:0.390721056064 b2:-1.63257493987 prediction: 0.8829763460926162
    8 b0:-0.505614321279 b1:0.397641180051 b2:-1.63276803085 prediction: 0.9470125245639989
    9 b0:-0.504143617348 b1:0.408927003853 b2:-1.62760797344 prediction: 0.9272899884165315
    epoche 5
    0 b0:-0.548534355846 b1:0.285472649024 b2:-1.74082819456 prediction: 0.6531957979683319
    1 b0:-0.583448848755 b1:0.234305830737 b2:-1.82330059378 prediction: 0.4675015748559041
    2 b0:-0.624452043388 b1:0.095035950763 b2:-2.00372668579 prediction: 0.5528976488885597
    3 b0:-0.651241447361 b1:0.0578503777005 b2:-2.0532929853 prediction: 0.379296457131698
    4 b0:-0.678459443818 b1:-0.0255475318494 b2:-2.13509139263 prediction: 0.38367378819364967
    5 b0:-0.634483421462 b1:0.309880951336 b2:-2.0137500149 prediction: 0.29456310937959235
    6 b0:-0.618957686991 b1:0.392671018235 b2:-1.98132255018 prediction: 0.734570858647913
    7 b0:-0.615773112207 b1:0.414716545178 b2:-1.97568246547 prediction: 0.8908395402598338
    8 b0:-0.615108589413 b1:0.420481558613 b2:-1.97584332561 prediction: 0.9517573456578646
    9 b0:-0.613801104729 b1:0.430514877661 b2:-1.97125593321 prediction: 0.9316021522525871
    epoche 6
    0 b0:-0.658065683935 b1:0.307411382371 b2:-2.0841543804 prediction: 0.6418716134977803
    1 b0:-0.691328362642 b1:0.258665280242 b2:-2.16272498787 prediction: 0.4482960919573821
    2 b0:-0.731971685172 b1:0.120617728064 b2:-2.34156753699 prediction: 0.5466747645320021
    3 b0:-0.757102763343 b1:0.0857340276128 b2:-2.38806556841 prediction: 0.3624963002614126
    4 b0:-0.783848933768 b1:0.00378182714664 b2:-2.46844599415 prediction: 0.3788558237431147
    5 b0:-0.739460741679 b1:0.342354147839 b2:-2.34596733203 prediction: 0.3197321661982434
    6 b0:-0.725177602262 b1:0.418518149617 b2:-2.31613518462 prediction: 0.747650886073883
    7 b0:-0.722359808144 b1:0.438024601926 b2:-2.31114469182 prediction: 0.8977118003459105
    8 b0:-0.721803648551 b1:0.442849519233 b2:-2.31127932063 prediction: 0.9559629146792477
    9 b0:-0.72064049539 b1:0.451775273321 b2:-2.30719832447 prediction: 0.935626585073733
    epoche 7
    0 b0:-0.764713491327 b1:0.329204587117 b2:-2.41960813144 prediction: 0.630831196571219
    1 b0:-0.796322040457 b1:0.282882594302 b2:-2.49427147796 prediction: 0.4298979150158073
    2 b0:-0.836626296602 b1:0.145986702019 b2:-2.67162203546 prediction: 0.54103403758093
    3 b0:-0.86017519983 b1:0.113299171441 b2:-2.71519269466 prediction: 0.34660715971969225
    4 b0:-0.886491802455 b1:0.0326631977814 b2:-2.79428213772 prediction: 0.37448622204910614
    5 b0:-0.842093735283 b1:0.371310840978 b2:-2.67177622766 prediction: 0.34584531374424465
    6 b0:-0.828710920588 b1:0.442673914071 b2:-2.64382452257 prediction: 0.7572937776171192
    7 b0:-0.826182553165 b1:0.460176782093 b2:-2.63934662288 prediction: 0.9034135590263473
    8 b0:-0.825711234308 b1:0.464265670495 b2:-2.6394607144 prediction: 0.9595362539490104
    9 b0:-0.82466874439 b1:0.472265484239 b2:-2.63580307284 prediction: 0.9391721159610058
    epoche 8
    0 b0:-0.868485382532 b1:0.350407750494 b2:-2.74755902977 prediction: 0.6198098107703024
    1 b0:-0.898445083971 b1:0.306502126447 b2:-2.8183275918 prediction: 0.4121785664149056
    2 b0:-0.938410924898 b1:0.170755682328 b2:-2.99418902301 prediction: 0.535591774903723
    3 b0:-0.960450226361 b1:0.140163584958 b2:-3.03496658635 prediction: 0.3315041073863981
    4 b0:-0.986352835601 b1:0.0607961169703 b2:-3.11281185262 prediction: 0.37028861216547454
    5 b0:-0.942344793064 b1:0.396468835092 b2:-2.99138212281 prediction: 0.37223779515431543
    6 b0:-0.92953081713 b1:0.464798608913 b2:-2.96461850958 prediction: 0.7634705770276337
    7 b0:-0.927219758977 b1:0.480797132495 b2:-2.96052547844 prediction: 0.9078852161314516
    8 b0:-0.926812709449 b1:0.484328457556 b2:-2.96062401238 prediction: 0.9624531770487973
    9 b0:-0.925865930278 b1:0.491593810342 b2:-2.957302178 prediction: 0.9421224630749467
    epoche 9
    0 b0:-0.96935657639 b1:0.370642687688 b2:-3.06822668019 prediction: 0.6085681683087178
    1 b0:-0.997678784367 b1:0.329136792905 b2:-3.13512727786 prediction: 0.39503800634778957
    2 b0:-1.03728747577 b1:0.19460342917 b2:-3.30941714634 prediction: 0.53002748178722
    3 b0:-1.05788693505 b1:0.166009933818 b2:-3.34753068441 prediction: 0.3170928534429609
    4 b0:-1.08336985067 b1:0.087928437422 b2:-3.42411464294 prediction: 0.3660452797076954
    5 b0:-1.0401081702 b1:0.417908255596 b2:-3.30474432179 prediction: 0.39826657846895
    6 b0:-1.02756606041 b1:0.484788319168 b2:-3.27854853547 prediction: 0.7664481691662279
    7 b0:-1.0254106916 b1:0.499709068196 b2:-3.27473124008 prediction: 0.9112042239475246
    8 b0:-1.02505131574 b1:0.502826804247 b2:-3.27481823372 prediction: 0.9647626216394803
    9 b0:-1.02417732025 b1:0.509533632813 b2:-3.27175176546 prediction: 0.9444604863065394

    • Robert July 24, 2017 at 4:51 am #

      I fixed it myself.

      error: prediction = 1/(1+math.exp(-(b0+b1*df.X1[j]+0*df.X2[j])))
      info: 0 = b2

      correction: prediction = 1/(1+math.exp(-(b0+b1*df.X1[j]+b2*df.X2[j])))

      Thx for this great article.

    • Jason Brownlee July 24, 2017 at 6:57 am #

      Machine learning algorithms are stochastic and give different results each time they are run on different platforms, even sometimes when the random seed is fixed.

      See this post:
      http://machinelearningmastery.com/randomness-in-machine-learning/

  16. vidya pati July 28, 2017 at 1:20 am #

    I see that logistic regression usually has the formula below for updating the weights:

    w_j := w_j + eta * sum_{i=1}^{n} (y(i) - phi(z(i))) * x_j(i)

    But in this post we have this formula:

    b = b + alpha * (y - prediction) * prediction * (1 - prediction) * x

    How are these two formulas different?

  17. NweWin August 31, 2017 at 10:13 pm #

    First of all, thank you so much for this blog. It really helps me understand what logistic regression is and how it works, which is my thesis topic. One more thing I would like to know: how do I predict the class (0 or 1) for new data (a new x1 and x2) in this example? Please kindly give me sample calculations for predicting the class value of new data using the trained model. Thank you so much in advance.

    • Jason Brownlee September 1, 2017 at 6:47 am #

      Multiply each weight/coefficient by the input value and add the values together to get the prediction.
