Linear regression is a prediction method that is more than 200 years old.

Simple linear regression is a great first machine learning algorithm to implement as it requires you to estimate properties from your training dataset, but is simple enough for beginners to understand.

In this tutorial, you will discover how to implement the simple linear regression algorithm from scratch in Python.

After completing this tutorial you will know:

- How to estimate statistical quantities from training data.
- How to estimate linear regression coefficients from data.
- How to make predictions using linear regression for new data.

Let’s get started.

## Description

This section is divided into two parts, a description of the simple linear regression technique and a description of the dataset to which we will later apply it.

### Simple Linear Regression

Linear regression assumes a linear or straight line relationship between the input variables (X) and the single output variable (y).

More specifically, that output (y) can be calculated from a linear combination of the input variables (X). When there is a single input variable, the method is referred to as simple linear regression.

In simple linear regression we can use statistics on the training data to estimate the coefficients required by the model to make predictions on new data.

The line for a simple linear regression model can be written as:

```
y = b0 + b1 * x
```

where b0 and b1 are the coefficients we must estimate from the training data.

Once the coefficients are known, we can use this equation to estimate output values for y given new input examples of x.

It requires that you calculate statistical properties from the data such as mean, variance and covariance.

All the algebra has been taken care of and we are left with some arithmetic to implement to estimate the simple linear regression coefficients.

Briefly, we can estimate the coefficients as follows:

```
B1 = sum((x(i) - mean(x)) * (y(i) - mean(y))) / sum( (x(i) - mean(x))^2 )
B0 = mean(y) - B1 * mean(x)
```

where i refers to the ith value of the input x or output y.

Don’t worry if this is not clear right now; these are the functions we will implement in the tutorial.

### Swedish Insurance Dataset

We will use a real dataset to demonstrate simple linear regression.

The dataset is called the “Auto Insurance in Sweden” dataset and involves predicting the total payment for all the claims in thousands of Swedish Kronor (y) given the total number of claims (x).

This means that for a new number of claims (x) we will be able to predict the total payment of claims (y).

Here is a small sample of the first 5 records of the dataset.

```
108,392.5
19,46.2
13,15.7
124,422.2
40,119.4
```

Using the Zero Rule algorithm (which predicts the mean value), a Root Mean Squared Error (RMSE) of about 72.251 (thousands of Kronor) is expected.
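For reference, here is a minimal sketch of how this baseline could be computed once the dataset is loaded. The function name and the assumption that each row is an [x, y] pair are mine, not part of the original tutorial.

```python
from math import sqrt

# Zero Rule baseline: always predict the mean y value seen in the data
def zero_rule_rmse(dataset):
	y = [row[1] for row in dataset]
	mean_y = sum(y) / float(len(y))
	# RMSE of predicting mean_y for every example
	sum_error = sum((mean_y - value) ** 2 for value in y)
	return sqrt(sum_error / float(len(y)))
```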

Below is a scatter plot of the entire dataset.

You can download the raw dataset from here or here.

Save it to a CSV file in your local working directory with the name “**insurance.csv**“.

Note, you may need to convert the European “,” decimal separator to the “.” decimal. You will also need to change the file from white-space-separated values to CSV format.
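If you would rather do this conversion in code, below is a minimal sketch. The raw file name (‘insurance.txt’) is a placeholder for wherever you saved the download; only the output name ‘insurance.csv’ is assumed by the tutorial.

```python
# Convert the raw white-space-separated file with European decimals
# into a comma-separated file with "." decimals.
with open('insurance.txt', 'r') as raw, open('insurance.csv', 'w') as out:
	for line in raw:
		fields = line.split()
		# keep only data rows like "108 392,5"; skip headers and blank lines
		if len(fields) != 2 or not fields[0].isdigit():
			continue
		out.write('%s,%s\n' % (fields[0], fields[1].replace(',', '.')))
```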

## Tutorial

This tutorial is broken down into five parts:

- Calculate Mean and Variance.
- Calculate Covariance.
- Estimate Coefficients.
- Make Predictions.
- Predict Insurance.

These steps will give you the foundation you need to implement and train simple linear regression models for your own prediction problems.

### 1. Calculate Mean and Variance

The first step is to estimate the mean and the variance of both the input and output variables from the training data.

The mean of a list of numbers can be calculated as:

```
mean(x) = sum(x) / count(x)
```

Below is a function named **mean()** that implements this behavior for a list of numbers.

```python
# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))
```

The variance is the sum of the squared differences of each value from the mean value. (Strictly speaking, the statistical variance also divides this sum by the number of values; we omit that normalization here because the same factor would appear in the covariance below and cancels out when the coefficients are estimated in Step 3.)

Variance for a list of numbers can be calculated as:

```
variance = sum( (x - mean(x))^2 )
```

Below is a function named **variance()** that calculates the variance of a list of numbers. It requires the mean of the list to be provided as an argument, just so we don’t have to calculate it more than once.

```python
# Calculate the variance of a list of numbers
def variance(values, mean):
	return sum([(x-mean)**2 for x in values])
```

We can put these two functions together and test them on a small contrived dataset.

Below is a small dataset of x and y values.

**NOTE**: delete the column headers from this data if you save it to a .CSV file for use with the final code example.

```
x, y
1, 1
2, 3
4, 3
3, 2
5, 5
```

We can plot this dataset on a scatter plot.

We can calculate the mean and variance for both the x and y values in the example below.

```python
# Estimate Mean and Variance

# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))

# Calculate the variance of a list of numbers
def variance(values, mean):
	return sum([(x-mean)**2 for x in values])

# calculate mean and variance
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
mean_x, mean_y = mean(x), mean(y)
var_x, var_y = variance(x, mean_x), variance(y, mean_y)
print('x stats: mean=%.3f variance=%.3f' % (mean_x, var_x))
print('y stats: mean=%.3f variance=%.3f' % (mean_y, var_y))
```

Running this example prints out the mean and variance for both columns.

```
x stats: mean=3.000 variance=10.000
y stats: mean=2.800 variance=8.800
```

This is our first step; next we need to put these values to use in calculating the covariance.

### 2. Calculate Covariance

The covariance of two groups of numbers describes how those numbers change together.

Covariance is a generalization of correlation. Correlation describes the relationship between two groups of numbers, whereas covariance can describe the relationship between two or more groups of numbers.

Additionally, covariance can be normalized to produce a correlation value.

Nevertheless, we can calculate the covariance between two variables as follows:

```
covariance = sum((x(i) - mean(x)) * (y(i) - mean(y)))
```

Below is a function named **covariance()** that implements this statistic. It builds upon the previous step and takes the lists of x and y values as well as the mean of these values as arguments.

```python
# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
	covar = 0.0
	for i in range(len(x)):
		covar += (x[i] - mean_x) * (y[i] - mean_y)
	return covar
```

We can test the calculation of the covariance on the same small contrived dataset as in the previous section.

Putting it all together we get the example below.

```python
# Calculate Covariance

# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))

# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
	covar = 0.0
	for i in range(len(x)):
		covar += (x[i] - mean_x) * (y[i] - mean_y)
	return covar

# calculate covariance
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
mean_x, mean_y = mean(x), mean(y)
covar = covariance(x, mean_x, y, mean_y)
print('Covariance: %.3f' % (covar))
```

Running this example prints the covariance for the x and y variables.

```
Covariance: 8.000
```
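As an aside, the normalization mentioned earlier can be sketched with the pieces we already have: dividing the covariance by the square root of the product of the two variances gives the Pearson correlation. The missing 1/n factors cancel between the numerator and denominator, just as they will in the B1 estimate. This sketch extends the example above, reusing its variance() function and the x, y, mean_x, mean_y and covar variables:

```python
from math import sqrt

# Normalize the covariance into a correlation value in [-1, 1];
# the omitted 1/n factors cancel between numerator and denominator
corr = covar / sqrt(variance(x, mean_x) * variance(y, mean_y))
print('Correlation: %.3f' % corr)
```

For the contrived dataset this gives a correlation of about 0.853.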

We now have all the pieces in place to calculate the coefficients for our model.

### 3. Estimate Coefficients

We must estimate the values for two coefficients in simple linear regression.

The first is B1, which can be estimated as:

```
B1 = sum((x(i) - mean(x)) * (y(i) - mean(y))) / sum( (x(i) - mean(x))^2 )
```

We have learned some things above and can simplify this arithmetic to:

```
B1 = covariance(x, y) / variance(x)
```

We already have functions to calculate **covariance()** and **variance()**.

Next, we need to estimate a value for B0, also called the intercept, as it controls the starting point of the line where it intersects the y-axis.

```
B0 = mean(y) - B1 * mean(x)
```

Again, we know how to estimate B1, and we have the **mean()** function to estimate the means of x and y.

We can put all of this together into a function named **coefficients()** that takes the dataset as an argument and returns the coefficients.

```python
# Calculate coefficients
def coefficients(dataset):
	x = [row[0] for row in dataset]
	y = [row[1] for row in dataset]
	x_mean, y_mean = mean(x), mean(y)
	b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
	b0 = y_mean - b1 * x_mean
	return [b0, b1]
```

We can put this together with all of the functions from the previous two steps and test out the calculation of coefficients.

```python
# Calculate Coefficients

# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))

# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
	covar = 0.0
	for i in range(len(x)):
		covar += (x[i] - mean_x) * (y[i] - mean_y)
	return covar

# Calculate the variance of a list of numbers
def variance(values, mean):
	return sum([(x-mean)**2 for x in values])

# Calculate coefficients
def coefficients(dataset):
	x = [row[0] for row in dataset]
	y = [row[1] for row in dataset]
	x_mean, y_mean = mean(x), mean(y)
	b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
	b0 = y_mean - b1 * x_mean
	return [b0, b1]

# calculate coefficients
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
b0, b1 = coefficients(dataset)
print('Coefficients: B0=%.3f, B1=%.3f' % (b0, b1))
```

Running this example calculates and prints the coefficients.

```
Coefficients: B0=0.400, B1=0.800
```

Now that we know how to estimate the coefficients, the next step is to use them.

### 4. Make Predictions

The simple linear regression model is a line defined by coefficients estimated from training data.

Once the coefficients are estimated, we can use them to make predictions.

The equation to make predictions with a simple linear regression model is as follows:

```
y = b0 + b1 * x
```

Below is a function named **simple_linear_regression()** that implements the prediction equation to make predictions on a test dataset. It also ties together the estimation of the coefficients on training data from the steps above.

The coefficients prepared from the training data are used to make predictions on the test data, which are then returned.

```python
def simple_linear_regression(train, test):
	predictions = list()
	b0, b1 = coefficients(train)
	for row in test:
		yhat = b0 + b1 * row[0]
		predictions.append(yhat)
	return predictions
```

Let’s pull together everything we have learned and make predictions for our simple contrived dataset.

As part of this example, we will also add in a function to manage the evaluation of the predictions called **evaluate_algorithm()** and another function to estimate the Root Mean Squared Error of the predictions called **rmse_metric()**.

The full example is listed below.

```python
# Standalone simple linear regression example
from math import sqrt

# Calculate root mean squared error
def rmse_metric(actual, predicted):
	sum_error = 0.0
	for i in range(len(actual)):
		prediction_error = predicted[i] - actual[i]
		sum_error += (prediction_error ** 2)
	mean_error = sum_error / float(len(actual))
	return sqrt(mean_error)

# Evaluate regression algorithm on training dataset
def evaluate_algorithm(dataset, algorithm):
	test_set = list()
	for row in dataset:
		row_copy = list(row)
		row_copy[-1] = None
		test_set.append(row_copy)
	predicted = algorithm(dataset, test_set)
	print(predicted)
	actual = [row[-1] for row in dataset]
	rmse = rmse_metric(actual, predicted)
	return rmse

# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))

# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
	covar = 0.0
	for i in range(len(x)):
		covar += (x[i] - mean_x) * (y[i] - mean_y)
	return covar

# Calculate the variance of a list of numbers
def variance(values, mean):
	return sum([(x-mean)**2 for x in values])

# Calculate coefficients
def coefficients(dataset):
	x = [row[0] for row in dataset]
	y = [row[1] for row in dataset]
	x_mean, y_mean = mean(x), mean(y)
	b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
	b0 = y_mean - b1 * x_mean
	return [b0, b1]

# Simple linear regression algorithm
def simple_linear_regression(train, test):
	predictions = list()
	b0, b1 = coefficients(train)
	for row in test:
		yhat = b0 + b1 * row[0]
		predictions.append(yhat)
	return predictions

# Test simple linear regression
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
rmse = evaluate_algorithm(dataset, simple_linear_regression)
print('RMSE: %.3f' % (rmse))
```

Running this example displays the following output, which first lists the predictions followed by the RMSE of these predictions.

```
[1.1999999999999995, 1.9999999999999996, 3.5999999999999996, 2.8, 4.3999999999999995]
RMSE: 0.693
```

Finally, we can plot the predictions as a line and compare it to the original dataset.
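The tutorial deliberately keeps library use to a minimum, but if you want to reproduce such a plot yourself, a minimal sketch using matplotlib (an extra dependency, not part of the from-scratch code) could look like the following, reusing the coefficients() function defined above:

```python
import matplotlib.pyplot as plt

# Scatter of the contrived dataset with the fitted line on top
dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
x = [row[0] for row in dataset]
y = [row[1] for row in dataset]
b0, b1 = coefficients(dataset)
plt.scatter(x, y, color='black', label='data')
plt.plot(sorted(x), [b0 + b1 * xi for xi in sorted(x)], color='blue', label='predictions')
plt.legend()
plt.show()
```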

### 5. Predict Insurance

We now know how to implement a simple linear regression model.

Let’s apply it to the Swedish insurance dataset.

This section assumes that you have downloaded the dataset to the file **insurance.csv** and it is available in the current working directory.

We will add some convenience functions to the simple linear regression from the previous steps.

Specifically: a function to load the CSV file called **load_csv()**, a function to convert a loaded dataset to numbers called **str_column_to_float()**, a function to split a dataset into train and test sets called **train_test_split()**, a function to calculate RMSE called **rmse_metric()**, and a function to evaluate an algorithm called **evaluate_algorithm()**.

The complete example is listed below.

A training dataset of 60% of the data is used to prepare the model and predictions are made on the remaining 40%.

```python
# Simple Linear Regression on the Swedish Insurance Dataset
from random import seed
from random import randrange
from csv import reader
from math import sqrt

# Load a CSV file
def load_csv(filename):
	dataset = list()
	with open(filename, 'r') as file:
		csv_reader = reader(file)
		for row in csv_reader:
			if not row:
				continue
			dataset.append(row)
	return dataset

# Convert string column to float
def str_column_to_float(dataset, column):
	for row in dataset:
		row[column] = float(row[column].strip())

# Split a dataset into a train and test set
def train_test_split(dataset, split):
	train = list()
	train_size = split * len(dataset)
	dataset_copy = list(dataset)
	while len(train) < train_size:
		index = randrange(len(dataset_copy))
		train.append(dataset_copy.pop(index))
	return train, dataset_copy

# Calculate root mean squared error
def rmse_metric(actual, predicted):
	sum_error = 0.0
	for i in range(len(actual)):
		prediction_error = predicted[i] - actual[i]
		sum_error += (prediction_error ** 2)
	mean_error = sum_error / float(len(actual))
	return sqrt(mean_error)

# Evaluate an algorithm using a train/test split
def evaluate_algorithm(dataset, algorithm, split, *args):
	train, test = train_test_split(dataset, split)
	test_set = list()
	for row in test:
		row_copy = list(row)
		row_copy[-1] = None
		test_set.append(row_copy)
	predicted = algorithm(train, test_set, *args)
	actual = [row[-1] for row in test]
	rmse = rmse_metric(actual, predicted)
	return rmse

# Calculate the mean value of a list of numbers
def mean(values):
	return sum(values) / float(len(values))

# Calculate covariance between x and y
def covariance(x, mean_x, y, mean_y):
	covar = 0.0
	for i in range(len(x)):
		covar += (x[i] - mean_x) * (y[i] - mean_y)
	return covar

# Calculate the variance of a list of numbers
def variance(values, mean):
	return sum([(x-mean)**2 for x in values])

# Calculate coefficients
def coefficients(dataset):
	x = [row[0] for row in dataset]
	y = [row[1] for row in dataset]
	x_mean, y_mean = mean(x), mean(y)
	b1 = covariance(x, x_mean, y, y_mean) / variance(x, x_mean)
	b0 = y_mean - b1 * x_mean
	return [b0, b1]

# Simple linear regression algorithm
def simple_linear_regression(train, test):
	predictions = list()
	b0, b1 = coefficients(train)
	for row in test:
		yhat = b0 + b1 * row[0]
		predictions.append(yhat)
	return predictions

# Simple linear regression on insurance dataset
seed(1)
# load and prepare data
filename = 'insurance.csv'
dataset = load_csv(filename)
for i in range(len(dataset[0])):
	str_column_to_float(dataset, i)
# evaluate algorithm
split = 0.6
rmse = evaluate_algorithm(dataset, simple_linear_regression, split)
print('RMSE: %.3f' % (rmse))
```

Running the algorithm prints the RMSE for the trained model on the held-out test data.

A score of about 38 (thousands of Kronor) was achieved, which is much better than the Zero Rule algorithm that achieves approximately 72 (thousands of Kronor) on the same problem.

```
RMSE: 38.339
```

## Extensions

The best extension to this tutorial is to try out the algorithm on more problems.

Small datasets with just an input (x) and an output (y) column are popular for demonstration in statistics books and courses. Many of these datasets are available online.

Seek out some more small datasets and make predictions using simple linear regression.

**Did you apply simple linear regression to another dataset?**

Share your experiences in the comments below.

## Review

In this tutorial, you discovered how to implement the simple linear regression algorithm from scratch in Python.

Specifically, you learned:

- How to estimate statistics from a training dataset like mean, variance and covariance.
- How to estimate model coefficients and use them to make predictions.
- How to use simple linear regression to make predictions on a real dataset.

**Do you have any questions?**

Ask your question in the comments below and I will do my best to answer.

Hi Jason,

I have downloaded the csv file, but when I try to run the script against the file, I get the following error:

“could not convert string to float: ‘X’”

The script stops at the function `train_test_split(dataset, split)`.

Can you confirm how your csv file is structured?

Regards

Vineeth

Sorry to hear that Vineeth.

Totally my error, do not include the column headers in the small contrived dataset. Delete the first row.

I will update the example.

Hi Jason, I have deleted the column headers X and Y along with all other descriptive info in the file but I keep getting this error:

” ValueError: could not convert string to float: i”

Here are the first 5 values in my csv file after removing the white space (replacing it with commas) and changing from the European “,” to the decimal “.”:

```
108,392.5
19,46.2
13,15.7
124,422.2
40,119.4
```

Your file looks perfect.

Confirm that you do not have any empty rows on the end of the file.

This is brilliant!

Thanks for taking the time to go through all the steps and explain literally… everything.

You’re welcome Adrian, I’m glad you found it valuable.

Hello Jason,

great tutorial!

It would be great if you also provided the code for the respective plots in python!

Especially the plot for the dataset 🙂

Thank you.

Great suggestion Nelson, thanks.

I was aiming to keep the use of libs to a minimum (e.g. no matplotlib or seaborn).

Hi Nelson, you can use the matplotlib library to create this kind of scatter plot.

Please use this code to implement the scatter plot:

```python
import matplotlib.pyplot as py

py.scatter(x_axis_value, y_axis_value, color='black')
py.show()
```

I hope this helps !

`predicted = algorithm(dataset, test_set)`

where is algorithm defined???

Great question Venkat.

The “algorithm” argument in the evaluate_algorithm() function is a name of a function. We pass in the name of the function as “simple_linear_regression”. This means that when we execute algorithm() to make predictions in evaluate_algorithm(), we are in fact calling the simple_linear_regression() function.

I did this to separate algorithm evaluation from algorithm implementation, so that the same test harness can be used for many different algorithms.
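To make that concrete, here is a minimal sketch of the same idea (the comments are mine):

```python
# evaluate_algorithm() receives the function object itself, not a result:
rmse = evaluate_algorithm(dataset, simple_linear_regression)

# so inside evaluate_algorithm(), the call
#   predicted = algorithm(dataset, test_set)
# is exactly equivalent to
#   predicted = simple_linear_regression(dataset, test_set)
```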

Under section 2, Calculate Covariance, I think the meaning there is not quite clear. Please check it.

“In fact, covariance is a generalization of correlation that is limited to two variables. Whereas covariance can be calculate between two or more variables.”???????

Thanks En-wai, I have updated the language.

I was trying to comment on how covariance is an abstraction of correlation to go from 2 groups of numbers to more than 2 groups of numbers.

Hi,

I got clear idea on linear regression. Thank You.

We can calculate linear regression with the scikit-learn library as below:

```python
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
```

Please clarify whether all this calculation will happen behind the scenes when we call the above code.

Hi Ram,

There are more efficient approaches to implement these algorithms using linear algebra. I expect these more efficient approaches are being used behind the scenes.

Implementing algorithms is great for learning how they work, but it is not a good idea to use these from scratch implementations in production.
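For example, the same coefficients can be recovered with a least-squares solve in NumPy. This is a sketch of the general idea, not a claim about how any particular library works internally:

```python
import numpy as np

dataset = [[1, 1], [2, 3], [4, 3], [3, 2], [5, 5]]
X = np.array([[1.0, row[0]] for row in dataset])  # column of ones carries B0
y = np.array([row[1] for row in dataset], dtype=float)
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
print(b0, b1)  # approximately 0.4 and 0.8, matching the tutorial
```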

Hi Jason,

Many thanks for this easy-to-follow LR from scratch. I have noticed Line 9

`file = open(filename, "rb")`

is opening the file in binary mode and causing the “Error: iterator should return strings, not bytes (did you open the file in text mode?)”

Changing ‘rb’ to ‘rt’ or ‘r’

`file = open(filename, "rt")`

fixes the error.

Best regards

Great, thanks Aliyu.

It does work on my platform, but I will make the example more portable.

Hi,

Jason Brownlee

Thanks a lot for such an amazing post on simple linear regression. This post is the best tutorial for getting a clear picture of simple linear regression analysis, and I feel it is a must-read before learning multiple regression analysis.

Thanks saimadhu, I’m glad you found it useful.

Another great one and I love these foundation ones. Also, you get right into the steps/meat of it and you do not leave out cosmetics – just wrap those up neatly at the end. Thank you sir.

I would like to see/study this same type of process for datasets pertaining to the basic types of business. Specifically, how to produce a good dataset and properly frame up problem areas for business. Do you recommend any books?

Thanks Johnny.

Sorry, I don’t know of good books like that. It is an empirical pursuit – more of a craft. The best education is practice.

I am a beginner and found this very useful.

Thank you sir !

I’m glad to hear it!

How do we plot the graph using code?

You can use matplotlib.

Hi, how can we plot a line of regression on our graph? And what can we do to reduce the RMSE? Thanks

You can evaluate the RMSE each epoch/iteration, save the RMSE values in an array and plot the array using matplotlib.

What is the relationship between the numpy.cov() and numpy.var() methods and your covariance() and variance() calculations? I get very different results between the two.

Thanks

It’s a great article, thank you for helping us…

Thanks Abhishek, I’m glad that you found it useful.

Hi Jason,

Thank you for another great tutorial.

What does the Zero Based algorithm do and why is it used here?

Thank you

Do you mean the Zero Rule algorithm?

See this post for a description and worked example:

http://machinelearningmastery.com/implement-baseline-machine-learning-algorithms-scratch-python/

Nice work

Maybe tiny typo:

covariance = sum((x(i) - mean(x)) * (y - mean(y)))

should be

covariance = sum((x(i) - mean(x)) * (y(i) - mean(y)))

You have it correct in the actual code

Thanks John. Fixed.

Good day Jason

My model is y = b0 + (b1 * x) - (b2 / (b3 + x)), which gives an asymptotic approach in a flocculation process. While I get a good data fit using the scipy curve_fit routine, I do not know how to get the leverage, the diagonal elements of the hat matrix H. Whereas in your model, the X system matrix would be formulated as:

^y = H.y

and H is X(XT.X)**-1.XT, where XT is the transpose of X

In your model X.^b would be:

```
        [ 1  x0 ]
        [ 1  x1 ]   [ b0 ]
X.^b =  [ 1  x2 ] * [ b1 ]
        [ 1  x3 ]
        [ .. .. ]
```

But what would it be in my case?

Another problem is how to solve for H, so I can get the diagonal elements hii.

Any help would be greatly appreciated.

I removed the column headers from the csv file (Insurance CSV), and then I get the following error:

ValueError: could not convert string to float: female

Suguna, you need to remove all the empty cells in your csv, if any are present. That is what is causing this error.

Hi Jason,

As per the derivation: https://en.wikipedia.org/wiki/Standard_deviation

Variance = Avg( (xi - xMean)^2 )

But here in the algorithm you have used: sum([(x-mean)**2 for x in values])

which is not the average but only the sum of squared differences. Is this some kind of modification?

Hi Jason. Can you please clarify this doubt.

Thank you very much Sir,

I had been looking for someplace to start implementing algos myself. This is the best tutorial I have read by far. Waiting for other algorithms’ simple implementations.

Thanks, I’m glad to hear that.

I have many right here:

https://machinelearningmastery.com/machine-learning-algorithms-from-scratch/

too late to board the ML bus ..Digvijay..

Never too late.

Thanks a lot sir! It’s the best description so far.

I’m glad to hear it.

I’m confused about your definition of covariance. Generally it is finally divided by (n - 1), where n is the number of samples, whereas there is no such operation carried out throughout the code. Can you please clarify?

I am unable to download the dataset as a csv file. Can someone please help me???

Here is the raw file:

You will need to convert the “,” to “.” and replace the space between columns with “,”.

Hi Jason,

Can you tell how we would implement linear regression on an image dataset?

Perhaps linear regression is a bad fit for image data.

Convolutional neural networks are very popular for image data:

http://machinelearningmastery.com/crash-course-convolutional-neural-networks/

Hi Jason,

Great stuff! Thanks for the exposition.

I implemented a no-shuffling version of train_test_split which always takes the first 38 entries as training data and the last 25 entries as test data. The program gives RMSE of 45.23.

Your RMSE of 38.339 is from the randomization in train_test_split with seed(1). If I try with seed(2) then the RMSE is 37.734.

What’s the next step with different values of RMSE?

This is the variance of the method.

Ideally, we would evaluate the algorithm multiple times and report the mean and standard deviation of the model.
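For example, here is a sketch of that idea, reusing the functions and variables from the final code listing above; the loop over seeds is my illustration, not part of the tutorial:

```python
from math import sqrt

# Repeat the train/test evaluation with different random seeds
# and summarize the spread of RMSE scores
scores = list()
for s in range(1, 11):
	seed(s)
	scores.append(evaluate_algorithm(dataset, simple_linear_regression, split))
mean_rmse = sum(scores) / float(len(scores))
std_rmse = sqrt(sum((v - mean_rmse) ** 2 for v in scores) / float(len(scores)))
print('Mean RMSE: %.3f (std: %.3f)' % (mean_rmse, std_rmse))
```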

Does that help?

It does, thanks.

I ported your Python code to Pharo Smalltalk and wrote a blog post. See http://www.samadhiweb.com/blog/2017.08.06.dataframe.html.

Very cool Pierce. Nice work!

I used to work with a dev who was a massive Smalltalk fan.

That is NOT the formula for variance… you’re supposed to divide by n or n-1. What is going on?

Might be population vs sample variance.