It can be confusing when you get started in applied machine learning.

There are so many terms, and many of them are not used consistently. This is especially true if you have come from another field of study that uses some of the same terms as machine learning, but uses them differently.

For example: the terms “*model parameter*” and “*model hyperparameter*.”

Not having a clear definition for these terms is a common struggle for beginners, especially those who have come from the fields of statistics or economics.

In this post, we will take a closer look at these terms.

## What is a Model Parameter?

A model parameter is a configuration variable that is internal to the model and whose value can be estimated from data.

- They are required by the model when making predictions.
- Their values define the skill of the model on your problem.
- They are estimated or learned from data.
- They are often not set manually by the practitioner.
- They are often saved as part of the learned model.

Parameters are key to machine learning algorithms. They are the part of the model that is learned from historical training data.

In classical machine learning literature, we may think of the model as the hypothesis and the parameters as the tailoring of the hypothesis to a specific set of data.

Often model parameters are estimated using an optimization algorithm, which is a type of efficient search through possible parameter values.

**Statistics**: In statistics, you may assume a distribution for a variable, such as a Gaussian distribution. Two parameters of the Gaussian distribution are the mean (*mu*) and the standard deviation (*sigma*). This holds in machine learning, where these parameters may be estimated from data and used as part of a predictive model.

**Programming**: In programming, you may pass a parameter to a function. In this case, a parameter is a function argument that could have one of a range of values. In machine learning, the specific model you are using is the function and requires parameters in order to make a prediction on new data.
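To make the statistics example concrete, here is a minimal sketch of estimating the two Gaussian parameters from data (the data is synthetic, generated with NumPy for illustration):

```python
import numpy as np

# Draw a sample from a Gaussian with known mean (50.0) and
# standard deviation (5.0), so we can check the estimates.
rng = np.random.default_rng(seed=1)
sample = rng.normal(loc=50.0, scale=5.0, size=10_000)

# The model parameters mu and sigma are estimated directly
# from the data, not set by hand.
mu = sample.mean()
sigma = sample.std()

print(mu, sigma)  # close to 50.0 and 5.0
```

The practitioner never chooses mu or sigma; the data determines them, which is exactly what makes them model parameters.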

Whether a model has a fixed or variable number of parameters determines whether it may be referred to as “*parametric*” or “*nonparametric*”.

Some examples of model parameters include:

- The weights in an artificial neural network.
- The support vectors in a support vector machine.
- The coefficients in a linear regression or logistic regression.
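Taking the last example in the list above, a short sketch with scikit-learn shows coefficients being learned rather than specified (the toy data is generated by a known line so the recovered parameters can be checked):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# A toy dataset generated by y = 3*x + 2, so the "true"
# parameter values are known in advance.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 2.0

# Fitting estimates the model parameters (coefficient and intercept)
# from the data; the practitioner never sets them by hand.
model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)  # approximately 3.0 and 2.0
```

The learned values live on the fitted model object, which is why parameters are saved as part of the learned model.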

## What is a Model Hyperparameter?

A model hyperparameter is a configuration that is external to the model and whose value cannot be estimated from data.

- They are often used in processes to help estimate model parameters.
- They are often specified by the practitioner.
- They can often be set using heuristics.
- They are often tuned for a given predictive modeling problem.

We cannot know the best value for a model hyperparameter on a given problem. We may use rules of thumb, copy values used on other problems, or search for the best value by trial and error.

When a machine learning algorithm is tuned for a specific problem, such as when you are using a grid search or a random search, then you are tuning the hyperparameters of the model in order to discover the parameters of the model that result in the most skillful predictions.
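A minimal grid search sketch with scikit-learn, using the k in k-nearest neighbors as the hyperparameter (the candidate values and dataset are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# k (n_neighbors) cannot be estimated from the data, so we search
# over candidate values and keep the one with the best
# cross-validated skill.
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 7, 9]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

Each candidate value of k produces a differently fitted model; the search is over hyperparameters, while fitting each candidate estimates that model's parameters.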

> Many models have important parameters which cannot be directly estimated from the data. For example, in the K-nearest neighbor classification model … This type of model parameter is referred to as a tuning parameter because there is no analytical formula available to calculate an appropriate value.

— Pages 64-65, Applied Predictive Modeling, 2013

Model hyperparameters are often referred to as model parameters which can make things confusing. A good rule of thumb to overcome this confusion is as follows:

**If you have to specify a model parameter manually then
it is probably a model hyperparameter.**

Some examples of model hyperparameters include:

- The learning rate for training a neural network.
- The C and sigma hyperparameters for support vector machines.
- The k in k-nearest neighbors.
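Two of the examples above can be sketched in scikit-learn, where hyperparameters are passed in by the practitioner when the model is created, before any data is seen (note that scikit-learn exposes the RBF kernel's sigma through the related `gamma` argument):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hyperparameters are chosen by the practitioner up front;
# the values here are illustrative, not recommendations.
knn = KNeighborsClassifier(n_neighbors=5)  # the k in k-nearest neighbors
svm = SVC(C=1.0, gamma=0.1)                # C and the kernel hyperparameter

print(knn.get_params()["n_neighbors"], svm.get_params()["C"])
```

Nothing about the training data changes these values; only the practitioner (or a tuning procedure) does.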

## Further Reading

- Hyperparameter on Wikipedia
- What are hyperparameters in machine learning? on Quora
- What is the difference between model hyperparameters and model parameters? on StackExchange
- What is considered a hyperparameter? on Reddit

## Summary

In this post, you discovered the clear definitions and the difference between model parameters and model hyperparameters.

In summary, model parameters are estimated from data automatically and model hyperparameters are set manually and are used in processes to help estimate model parameters.

Model hyperparameters are often referred to as parameters because they are the parts of the machine learning algorithm that must be set manually and tuned.

Did this post help you clear up the confusion?

Let me know in the comments below.

Are there model parameters or hyperparameters that you are still unsure about?

Post them in the comments and I’ll do my best to help clear things up further.

Awesome article! This was a big point of confusion, as I wasn’t sure what “knobs” I had at my disposal to tune my model — there are a lot of them, but they weren’t all in one place like the dash of a car. 🙂 Thank you for making this clear!

Thanks. I’m glad it helped!

Excellent post, Jason. Thanks!

You’re welcome Alan.

Thanks Jason , Excellent

I’m glad it helped.

Great explanation…

Thanks Wesley.

Superb explanation Jason….love reading your articles!!!

Thanks Deepak.

In part model para, you give this example “The support vectors in a support vector machine.” I am a little confusing, why not the coefficients in SVM?

We call the instances found by the SVM “support vectors”; they are technically not “weights” or “coefficients”.

Great post, Jason. Thanks!

One question: k-nearest neighbourhood is considered a non parametric model (vs parametric models). Shouldn’t k be considered as a hyperparameter then?

The “k” in kNN is a hyperparameter. I say exactly this Luis.

The confounding part was the use of “parameter” in:

“Many models have important parameters which cannot be directly estimated from the data. For example, in the K-nearest neighbor classification model … This type of model parameter is referred to as a tuning parameter because there is no analytical formula available to calculate an appropriate value.”

Why is this confounding Luis?

The book Applied Predictive Modeling does not contain the word hyperparameter. The article above states that many experts mix up the terms parameter and hyperparameter.

So what’s the point of including the quote? Here are some potential answers:

1. The authors used the term “tuning parameter” incorrectly, and should have used the term hyperparameter. This understanding is supported by including the quote in the section on hyperparameters. Furthermore, my understanding is that using a threshold for statistical significance as a tuning parameter may be called a hyperparameter because it

However, I believe that “tuning parameter” is not an incorrect description.

Also, you linked to the Wikipedia page for Bayesian hyperparameters rather than the page for hyperparameters in machine learning https://en.wikipedia.org/wiki/Hyperparameter_optimization

The Wikipedia page gives the straightforward definition: “In the context of machine learning, hyperparameters are parameters whose values are set prior to the commencement of the learning process. By contrast, the value of other parameters is derived via training.”

Correct me if I’m wrong, but according to many definitions, hyperparameters are a type of parameter.

Synonyms for hyperparameters: tuning parameters, meta parameters, free parameters

Since hyperparameters are a type of parameter, the two terms are interchangeable when discussing hyperparameters. However, not all parameters are hyperparameters.

Nice perspective, thanks Tommy.

I cannot disagree generally, but the distinction is important, especially if you are a beginner trying to figure out what to “configure” or “tune”.

Hi Tommy, I provided the quote to help clarify the definitions, not as an example of misuse. Sorry for the confusion.

Nice, your definition matches with the “estimated from data vs not” approach used in the post.

Crystal clear. Thanks Jason

I’m glad it helped.

thanks. I was thinking both of them refer to the same thing. Thanks for clarification.

I’m glad it helped.

Awesome! It was really confusing (parameters vs hyperparameters) and I was ignoring it, but this post made it very clear.

Thank You!!

Happy it helped!

Superbly explained. Thanks for the always handy post.

Thanks!

clf = svm.SVC(C=0.01, kernel='rbf', random_state=33)

——

random_state is parameter or hyperparameter?

Deep Tim… great question!

A gut check says “hyperparameter”, but we do not optimize it, we control for it. This feels wrong though. Perhaps it is neither.

What I mean is, it impacts the skill of the model, or most models that are stochastic, but we do not “tune” the value for a specific model/dataset. The idea of the “best” random seed does not make sense. Instead, we would re-run the experiment n times in order to develop a robust estimate of skill. We would create an ensemble of n final models to produce a more robust set of predictions.
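To make that concrete, here is a minimal sketch of re-running the evaluation over several seeds (using scikit-learn’s SGDClassifier as a stand-in stochastic model, since its fitting genuinely depends on the seed):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# The seed is controlled, not tuned: repeat the evaluation across
# several seeds and summarize the skill, rather than searching
# for a "best" seed.
scores = [
    cross_val_score(SGDClassifier(random_state=seed), X, y, cv=5).mean()
    for seed in range(5)
]
print(np.mean(scores), np.std(scores))
```

The mean and spread of the scores are what you would report, not a single score from a single lucky seed.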

Does that help? Am I making sense?

Excellent post! I am currently studying an application of Stacked Autoencoders on passive sonar classification and your posts have been very helpful for me. I have learned a lot with you. Taking advantage, do you have any material on this topic? Or novelty detection? Thank you!

THanks.

Sorry, I don’t have posts on these topics, I hope to get to them sometime.

Good clarification and explanation. Thanks!

Thanks Siva.

Hi Jason, good explanation. I have one doubt that, if we have some hyperparameter for a given data sequence. Can we predict new set of hyperparameter if a new data sequence is given?

Parameters and hyperparameters refer to the model, not the data.

To me, a model is fully specified by its family (linear, NN etc) and its parameters. The hyperparameters are used prior to the prediction phase and have an impact on the parameters, but are no longer needed. So coefficients in a linear model are clearly parameters. The learning rate in any gradient descent procedure is a hyperparameter. Structural parameters such as the degree of a polynomial or the number of hidden units are somewhere in between, because they are decided prior to model fitting but are implicit in the parameters themselves. Whether all these numbers are chosen by an algorithm or by hand, I don’t see that as a very helpful distinction. Linear models were fitted by hand only a generation or two ago. Tukey cites drawing something like a super smoother by eye. Nobody would do that now.

Great note Antonio, thanks.