How To Implement Learning Vector Quantization (LVQ) From Scratch With Python

A limitation of k-Nearest Neighbors is that you must keep a large database of training examples in order to make predictions.

The Learning Vector Quantization algorithm addresses this by learning a much smaller subset of patterns that best represent the training data.

In this tutorial, you will discover how to implement the Learning Vector Quantization algorithm from scratch with Python.

After completing this tutorial, you will know:

  • How to learn a set of codebook vectors from a training data set.
  • How to make predictions using learned codebook vectors.
  • How to apply Learning Vector Quantization to a real predictive modeling problem.

Kick-start your project with my new book Machine Learning Algorithms From Scratch, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2017: Changed the calculation of fold_size in cross_validation_split() to always be an integer. Fixes issues with Python 3.
  • Update Aug/2018: Tested and updated to work with Python 3.6.
How To Implement Learning Vector Quantization From Scratch With Python
Photo by Tony Faiola, some rights reserved.

Description

This section provides a brief introduction to the Learning Vector Quantization algorithm and the Ionosphere classification problem that we will use in this tutorial.

Learning Vector Quantization

The Learning Vector Quantization (LVQ) algorithm is a lot like k-Nearest Neighbors.

Predictions are made by finding the best match among a library of patterns. The difference is that the library of patterns is learned from training data, rather than using the training patterns themselves.

The library of patterns is called the codebook and each pattern is called a codebook vector. The codebook vectors are initialized to randomly selected values from the training dataset. Then, over a number of epochs, they are adapted to best summarize the training data using a learning algorithm.

The learning algorithm processes one training record at a time: it finds the best matching unit among the codebook vectors and moves that unit closer to the training record if they share the same class, or further away if they have different classes.

Once prepared, the codebook vectors are used to make predictions using the k-Nearest Neighbors algorithm where k=1.

The algorithm was developed for classification predictive modeling problems, but can be adapted for use with regression problems.

Ionosphere Dataset

The Ionosphere dataset involves predicting the structure of the ionosphere from radar return data.

Each instance describes the properties of radar returns from the atmosphere and the task is to predict whether or not there is structure in the ionosphere.

There are 351 instances and 34 numerical input variables (17 pairs of 2 for each radar pulse), generally on the same scale of 0 to 1. The class value is a string: “g” for a good return or “b” for a bad return.

Using the Zero Rule Algorithm that predicts the class with the most observations, a baseline accuracy of 64.286% can be achieved.

You can learn more and download the dataset from the UCI Machine Learning Repository.

Download the dataset and place it in your current working directory with the name ionosphere.csv.

Tutorial

This tutorial is broken down into 4 parts:

  1. Euclidean Distance.
  2. Best Matching Unit.
  3. Training Codebook Vectors.
  4. Ionosphere Case Study.

These steps will lay the foundation for implementing and applying the LVQ algorithm to your own predictive modeling problems.

1. Euclidean Distance

The first step needed is to calculate the distance between two rows in a dataset.

Rows of data are mostly made up of numbers and an easy way to calculate the distance between two rows or vectors of numbers is to draw a straight line. This makes sense in 2D or 3D and scales nicely to higher dimensions.

We can calculate the straight line distance between two vectors using the Euclidean distance measure. It is calculated as the square root of the sum of the squared differences between the two vectors:

distance(x1, x2) = sqrt( sum_i (x1_i - x2_i)^2 )

Where x1 is the first row of data, x2 is the second row of data and i is the index of a specific column as we sum across all columns.

With Euclidean distance, the smaller the value, the more similar two records will be. A value of 0 means that there is no difference between two records.

Below is a function named euclidean_distance() that implements this in Python.
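
A minimal sketch of such a function might look as follows (it assumes, as noted below, that the last column of each row holds the class value):

```python
from math import sqrt

# Calculate the Euclidean distance between two vectors.
# The last column is assumed to be the class value and is skipped.
def euclidean_distance(row1, row2):
    distance = 0.0
    for i in range(len(row1)-1):
        distance += (row1[i] - row2[i])**2
    return sqrt(distance)
```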

You can see that the function assumes that the last column in each row is an output value, which is ignored in the distance calculation.

We can test this distance function with a small contrived classification dataset. We will use this dataset a few times as we construct the elements needed for the LVQ algorithm.

Putting this all together, we can write a small example to test our distance function by printing the distance between the first row and all other rows. We would expect the distance between the first row and itself to be 0, a good thing to look out for.

The full example is listed below.
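
A sketch of the full example, using a small contrived two-class dataset of our own (the values are arbitrary, chosen only for illustration):

```python
from math import sqrt

# Calculate the Euclidean distance between two vectors (last column is the class)
def euclidean_distance(row1, row2):
    distance = 0.0
    for i in range(len(row1)-1):
        distance += (row1[i] - row2[i])**2
    return sqrt(distance)

# A contrived dataset: two numeric input columns and a class column (0 or 1)
dataset = [
    [2.78, 2.55, 0],
    [1.46, 2.36, 0],
    [3.39, 4.40, 0],
    [1.38, 1.85, 0],
    [3.06, 3.00, 0],
    [7.62, 2.75, 1],
    [5.33, 2.08, 1],
    [6.92, 1.77, 1],
    [8.67, -0.24, 1],
    [7.67, 3.50, 1]]

# Print the distance between the first row and every row, including itself
# (the first distance printed should be 0.0)
row0 = dataset[0]
for row in dataset:
    print(euclidean_distance(row0, row))
```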

Running this example prints the distances between the first row and every row in the dataset, including itself.

Now it is time to use the distance calculation to locate the best matching unit within a dataset.

2. Best Matching Unit

The Best Matching Unit or BMU is the codebook vector that is most similar to a new piece of data.

To locate the BMU for a new piece of data within a dataset, we must first calculate the distance between each codebook vector and the new piece of data. We can do this using our distance function above.

Once the distances are calculated, we must sort all of the codebook vectors by their distance to the new data. We can then return the first, or most similar, codebook vector.

We can do this by keeping track of the distance for each record in the dataset as a tuple, sorting the list of tuples by the distance (in ascending order) and then retrieving the BMU.

Below is a function named get_best_matching_unit() that implements this.
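
A minimal sketch of this function, building on the euclidean_distance() sketch above:

```python
# Locate the best matching unit: the codebook vector closest to test_row
def get_best_matching_unit(codebooks, test_row):
    distances = list()
    for codebook in codebooks:
        dist = euclidean_distance(codebook, test_row)
        distances.append((codebook, dist))
    # sort by distance, ascending, so the closest codebook vector comes first
    distances.sort(key=lambda tup: tup[1])
    return distances[0][0]
```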

You can see that the euclidean_distance() function developed in the previous step is used to calculate the distance between each codebook vector and the new test_row.

The list of (codebook, distance) tuples is sorted using a custom key so that the second item in each tuple (tup[1], the distance) drives the sorting operation.

Finally, the top or most similar codebook vector is returned as the BMU.

We can test this function with the small contrived dataset prepared in the previous section.

The complete example is listed below.
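
A sketch of the test, reusing the contrived dataset and the two functions above, and treating the dataset itself as the set of codebook vectors:

```python
# Find the BMU for the first record, using the whole dataset as codebook vectors
test_row = dataset[0]
bmu = get_best_matching_unit(dataset, test_row)
print(bmu)
```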

Running this example prints the BMU in the dataset for the first record. As expected, the first record is the most similar to itself and is at the top of the list.

Making predictions with a set of codebook vectors works the same way.

We use the 1-nearest neighbor algorithm. That is, for each new pattern we wish to make a prediction for, we locate the most similar codebook vector in the set and return its associated class value.
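
As a sketch, the prediction function reduces to a few lines (the class value is assumed to be the last element of each codebook vector):

```python
# Make a prediction with codebook vectors: 1-nearest neighbor
def predict(codebooks, row):
    bmu = get_best_matching_unit(codebooks, row)
    return bmu[-1]
```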

Now that we know how to get the best matching unit from a set of codebook vectors, we need to learn how to train them.

3. Training Codebook Vectors

The first step in training a set of codebook vectors is to initialize the set.

We can initialize it with patterns constructed from random features in the training dataset.

Below is a function named random_codebook() that implements this. Random input and output features are selected from the training data.
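
A sketch of this initialization, where each feature of the new codebook vector is copied from a randomly chosen training record:

```python
from random import randrange

# Create a new codebook vector from random features of the training data
def random_codebook(train):
    n_records = len(train)
    n_features = len(train[0])
    codebook = [train[randrange(n_records)][i] for i in range(n_features)]
    return codebook
```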

After the codebook vectors are initialized to a random set, they must be adapted to best summarize the training data.

This is done iteratively.

  1. Epochs: At the top level, the process is repeated for a fixed number of epochs or exposures of the training data.
  2. Training Dataset: Within an epoch, each training pattern is used one at a time to update the set of codebook vectors.
  3. Pattern Features: For a given training pattern, each feature of the best matching codebook vector is updated to move it closer or further away.

The best matching unit is found for each training pattern and only this best matching unit is updated. The difference between the training pattern and the BMU is calculated as the error. The class values (assumed to be the last value in the list) are compared. If they match, the error is added to the BMU to bring it closer to the training pattern, otherwise, it is subtracted to push it further away.

The amount that the BMU is adjusted is controlled by a learning rate. This is a weighting on the amount of change made to all BMUs. For example, a learning rate of 0.3 means that BMUs are only moved by 30% of the error or difference between training patterns and BMUs.

Further, the learning rate is adjusted so that it has maximum effect in the first epoch and less effect as training continues until it has a minimal effect in the final epoch. This is called a linear decay learning rate schedule and can also be used in artificial neural networks.

We can summarize this decay in the learning rate by epoch number as follows:

rate = learning_rate * (1.0 - (epoch / total_epochs))

Where learning_rate is the initial learning rate, epoch is the current epoch number starting at zero and total_epochs is the total number of epochs.

We can test this equation by assuming a learning rate of 0.3 and 10 epochs. The learning rate for each epoch (0 through 9) would be as follows:

0.300, 0.270, 0.240, 0.210, 0.180, 0.150, 0.120, 0.090, 0.060, 0.030

We can put all of this together. Below is a function named train_codebooks() that implements the procedure for training a set of codebook vectors given a training dataset.
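
A sketch of the training procedure, consistent with the description above (the exact format of the printed message is a choice of ours):

```python
# Train a set of codebook vectors on a training dataset
def train_codebooks(train, n_codebooks, lrate, epochs):
    codebooks = [random_codebook(train) for _ in range(n_codebooks)]
    for epoch in range(epochs):
        # linear decay of the learning rate over the epochs
        rate = lrate * (1.0 - (epoch / float(epochs)))
        sum_error = 0.0
        for row in train:
            bmu = get_best_matching_unit(codebooks, row)
            for i in range(len(row) - 1):
                error = row[i] - bmu[i]
                sum_error += error ** 2
                if bmu[-1] == row[-1]:
                    bmu[i] += rate * error  # same class: pull closer
                else:
                    bmu[i] -= rate * error  # different class: push away
        print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, rate, sum_error))
    return codebooks
```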

The function takes 3 arguments in addition to the training dataset: the number of codebook vectors to create and train, the initial learning rate and the number of epochs for which to train the codebook vectors.

You can also see that the function keeps track of the sum squared error each epoch and prints a message showing the epoch number, effective learning rate and sum squared error score. This is helpful when debugging the training function or the specific configuration for a given prediction problem.

You can see the use of the random_codebook() function to initialize the codebook vectors and the get_best_matching_unit() function to find the BMU for each training pattern within an epoch.

We can put this together with the examples above and learn a set of codebook vectors for our contrived dataset.

Below is the complete example.
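
A sketch of the driver portion, assuming the contrived dataset and the functions sketched above are in the same file:

```python
from random import seed

seed(1)  # fix the random number generator for repeatable results
learn_rate = 0.3
n_epochs = 10
n_codebooks = 2
codebooks = train_codebooks(dataset, n_codebooks, learn_rate, n_epochs)
print('Codebooks: %s' % codebooks)
```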

Running this example trains a set of 2 codebook vectors for 10 epochs with an initial learning rate of 0.3. The details are printed each epoch and the set of 2 codebook vectors learned from the training data is displayed.

We can see that the changes to the learning rate meet our expectations explored above for each epoch. We can also see that the sum squared error continues to drop at the end of training, suggesting there may be an opportunity to tune the example further to achieve lower error.

Now that we know how to train a set of codebook vectors, let’s see how we can use this algorithm on a real dataset.

4. Ionosphere Case Study

In this section, we will apply the Learning Vector Quantization algorithm to the Ionosphere dataset.

The first step is to load the dataset and convert the loaded data to numbers that we can use with the Euclidean distance calculation. For this we will use the helper function load_csv() to load the file, str_column_to_float() to convert string numbers to floats and str_column_to_int() to convert the class column to integer values.
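
Sketches of these helpers, using the csv module from the standard library (details such as stripping whitespace are assumptions of ours):

```python
from csv import reader

# Load a CSV file into a list of rows
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        for row in reader(file):
            if row:
                dataset.append(row)
    return dataset

# Convert a string column to floating point values
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Convert a string class column to integer values
def str_column_to_int(dataset, column):
    unique = set(row[column] for row in dataset)
    lookup = {value: i for i, value in enumerate(unique)}
    for row in dataset:
        row[column] = lookup[row[column]]
    return lookup
```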

We will evaluate the algorithm using k-fold cross-validation with 5 folds. This means that 351/5=70.2 or just over 70 records will be in each fold. We will use the helper functions evaluate_algorithm() to evaluate the algorithm with cross-validation and accuracy_metric() to calculate the accuracy of predictions.
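
Sketches of these evaluation helpers; note that fold_size is truncated to an integer, matching the Jan/2017 update noted above:

```python
from random import randrange

# Split a dataset into n_folds folds, sampling rows without replacement
def cross_validation_split(dataset, n_folds):
    dataset_split = list()
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)
    for _ in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split

# Classification accuracy as a percentage
def accuracy_metric(actual, predicted):
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / float(len(actual)) * 100.0

# Evaluate an algorithm using cross-validation; algorithm(train, test, *args)
# must return a list of predictions for the test rows
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        train_set = sum(train_set, [])  # flatten the remaining folds
        test_set = list()
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None  # hide the class value from the algorithm
            test_set.append(row_copy)
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        scores.append(accuracy_metric(actual, predicted))
    return scores
```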

The complete example is listed below.
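
Assuming the functions sketched in the earlier sections and the helpers above are gathered into one file, the remaining wrapper and driver might look as follows; the number of codebook vectors (20) matches the discussion below, while the learning rate and number of epochs are illustrative choices:

```python
from random import seed

# LVQ algorithm: train the codebook vectors, then predict with 1-NN
def learning_vector_quantization(train, test, n_codebooks, lrate, epochs):
    codebooks = train_codebooks(train, n_codebooks, lrate, epochs)
    return [predict(codebooks, row) for row in test]

# Test LVQ on the Ionosphere dataset
seed(1)
dataset = load_csv('ionosphere.csv')
for i in range(len(dataset[0]) - 1):
    str_column_to_float(dataset, i)
str_column_to_int(dataset, len(dataset[0]) - 1)  # class column to integers
n_folds = 5
learn_rate = 0.3
n_epochs = 50  # an assumed value; tune for your problem
n_codebooks = 20
scores = evaluate_algorithm(dataset, learning_vector_quantization, n_folds,
                            n_codebooks, learn_rate, n_epochs)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))
```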

Running this example prints the classification accuracy on each fold and the mean classification accuracy across all folds.

We can see that the accuracy of 87.143% is better than the baseline of 64.286%. We can also see that our library of 20 codebook vectors is far smaller than retaining the entire training dataset.

Extensions

This section lists extensions to the tutorial that you may wish to explore.

  • Tune Parameters. The parameters in the above example were not tuned, try different values to improve the classification accuracy.
  • Different Distance Measures. Experiment with different distance measures such as Manhattan distance and Minkowski distance.
  • Multiple-Pass LVQ. The codebook vectors may be updated over multiple training runs. Experiment with an initial pass using a large learning rate, followed by a large number of epochs with a smaller learning rate to fine-tune the codebook vectors.
  • Update More BMUs. Experiment with selecting more than one BMU when training, pushing and pulling them toward or away from the training data.
  • More Problems. Apply LVQ to more classification problems on the UCI Machine Learning Repository.

Did you explore any of these extensions?
Share your experiences in the comments below.

Review

In this tutorial, you discovered how to implement the learning vector quantization algorithm from scratch in Python.

Specifically, you learned:

  • How to calculate the distance between patterns and locate the best matching unit.
  • How to train a set of codebook vectors to best summarize the training dataset.
  • How to apply the learning vector quantization algorithm to a real predictive modeling problem.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Discover How to Code Algorithms From Scratch!

Machine Learning Algorithms From Scratch

No Libraries, Just Python Code.

...with step-by-step tutorials on real-world datasets

Discover how in my new Ebook:
Machine Learning Algorithms From Scratch

It covers 18 tutorials with all the code for 12 top algorithms, like:
Linear Regression, k-Nearest Neighbors, Stochastic Gradient Descent and much more...

Finally, Pull Back the Curtain on
Machine Learning Algorithms

Skip the Academics. Just Results.

See What's Inside

96 Responses to How To Implement Learning Vector Quantization (LVQ) From Scratch With Python

  1. Glen November 29, 2016 at 6:55 am

    Hi Mr. Brownlee,
    I am used LVQ for my own dataset and it is giving me a Mean accuracy of 57.546%. Could you please suggest me the reasons for such low accuracy?

  2. barry December 2, 2016 at 1:45 am

    I tried running it for the electricity data in this dataset: http://www.inescporto.pt/~jgama/ales/ales_5.html

    I have used the LVQ on an arff file and I am getting very bad performance from the algorithm. The accuracy is around 60%. Can you try to use the algorithm on this dataset and let me know your results or ways to improve it?

  3. Curious_Kid December 9, 2016 at 4:24 am

    Hi, How can I use LVQ for doing the instance selection? Let’s say I want to select 5000 instances out of 15000 instances in the dataset? Please give some details.

    • Jason Brownlee December 9, 2016 at 8:46 am

      Sorry, I’m not sure I understand your question, perhaps you could explain your intention or goal?

      • Curious_Kid December 10, 2016 at 12:56 am

        I wanted to know if we can use LVQ for instance selection. For e.g. if there are 15000 instances in my dataset and I only want to use 5000 of those instances as the training dataset. There are various ‘Instance Selection’ algorithms and I read that we can LVQ as well. But I am not sure how can I do that using the example that you have given above.

        • Jason Brownlee December 10, 2016 at 8:06 am

          I see, you mean to use LVQ as a data preparation or data cleaning method.

          Perhaps. I have not done this, but I love the idea. Say use a threshold and only pick instances within a distance of codebook vectors.

          Try it and see – you could be onto something very interesting there. Please share results if you get some!

          • Rahul January 5, 2017 at 2:54 am

            Correct me if i am wrong.
            What i understand from the question asked by curious_kid is :
            Say we are given a dataset of 15000 instances..
            example
            0.000133,5,0.446809,0.089798,0.516662,0.003467,0.422915,0.414912,UP
            0.000133,5,0.468085,0.047676,0.50476,0.003467,0.422915,0.414912,DOWN
            0.000133,5,0.489362,0.146902,0.495537,0.003467,0.422915,0.414912,UP
            0.000133,5,0.510638,0.193287,0.485272,0.003467,0.422915,0.414912,UP
            .
            .
            .
            ………

            Now he wants to use LVQ for instance selection(data pre processing). That means that the final result should have 5000 best instances which he can later use for classification.

            Now as suggested by you, it could be done by taking some threshold and only pick instances within a distance of codebook vectors.

            Doing this means that we are picking instances from the dataset on the basis of distance from the random codebooks generated. For e.g. in the function “get_best_matching_unit”, if we use this distance “distances.append((codebook, dist))”(line 86), then we won’t be training the codebooks as done in your code in the lines 109-114 and hence the main functionality of LVQ won’t be implemented(i.e. to train the random codebooks over epochs and adjusting bmu according to the error). If we do train the codebooks, then we will lose the distance values, because you are returning “distances[0][0]” in line 88.

            The way I am thinking to approach this problem is by training the codebooks as done by you in the code and getting the final value of codebooks as done in line 115. Giving the value of n_codebooks as 5000, which will give us the best 5000 instances representing the data ( as one codebook represent an instance). And then use those 5000 instances for classification(using any classification algorithm).

            Do you have any suggestions for this approach? I would be glad to know about it.

          • Jason Brownlee January 5, 2017 at 9:33 am

            Interesting. Using LVQ this way is performing a type of outlier removal.

            It will probably be quite effective at lower dimensions (in data without outlier observations) and fall apart at higher dimensions given the curse of dimensionality as distances and thresholds quickly become less meaningful. Generally LVQ and kNN breakdown at higher dimensions for the same reason:
            https://en.wikipedia.org/wiki/Curse_of_dimensionality

            Yes, the codebook vectors could be taken as centroids or a projection of the data. I would contrast this approach with projection methods like PCA, sammons mapping, tsne and others. Develop models from each projection and compare the performance on resulting models in a hold out dataset.

            I’d love to hear about the results.

  4. Curious_Kid December 11, 2016 at 10:31 am

    I am thinking of using the euclidean distance between the codebook vectors and the new data and then calculate the mean of those distances which can act as the threshold. Then I am planning to select all those instances for which the euclidean distance is less than the calculated mean value. Does that sound good to you or do I need to think in a different way?

    • Jason Brownlee December 12, 2016 at 6:47 am

      Try and see.

      It might be worth doing some simulations in 2d with contrived data.

      Keep in mind that the more features – the higher the dimensionality and the curse of dimensionality will make the distances less and less meaningful.

  5. Alex January 2, 2017 at 3:20 pm

    I could not understand the functionality of “learning_vector_quantization” function. For e.g. where is it called in the code? And thus the “predict” function will not be called. I am getting the same result by commenting the predict function. Please correct me if I am wrong or if I am missing something here.

    • Jason Brownlee January 3, 2017 at 7:36 am

      Hi Alex,

      The “learning_vector_quantization” function is passed as an argument to the “evaluate_algorithm” function where it is called on line 68 of the final code listing.

      Does that help? Try adding some print() debug statements to help you clarify further.

      • Alex January 5, 2017 at 3:52 am

        Thank you Jason. I understood it now. I have one more doubt. You have mentioned in your blog that once the codebook vectors are prepared, those codebook vectors are used to make predictions using 1-NN. But I am not able to understand that how are you doing it in your code. Please elaborate it a bit.

        • Jason Brownlee January 5, 2017 at 9:40 am

          Which part don’t you understand Alex?

          The specific lines of code?

          The kNN algorithm?

  6. Navin February 11, 2017 at 2:17 pm

    How can I implement this for large image databases. I have a folder full of images, how do I convert it into arrays so that I can implement lqv on it?

    • Jason Brownlee February 12, 2017 at 5:34 am

      Sorry Navin, I do not have an example of working with raw images and LVQ.

      • Rahat April 18, 2018 at 5:12 pm

        Could you give any idea, how can I use your implementation for raw image.

  7. inzar May 24, 2017 at 1:00 am

    can you share about kohonen Self-Organizing Maps metohod?
    i really need to know how unsupervice learning in this methode work.
    thanks, sorry about my bad english.

  8. Hugo June 20, 2017 at 5:20 pm

    Hi Jason,

    I’ve created a LVQ network with trained data and two samples. It works, but how would I go about getting confidence values for untrained data? Ex. confidence of .98 means the sample is really close to the actual model of one of my samples.

    • Jason Brownlee June 21, 2017 at 8:09 am

      Good question, I don’t know. Off the cuff, I expect you could devise a confidence interval based on distance measures and the known extent of the domain.

      • Hugo June 22, 2017 at 1:35 pm

        Thanks for your reply. As I normalize all the values between 0 and 1, what would I use for a maximum? I tried using 1 – distance(bestUnitMatch, sample) and I get values of .96; and if that’s right I’m so amazed because I only trained the network with 30 samples for each 2 of the sets

        • Jason Brownlee June 23, 2017 at 6:38 am

          Perhaps the largest distance between two points in your domain?

  9. Wija October 21, 2017 at 12:14 am

    Hi, I’m learning about LVQ right now on my studies, can you send me the whole code? so I can learn it better. It is for educational purpose. Thanks

  10. Sushanth Suresh November 1, 2017 at 4:24 pm

    Hi,

    In the “train_codebooks()” function where is the “codebooks” variable getting updated?.Only the variable “bmu” is moved towards or away from the training vector which is not used to update the “codebooks” vector.

    Please let me know if i am missing something.

    Thank you

    • Jason Brownlee November 2, 2017 at 5:04 am

      Correct, only the BMU is being updated for each training pattern.

      • Sushanth Suresh November 4, 2017 at 8:06 am

        I think that this updated BMU should be saved back into the codebooks to ensure the learning is taking place.

        • Jason Brownlee November 5, 2017 at 5:12 am

          No need, we have a copy of the reference to it.

          Perhaps read up on “pass by reference” in modern programming languages?

  11. Sushanth Suresh November 5, 2017 at 10:23 am

    Thank you clarifying it.
    I was not aware of how “pass by reference” worked in python.

    • Jason Brownlee November 6, 2017 at 4:47 am

      No problem, it’s the same as most other languages.

  12. Vinay December 8, 2017 at 1:13 am

    Hi, Thank you for this lovely illustration. I am not a pro in python but have some questions.
    On line 61, train_set.remove(fold) returns a funny error “ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()”
    Is there any alternative around this. ?

    Also i was wondering the reason for using lists and not numpy array like dataframe. Any particular reason?

    Thanks again.

    • Jason Brownlee December 8, 2017 at 5:43 am

      I write the code using Python 2.7, perhaps double check your Python version?

  13. m.ragab April 25, 2018 at 7:56 pm

    I get this errors when I run ? what I do ?

    Python 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 17:26:49) [MSC v.1900 32 bit (Intel)] on win32
    Type “copyright”, “credits” or “license()” for more information.
    >>>
    ==== RESTART: H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py ====
    Traceback (most recent call last):
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 140, in
    scores = evaluate_algorithm(dataset, learning_vector_quantization, n_folds, n_codebooks, learn_rate, n_epochs)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 68, in evaluate_algorithm
    predicted = algorithm(train_set, test_set, *args)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 119, in learning_vector_quantization
    codebooks = train_codebooks(train, n_codebooks, lrate, epochs)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 110, in train_codebooks
    error = row[i] - bmu[i]
    IndexError: list index out of range
    >>>

    • Jason Brownlee April 26, 2018 at 6:25 am

      The code assumes you are using Python version 2, you are using Python 3.

      You can change your version of python or change the code.

  14. m.ragab April 26, 2018 at 2:58 pm

    I also get this error ! although I use python 2.7

    Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)] on win32
    Type “copyright”, “credits” or “license()” for more information.
    >>>
    ==== RESTART: H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py ====

    Traceback (most recent call last):
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 140, in
    scores = evaluate_algorithm(dataset, learning_vector_quantization, n_folds, n_codebooks, learn_rate, n_epochs)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 68, in evaluate_algorithm
    predicted = algorithm(train_set, test_set, *args)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 119, in learning_vector_quantization
    codebooks = train_codebooks(train, n_codebooks, lrate, epochs)
    File “H:/Ebooks/Ebooks/machine/project/New folder (3)/LVQ/lvq.py”, line 110, in train_codebooks
    error = row[i] - bmu[i]
    IndexError: list index out of range
    >>>

  15. mo3az April 26, 2018 at 11:46 pm

    I can’t use the code without spaces and I can’t guess it as i’am not expert in python plz help me

  16. m.ragab May 1, 2018 at 8:45 pm

    I get this error

    error = row[i] - bmu[i]
    IndexError: list index out of range

  17. Mahendhiran PD August 2, 2018 at 5:18 pm

    Dear Jason

    I am Mahendhiran. I am trying to execute LVQ with audio data(MFCC). I am getting error like ‘row[column] = float(row[column].strip())

    TypeError: ‘int’ object is not subscriptable’

    I request you to please help to solve this issue. I am using spyder(python 3.6).
    Thanks in advance….

  18. Curious February 23, 2019 at 9:33 pm

    Hello,

    is it not the case that your initial set of codebook vectors must contain at least one representative of each class value ?

    else your final step, the 1-nearest neighbour algorithm, will always return the same class value ?

    Thank you

  19. Arafat March 2, 2019 at 1:55 am

    Hi Jason Brownlee,

    Can your lvq code be used for huge dataset like dataset contain up to 100000 rows along with 28 features. And also why does not your code working for the dataset which contains feature names?

    • Jason Brownlee March 2, 2019 at 9:35 am

      Perhaps. The code is for educational purposes only – to teach how the method works.

      You might want to look into using an efficient library implementation or using numpy arrays.

  20. jayasri June 20, 2019 at 8:38 pm

    how to save the LVQ model to classify the unknown data after training?

    • Jason Brownlee June 21, 2019 at 6:36 am

      You can save the code vectors to file as text or numpy array and load them later.

      The predict(codebooks, row) function can be called any time with a row of data.

  21. Soyiba July 2, 2019 at 7:12 pm

    Hi,
    I receive the following Error while running this code. Can you please tell me how to fix it.

    ValueError Traceback (most recent call last)
    in
    130 dataset = load_csv(filename)
    131 for i in range(len(dataset[0])-1):
    --> 132 str_column_to_float(dataset, i)
    133 # convert class column to integers
    134 str_column_to_int(dataset, len(dataset[0])-1)

    in str_column_to_float(dataset, column)
    19 def str_column_to_float(dataset, column):
    20 for row in dataset:
    ---> 21 row[column] = float(row[column])
    22
    23 # Convert string column to integer

    ValueError: could not convert string to float:

  22. Mae L. September 17, 2019 at 6:40 am

    Jason, do you have any tutorial of implementing SOM (supervised learning) with LVQ?

  23. Siavash November 3, 2019 at 11:14 am

    Thanks for your great tutorial. How can I improve the code to LVQ2?

  24. Dirga Roslukito December 28, 2019 at 11:17 pm

    hi, I used your LVQ for my JAFFE dataset and I also used PCA for feature extraction. but I get very low accuracy. Maybe you have some suggestion for this accuracy problem?

    thank you..

  25. Rohit January 8, 2020 at 8:21 am

    Hi Jason,

    I am a student and new to ML, I found this explanation on LVQ interesting. Could you please tell me what is “new test_row”? I understand that the “codebook vectors” refers to original dataset but I don’t understand the “new test_row”. I am using Iris dataset to implement this.

    • Jason Brownlee January 8, 2020 at 8:37 am

      Thanks.

      A new test row is a new example unseen during training for which we want to make a prediction.

  26. Shivani February 27, 2020 at 11:22 pm

    Hi Jason, I tried the code that you have posted in this link, but I am struck in updating the BMUs(I have using 6 prototypes in my use case). Do you have any suggestions?

    • Jason Brownlee February 28, 2020 at 6:07 am

      You will only have one or two BMUs. Perhaps use the code as-is as a starting point.

  27. Adil June 1, 2020 at 8:03 am

    I look for product quantizion algorithme

    • Jason Brownlee June 1, 2020 at 1:38 pm

      What is that exactly?

      • Adil June 2, 2020 at 12:27 am

        sample codes in Product Quantization for large-scale approximated nearest neighbor search

  28. Aymen June 7, 2020 at 7:33 pm

    Dear Jason,

    I would like to ask you please if you know how to learn the case of scalar quantization for an n-dimensional vector, e.g. uniform quantization.

  29. Bachtiar June 7, 2020 at 11:25 pm

    Hi Jason, Thank you for your work!
    I’m new to machine learning, is this code include like sklearn split train & test? And in my dataset I put a label at the end of the array, how i put out the label so the label can become the output.
    Thank you!

    • Jason Brownlee June 8, 2020 at 6:14 am

      Labels are not used as input, recall that machine learning models take input and predict the output. Providing the output as the input would be a mistake.

      • Bachtiar June 8, 2020 at 1:31 pm

        so let’s say i wanted to predict gender with lvq, how do the system decide the speaker is male or female? so the output come out with predict and e.g. “you’re male”
        Thank you kindly

        • Jason Brownlee June 9, 2020 at 5:57 am

          It would learn based on whatever input data you provided to the model in order to make the prediction.

  30. Jyothi September 19, 2020 at 1:49 am

    Hi Jason, Do you have any suggestions for visualizing the output of the LVQ model when we don’t use any scientific libraries?

  31. Rachit Ahuja October 31, 2020 at 4:16 am

    Hey Jason I am a bit confused in “def train_codebooks” specifically why am I checking the last element in bmu and row to determine whether to add or subtract the lr*error i.e.
    for i in range(len(row)-1):
        error = row[i] - bmu[i]
        if bmu[-1] == row[-1]:
            bmu[i] += rate * error
        else:
            bmu[i] -= rate * error

  32. Jensun Ravichandran November 5, 2020 at 5:31 am

    LVQ is a heuristically motivated algorithm. There is a variant of LVQ called GLVQ (https://papers.nips.cc/paper/1113-generalized-learning-vector-quantization.pdf) with a differentiable cost function, that can be trained with gradient descent. Do you have a tutorial on implementing it in Python?

  33. Raghad April 3, 2021 at 2:03 am

    could you please implement an LVQ model using scikit and matplotlib just a really simple training/ testing and accuracy on any dataset 🙂

    • Jason Brownlee April 3, 2021 at 5:35 am

      I don’t think sklearn has an LVQ implementation.

  34. Heathcliff April 24, 2021 at 3:57 pm

    Hi, Thank you this tutorial and the quality of your bloc.
    Why not compute directly the barycenter of each population and then apply the 1-nearest test?
    Regards,

    • Jason Brownlee April 25, 2021 at 5:16 am

      You’re welcome.

      What is “barycenter”?

      Sure, you can develop any model you like. This tutorial was about a specific well-known model.

  35. Aravind S November 22, 2021 at 1:29 am

    The way the codebook is randomised, it is possible that some classes are not represented in the intital codebook. So we need to randomise with representation for each class

    • Adrian Tam November 23, 2021 at 1:16 pm

      Possible, but as a result of randomizing, we believe it is in low probability.

  36. MiGoreng March 19, 2022 at 4:02 pm

    Hello.

    What would be the best way to remove weight vectors that no longer win battles?

  37. Alex August 30, 2022 at 12:37 am

    Hi, just a quick clarification about the results obtained after the test.
    If I use your dataset the result is OK and the mean accuracy is high (87.353%), while if I use a dataset of my own, the results are not reliable.
    I tried to change the parameters about the number of epocs, n°folds but nothing. The result is bad.
    Which will the possible reason? Could you help me to find a solution?
    Thanks
    Alex

    • James Carmichael August 30, 2022 at 6:37 am

      Hi Alex…Please clarify the method utilized to generate the data you utilized so that we may better assist you.

  38. Alex August 30, 2022 at 5:24 pm

    Hi James, thanks for your support.

    I’ve prepared a dataset with 340 columns and 65000 rows about an industrial process.
    I copy all your code from the beginning and when selected my dataset indicate the dataset’s name and run the code.
    After the running process, the response is zero:
    Scores: [0.0, 0.0, 0.0, …… 0.0]
    Mean Accuracy: 0.0%

    So, in this way, I tried to modify the parameters about the n.of epochs, n. folds but without success.

    At the moment, I don’t know how to solve this mistake.

Leave a Reply