David Mimno is an assistant professor in the Information Science department at Cornell University. He has a background and interest in Natural Language Processing (NLP), specifically topic modeling. Notably, he is the chief maintainer of MALLET, the Java-based NLP library.
I recently came across a blog post by David titled “Advice for students of machine learning”. It is a great post, with advice much like what I give to the programmers and students I coach. In this post, I summarize it for you.
Introductory Machine Learning Books
David recommends some pretty advanced books as introductory texts because his students are graduate students and up to the challenge. I do not recommend these texts myself for beginners.
Nevertheless, the texts he suggests are:
- Machine Learning: A Probabilistic Perspective
- Pattern Recognition and Machine Learning
- Probabilistic Graphical Models: Principles and Techniques
- Information Theory, Inference and Learning Algorithms
These are among the best textbooks on machine learning available today. You’ll see them again and again in the graduate courses at MIT, Stanford, Cornell, and other leading US schools.
Introductory Mathematics Books
David comments that anything you can learn about linear algebra, probability and statistics will be useful.
He goes on to suggest a few key books:
- Introduction to Linear Algebra
- Bayesian Data Analysis
- Data Analysis Using Regression and Multilevel/Hierarchical Models
Practical Machine Learning Advice
After suggesting some introductory resources, David goes on to provide some practical advice when getting started in the field.
- Don’t expect to get anything the first time. David suggests reading the descriptions for the same method from multiple different sources. This is the same suggestion I make in my Algorithm Description Template, which I came to out of necessity.
- Implement a model. I agree with David that you cannot fully appreciate a model until you implement it yourself and bring it to life. David suggests comparing your implementation to others, such as those available in open source projects, and seeking out and understanding any mathematical or programmatic tricks used to improve efficiency (see the sketch after this list).
- Read papers. David gives an anecdote of reading a paper on his daily commute. Consider picking an algorithm or a problem and reading the primary sources related to it.
- Pick a paper and live inside it for a week. David suggests becoming one with a paper and thinking hard about it for a week until you know it intimately. For example, he suggests filling in the blanks in the progression of any derived equations. I can speak from experience and suggest you pick your paper carefully. I’ve picked papers that have taken me years to commune with.
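To make the “implement a model” advice concrete, here is a minimal sketch of what implementing a model from scratch might look like: logistic regression trained with batch gradient descent, using only NumPy. The function names, synthetic data, learning rate, and iteration count are my own illustrative choices, not something from David’s post.

```python
# A minimal sketch of the "implement a model" advice: logistic regression
# trained with batch gradient descent, written only with NumPy.
# The data and hyperparameters below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, lr=0.1, n_iters=1000):
    """Return weights (bias included) learned by gradient descent on the log loss."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X @ w)                         # predicted probabilities
        gradient = X.T @ (p - y) / len(y)          # gradient of the average log loss
        w -= lr * gradient
    return w

def predict(X, w):
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(X @ w) >= 0.5).astype(int)

# Tiny synthetic problem so the script runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w = fit_logistic_regression(X, y)
print("training accuracy:", (predict(X, w) == y).mean())
```

Once your own version works, you can compare its predictions and coefficients against an established open-source implementation (for example, scikit-learn’s LogisticRegression) and dig into any tricks it uses for numerical stability or speed, which is exactly the exercise David recommends.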
David finishes with a wonderful John von Neumann quote:
Young man, in mathematics you don’t understand things. You just get used to them.
Hold this quote close. Things won’t make sense for a while. Keep reading and playing until things click (or at least until you have a functional, empirical understanding).
Do you have any hard-earned practical machine learning advice?
Thanks, Jason.
You’re welcome, Kartik.
Hi Jason,
I’m currently using these books:
1. A First Course in Machine Learning
2. An Introduction to Statistical Learning – With Applications in R
3. Applied Predictive Modeling
4. Bayesian Reasoning and Machine Learning
5. Machine Learning – The Art and Science of Algorithms that Make Sense of Data
And I’m using R by Example to improve my knowledge of R.
Do you think my book selection is apt for a beginner?
Also, the mathematics in Machine Learning: A Probabilistic Perspective seems quite daunting. How would you recommend getting over that hurdle?
Thanks.
Great selection of books, Kartik.
Consider focusing on the applied side first, then transition to the theory as you need it in your pursuit of better results.