I came across an upcoming book that might interest you.
It is titled Bootstrapping Machine Learning by Louis Dorard, PhD. A 40-page sample is provided and I enjoyed it. I think the final book will be a valuable read.
Louis takes the position that machine learning is commoditized to the point where, if you are an application developer, you don't need to learn machine learning algorithms, you only need to learn machine learning APIs.
Nowadays, anyone is in a position to exploit the power of Machine Learning algorithms with minimal coding experience and with the use of Prediction APIs.
I like this approach and I advocate it for non-programmers. It is a sign of the maturing of the field that we can start to clearly differentiate the machine learning researcher from the machine learning practitioner, and even the application developer. If you are a programmer, you create applications; you don't need to concern yourself with the esoteric details of programming language design. The same goes for machine learning practice versus algorithm research.
The sample does not touch on prediction APIs, but it does provide background on the structure of the book, on machine learning as it relates to artificial intelligence, motivating examples of machine learning, and a worked example.
Machine Learning Powered Applications
Examples of machine learning powered applications are described to motivate the desire and need to master prediction APIs:
- Amazon: the use of personalized recommendations made to users based on prior purchases. Netflix for movies and Spotify for music are also mentioned.
- Gmail: the priority inbox that separates important email from promotions.
- Siri: the voice commands and information retrieval capability on iDevices like the iPhone.
- Facebook: the face recognition in the photos part of the Facebook app.
When Machine Learning Fails
Louis works through an example using the Iris flower dataset as the context (the "hello world" of machine learning, as he calls it). In this section he usefully highlights cases where machine learning can fail:
- Generalization: When there are too few examples to generalize or the examples provided are not representative of the underlying data population.
- Class Separation: When the classes cannot be meaningfully separated either linearly or non-linearly.
- Noise: When bias, errors or systematic noise are introduced in the collection of examples from which a machine learning system is trained.
More details of this are provided in his blog post of the same name.
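The generalization failure above is easy to demonstrate on the Iris dataset itself. The sketch below (my own illustration, not from the book, assuming scikit-learn is available) trains a classifier twice: once on a representative split of the data, and once on only the first two-thirds of the dataset, which happens to contain no examples at all of the third species. The second model cannot generalize to a class it has never seen.

```python
# Illustration of the "generalization" failure mode on the Iris dataset.
# Assumes scikit-learn; the model choice (logistic regression) is arbitrary.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Case 1: a representative train/test split works well.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = LogisticRegression(max_iter=500)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"Representative training set accuracy: {acc:.2f}")

# Case 2: the Iris samples are ordered by species, so the first 100 rows
# contain only species 0 and 1. A model trained on them has never seen
# species 2 and can never predict it.
biased = LogisticRegression(max_iter=500)
biased.fit(X[:100], y[:100])
preds_on_unseen_class = biased.predict(X[100:])  # true label is 2 everywhere
print("Labels predicted for species 2:", set(preds_on_unseen_class))
```

Running this shows a high accuracy in the first case, while in the second case every prediction for the third species is wrong, since the training examples were not representative of the underlying population.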
The title of the book again is Bootstrapping Machine Learning: A guide to using Prediction APIs, with the tag line: “Exploit the value of your data. Create better, smarter apps.”