A Math-free Introduction to Machine Learning
By Stefano Cariddi
In our previous article, A Math-free Introduction to Data Science, we introduced the concept of Machine Learning. In this post, we aim to explore it in more depth.
This post is divided into two parts: the first defines Machine Learning, and the second discusses its branches.
Defining Machine Learning
The concept of Machine Learning was invented back in 1959 by the Artificial Intelligence pioneer Arthur Samuel, who defined it as the “field of study that gives computers the ability to learn without being explicitly programmed”.
To make it simpler, we will rephrase it as:
Machine Learning = learning through examples, not rules.
This is why it is called Machine Learning and not Machine Coding.
Conceiving of algorithms that could learn the rules by themselves, rather than follow a predefined set decided by a human being, was a paradigm shift whose impact on the information technology field is arguably comparable to that of the Copernican Revolution on astronomy or the Theory of Evolution on biology. This shift laid the foundation for algorithms that mimic the human brain, which searches for patterns and tries to generalize them.
Depending on the training procedure and/or the data used to perform it, these algorithms can sometimes faithfully reproduce the rules identified by human beings. An example is this article (“A robot wrote this entire article. Are you scared yet, human?”), written by an algorithm for The Guardian. (Spoiler alert: if you have never dealt with state-of-the-art Natural Language Processing, you will be.) Other times, algorithms learn completely different sets of rules that humans may not even comprehend, or that sometimes invalidate our own. An example is AlphaGo, a computer program that plays the board game Go.
For those who do not know what AlphaGo is, here is another article by The Guardian that explains the whole story. In short, Go is considered the most complex strategy game ever invented by mankind. Unlike chess and similar games, its search space is so vast that it cannot be cracked by brute force, no matter how much computational power you throw at it. DeepMind, a Google-owned company, created a Machine Learning algorithm that taught itself to play Go: it played against itself and learned from the match outcomes what worked and what did not.
AlphaGo beat the legendary Lee Sedol, the 18-time Go world champion (you can find the official documentary about this epic achievement at this link). The new Go world champion, Ke Jie, who had won the title against Lee Sedol himself, commented on AlphaGo’s victory by tweeting:
“Even if AlphaGo can beat Lee Sedol, it can’t beat me.”
- Ke Jie, March 16, 2016
Long story short, Google organized a summit called the Future of Go Summit, where AlphaGo competed against Ke Jie himself… and won. The point, however, was not the victory itself, but how it was obtained. AlphaGo’s playing style was something that had never been seen before and has been described by Go experts as “alien” or “inhuman”. Ke Jie himself put it this way:
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong… I would go as far as to say not a single human has touched the edge of the truth of Go.”
- Ke Jie, January 6, 2017
In fact, Elon Musk said, “Most people just don’t understand how quickly machine intelligence is advancing. It’s much faster than almost anyone realized, even within Silicon Valley.”
Yes, this is true even for the #1 ranked Go world master.
The Branches of Machine Learning
Now that we have seen what Machine Learning is, we will focus on the various paradigms it contains. There are three branches of Machine Learning:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Supervised learning is the branch of Machine Learning that deals with “labeled data”. So what is a label? It is the value that you are trying to predict. It is the presence of labels that defines supervised learning, because they are the ground truth that we aim to reproduce. For the algorithm, it is like learning under the supervision of a teacher who points out which of its predictions are right and which are wrong.
Supervised learning itself is divided into two branches:
- Classification, when the labels can only take discrete values (i.e. classes).
- Regression, when they can be continuous or pseudo-continuous (i.e. almost continuous).
Examples of classification problems:
- Dog or cat?
- Child, adult or elderly?
Examples of regression problems:
- What will tomorrow’s closing price of Alphabet (Google) stock be?
- How old is this person? (This is pseudo-continuous because the output will be an integer.)
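To make the two flavours concrete, here is a toy, library-free sketch. All the numbers and labels below are invented for illustration: a one-nearest-neighbour classifier predicts a discrete label, while a least-squares line predicts a continuous value.

```python
def nearest_neighbor_classify(train, query):
    """Classification: predict the discrete label of the closest labeled example."""
    # train is a list of (feature, label) pairs -- the "teacher" providing ground truth.
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

def fit_line(points):
    """Regression: fit y = a*x + b by least squares, to predict a continuous value."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# Classification: animal weight (kg) -> "cat" or "dog"
animals = [(4, "cat"), (5, "cat"), (20, "dog"), (30, "dog")]
print(nearest_neighbor_classify(animals, 25))  # -> dog

# Regression: feature -> continuous target (made-up linear data)
predict = fit_line([(1, 10), (2, 20), (3, 30)])
print(round(predict(4)))  # -> 40
```

The only difference that matters here is the output type: a class name in the first case, a number on a continuous scale in the second.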
The second branch of Machine Learning is unsupervised learning. Whereas supervised learning makes use of labeled data, unsupervised learning uses unlabeled data. Hence, the goal in this case is to extract useful structure from the data rather than to predict labels for them.
Unsupervised learning is divided into three branches:
- Similarity analysis
- Clustering
- Anomaly detection
Similarity analysis is an ensemble of techniques aimed at finding the elements in your dataset that are most similar to the specific one you are considering. These algorithms drive all the recommendation engines you can think of, like Instagram’s suggested posts, Amazon’s “other people who bought this item also bought these ones”, or Netflix’s TV series suggestions.
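A minimal sketch of the “people who bought this also bought” idea, using invented users and purchases: find the user whose history overlaps most with yours, then recommend what they bought that you have not.

```python
def jaccard(a, b):
    """Similarity between two sets: size of the overlap / size of the union."""
    return len(a & b) / len(a | b)

# Hypothetical purchase histories, invented for the example.
purchases = {
    "alice": {"book", "lamp", "mug"},
    "bob":   {"book", "mug", "plant"},
    "carol": {"sofa", "rug"},
}

def recommend(user):
    others = {u: items for u, items in purchases.items() if u != user}
    # The other user whose purchase history is most similar to ours.
    best = max(others, key=lambda u: jaccard(purchases[user], others[u]))
    return others[best] - purchases[user]

print(recommend("alice"))  # -> {'plant'}
```

Real recommendation engines work on far richer data, but the core step, ranking everything else by similarity to the item or user at hand, is the same.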
Clustering is the branch of unsupervised learning that groups data points into clusters with similar properties. In a sense, it is close to similarity analysis (of course, the most similar data points belong to the same cluster), but its goal is different: to find which and how many data points share similar properties, not to find the most similar elements for each data point.
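As a sketch of the clustering idea, here is a minimal one-dimensional k-means with two clusters (the data points and starting centres are made up): each point is repeatedly assigned to its nearest centre, and each centre then moves to the mean of its points.

```python
def kmeans_1d(points, centers, steps=10):
    """Minimal 1-D k-means: assign points to the nearest centre,
    then move each centre to the mean of its assigned points."""
    for _ in range(steps):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute each centre, dropping any centre that attracted no points.
        centers = [sum(ps) / len(ps) for ps in clusters.values() if ps]
    return sorted(centers)

data = [1, 2, 3, 9, 10, 11]             # two obvious groups
print(kmeans_1d(data, centers=[0, 6]))  # -> [2.0, 10.0]
```

Notice that the algorithm never sees a label: the two groups emerge purely from how the points sit relative to each other.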
Finally, there is anomaly detection, a special form of clustering analysis that divides the data into two categories, inliers and outliers: the data points that follow the main distribution and those that deviate from it, respectively.
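One of the simplest ways to sketch the inlier/outlier split is a z-score rule: flag any point that sits far from the mean relative to the spread of the data. The readings below are invented for the example.

```python
def outliers(data, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = sum(data) / len(data)
    std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
    return [x for x in data if abs(x - mean) > threshold * std]

readings = [9, 10, 11, 10, 9, 11, 10, 9, 11, 100]  # one reading is clearly off
print(outliers(readings))  # -> [100]
```

Production anomaly detectors are far more sophisticated, but they share this shape: model what “normal” looks like, then flag whatever falls outside it.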
Last but not least, the third branch of Machine Learning is reinforcement learning. Reinforcement learning does not deal with “doing stuff with data”. Instead, its goal is to teach an agent (i.e. an algorithm) to interact with an environment. This is done through an approach called reward maximization, which associates a reward with every successful action performed by the agent, whose goal is precisely to maximize the total reward it collects. A remarkable example of reinforcement learning is AlphaGo.
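Reward maximization can be sketched with a toy two-action environment (everything below, the actions and their payoffs, is invented for illustration): the agent keeps a running average of the reward each action has produced, mostly picks the action with the best average, and occasionally explores at random.

```python
import random

def run_agent(rewards, episodes=1000, explore=0.1, seed=0):
    """Tiny reward-maximizing agent: try actions, track average payoffs,
    and increasingly prefer whichever action has paid off best so far."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in rewards}  # cumulative reward per action
    counts = {a: 0 for a in rewards}    # how often each action was taken
    for _ in range(episodes):
        if rng.random() < explore or not all(counts.values()):
            action = rng.choice(list(rewards))  # explore (or try untested actions)
        else:
            # Exploit: pick the action with the best average reward so far.
            action = max(totals, key=lambda a: totals[a] / counts[a])
        totals[action] += rewards[action]()     # the environment pays a reward
        counts[action] += 1
    return counts

# Two actions: "right" consistently pays more than "left".
env = {"left": lambda: 1.0, "right": lambda: 2.0}
counts = run_agent(env)
print(counts["right"] > counts["left"])  # True: the agent learns to prefer "right"
```

AlphaGo works on the same principle at a vastly larger scale: the “environment” is a Go match, the “reward” is winning, and nobody ever tells the agent which individual moves were right.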
In conclusion, we have defined Machine Learning and provided an overview of its branches. A careful reader may have noticed that we have not discussed hot topics such as Computer Vision or Natural Language Processing. The reason is that they are generally treated as separate branches of AI in their own right, even though modern approaches to both rely heavily on Machine Learning. Therefore, we will stop here, but before we go, we want to leave you with one last quote:
“Machine Intelligence is the last invention that humanity will ever need to make.”
- Nick Bostrom, philosopher and founding director of the Future of Humanity Institute, Oxford