Machine Learning vs Human Learning

Shayan Ali Bhatti
6 min read · May 30, 2020

You must have heard that this is the age of Artificial Intelligence (AI). AI is being used to detect diseases, drive autonomous vehicles, automate business decisions, and so on. But what are AI and Machine Learning, and how are some of their concepts derived from the way humans learn?

Picture by Gerd Leonhard taken from https://www.flickr.com/photos/gleonhard/33661762360 shared under Creative Commons 2.0 License.

Artificial Intelligence vs Human Intelligence

In simple words, if a machine or piece of software uses some sensor to “sense” data and takes decisions based on it, we call this Artificial Intelligence, because humans do the same: we perceive and “sense” changes in our surroundings and act on them, behaving as intelligent beings. Since machines do this the artificial way, using sensors or other means to detect changes in data, we call the process “Artificial Intelligence”, humans being “naturally intelligent”.

The above definition of AI is very narrow. AI is in fact a very broad field that includes Machine Learning and Deep Learning, which themselves include numerous algorithms that help machines take decisions “artificially” and “intelligently”. The following picture describes the relationship between AI, Machine Learning and Deep Learning.

Relation between AI, Machine Learning and Deep Learning. Image taken from NVIDIA blog https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/

As the picture above shows, AI is the umbrella under which Machine Learning and Deep Learning exist.

Machine Learning

Machine Learning can be simply defined as:

“Software that learns from its mistakes without being explicitly programmed.”

Machine learning has two major types:

  1. Supervised Learning. In supervised learning, the model is first “trained” on “known” data, that is, data for which the correct answers (labels) are already available, whether it is cat/dog images or a more complicated task. After training, we expect the model to correctly classify or predict “unseen” data of the same type.
  2. Unsupervised Learning. In unsupervised learning, the data does not come with known solutions, as it did in supervised learning. When there is far more data than humans could possibly analyze one by one, we use “clustering” to group similar data points into clusters based on some criterion, e.g. distance or density, making it easier for humans to then work out what the data in each cluster has in common. For example, scientists with millions of chemical compounds might use clustering to see which compounds have properties similar to each other. (A minimal sketch of both types follows this list.)
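
To make the two types concrete, here is a minimal sketch using scikit-learn and NumPy. The data and numbers are my own made-up toy examples, purely for illustration: a supervised classifier trained on labeled points, and unsupervised K-Means clustering on unlabeled points.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# --- Supervised learning: features AND known answers (labels) ---
X_labeled = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_labels = np.array([0, 0, 0, 1, 1, 1])   # known solutions, e.g. 0 = cat, 1 = dog

clf = LogisticRegression()
clf.fit(X_labeled, y_labels)              # "training" on data with known solutions
print(clf.predict([[2.5], [11.5]]))       # predict labels for unseen points

# --- Unsupervised learning: features only, no known answers ---
X_unlabeled = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X_unlabeled)  # group similar points together
print(clusters)                             # cluster ids, e.g. [0 0 0 1 1 1]
```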

Supervised Machine Learning in Human Learning Terms

The way humans learn is that we are trained to solve problems in school. A teacher solves a problem and gives us the solution; we then learn how to solve it ourselves, make mistakes, improve, and in the end we are expected to do well on the final test and, ultimately, in life when something related to those problems comes up. Thus, we are trained to solve those kinds of problems.

The same goes for supervised Machine Learning, in which the model is first “trained” on examples with “known solutions”, called the “training set” or “training data”. The model learns by making mistakes and improving on them gradually. After it is “trained”, we expect it to do well on unseen but related problems, known as “test data”, and ultimately in real life when the model is deployed in a production environment.
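
Here is a hedged sketch of that train-then-test workflow, using scikit-learn and its built-in Iris dataset purely for illustration (not tied to any particular project):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# "Homework with known solutions" vs. the "final test" the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)                      # learn from the training set

print("training accuracy:", model.score(X_train, y_train))
print("test accuracy:    ", model.score(X_test, y_test))  # how well it generalizes
```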

There are, however, two more very common terms in Machine Learning that describe how well a model is doing. They are:

  1. Underfitting.
  2. Overfitting.

Let us see what these terms mean and how they are related to human learning.

Underfitting

Underfitting, in simple terms, means that our Machine Learning model did not do well even when being trained on the training data. Technically, the model’s predictions have high bias (error): it cannot correctly predict even the training data, for which the solutions are known. This is depicted in the following figure:

Since it did not do well on the training data, it is certainly not going to do well on the test data either. Thus, underfitting is undesirable.
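
As an illustration (my own toy example, assuming scikit-learn), a straight-line model fit to clearly non-linear data underfits: it scores poorly even on the very data it was trained on.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = X.ravel() ** 2 + 0.1 * rng.standard_normal(100)  # quadratic (non-linear) relationship

line = LinearRegression().fit(X, y)       # too simple a model for this data
print("training R^2:", line.score(X, y))  # low even on the training data -> high bias, underfitting
```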

Underfitting in Human Learning Terms

In human terms, underfitting means that a student was given homework with known solutions and still could not get those questions right. It is easy to predict that this student will not do well on the final test, or in real life, when related problems come up.

Overfitting

Overfitting, in simple terms, means that our model has too much variance: rather than learning the general pattern, it has essentially memorized the training data, as depicted in the picture below:

When a model completely memorizes the training data, it performs very well on the training data but very poorly on the test data, and so it fails to work properly when deployed on real-world problems. Even though the test data is similar to the training data, it does not necessarily follow exactly the same pattern, so a model that memorized the training data performs poorly.
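
A minimal sketch of overfitting (again my own invented toy data, assuming scikit-learn): an unconstrained decision tree can memorize noisy training data, scoring near-perfectly on it while doing noticeably worse on held-out test data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + 0.8 * rng.standard_normal(300) > 0).astype(int)  # noisy labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

deep_tree = DecisionTreeClassifier(random_state=1)  # no depth limit: free to memorize
deep_tree.fit(X_train, y_train)

print("training accuracy:", deep_tree.score(X_train, y_train))  # close to 1.0: memorized
print("test accuracy:    ", deep_tree.score(X_test, y_test))    # noticeably lower
```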

Overfitting in Human Learning Terms

In human terms, it means that the student memorized all the homework questions and their answers. When the teacher twisted the wording of the same questions on the final test, the student did poorly, because the student thought they were different questions.

In practice, then, we try to land between underfitting and overfitting. This is called the bias-variance tradeoff: underfitting means our model has high bias, overfitting means it has high variance, and a good model strikes a balance between the two.
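
One common way to look for that balance is sketched below, with polynomial-regression models of increasing degree on a toy dataset of my own: too low a degree underfits (both scores poor), too high a degree overfits (training score high, test score drops), and something in between usually does best.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(60)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2)

for degree in (1, 4, 15):  # underfit, balanced, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree:2d}: train R^2 = {model.score(X_train, y_train):.2f}, "
          f"test R^2 = {model.score(X_test, y_test):.2f}")
```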

Neural Networks in the Brain

The brain’s nervous system is made up of neural networks whose building block is the neuron. A neuron receives signals through its dendrites into the soma (cell body) and sends signals out along its axon. A single biological neuron can be seen below:

Illustration of a neuron by David Baillot taken from https://scx2.b-cdn.net/gfx/news/hires/2018/2-whyareneuron.jpg

Neural Networks in Machine Learning

In Machine Learning, the concept of Neural Networks is derived from the neural networks in the brain: each artificial neuron receives data from the previous neurons, processes the signal, and sends the processed output on to the next neuron. The detailed working of a Neural Network, with a practical coding implementation, is explained in my article at “https://medium.com/analytics-vidhya/coding-a-neural-network-for-xor-logic-classifier-from-scratch-b90543648e8a”.
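
For intuition, here is a minimal sketch of my own (not taken from the linked article) of a single artificial neuron in NumPy: the inputs play the role of dendrites, the weighted sum plus bias plays the role of the soma, and the activated output is the signal sent down the axon to the next neuron.

```python
import numpy as np

def sigmoid(z):
    """Squash the summed signal into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs ("soma"), then an activation ("axon" output)."""
    z = np.dot(weights, inputs) + bias
    return sigmoid(z)

# Three incoming signals ("dendrites") with example weights and a bias.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
b = 0.1

print(artificial_neuron(x, w, b))  # a single number passed on to the next neuron
```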

However, the major difference is one of scale: the brain’s neural network has around 100 billion neurons, while a Machine Learning Neural Network model might not even have a million.

Conclusion

AI and Machine Learning are based on the way humans learn, that is, by making mistakes and improving on them. I hope this article teaches newcomers to Machine Learning a few of its terms and how they relate to human thinking.

