You’ve probably encountered the term “machine learning” more than a few times lately. Often used interchangeably with artificial intelligence, machine learning is in fact a subset of AI, both of which can trace their roots to MIT in the late 1950s.

Machine learning is something you probably encounter every day, whether you know it or not. The Siri and Alexa voice assistants, Facebook’s and Microsoft’s facial recognition, Amazon and Netflix recommendations, the technology that keeps self-driving cars from crashing into things – all are a result of advances in machine learning.

While still nowhere near as complex as a human brain, systems based on machine learning have achieved some impressive feats, like defeating human challengers at chess, Go, Jeopardy, and poker.

Dismissed for decades as overhyped and unrealistic (the infamous “AI winter”), both AI and machine learning have enjoyed a huge resurgence over the last few years, thanks to a number of technological breakthroughs, a massive explosion in cheap computing horsepower, and a bounty of data for machine learning models to chew on.

Not that machine learning hasn’t had its embarrassing stumbles along the way.

HP got into trouble back in 2009 when facial recognition technology built into the webcam on an HP MediaSmart laptop was able to track the faces of white users but not Black ones. In June 2015, faulty algorithms in the Google Photos app mislabeled photos of Black people as gorillas.

Another dramatic example: Microsoft’s ill-fated Taybot, a March 2016 experiment to see if an AI system could emulate human conversation by learning from tweets. In less than a day, malicious Twitter trolls had turned Tay into a hate-spewing bigot. Talk about corrupted training data.

A machine learning lexicon

But machine learning is really just the tip of the AI iceberg. Other terms closely associated with machine learning are neural networks, deep learning, and cognitive computing.

Neural network. A computer architecture designed to mimic the structure of neurons in our brains, with each artificial neuron (microcircuit) connecting to other neurons inside the system. Neural networks are arranged in layers, with neurons in one layer passing data to multiple neurons in the next layer, and so on, until eventually they reach the output layer. This final layer is where the neural network presents its best guesses as to, say, what that dog-shaped object was, along with a confidence score.

There are multiple types of neural networks for solving different types of problems. Networks with large numbers of layers are called “deep neural networks.” Neural nets are some of the most important tools used in machine learning scenarios, but not the only ones.
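To make that layered picture concrete, here is a minimal sketch in Python with NumPy. It is not any production framework; the function names, layer sizes, and random weights are illustrative assumptions. Data flows through the layers, and the output layer reports a best guess along with a confidence score:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def relu(x):
    # Nonlinearity applied by each hidden neuron.
    return np.maximum(0.0, x)

def softmax(x):
    # Turns raw output scores into probabilities ("confidence scores").
    e = np.exp(x - x.max())  # subtract the max for numerical stability
    return e / e.sum()

# Two layers of artificial neurons: 4 input features feed 8 hidden
# neurons, which feed 3 output classes (say, dog / cat / neither).
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),  # hidden layer: weights, biases
    (rng.normal(size=(8, 3)), np.zeros(3)),  # output layer: weights, biases
]

def forward(features):
    activation = features
    for i, (weights, biases) in enumerate(layers):
        z = activation @ weights + biases  # each neuron sums its weighted inputs
        # Hidden layers apply the nonlinearity; the final layer feeds softmax.
        activation = relu(z) if i < len(layers) - 1 else z
    return softmax(activation)

probs = forward(np.array([0.2, -1.0, 0.5, 0.3]))
print("best guess: class", probs.argmax(), "with confidence", round(float(probs.max()), 3))
```

The weights here are random, so the “guess” is meaningless until the network is trained. Training is the machine learning part: the weights are adjusted, example by example, until the confident answers are also the right ones.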

Deep learning. This is essentially machine learning on steroids, using multi-layered (deep) neural networks to arrive at decisions based on “imperfect” or incomplete information. DeepStack, a deep learning system, is what defeated 11 professional poker players last December, by constantly recomputing its strategy after each round of bets.
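In terms of the sketch above, “going deep” just means stacking more hidden layers between input and output. The layer sizes below are arbitrary, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A deeper stack: three hidden layers instead of one. Each tuple is
# (weights, biases) for one layer, exactly as in the earlier sketch.
deep_layers = [
    (rng.normal(size=(4, 16)), np.zeros(16)),   # input -> hidden 1
    (rng.normal(size=(16, 16)), np.zeros(16)),  # hidden 1 -> hidden 2
    (rng.normal(size=(16, 16)), np.zeros(16)),  # hidden 2 -> hidden 3
    (rng.normal(size=(16, 3)), np.zeros(3)),    # hidden 3 -> output
]
```

The extra layers give the network room to build up intermediate representations of its input, which is part of what lets deep models cope with messy or incomplete information.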

Cognitive computing. This is the term favored by IBM, creators of Watson, the supercomputer that kicked humanity’s ass at Jeopardy in 2011. The difference between cognitive computing and artificial intelligence, in IBM’s view, is that instead of replacing human intelligence, cognitive computing is designed to augment it—enabling doctors to diagnose illnesses more accurately, financial managers to make smarter recommendations, lawyers to search caselaw more quickly, and so on.

This, of course, is an extremely superficial overview. Those who want to dive more deeply into the intricacies of AI and machine learning can start with the work of the University of Washington’s Pedro Domingos, author of The Master Algorithm, as well as the machine learning coverage of InfoWorld’s Martin Heller.

Despite all the hype about AI, it’s not an overstatement to say that machine learning and the technologies associated with it are changing the world as we know it. Best to learn about it now, before the machines become fully self-aware.