Machine learning algorithms explained


Machine learning and deep learning have been widely embraced, and even more widely misunderstood. In this article, I’d like to step back and explain both machine learning and deep learning in basic terms, discuss some of the most common machine learning algorithms, and explain how those algorithms relate to the other pieces of the puzzle of creating predictive models from historical data.

What are machine learning algorithms?

Recall that machine learning is a class of methods for automatically creating models from data. Machine learning algorithms are the engines of machine learning; it is the algorithms that turn a data set into a model. Which kind of algorithm works best (supervised, unsupervised, classification, regression, etc.) depends on the kind of problem you’re solving, the computing resources available, and the nature of the data.

How machine learning works

Ordinary programming algorithms tell the computer what to do in a straightforward way. For example, sorting algorithms turn unordered data into data ordered by some criteria, often the numeric or alphabetical order of one or more fields in the data.

Linear regression algorithms fit a straight line, or another function that is linear in its parameters such as a polynomial, to numeric data, typically by performing matrix inversions to minimize the squared error between the line and the data. Squared error is used as the metric because you don’t care whether the regression line is above or below the data points; you only care about the distance between the line and the points.
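
To make that concrete, here is a minimal sketch of an ordinary least squares fit using NumPy; the data points are made up for illustration:

```python
import numpy as np

# Toy data: y is roughly 2x + 1 plus noise (made-up numbers for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])

# Build the design matrix [x, 1] and solve for the coefficients that
# minimize the squared error between the line and the data
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
```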

Nonlinear regression algorithms, which fit curves that are not linear in their parameters to data, are a little more complicated, because, unlike linear regression problems, they can’t be solved with a deterministic method. Instead, the nonlinear regression algorithms implement some kind of iterative minimization process, often some variation on the method of steepest descent.    

Steepest descent basically computes the squared error and its gradient at the current parameter values, picks a step size (aka learning rate), follows the direction of the gradient “down the hill,” and then recomputes the squared error and its gradient at the new parameter values. Eventually, with luck, the process converges. The variants on steepest descent try to improve the convergence properties.
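
Here is a minimal steepest descent sketch for the same sort of squared-error objective, written in plain NumPy; the learning rate, tolerance, and data are illustrative assumptions, not recommendations:

```python
import numpy as np

def steepest_descent(x, y, lr=0.01, tol=1e-9, max_iters=100_000):
    """Fit y ≈ w*x + b by steepest descent on the mean squared error."""
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    for _ in range(max_iters):
        err = w * x + b - y
        loss = np.mean(err ** 2)         # squared error at the current parameters
        grad_w = 2 * np.mean(err * x)    # gradient with respect to w
        grad_b = 2 * np.mean(err)        # gradient with respect to b
        w -= lr * grad_w                 # step "down the hill"
        b -= lr * grad_b
        if abs(prev_loss - loss) < tol:  # converged, with luck
            break
        prev_loss = loss
    return w, b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])
print(steepest_descent(x, y))
```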


What are machine learning features?

Since machine learning models are built from feature vectors, I should explain what those are. A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. A feature vector combines all of the features for a single row of data into a numerical vector.

Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis to convert correlated variables into a set of linearly uncorrelated variables.
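
As a sketch of that last step, scikit-learn’s PCA will project correlated columns onto linearly uncorrelated components; the synthetic data and the choice of two components here are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic data: the first two columns are highly correlated, the third is not
base = rng.normal(size=(100, 1))
X = np.hstack([base,
               0.95 * base + rng.normal(scale=0.05, size=(100, 1)),
               rng.normal(size=(100, 1))])

# Replace the three correlated columns with two linearly uncorrelated components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                  # (100, 2)
print(pca.explained_variance_ratio_)   # variance captured by each component
```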

Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract Year of Birth from Year of Death and you construct Age at Death, which is a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious.
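
The Age at Death construction is a one-liner in pandas; the rows below are made up for illustration:

```python
import pandas as pd

# Made-up rows for illustration
df = pd.DataFrame({"year_of_birth": [1902, 1931, 1948],
                   "year_of_death": [1978, 2001, 2020]})

# Construct the new feature from the two raw columns
df["age_at_death"] = df["year_of_death"] - df["year_of_birth"]
print(df)
```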

Common machine learning algorithms

There are dozens of machine learning algorithms, ranging in complexity from linear regression and logistic regression to deep neural networks and ensembles (combinations of other models). Some of the most common algorithms include the following (a brief scikit-learn sketch follows the list):

  • Linear regression, aka least squares regression (for numeric data)
  • Logistic regression (for binary classification)
  • Linear discriminant analysis (for multi-category classification)
  • Decision trees (for both classification and regression)
  • Naïve Bayes (for classification)
  • K-Nearest Neighbors, aka KNN (for both classification and regression)
  • Learning Vector Quantization, aka LVQ (for classification)
  • Support Vector Machines, aka SVM (for binary classification)
  • Random Forests, a type of “bagging” ensemble algorithm (for both classification and regression)
  • Boosting methods, including AdaBoost and XGBoost: ensemble algorithms that build a series of models, each new model attempting to correct the errors of the previous one (for both classification and regression)
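
To illustrate how interchangeable these algorithms are in practice, here is a brief scikit-learn sketch that fits three of them to the same toy classification data; the generated dataset and model settings are illustrative defaults:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy binary classification data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
}

# Every scikit-learn estimator exposes the same fit/score interface
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy {model.score(X_test, y_test):.3f}")
```

Because every estimator exposes the same fit/score interface, swapping one algorithm for another is a one-line change.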

Where are the neural networks and deep neural networks that we hear so much about? They tend to be compute-intensive to the point of needing GPUs or other specialized hardware, so you should use them only for specialized problems, such as image classification and speech recognition, that aren’t well-suited to simpler algorithms. Note that “deep” means that there are many hidden layers in the neural network.

For more on neural networks and deep learning, see my separate article on that topic.

Hyperparameters for machine learning algorithms

Machine learning algorithms train on data to find the best set of weights for each independent variable that affects the predicted value or class. The algorithms themselves have variables, called hyperparameters. They’re called hyperparameters, as opposed to parameters, because they control the operation of the algorithm rather than the weights being determined.

The most important hyperparameter is often the learning rate, which determines the step size used when finding the next set of weights to try during optimization. If the learning rate is too high, gradient descent may overshoot the minimum and oscillate, or even diverge. If the learning rate is too low, gradient descent may take steps so small that it stalls and never completely converges.

Many other common hyperparameters depend on the algorithms used. Most algorithms have stopping parameters, such as the maximum number of epochs, or the maximum time to run, or the minimum improvement from epoch to epoch. Specific algorithms have hyperparameters that control the shape of their search. For example, a Random Forest Classifier has hyperparameters for minimum samples per leaf, max depth, minimum samples at a split, minimum weight fraction for a leaf, and about 8 more.
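
In scikit-learn, for example, those Random Forest Classifier hyperparameters appear as constructor arguments; the values below are arbitrary choices for illustration, not recommendations:

```python
from sklearn.ensemble import RandomForestClassifier

# Each argument is a hyperparameter: it shapes the algorithm's search
# rather than being a weight learned from the data (values are arbitrary)
clf = RandomForestClassifier(
    n_estimators=200,             # number of trees in the forest
    max_depth=10,                 # maximum depth of each tree
    min_samples_split=4,          # minimum samples required to split a node
    min_samples_leaf=2,           # minimum samples required at a leaf
    min_weight_fraction_leaf=0.0, # minimum weighted fraction of samples at a leaf
)
```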

Hyperparameter tuning

Several production machine-learning platforms now offer automatic hyperparameter tuning. Essentially, you tell the system what hyperparameters you want to vary, and possibly what metric you want to optimize, and the system sweeps those hyperparameters across as many runs as you allow. (Google Cloud hyperparameter tuning extracts the appropriate metric from the model, so you don’t have to specify it.)

Three common search algorithms for sweeping hyperparameters are Bayesian optimization, grid search, and random search. Bayesian optimization tends to be the most efficient.
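
Grid search and random search are both built into scikit-learn (Bayesian optimization typically requires an add-on library such as scikit-optimize). Here is a sketch of the first two; the parameter grid is made up for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)
params = {"max_depth": [4, 8, 16], "min_samples_leaf": [1, 2, 4]}

# Grid search tries every combination in the grid
grid = GridSearchCV(RandomForestClassifier(random_state=0), params, cv=3)
grid.fit(X, y)
print("grid search best:", grid.best_params_)

# Random search samples a fixed number of combinations
rand = RandomizedSearchCV(RandomForestClassifier(random_state=0), params,
                          n_iter=5, cv=3, random_state=0)
rand.fit(X, y)
print("random search best:", rand.best_params_)
```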

You would think that tuning as many hyperparameters as possible would give you the best answer. However, unless you are running on your own personal hardware, that could be very expensive. There are diminishing returns, in any case. With experience, you’ll discover which hyperparameters matter the most for your data and choice of algorithms.

Automated machine learning

Speaking of choosing algorithms, there is only one way to know which algorithm or ensemble of algorithms will give you the best model for your data, and that’s to try them all. If you also try all the possible normalizations and choices of features, you’re facing a combinatorial explosion.

Trying everything is impractical to do manually, so of course machine learning tool providers have put a lot of effort into releasing AutoML systems. The best ones combine feature engineering with sweeps over algorithms and normalizations. Hyperparameter tuning of the best model or models is often left for later. Feature engineering is a hard problem to automate, however, and not all AutoML systems handle it.

In summary, machine learning algorithms are just one piece of the machine learning puzzle. In addition to algorithm selection (manual or automatic), you’ll need to deal with optimizers, data cleaning, feature selection, feature normalization, and (optionally) hyperparameter tuning.

When you’ve handled all of that and built a model that works for your data, it will be time to deploy the model, and then update it as conditions change. Managing machine learning models in production is, however, a whole other can of worms.