Machine learning is a branch of artificial intelligence that includes methods, or algorithms, for automatically creating models from data. At a high level, there are four kinds of machine learning: supervised learning, unsupervised learning, reinforcement learning, and active machine learning. Since reinforcement learning and active machine learning are relatively new, they are sometimes omitted from lists of this kind. You could also add semi-supervised learning to the list, and not be wrong.

## What is supervised learning?

Supervised learning starts with training data that are tagged with the correct answers (target values). After the learning process, you wind up with a model with a tuned set of weights, which can predict answers for similar data that haven’t already been tagged.

You want to train a model that has high accuracy without overfitting or underfitting. High accuracy means that you have optimized the loss function. In the context of classification problems, accuracy is the proportion of examples for which the model produces the correct output.

Overfitting means that the model is so closely tied to the data it has seen that it doesn’t generalize to data it hasn’t seen. Underfitting means that the model is not complex enough to capture the underlying trends in the data.
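The contrast shows up clearly in a minimal NumPy sketch on made-up data: a straight line (degree 1) underfits a quadratic trend, while a high-degree polynomial can chase the noise instead of the trend. The dataset and the every-third-point holdout are illustrative choices, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from an underlying quadratic trend (a made-up dataset)
x = np.linspace(-3, 3, 30)
y = x**2 + rng.normal(scale=1.0, size=x.size)

# Hold out every third point as a validation set
train = np.ones(x.size, dtype=bool)
train[::3] = False

def mse(degree):
    # Fit a polynomial of the given degree to the training points only,
    # then measure the error separately on training and validation points
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x)
    return (np.mean((y[train] - pred[train]) ** 2),
            np.mean((y[~train] - pred[~train]) ** 2))

for degree in (1, 2, 9):
    train_mse, val_mse = mse(degree)
    print(f"degree {degree}: train MSE {train_mse:.2f}, val MSE {val_mse:.2f}")
```

The degree-1 fit has high error everywhere (underfitting); the degree-2 fit matches the true trend; the degree-9 fit drives the training error down while the validation error tells the real story.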

The loss function is chosen to reflect the “badness” of the model; you minimize the loss to find the best model. For numerical (regression) problems, the loss function is often the mean squared error (MSE), also formulated as the root mean squared error (RMSE), or root mean squared deviation (RMSD). This corresponds to the Euclidean distance between the data points and the model curve. For classification (non-numerical) problems, the loss function may be based on one of a handful of measures including the area under the ROC curve (AUC), average accuracy, precision-recall, and log-loss. (More on the AUC and ROC curve below.)
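The regression losses are one-liners. Here is a minimal sketch with made-up target values and predictions, showing that RMSE is just the square root of MSE and so is in the same units as the target:

```python
import numpy as np

# Hypothetical targets and model predictions
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)  # mean squared error
rmse = np.sqrt(mse)                    # root mean squared error, same units as y
print(mse, rmse)                       # 0.875 0.935...
```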

To avoid overfitting, you often divide the tagged data into two sets, the majority for training and the minority for validation or testing. The validation set loss is usually higher than the training set loss, but it’s the one you care about, because it shouldn’t exhibit bias towards the model.
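The split itself needs no framework; a shuffle followed by a slice is enough. This sketch uses made-up (features, target) pairs and an 80/20 ratio, which is a common but not mandatory choice:

```python
import random

# Hypothetical labeled examples: (features, target) pairs
data = [([i, i * 2], i % 2) for i in range(100)]

random.seed(42)
random.shuffle(data)  # shuffle so the split isn't biased by row order

split = int(0.8 * len(data))  # 80% for training, 20% for validation
train_set, val_set = data[:split], data[split:]

print(len(train_set), len(val_set))  # 80 20
```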

Training is an iterative process: because the split, the features, and the algorithm are all under your control, you can modify and repeat it at will.

## Data encoding and normalization for machine learning

To use categorical data for machine classification, you need to encode the text labels into another form. There are two common encodings.

One is *label encoding*, which means that each text label value is replaced with a number. The other is *one-hot encoding*, which means that each text label value is turned into a column with a binary value (1 or 0). Most machine learning frameworks have functions that do the conversion for you. In general, one-hot encoding is preferred, as label encoding can sometimes confuse the machine learning algorithm into thinking that the encoded column is ordered.
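Both encodings are simple enough to sketch in plain Python, without a framework. Here the `colors` column is made up, and the distinct labels are sorted so the mapping is reproducible:

```python
# Hypothetical categorical feature
colors = ["red", "green", "blue", "green", "red"]

# Label encoding: map each distinct label to an integer
labels = sorted(set(colors))                     # ['blue', 'green', 'red']
to_int = {lab: i for i, lab in enumerate(labels)}
label_encoded = [to_int[c] for c in colors]      # [2, 1, 0, 1, 2]

# One-hot encoding: one binary column per distinct label
one_hot = [[1 if c == lab else 0 for lab in labels] for c in colors]
print(one_hot)  # first row is 'red' -> [0, 0, 1]
```

Note how the label encoding implies blue < green < red, an ordering that means nothing here; the one-hot columns carry no such implication.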

Numeric data usually needs to be normalized as well, for example by rescaling each feature to the range [0, 1] (min-max normalization) or to zero mean and unit variance (standardization), so that features with large ranges don't dominate the training process.
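Normalization of numeric features is also a one-liner per method. This sketch applies both min-max normalization (rescaling to [0, 1]) and standardization (z-scores) to a made-up income column:

```python
import numpy as np

# Hypothetical numeric feature with a wide range
incomes = np.array([30_000.0, 48_000.0, 120_000.0, 75_000.0])

# Min-max normalization: rescale to the range [0, 1]
min_max = (incomes - incomes.min()) / (incomes.max() - incomes.min())

# Standardization (z-scores): zero mean, unit standard deviation
z = (incomes - incomes.mean()) / incomes.std()

print(min_max)
print(z)
```

In practice you compute the min/max or mean/std on the training set only and reuse them on validation and test data, so information doesn't leak across the split.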

## Feature engineering for machine learning

A *feature* is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all the features for a single row into a numerical vector.

Part of the art of choosing features is to pick a minimum set of *independent* variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis to convert correlated variables into a set of linearly uncorrelated variables.
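A small NumPy sketch shows the idea: given two made-up, highly correlated features, PCA (here implemented directly as an eigendecomposition of the covariance matrix) rotates them into uncorrelated components, with almost all of the variance landing in the first one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical, highly correlated features
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=200)
X = np.column_stack([x1, x2])

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)               # center each feature
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]       # sort descending by variance
components = eigvecs[:, order]

# Project onto the principal components; the new columns are uncorrelated
Z = Xc @ components
explained = eigvals[order] / eigvals.sum()
print(explained)  # first component carries nearly all the variance
```

Dropping the second column of `Z` halves the dimensionality while losing almost no information, which is exactly the trade PCA offers.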

Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract `Year of Birth` from `Year of Death` and you construct `Age at Death`, which is a prime independent variable for lifetime and mortality analysis. In other cases, *feature construction* may not be so obvious.
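In code, that kind of derived feature is a single arithmetic step per row. The records here are made up for illustration:

```python
# Hypothetical records with birth and death years
records = [
    {"year_of_birth": 1912, "year_of_death": 1954},
    {"year_of_birth": 1879, "year_of_death": 1955},
]

# Construct the derived feature by subtracting one column from the other
for r in records:
    r["age_at_death"] = r["year_of_death"] - r["year_of_birth"]

print([r["age_at_death"] for r in records])  # [42, 76]
```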

## Common machine learning algorithms

There are dozens of machine learning algorithms, ranging in complexity from linear regression and logistic regression to deep neural networks and ensembles (combinations of other models). However, some of the most common algorithms include:

- Linear regression, aka least squares regression (for numeric data)
- Logistic regression (for binary classification)
- Linear discriminant analysis (for multi-category classification)
- Decision trees (for both classification and regression)
- Naïve Bayes (for classification)
- K-nearest neighbors, aka KNN (for both classification and regression)
- Learning vector quantization, aka LVQ (for classification)
- Support vector machines, aka SVM (for binary classification)
- Random forests, a type of “bagging” (bootstrap aggregation) ensemble algorithm (for both classification and regression)
- Boosting methods, including AdaBoost and XGBoost, ensemble algorithms that build a series of models, each of which tries to correct the errors of the previous one (for both classification and regression)
- Neural networks (for both classification and regression)
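To make one of these concrete, here is a minimal sketch of k-nearest neighbors (KNN) for classification in plain Python, with made-up 2-D points: a query point takes the majority label among its k closest training points.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify a query point by majority vote of its k nearest neighbors."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D training points with binary labels: two clear clusters
train = [([1.0, 1.0], 0), ([1.2, 0.8], 0), ([0.9, 1.1], 0),
         ([5.0, 5.0], 1), ([5.2, 4.8], 1), ([4.9, 5.1], 1)]

print(knn_predict(train, [1.1, 1.0]))  # 0
print(knn_predict(train, [5.1, 5.0]))  # 1
```

KNN has no training phase at all; the “model” is the data itself, which is why it sits at the simple end of the complexity range.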

## Hyperparameter tuning

Hyperparameters are free variables that control the training process, as distinct from the weights being tuned within a machine learning model. The hyperparameters vary from algorithm to algorithm, but often include the learning rate, which controls the size of the correction applied after the errors have been calculated for a batch.

Several production machine learning platforms now offer automatic hyperparameter tuning. Essentially, you tell the system what hyperparameters you want to vary, and possibly what metric you want to optimize, and the system sweeps those hyperparameters over as many runs as you allow. (Google Cloud Machine Learning Engine’s hyperparameter tuning extracts the appropriate metric from the TensorFlow model, so you don’t have to specify it.)

There are three major search algorithms for sweeping hyperparameters: Bayesian optimization, grid search, and random search. Bayesian optimization tends to be the most efficient. You can easily implement your own hyperparameter sweeps in code, even if that isn’t automated by the platform you are using.
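A hand-rolled grid search can be as simple as a loop. This sketch sweeps the learning rate for a one-weight linear regression trained by gradient descent on made-up data; the grid values and epoch count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + rng.normal(scale=0.5, size=100)  # true slope is 3

def train_mse(lr, epochs=100):
    """Fit y = w*x by gradient descent with the given learning rate."""
    w = 0.0
    for _ in range(epochs):
        grad = -2.0 * np.mean((y - w * X) * X)  # d(MSE)/dw
        w -= lr * grad
    return np.mean((y - w * X) ** 2)

# Grid search: evaluate the final loss for each candidate learning rate
grid = [0.001, 0.01, 0.1]
results = {lr: train_mse(lr) for lr in grid}
best_lr = min(results, key=results.get)
print(best_lr, results[best_lr])
```

Random search replaces the fixed `grid` with draws from a distribution; Bayesian optimization goes further and uses earlier results to decide which values to try next.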

To summarize, supervised learning turns labeled training data into a tuned predictive model. Along the way, you need to clean and normalize the data, engineer a set of linearly uncorrelated features, and try multiple algorithms to find the best model.