6 secrets of building better models, part two: boosting

Many analysts who build predictive models invest most of their time and effort in tuning the parameters of one specific technique, whether that is logistic regression or a neural network, in the hope of squeezing out the best possible accuracy. In this series of videos we look at some often overlooked approaches that can be applied in the same way to a wide variety of algorithms and which may lead to better predictive accuracy. In all of our examples we’ll focus on improving the accuracy of a predictive model applied to a classification problem.


Boosting is another ensemble model-building method, designed to build strong classification models from weak classifiers. Boosting methods focus on the errors (misclassifications) that occur in prediction. After an initial model is built, the boosting algorithm applies a set of weights to the data so that cases that were inaccurately predicted are given larger values and those that were accurately predicted smaller values. The classification algorithm is then re-applied to the data, but this time greater emphasis is placed on correctly predicting the previously misclassified cases (i.e. those with the larger weights). By repeating this process, the algorithm progressively hunts down the harder-to-classify cases.
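The reweighting loop described above can be sketched in a few lines of Python. This is a minimal, illustrative AdaBoost-style implementation (one well-known boosting algorithm), not the exact procedure of any particular tool; the weak classifiers are one-level decision "stumps" from scikit-learn, and the function names (`boost`, `boosted_predict`) are our own for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, n_rounds=10):
    """AdaBoost-style sketch. y must be coded as -1/+1.

    Returns the fitted weak classifiers and their voting weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # start with equal case weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # fit a weak classifier (a one-split "stump") to the weighted data
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        # weighted error rate of this round's classifier
        err = np.clip(np.sum(w[pred != y]) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # this classifier's vote
        # misclassified cases get larger weights, correct ones smaller
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def boosted_predict(stumps, alphas, X):
    """Combine the weak classifiers by a weighted vote."""
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```

In practice you would not write this yourself: most analytics tools, and libraries such as scikit-learn (e.g. its `AdaBoostClassifier`), provide boosting out of the box. The sketch is just to make the weight-update idea concrete.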

Watch this video to find out more.

Check out the other videos in this series.
