What is the Difference Between Bagging & Boosting in Tree-Based Methods?

This is the fifth of six articles in the series “Bagging & Boosting Ensemble Methods and What is the Difference Between Them?”.


Now that we’ve seen how the bagging and boosting methods work in the earlier articles of this series, let’s look at how they differ.

Both methods rely on the same statistical technique, bootstrapping, to split the original training data into multiple datasets known as bootstrap samples.
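A minimal sketch of bootstrapping with NumPy (the toy dataset and sample count are illustrative): each sample draws the same number of observations with replacement, so some points repeat while others are left out.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(10)  # a toy training set of 10 observations

# Each bootstrap sample draws len(X) observations WITH replacement,
# so some points repeat and others never appear ("out-of-bag").
n_samples = 3
bootstrap_samples = [rng.choice(X, size=len(X), replace=True) for _ in range(n_samples)]

for i, sample in enumerate(bootstrap_samples):
    oob = set(X) - set(sample)  # observations not drawn into this sample
    print(f"sample {i}: {sorted(sample)}, out-of-bag: {sorted(oob)}")
```

The out-of-bag points are a useful by-product: they can serve as a built-in validation set for each model.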

In bagging, once the bootstrap samples are created, they are not changed while the multiple models are built. In boosting, by contrast, each individual observation is weighted based on the previous model’s output: some data points in the bootstrap sample receive a low weight, while others (typically those the previous model got wrong) receive a higher weight.


The two methods also differ in the training phase. In bagging, every individual model takes its bootstrap sample and the models are trained in parallel. In boosting, the models are built sequentially: the output of the first model (its error information) is passed along, together with the bootstrap sample data, to the next model.


At prediction time, in the bagging method all the individual models predict the target outcome and the final prediction is selected by majority voting (for classification) or a plain average (for regression). In the boosting method, each model’s prediction carries a weight, and the final prediction is the weighted average.
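The two aggregation rules can be shown side by side on made-up model outputs (the prediction matrix and weights below are purely illustrative):

```python
import numpy as np

# Class predictions from five models for three test points (rows = models).
preds = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
])

# Bagging: every model counts equally -> majority vote per column.
majority_vote = (preds.mean(axis=0) > 0.5).astype(int)
print("majority vote:", majority_vote)  # column vote counts 4, 3, 2 -> [1, 1, 0]

# Boosting: each model gets a weight (e.g. derived from its training
# error), and the final call comes from the weighted average of votes.
weights = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
weighted_avg = preds.T @ weights  # weighted vote share per test point
weighted_vote = (weighted_avg > 0.5).astype(int)
print("weighted vote:", weighted_vote)
```

With these (hypothetical) weights the strongest model dominates, so the weighted decision can flip points that the plain majority vote would call the other way.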


There’s no outright winner; it depends on the data, the simulation and the circumstances. Both bagging and boosting combine several estimates from different models, which decreases the variance of a single estimate, so the result may be a model with higher stability.

If the problem is that the single model performs very poorly, bagging will rarely achieve a better bias. Boosting, however, can generate a combined model with lower error, as it optimises the advantages and reduces the pitfalls of the single model.


By contrast, if the difficulty of the single model is over-fitting, then bagging is the better option. Boosting, for its part, doesn’t help to avoid over-fitting; in fact, the technique faces this problem itself. For this reason, bagging is effective more often than boosting.

You can find many examples of these methods on my Kaggle account.

In my next article, the last in this series, I will revisit the differences between the two in the form of questions and answers that might come up in a job interview.


