Averaging the results of several base learners in statistical boosting

In recent years, boosting has emerged as one of the most important tools in data science, as it combines the predictive power of a machine learning approach with the interpretability of a statistical model. The main idea is to combine the results of several weak estimators into a good joint one. The procedure is step-wise: at each step, a number of candidate weak estimators are fitted and the best one is selected to update the joint estimate. In this way, however, the results of the other weak estimators are discarded. The goal of this project is to also incorporate these results into the estimation process: at each step, instead of selecting the best weak estimator, a weighted average of all the estimates is taken. This may cost some interpretability, but promises gains in robustness. An important point to study is the choice of the weighting scheme: the weights may be based on the relative AIC of the base learners, but other options will be evaluated. The new approach will be tested on a real dataset.
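
To make the proposed modification concrete, the following is a minimal sketch of component-wise L2 boosting with univariate linear base learners, where each step takes a weighted average of all base-learner updates rather than keeping only the best one. The Akaike-weight scheme (weights proportional to exp(-ΔAIC/2)) is one possible instantiation of the "relative AIC" idea mentioned above; the function names, the Gaussian AIC formula used per base learner, and the step length `nu` are illustrative assumptions, not part of the project specification.

```python
import numpy as np

def akaike_weights(aics):
    # Relative Akaike weights: w_j proportional to exp(-(AIC_j - min AIC) / 2)
    # (one hypothetical choice of weighting scheme; others could be swapped in)
    d = aics - aics.min()
    w = np.exp(-0.5 * d)
    return w / w.sum()

def averaged_l2_boost(X, y, n_steps=100, nu=0.1):
    """Component-wise L2 boosting sketch in which each step averages
    ALL univariate linear base learners, weighted by Akaike weights,
    instead of selecting only the best-fitting one."""
    n, p = X.shape
    intercept = y.mean()
    coef = np.zeros(p)
    f = np.full(n, intercept)          # current joint estimate
    for _ in range(n_steps):
        r = y - f                      # residuals = negative L2 gradient
        betas = np.empty(p)
        aics = np.empty(p)
        for j in range(p):
            xj = X[:, j]
            b = xj @ r / (xj @ xj)     # least-squares fit of residuals on x_j
            rss = np.sum((r - b * xj) ** 2)
            aics[j] = n * np.log(rss / n) + 2 * 2   # Gaussian AIC, 2 parameters
            betas[j] = b
        w = akaike_weights(aics)
        # Weighted average of all base-learner updates (the proposed change;
        # classical boosting would instead use only argmin-AIC learner j*)
        f += nu * (X * (w * betas)).sum(axis=1)
        coef += nu * w * betas
    return intercept, coef
```

Note that when one base learner fits much better than the rest, its Akaike weight approaches 1 and the procedure reduces to classical best-learner selection, so the averaging scheme can be viewed as a smooth relaxation of the usual step-wise rule.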