The underlying principle of this technique is that several weak learners, combined, produce a strong learner. The steps involved are
• Build several decision trees on bootstrapped training samples of data
• In each tree, each time a split is considered, a random sample of m predictors is chosen as split candidates out of all p predictors
• Rule of thumb: at each split, take m ≈ √p (for classification)
• Predictions: aggregate the trees, taking the majority vote across trees (for classification)
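The steps above can be sketched with scikit-learn's `RandomForestClassifier`, which builds each tree on a bootstrap sample and, via `max_features="sqrt"`, considers roughly √p randomly chosen predictors at each split; the dataset and parameter values here are illustrative choices, not part of the notes:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Small example dataset with p = 4 predictors
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=100,       # number of trees, each fit on a bootstrap sample
    max_features="sqrt",    # m ≈ sqrt(p) candidate predictors per split
    bootstrap=True,
    random_state=0,
)
rf.fit(X_train, y_train)

# predict() returns the majority vote across the 100 trees
accuracy = rf.score(X_test, y_test)
print(accuracy)
```

For regression, `RandomForestRegressor` works the same way but averages the trees' predictions instead of voting.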