In the last decade, many supervised learning algorithms have been introduced by researchers in machine learning and data mining. Recent research on reducing misclassification rates in classification problems focuses on ensemble methods such as Boosting, which combine many classifiers to produce a single strong classifier. AdaBoost, the most widely used boosting algorithm, applies a weighted voting technique that concentrates on training instances that are hard to classify. In this paper, we introduce a new Boosting approach that uses decision trees for classifying noisy data. The proposed approach builds a series of decision tree classifiers and combines their votes to classify known or unknown instances. We update the weights of the training instances according to the misclassification error rate produced in each round of classifier construction. We compared the performance of the proposed algorithm with existing decision tree algorithms on benchmark datasets from the UCI machine learning repository. Experimental analysis shows that the proposed approach achieves high classification accuracy on different types of datasets.
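The round-by-round re-weighting described above can be illustrated with a minimal AdaBoost-style sketch. This is not the paper's algorithm: it uses one-dimensional decision stumps as weak learners on a hypothetical toy dataset, purely to show how misclassified instances gain weight each round and how the classifiers' votes are combined.

```python
import math

# Hypothetical toy 1-D dataset; labels are in {-1, +1}.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [+1, +1, +1, -1, -1, -1, +1, +1]

def best_stump(X, y, w):
    """Return (weighted error, threshold, polarity) of the best stump."""
    best = None
    for thr in X:
        for pol in (+1, -1):
            preds = [pol if x < thr else -pol for x in X]
            err = sum(wi for wi, p, t in zip(w, preds, y) if p != t)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform instance weights
    ensemble = []              # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, thr, pol = best_stump(X, y, w)
        err = max(err, 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this round
        ensemble.append((alpha, thr, pol))
        # Re-weight: misclassified instances become heavier, then normalize.
        w = [wi * math.exp(-alpha * t * (pol if x < thr else -pol))
             for wi, x, t in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x < thr else -p) for a, thr, p in ensemble)
    return +1 if score >= 0 else -1

ensemble = adaboost(X, y, rounds=5)
print([predict(ensemble, x) for x in X])  # → [1, 1, 1, -1, -1, -1, 1, 1]
```

After five rounds the weighted vote of the stumps fits the non-monotone label pattern that no single stump can represent, which is the essence of combining weak classifiers into a strong one.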
Publication status: Published - 2013
Event: International Conference on Informatics, Electronics and Vision (ICIEV) - Dhaka, Bangladesh
Duration: 1 Jan 2013 → …