Handwritten Digit Classification and Performance Analysis of Different Machine Learning Algorithms and CNN
DOI: https://doi.org/10.18311/jmmf/2023/34170

Keywords: Image Classification, K-Nearest Neighbor, Support Vector Machine, Random Forest Classifier, CNN, Gaussian Naïve Bayes

Abstract
This work deals with handwritten image classification using machine learning and deep learning. Python, together with the TensorFlow framework, is used to develop the classification models. Image classification is a supervised learning approach that trains on a labelled dataset. Although much research has been devoted to machine-learning-based image classification, accurate classification remains challenging. This paper focuses on a handwritten-image classification task using the machine learning algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Random Forest (RF). A comparative analysis is performed on the dataset across various parameters using these algorithms, and results are reported in terms of classification accuracy. In our experiments, the RF classifier achieved the highest accuracy of 100%, followed by SVM at 98% and KNN at 97%. The novelty of this paper is that, alongside the ML algorithms (RF, KNN, SVM, and Naive Bayes), we also applied a deep learning algorithm (CNN) with three activation functions and their combinations (ReLU, tanh, and sigmoid) and with different optimizers (Adam and SGD). The combined optimization method is found to perform better than each individual optimization method.
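As a rough illustration of the comparison described above, the following sketch (not the authors' code) trains KNN, SVM, Random Forest, and Gaussian Naive Bayes classifiers with scikit-learn and reports their test accuracy. It uses scikit-learn's built-in 8x8 digits dataset as a stand-in for the handwritten-digit data, and the hyperparameters shown (e.g. `n_neighbors=3`, `n_estimators=100`) are assumptions, not values from the paper.

```python
# Hypothetical sketch: comparing four classifiers on a handwritten-digit task.
# Uses scikit-learn's small 8x8 digits dataset as a stand-in dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Load the digit images (flattened 8x8 = 64 features) and labels 0-9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The four ML classifiers compared in the paper (hyperparameters assumed).
models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Gaussian NB": GaussianNB(),
}

# Fit each model and record its accuracy on the held-out test split.
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {scores[name]:.3f}")
```

Exact accuracies depend on the dataset, split, and hyperparameters, so they will not match the paper's 100%/98%/97% figures; the sketch only shows the shape of the comparison.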
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.