Improving a Convolutional Neural Network (CNN) Architecture (MiniVGGNet) with Batch Normalization and a Learning Rate Decay Factor for Image Classification
Keywords: Convolutional Neural Network, deep learning, MiniVGGNet, hyperparameter
Image classification is a classical problem in image processing, computer vision, and machine learning. This paper presents a performance analysis of a Convolutional Neural Network (CNN) for image classification using deep learning. MiniVGGNet is the CNN architecture used in this paper to train a network for image classification, and CIFAR-10 is the dataset selected for this purpose. The performance of the network was improved by hyperparameter tuning techniques using batch normalization and a learning rate decay factor. This paper compares the performance of the trained network when a batch normalization layer is added and when the value of the learning rate decay factor is adjusted in the network architecture. Based on the experimental results, adding a batch normalization layer improves classification accuracy from 80% to 82%. Applying a learning rate decay factor further improves classification accuracy to 83% and reduces the effects of overfitting in the learning plot. The performance analysis shows that hyperparameter tuning can improve the performance of the network and increase the model's ability to generalize.
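The two tuning techniques described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function names are illustrative, and the batch normalization shown is the per-feature normalization step only (learned scale and shift parameters are fixed here), alongside the standard time-based learning rate decay schedule lr = lr0 / (1 + decay * t).

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations for one feature to zero mean and
    unit variance, then scale by gamma and shift by beta. In a real CNN,
    gamma and beta are learned per channel; fixed here for illustration."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

def decayed_lr(initial_lr, decay, step):
    """Time-based learning rate decay: lr = lr0 / (1 + decay * step).
    Larger decay values shrink the learning rate faster over training,
    which can smooth the learning curve and reduce overfitting effects."""
    return initial_lr / (1.0 + decay * step)
```

For example, `batch_norm([1.0, 2.0, 3.0, 4.0])` returns values centered on zero, and with `initial_lr=0.01, decay=1.0` the learning rate halves after the first step.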
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.