Enhancing Image Classification Accuracy Using Transfer Learning with Pretrained CNNs

Authors

  • Amit Kumar, Department of CS&E, Faculty of Engineering, Teerthanker Mahaveer University, India
  • Narpat Singh, Department of CS&E, Faculty of Engineering, Teerthanker Mahaveer University, India
  • Vishal Mishra, Department of CS&E, Faculty of Engineering, Teerthanker Mahaveer University, India
  • Ugra Sen, Department of CS&E, Faculty of Engineering, Teerthanker Mahaveer University, India
  • Shobha Bharti, Department of CS&E, Sri Ram Murti Smarak College of Engineering & Technology, India
  • Sumit Kumar Pushkar, Department of CS&E, Guru Nanak University, India

Keywords:

Transfer Learning, Convolutional Neural Networks (CNNs), Image Classification, Fine-Tuning, EfficientNetB0

Abstract

This study presents a detailed investigation into improving image classification performance using transfer learning with pre-trained convolutional neural networks (CNNs). The goal is to improve the reliability and efficiency of models trained on diverse datasets through adaptive fine-tuning. Four state-of-the-art CNN architectures (VGG16, ResNet50, InceptionV3, and EfficientNetB0) were fine-tuned and evaluated on CIFAR-10, Caltech-101, and a custom image dataset chosen to capture the variability of real-world data. The proposed method combines systematic preprocessing, feature extraction from the pre-trained models, and selective fine-tuning of the upper convolutional layers using the Adam optimizer with dynamic learning rate scheduling. This adaptive scheme enables faster convergence, reduced overfitting, and improved accuracy without significant computational expense. The results show that EfficientNetB0 achieved the highest accuracy of 97.8%, followed by InceptionV3 (96.2%), ResNet50 (95.8%), and VGG16 (92.3%). They further indicate that classification accuracy benefits substantially from transfer learning, especially on small or heterogeneous datasets. Moreover, the hybrid fine-tuning mechanism proved scalable and resource-efficient for real-world use, allowing models to be fine-tuned on the fly or with limited data. Overall, this work demonstrates that transfer learning can serve as a practical paradigm for contemporary computer vision, balancing high accuracy against computational demands. In future work, we plan to extend this framework with ensemble-based transfer learning and Vision Transformers (ViTs) to further improve robustness, interpretability, and performance across complex image classification settings.
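The adaptive fine-tuning described in the abstract pairs the Adam optimizer with dynamic learning rate scheduling. As a minimal, framework-free sketch of that scheduling idea, the plateau-based scheduler below halves the learning rate when validation loss stops improving; the decay factor, patience, and floor values are illustrative assumptions, not the paper's reported settings.

```python
class PlateauScheduler:
    """Halve the learning rate when validation loss stops improving.

    Illustrative stand-in for the dynamic learning-rate scheduling used
    during selective fine-tuning; hyperparameters are assumptions.
    """

    def __init__(self, lr=1e-3, factor=0.5, patience=2, min_lr=1e-6):
        self.lr = lr            # current learning rate
        self.factor = factor    # multiplicative decay on plateau
        self.patience = patience  # epochs without improvement tolerated
        self.min_lr = min_lr    # lower bound on the learning rate
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        """Update the rate from one epoch's validation loss; return it."""
        if val_loss < self.best:
            # Improvement: record it and reset the patience counter.
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # Plateau detected: decay the rate, respecting the floor.
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr


# Example: the loss plateaus for two epochs, triggering one decay.
sched = PlateauScheduler(lr=1e-3)
history = [sched.step(loss) for loss in [0.9, 0.7, 0.7, 0.7, 0.65]]
print(history)  # [0.001, 0.001, 0.001, 0.0005, 0.0005]
```

In a full pipeline this logic would feed the optimizer's learning rate each epoch while the pre-trained base layers stay frozen and only the upper convolutional layers are updated.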

Published

13-03-2026

Conference Proceedings Volume

Section

Articles

How to Cite

Kumar, A., Singh, N., Mishra, V., Sen, U., Bharti, S., & Pushkar, S. K. (2026). Enhancing Image Classification Accuracy Using Transfer Learning with Pretrained CNNs. DMPedia Lecture Notes in Computer Science & Engineering, IMPACT26, 572-580. https://digitalmanuscriptpedia.com/conferences/index.php/DMP-LNCSE/article/view/164