
Article Details

  • Article Code : FIRAT-AKADEMI-8774-5786
  • Article Type : Research Article
  • Publication Number : 2A0208
  • Page Number : 79-93
  • DOI : 10.12739/NWSA.2025.20.3.2A0208
  • Abstract Views : 166
  • Downloads : 49
  • Citation Count : 0

Issue Details

  • Year : 2025
  • Volume : 20
  • Issue : 3
  • Number of Articles Published : 2
  • Published Date : 1.07.2025

Technological Applied Sciences

Serial Number : 2A
ISSN No. : 1308-7223
Publication Frequency : 4 Issues per Year

A COMPARATIVE STUDY OF CNN AND TRANSFORMER-BASED DEEP LEARNING MODELS FOR TEA LEAF DISEASE RECOGNITION

Busra Er (1), Volkan Kaya (2)

This study presents a comparative analysis of six deep learning models for the automatic classification of eight disease categories found in tea leaves. The dataset was split into training (70%), validation (15%), and test (15%) subsets. In the experimental evaluation, five convolutional neural network (CNN) based architectures (ResNet50, DenseNet121, EfficientNet-B0, MobileNetV3-Large, and ConvNeXt-Tiny) and one Transformer-based model (Vision Transformer, ViT-Small) were tested under identical training strategies. The models were trained with a transfer learning and fine-tuning approach, and performance was reported in terms of accuracy, precision, recall, and F1-score. In addition, the parameter count and per-image prediction time were measured for each model. Experimental results show that DenseNet121 achieved the highest success rate on the validation set, while ConvNeXt-Tiny achieved the highest accuracy and F1-score on the independent test set. The findings indicate that modern CNN-based architectures offer strong generalization capability in the classification of tea leaf diseases, and the results serve as a comparative reference for future studies in agricultural image analysis.
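The training recipe summarized above (an ImageNet-pretrained backbone, a replaced classification head for eight classes, and end-to-end fine-tuning) can be sketched in PyTorch as follows. This is a minimal illustration under assumed settings, not the authors' implementation: the data paths, batch size, learning rate, epoch count, and the choice of DenseNet121 as the example backbone are all assumptions.

# Minimal transfer-learning sketch (illustrative; not the authors' code).
# Assumes a folder-per-class layout readable by torchvision's ImageFolder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # eight tea leaf disease categories

# Standard ImageNet preprocessing, since the backbones are ImageNet-pretrained.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory name for the 70% training portion of the split.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained backbone and swap in an 8-way classifier head.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fine-tune all layers

for epoch in range(10):  # epoch count is an assumption
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()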
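Likewise, the comparison metrics reported in the study (accuracy, precision, recall, F1-score, parameter count, and per-image prediction time) could be computed along the following lines. The macro averaging, warm-up passes, and 224x224 input resolution are assumptions rather than details taken from the paper.

# Evaluation sketch (illustrative): classification metrics, parameter count,
# and average single-image latency for a trained model.
import time
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Collect predictions over a loader; return accuracy/precision/recall/F1."""
    model.eval()
    preds, targets = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        preds.extend(logits.argmax(dim=1).cpu().tolist())
        targets.extend(labels.tolist())
    acc = accuracy_score(targets, preds)
    prec, rec, f1, _ = precision_recall_fscore_support(
        targets, preds, average="macro", zero_division=0)
    return acc, prec, rec, f1

def count_parameters(model):
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def latency_per_image(model, device="cpu", runs=100):
    """Average forward-pass time (seconds) for one 224x224 image."""
    model.eval()
    dummy = torch.randn(1, 3, 224, 224, device=device)
    for _ in range(10):  # warm-up passes before timing
        model(dummy)
    if str(device).startswith("cuda"):
        torch.cuda.synchronize()  # make sure pending GPU work has finished
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    if str(device).startswith("cuda"):
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs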

Keywords
Tea Leaf Disease Classification, Deep Learning, Convolutional Neural Networks, Vision Transformer, Plant Disease Detection

Details
   

Authors

Busra Er (1)

Busraer7@gmail.com | 0009-0009-4255-2800

Volkan Kaya (2) (Corresponding Author)

Erzincan Binali Yildirim University
vkaya@erzincan.edu.tr | 0000-0001-6940-3260
