Automatika
Journal for Control, Measurement, Electronics, Computing and Communications
Volume 65, 2024 - Issue 3
Regular Paper

KDViT: COVID-19 diagnosis on CT-scans with knowledge distillation of vision transformer

Pages 1113-1126 | Received 05 Jan 2024, Accepted 11 Apr 2024, Published online: 15 May 2024
Abstract

This paper introduces Knowledge Distillation of Vision Transformer (KDViT), a novel approach for medical image classification. The Vision Transformer architecture uses a self-attention mechanism to learn image structure autonomously. The input medical image is segmented into patches, which are transformed into low-dimensional linear embeddings. Position information is added to each patch embedding, and a learnable classification token is appended for classification, preserving spatial relationships within the image. The resulting sequence of vectors is fed into a Transformer encoder, whose inherent attention mechanism extracts both local and global features robustly across diverse medical imaging scenarios. Furthermore, knowledge distillation is employed to enhance performance by transferring insights from a large teacher model to a small student model, which reduces the computational requirements of the larger model while improving overall effectiveness. Integrating knowledge distillation with two Vision Transformer models not only demonstrates the novelty of the proposed solution for medical image classification but also enhances model interpretability, reduces computational complexity, and improves generalization. The proposed KDViT model achieved high accuracy rates of 98.39%, 88.57%, and 99.15% on the SARS-CoV-2-CT, COVID-CT, and iCTCF datasets, respectively, surpassing other state-of-the-art methods.
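The pipeline the abstract describes — splitting an image into patches, projecting them to linear embeddings, prepending a learnable classification token, adding position information, and training the student against softened teacher outputs — can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed shapes and a Hinton-style soft-target distillation loss, not the authors' implementation; all function names, dimensions, and the temperature value are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(image, patch):
    # split an H x W x C image into (num_patches, patch*patch*C) flattened patches
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    return (image[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(rows * cols, patch * patch * C))

def embed(patches, W_proj, pos, cls_token):
    # linear projection of patches, prepend the class token,
    # then add position embeddings to preserve spatial order
    tokens = patches @ W_proj
    tokens = np.vstack([cls_token, tokens])
    return tokens + pos

def distillation_loss(student_logits, teacher_logits, T=3.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 as in standard knowledge distillation
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

In practice the embedded token sequence would pass through a Transformer encoder, and the distillation term would be combined with the ordinary cross-entropy loss on the ground-truth labels; the sketch only isolates the embedding and distillation steps.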

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The research in this work was supported by the Telekom Malaysia Research Development under grant number RDTC/231084.