Original Articles

Morphological classification of fine particles in transmission electron microscopy images by using pre-trained convolution neural networks

Pages 657-666 | Received 23 Nov 2023, Accepted 12 Feb 2024, Published online: 08 Mar 2024

Abstract

Morphological information on fine particles is essential for understanding their transport behavior in the ambient atmosphere and in the human respiratory system. More than 3000 transmission electron microscopy (TEM) images of fine particles were collected from the ambient atmosphere and directly from various sources, such as diesel and gasoline engine exhaust, biomass burning, coal combustion, and road dust, and were morphologically categorized into four major classes (spherical, agglomerate, polygonal, and dendrite). Pre-trained convolutional neural network (CNN) models (DenseNet169, InceptionV3, MobileNetV3Small, ResNet50, and VGG16) and traditional machine learning models (decision trees, random forests, and support vector machines) were trained on the classified particles. The fine-tuned DenseNet169, the model with the deepest feature-learning network among those tested, performed best, with an overall classification accuracy of 89% and per-class accuracies ranging from 84% to 97%. Thousands of images were reliably classified within several minutes. The agglomerate class was misclassified least often because its features differ markedly from those of the other classes. The image regions most critical to the classification decisions varied among the pre-trained models. Our results suggest that pre-trained CNN models would be useful for the rapid morphological classification of large numbers of fine particles.
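As a rough sketch of the transfer-learning workflow the abstract describes, the following Keras/TensorFlow code fine-tunes an ImageNet-pretrained DenseNet169 for the four morphological classes. This is a minimal illustration, not the authors' released code: the dataset directory, image size, and training hyperparameters are placeholder assumptions.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4  # spherical, agglomerate, polygonal, dendrite

# Hypothetical directory of TEM images sorted into one sub-folder per class.
train_ds = keras.utils.image_dataset_from_directory(
    "tem_particles/train",  # illustrative path, not the authors' dataset
    image_size=(224, 224),
    batch_size=32,
).map(lambda x, y: (keras.applications.densenet.preprocess_input(x), y))

# ImageNet-pretrained DenseNet169 backbone without its 1000-class head.
base = keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the new classification head

x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(base.input, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Stage 2: unfreeze the backbone and fine-tune at a low learning rate.
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The two-stage schedule (frozen backbone, then full fine-tuning at a lower learning rate) is a common convention for small domain-specific image sets; the paper's actual training schedule may differ.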

Copyright © 2024 American Association for Aerosol Research

Data availability statement

The models were developed in Python using the Keras and TensorFlow libraries. The code will be made available upon request.
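Since the code itself is not reproduced here, the sketch below shows one common way, in the same Keras/TensorFlow stack, to visualize the "critical regions" mentioned in the abstract: a Grad-CAM heatmap over the last convolutional features. Grad-CAM is an assumption for illustration (the abstract does not name the attribution method), and the default layer name corresponds to Keras' DenseNet169 as built in the sketch above.

```python
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, image, conv_layer_name="conv5_block32_concat"):
    """Grad-CAM heatmap (H x W, values in [0, 1]) for one preprocessed image.

    Assumes the named conv layer is reachable directly in `model`'s graph,
    as in the fine-tuning sketch above; the default name is the last
    concatenation layer of Keras' DenseNet169.
    """
    # Model mapping the input to (conv feature maps, class predictions).
    grad_model = keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_idx = tf.argmax(preds[0])
        score = tf.gather(preds[0], class_idx)  # score of the predicted class

    grads = tape.gradient(score, conv_out)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # per-channel importance
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)[0]
    cam = tf.nn.relu(cam)                         # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

Upsampling the returned heatmap to the input resolution and overlaying it on the TEM image highlights which particle regions drove the prediction, which is how such per-model differences are typically compared.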

Acknowledgments

We are grateful to our collaborator, Prof. Haegon Jeon, for sharing his expertise.

Disclosure statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Funding

This research was supported by National Research Foundation of Korea (NRF) grants funded by the Korean government (MSIT; the Ministry of Science and ICT) (NRF-2019R1A2C3007202, NRF-2019M1A2A2103956, and NRF-2021M1A5A1065667) and by the Samsung Advanced Institute of Technology (SAIT).
