Research Article

Inspection of Cotton Woven Fabrics Produced by Ethiopian Textile Factories Through a Real-Time Vision-Based System


ABSTRACT

Fabric is produced by the weaving process, through the interlacement of warp and weft yarn, or by the knitting process, through the loop formation of yarn. During these processes, fabric defects may form, which hinders acceptance by fabric consumers. Ethiopian textile factories practice human inspection, a traditional means of checking fabric quality, for monitoring textile fabric defects. Manual fabric defect detection helps to instantly correct small defects, but it is time-consuming and prone to human error due to fatigue and lack of concentration. Moreover, the accuracy of recognizing a defect depends heavily on the mental state of the person who checks for defects. This motivated the development of a better fabric defect identification system that helps textile experts detect fabric defects with better precision and speed. This study proposes a vision-based fabric inspection system for plain woven gray fabrics with a uniform texture. Accordingly, a comprehensive Fabric Defect Detection Database (FDDD) was constructed. The significant fabric features were computed using a convolutional neural network (CNN), a state-of-the-art technology in image processing and analysis tasks. The experimental results of this study show an average accuracy of about 89% in fabric defect recognition.

Introduction

The textile manufacturing process consists of a series of complex and orderly stages such as spinning, weaving or knitting, dyeing, printing and finishing, and garment manufacturing. The stability and quality of the textile fabric produced by the whole production line are crucial to the enterprise (Kumar Citation2011). Many factors affect the final product on a textile production line, including raw material quality, mechanical factors, dye type, yarn size, and human factors (Xu et al. Citation2020). In general, textile fabric defects refer to defects on the surface of the fabric. There are many types of fabric defects, most of which are caused by process problems and machine malfunctions. Defects affect the quality of the final product, resulting in a great waste of all kinds of resources (Bullón et al. Citation2017; Sadik and Islam Citation2014). Fabric with defects loses 40–70% of its price (Shady et al. Citation2006), and a loss of about 30% also occurs in the transportation system (Yu, Zhaoyang, and Jing Citation2009). As replacing defective fabric is difficult, defect detection has to be implemented at each stage of the manufacturing process, because defects introduced at an earlier stage propagate to later stages. Therefore, fabric defects should be identified, classified, and controlled at each processing step to reduce enterprise losses as early and as quickly as possible (Singh and Jaspreet Citation2016; Zheng, Yu, and Jin Citation2013).

Thus, effective fabric defect detection is one of the key measures for modern fabric manufacturers to control costs, enhance product value, and strengthen core competitiveness (Shahrabadi et al. Citation2022). In the traditional method, inspection to identify fabric defects is carried out by workers at a quality control table, with a high-power light source illuminating the fabric while they check for the presence of defects. This method has high power consumption and causes stress and fatigue to workers' eyes during visual inspection aimed at high productivity (Fan et al. Citation2021). Moreover, visual inspection is an inadequate and expensive process: it is highly time-consuming and has very low fabric defect detection accuracy. As the manual inspection method is tedious, automatic fabric defect detection is necessary for the textile industry to reduce costs and increase productivity (Li et al. Citation2021).

The core of a complete online textile fabric defect detection system is its detection algorithms. Many researchers and engineers in this field have devoted themselves to the design of robust and efficient algorithms over the past few decades (Mahajan, Kolhe, and Patil Citation2009; Yaşar Çıklaçandır, Utku, and Özdemir Citation2019). Compared to manual fabric defect detection, automatic detection systems are more effective and more efficient (Li et al. Citation2021). This study aims to improve defect detection accuracy and efficiency through an advanced vision-based fabric inspection system for the textile industry.

Generally, fabric defect detection algorithms can be categorized as traditional algorithms and learning-based algorithms (Li et al. Citation2021). The traditional algorithms are based on feature engineering with prior knowledge, using statistical, structural, spectral, and model-based methods (Alper, Vural, and Hakan Citation2014; Zhao et al. Citation2020; Çıklaçandır, Utku, and Özdemir Citation2022). The learning-based algorithms can be further divided into classical machine learning algorithms and deep learning algorithms. Machine learning uses mathematical algorithms to learn from and analyze data in order to make predictions and decisions; it has been widely employed in recent years and has achieved satisfying results in various disciplines and industries (Yaşar Çıklaçandır, Utku, and Özdemir Citation2021). Deep learning algorithms have been applied to fabric defect detection problems and have achieved satisfying results in improving textile product quality and production efficiency (Barua et al. Citation2020; Subrata et al. Citation2020). A prototype fabric defect recognition system has been formulated based on identifying the fabric using its texture and surface morphological structure features. Deep learning architectures such as AlexNet (Krizhevsky, Ilya, and Geoffrey Citation2017) and VGGNet-16 (Tammina Citation2019) use the CNN model as a potential candidate for extracting features from fabric images.

Deep learning-based object detectors can be classified into one-stage and two-stage detectors (Wu, Doyen, and Steven Citation2020). A one-stage detector has fast detection speed but may fail to meet detection accuracy requirements, whereas two-stage algorithms show the opposite trade-off. Both types of detection algorithm thus have complementary advantages and disadvantages in fabric defect detection. Since higher detection accuracy and faster detection speed are both required in the textile industry, the algorithm should be selected according to the actual application scenarios and requirements to find the balance between efficiency and accuracy (Li et al. Citation2021). Recently, the single-shot multibox detector (SSD) based on a convolutional neural network (CNN) has obtained good performance in object detection. Some improvements have been made for the fabric defect scenario, and the experimental results show rationality and effectiveness (Dlamini et al. Citation2022; Liu et al. Citation2018).

Even though deep learning methods have had a huge impact on the segmentation and classification of defects, they still cannot meet multiscale defect detection requirements in practical applications (Zhou, Li, and Liang Citation2021). Many studies reveal that even the best models are still troubled by the large size of the problem (Rasheed et al. Citation2020). In most cases, the datasets used to verify the proposed models were based on online data, such as TILDA, rather than on real data obtained directly from the industry. Furthermore, training deep learning models is challenged by the difficulty of obtaining defective image data compared to normal defect-free samples (Li, Weigang, and Jiahao Citation2017). This has reduced the accuracy of existing detection models. The actual textile production line also requires high real-time performance from the algorithm, which demands high execution efficiency.

In this study, a large number of plain-woven fabric images, both with major defects and defect-free, were collected, and a prototype CNN model was developed to examine the defect types offline and/or in real time. The accuracy and convergence time of the newly proposed CNN model are compared to the VGGnet model. The study provides an improved way to inspect and detect plain woven fabric defects either online or offline.

Materials and methods

Materials and implementation tools

In this study, pure cotton gray plain-woven fabrics, a Canon 450D camera, a lighting system, a transport encoder, frame grabbers, and a computer were used. The research was undertaken in MATLAB student version R2022a, running on the Microsoft Windows 10 64-bit operating system of an HP Pavilion laptop. The laptop embeds an Intel(R) Core(TM) i7-4710MQ CPU @ 2.60 GHz processor with 8 GB RAM.

Methods

Dataset preparation

The study follows an experimental research design to develop a fabric defect recognition system using a deep learning approach. Fabric images with different fabric orientations, noise levels, camera orientations, and magnification powers were collected. Four typical fabric classes, namely defect-free, miss-pick, double-pick, and hole, were selected for analysis in this study. About 600 sample fabric images for each class were collected directly from Bahir Dar Textile Share Company. Samples of the collected images are shown in . Among the collected fabric images, 70% were used for training, whereas 30% were used for testing the performance of the proposed models. A total of 20 different experiments were conducted using different combinations of the datasets obtained through random splitting.
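The 70/30 random split described above can be sketched as follows. This is an illustrative Python sketch, not the study's MATLAB implementation; the folder and file names are hypothetical placeholders.

```python
import random

def split_dataset(samples, train_frac=0.7, seed=0):
    """Randomly split a list of (image_path, label) pairs into train/test sets."""
    rng = random.Random(seed)          # fixed seed so each of the 20 experiments is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# ~600 images per class for the four classes used in the study (paths are hypothetical)
classes = ["defect_free", "miss_pick", "double_pick", "hole"]
samples = [(f"{c}/img_{i}.jpg", c) for c in classes for i in range(600)]

train, test = split_dataset(samples, train_frac=0.7, seed=42)
```

Re-running the split with different seeds yields the different dataset combinations used across the 20 experiments.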

Figure 1. The fabric image captured using a digital camera: (a) defect-free, (b) hole, (c) miss pick, and (d) double pick.

Digitalized defective and defect-free fabric images were first preprocessed, then features were extracted using the optimal feature extraction techniques, and finally the optimal features were identified with the best pattern extraction techniques, as shown in . In doing so, the study follows image processing steps such as image analysis and understanding to automatically classify the fabric into its respective classes.

Figure 2. A structure of the automated fabric inspection system flow chart.

Data augmentation

Over-fitting and vanishing gradients are among the most common challenges in designing a CNN model. Their effects on image data are reduced by artificially enlarging the dataset using label-preserving transformations. Accordingly, the dataset size was enlarged by applying augmentation techniques in this study.
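Label-preserving transformations of the kind mentioned above can be sketched as simple geometric operations that change the image without changing its defect class. The specific transforms below (flips and 90-degree rotations) are illustrative assumptions, since the paper does not list which augmentations were applied.

```python
import numpy as np

def augment(image):
    """Generate label-preserving variants of a fabric image: flips and 90-degree rotations.
    None of these change whether (or which) defect is present."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
augmented = augment(img)   # 6 variants from one source image
```

Applying such transforms to each captured image multiplies the effective dataset size several-fold at no extra collection cost.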

Pre-processing

The pre-processing component is responsible for preparing the fabric image for the main image processing activities. Undesired distortions appearing on the fabric images, such as rotation, scaling, zooming, and blurring, are corrected. Mainly, two tasks are performed in this step: image size normalization and image quality enhancement.

Image size normalization

As image size is not used as a feature for categorization in the proposed design, the different fabric images need to be resized to a common size for training and testing the proposed CNN model. Thus, all captured fabric images are normalized to a fixed size, as they do not initially have the same size. Skew angle detection and correction were also undertaken in this process.
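The size normalization step can be sketched as a simple resize to a fixed input shape. This is an illustrative nearest-neighbor stand-in for the resizing the study performed in MATLAB (e.g. with `imresize`); the 227×227 target size is an assumption borrowed from AlexNet-style inputs, not stated in the paper.

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbor resize so all fabric images share one input size."""
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # map each output row to a source row
    cols = np.arange(out_w) * in_w // out_w   # map each output column to a source column
    return image[rows][:, cols]

img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
normalized = resize_nearest(img, 227, 227)   # fixed network input size (assumed)
```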

Image quality enhancement

Due to various reasons, the quality of the captured images does not always show all the features needed to discriminate between the different fabric images. The input image quality is improved using appropriate noise filtering techniques, whose effectiveness is assessed with the PSNR (peak signal-to-noise ratio) and the MSE (mean square error). The MSE and PSNR are calculated according to Equations 1 and 2.

(1) $MSE = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[I(i,j)-K(i,j)\right]^{2}$ (1)
(2) $PSNR = 10\log_{10}\left(\frac{X^{2}}{MSE}\right)$ (2)

where $I(i,j)$ is the original image, $K(i,j)$ is the filtered image of size $m \times n$, and $X$ is the maximum possible pixel value.

A higher PSNR value indicates a better noise removal technique, and the inverse holds for the MSE. In addition, the brightness level of the fabric images was enhanced, and the effect of illumination was normalized through histogram equalization techniques in this study.

Feature extraction

The descriptive features of the fabric sample such as color, size, shape, and texture features are used to design the recognizer system. The hierarchical fabric image feature is extracted directly from the fabric image through the CNN model. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers (Yamashita et al. Citation2018).

Model evaluation

The fabric defect detection network model is evaluated by classification quality and test time, where quality is measured by precision, recall, and error rate. The test time refers to the time the algorithm takes to test one image; the shortest test time has important practical significance. Accordingly, precision, recall, and error rate are defined in Equations 3, 4, and 5, respectively.

(3) $Precision = \frac{TP}{TP+FP}$ (3)
(4) $Recall = \frac{TP}{TP+FN}$ (4)
(5) $Error\ rate = \frac{FP+FN}{TP+TN+FP+FN}$ (5)

Where TP is the number of samples correctly predicted as positive, TN is the number of samples correctly predicted as negative, FP is the number of samples predicted as positive but actually negative, and FN is the number of samples predicted as negative but actually positive, as shown in .
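Equations 3–5 can be computed directly from the four confusion-matrix counts. The counts in the example below are illustrative only, not the study's confusion matrix:

```python
def metrics(tp, tn, fp, fn):
    """Precision, recall, and error rate from confusion-matrix counts (Equations 3-5)."""
    precision = tp / (tp + fp)                     # Eq. 3
    recall = tp / (tp + fn)                        # Eq. 4
    error_rate = (fp + fn) / (tp + tn + fp + fn)   # Eq. 5
    return precision, recall, error_rate

# Hypothetical counts for illustration
p, r, e = metrics(tp=90, tn=85, fp=10, fn=15)
# p = 0.9, r ≈ 0.857, e = 0.125
```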

Table 1. Precision parameter definitions.

Result and discussion

In this study, the major factors affecting the performance of the designed model were identified. Image quality, the environmental conditions of fabric image digitalization, camera orientation, and magnification power are some of the major factors affecting the performance of the proposed model. Using the RGB images of the fabric, the experimental studies show an accuracy improvement from 75.2% to 82.44% with the AlexNet model, as shown in .

Figure 3. The accuracy performance of the AlexNet model.

A new fabric defect recognition model was developed in this study, achieving a recognition performance of 88.14%, a training time of 4 h, and an average elapsed time of 25 s for the recognition of a new, unseen fabric image. According to the experimental study, the newly proposed model yields better recognition performance than the existing AlexNet model for the fabric identification system. depicts the recognition performance obtained from the newly proposed model.

Figure 4. The performance of the newly proposed CNN model: (a) model accuracy, (b) loss of the proposed model.

The fabric defect identification performance of the newly proposed CNN model is summarized in . Of the 312 tested images, 275 (88.1%) were correctly classified and 37 (11.9%) were incorrectly classified. The classification accuracies for class 1 (defect-free), class 2 (hole), class 3 (miss pick), and class 4 (double pick) fabric images were 89.8%, 98.9%, 95.2%, and 71.9%, respectively, with an overall recognition accuracy of 89%. This indicates that the model performs best on hole defects and worst on double-pick defects. The latter may result from the pattern similarity between double-pick defects and defect-free fabric images in some cases; this was mainly observed when uneven warp or weft yarns were inserted as a double pick and the system treated them as defect-free images. shows an image of a double-pick fabric defect (marked in blue) with low pattern intensity, which is difficult to distinguish from defect-free images. This is not the only reason for the low performance on this defect, and further study needs to be undertaken in the future.

Figure 5. Double-pick fabric defect with low intensity of the patterned defect.

Table 2. The confusion matrix of the newly proposed CNN model for fabric defect identification system.

Feature extraction that is invariant to image quality variations caused by illumination, scaling, and deformation is critical to developing an efficient recognition system. In this study, different experimental schemes were implemented to identify the optimal feature extraction technique. In the first scheme, grayscale fabric images were used to extract features in less training and testing time: compared to the three channels of RGB, processing one grayscale channel reduces recognition time roughly threefold. However, this yields lower performance than the RGB fabric images because of errors introduced during grayscale conversion, especially for double picks and miss picks. Thus, another, localized channel conversion system would need to be imposed to make grayscale an efficient scheme for a fabric defect recognition system.
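The channel reduction behind the threefold speed-up can be sketched as a standard luminance-weighted conversion. This is a generic sketch (the ITU-R BT.601 weights used by MATLAB's `rgb2gray`), not necessarily the exact conversion used in the study:

```python
import numpy as np

def rgb_to_gray(image):
    """Luminance-weighted grayscale conversion: collapses three channels into one,
    which is what cuts per-image processing roughly threefold."""
    weights = np.array([0.2989, 0.5870, 0.1140])   # BT.601 luma weights
    return (image.astype(np.float64) @ weights).astype(np.uint8)

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
gray = rgb_to_gray(rgb)   # shape (64, 64): one third of the RGB data volume
```

The trade-off noted above is visible here: low-contrast double-pick patterns that differ mainly in hue can collapse to nearly identical gray values.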

Factors affecting the performance of the CNN model, such as the number of convolutional layers, pooling layers, and fully connected layers, the activation functions, and the hyperparameters, were considered during initial pilot tests. Restructuring and limiting the learnable parameters and hyperparameters improved the performance of the CNN model (Khan et al. Citation2020). Accordingly, the best result was registered when 11 hidden layers were used to construct the CNN model in this study. A significant performance improvement was observed when batch normalization layers were used after each convolutional layer, immediately adjacent to the nonlinearity layers. The effects of different hyperparameters, such as the size and number of learnable kernels, the stride, and the training option parameters, were also considered in this study.
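The convolution → batch normalization → nonlinearity ordering described above can be illustrated with a minimal single-channel block. This is a from-scratch numpy sketch of the layer pattern, not the study's 11-layer MATLAB network; learned scale/shift parameters and batch statistics are omitted for brevity.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Single-channel 'valid' convolution (cross-correlation), as in a CNN conv layer."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize activations to zero mean / unit variance (learned scale/shift omitted)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

# One hidden block in the order the study found effective:
# convolution -> batch normalization -> ReLU nonlinearity
x = np.random.rand(16, 16)
k = np.random.rand(3, 3)
out = relu(batch_norm(conv2d_valid(x, k)))
```

Normalizing just before the nonlinearity keeps the activations in the range where ReLU gradients are informative, which is one common explanation for the training improvement observed.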

The results obtained in this study were compared to those reported in the literature in order to obtain the optimal CNN model for the fabric recognition system. According to the literature review, VGGnet and AlexNet were identified as potential candidates for designing fabric recognition systems; VGGnet reduced the total error rate registered by AlexNet from 15.3% to 7.3%. Thus, this study based its CNN design on the VGGnet and AlexNet architectures to extract features from the woven fabrics. The accuracy of the fabric defect recognition systems is shown in .

Figure 6. The accuracy comparison of CNN models: VGGNet, AlexNet, and the newly proposed model.

Fewer parameters to learn during training improve the ability to converge faster and reduce overfitting problems (Karen and Andrew Citation2015). The experimental results of this study show that the VGGnet model takes more than 6 training hours, while the newly proposed model takes only 4 h. Therefore, the newly designed model converges faster than the VGGnet model, as shown in . This is mainly because it has far fewer learnable parameters than the VGGnet model.

Figure 7. The comparison chart of the VGGNet, AlexNet, and newly proposed models by computational time.

Conclusion

Consumer demand for quality, defect-free fabrics has increased rapidly in recent years and now receives higher priority than ever. To ensure this consistently, manufacturers have implemented various methods, including 100% inspection. However, this is impractical due to the several drawbacks of visual offline systems. To overcome this, online automated fabric inspection, a continuous computer-based system operated in real time, has been introduced as an alternative, even though a perfect system has not been developed yet. Thus, this study used a large number of real fabric images and proposed a new model for fabric defect recognition systems. Various plain fabric images, with major defects and defect-free, were collected, and a prototype CNN model was developed to examine the defect types offline or in real time (during production of the fabric on the weaving machine). The newly proposed CNN model achieved an accuracy of more than 89% and a convergence time about 33% faster than the VGGnet model. This study provides an improved way to inspect and detect plain fabric defects either online or offline. The authors recommend expanding the inspection method to other fabric structures such as twill, sateen, and knitted fabrics, and to other patterns such as striped and checked fabrics.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Ethiopian Institute of Textile and Fashion Technology, Bahir Dar University.

References

  • Alper, S. M., A. Vural, and Ö. Hakan. 2014. “Textural Fabric Defect Detection Using Statistical Texture Transformations and Gradient Search.” The Journal of the Textile Institute 105 (9): 998–11. https://doi.org/10.1080/00405000.2013.876154.
  • Barua, S., P. Hemprasad, D. D. Parth, and A. Manoharan. 2020. “Deep Learning-Based Smart Colored Fabric Defect Detection System.” In Applied Computer Vision and Image Processing, edited by B. Iyer, A. M. Rajurkar, and V. Gudivada, 212–219. Singapore: Springer Singapore.
  • Bullón, P. J., A. A. González, E. A. Hernández, and A. Queiruga-Dios. 2017. “Manufacturing Processes in the Textile Industry. Expert Systems for Fabrics Production.” ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 6 (1): 41–50. https://doi.org/10.14201/ADCAIJ2017614150.
  • Çıklaçandır, F. G., S. Utku, and H. Özdemir. 2022. “The Effect of Wavelet Transform for Woven Fabric Defect Classification.” Industria Textila 73 (2): 165–170. https://doi.org/10.35530/IT.073.02.202030.
  • Dlamini, S., C. Y. Kao, S. L. Su, and C. F. Jeffrey Kuo. 2022. “Development of a Real-Time Machine Vision System for Functional Textile Fabric Defect Detection Using a Deep YOLOv4 Model.” Textile Research Journal 92 (5–6): 675–690. https://doi.org/10.1177/00405175211034241.
  • Fan, J., W. K. Wong, J. Wen, C. Gao, D. Mo, and Z. Lai. 2021. “Fabric Defect Detection Using Deep Convolution Neural Network.” AATCC Journal of Research 8 (1): 143–150. https://doi.org/10.14504/ajr.8.S1.18.
  • Karen, S., and Z. Andrew. 2015. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” ArXiv - Computer Vision and Pattern Recognition 1409.1556 (6):1–14. https://doi.org/10.48550/arXiv.1409.1556.
  • Khan, A., S. Anabia, Z. Umme, and S. Aqsa. 2020. Artificial Intelligence Review A Survey of the Recent Architectures of Deep Convolutional Neural Networks, 53. Netherlands: Springer.
  • Krizhevsky, A., S. Ilya, and E. H. Geoffrey. 2017. “ImageNet Classification with Deep Convolutional Neural Networks.” Communications of the ACM 60 (6): 84–90. https://doi.org/10.1145/3065386.
  • Kumar, A. 2011. “Computer Vision-Based Fabric Defect Analysis and Measurement Hu J.” In Computer Technology for Textiles and Apparel: Woodhead Publishing Series in Textiles, 45–65. Cambridge, UK: Woodhead Publishing.
  • Li, C., J. Li, Y. Li, L. He, X. Fu, and J. Chen. 2021. “Fabric Defect Detection in Textile Manufacturing: A Survey of the State of the Art.” Security and Communication Networks 2021:1–13. 9948808. https://doi.org/10.1155/2021/9948808.
  • Liu, Z., S. Liu, C. Li, S. Ding, Y. Dong. 2018. “Fabric Defects Detection Based on SSD.” ACM International Conference Proceeding Series. NSW, Sydney, Australia, 2, 74–78.
  • Li, Y., Z. Weigang, and P. Jiahao. 2017. “Deformable Patterned Fabric Defect Detection with Fisher Criterion-Based Deep Learning.” IEEE Transactions on Automation Science and Engineering 14 (2): 1256–1264. https://doi.org/10.1109/TASE.2016.2520955.
  • Mahajan, P. M., S. R. Kolhe, and P. M. Patil. 2009. “A Review of Automatic Fabric Defect Detection Techniques.” Advances in Computational Research 1 (2): 18–29.
  • Rasheed, A., B. Zafar, A. Rasheed, N. Ali, M. Sajid, S. H. Dar, U. Habib, T. Shehryar, and M. T. Mahmood. 2020. “Fabric Defect Detection Using Computer Vision Techniques: A Comprehensive Review.” Mathematical Problems in Engineering 2020:1–24. 8189403. https://doi.org/10.1155/2020/8189403.
  • Sadik, S., and S. Islam. 2014. Defects of Woven Fabrics and Their Remedies. Daffodil, Dhaka, Bangladesh: Daffodil International University.
  • Shady, E., Y. Gowayed, M. Abouiiana, S. Youssef, C. Pastore. 2006. “Detection and Classification of Defects in Knitted Fabric Structures.” Textile Research Journal 76 (4): 295–300. https://doi.org/10.1177/0040517506053906.
  • Shahrabadi, S., Y. Castilla, M. Guevara, L. G. Magalhães, D. Gonzalez, and T. Adão. 2022. “Defect Detection in the Textile Industry Using Image-Based Machine Learning Methods: A Brief Review.” Journal of Physics Conference Series 2224:12010. https://doi.org/10.1088/1742-6596/2224/1/012010.
  • Singh, K., and K. Jaspreet. 2016. “Identification and Classification of Fabric Defects.” International Journal of Advanced Research 4 (8): 1137–1141. https://doi.org/10.21474/IJAR01/1314.
  • Subrata, D., W. Amitabh, S. Keerthika, and N. Thulasiram. 2020. “Defect Analysis of Textiles Using Artificial Neural Network.” Current Trends in Fashion Technology & Textile Engineering 6 (1): 1–5. https://doi.org/10.19080/CTFTTE.2019.05.555677.
  • Tammina, S. 2019. “Transfer Learning Using VGG-16 with Deep Convolutional Neural Network for Classifying Images.” International Journal of Scientific & Research Publications (IJSRP) 9 (10): 9420. https://doi.org/10.29322/IJSRP.9.10.2019.p9420.
  • Wu, X., S. Doyen, and C. H. H. Steven. 2020. “Recent Advances in Deep Learning for Object Detection.” Neurocomputing 396:39–64. https://doi.org/10.1016/j.neucom.2020.01.085.
  • Xu, X., C. Dan, Z. Yu, and G. Jun. 2020. “Application of Neural Network Algorithm in Fault Diagnosis of Mechanical Intelligence.” Mechanical Systems and Signal Processing 141:106625. https://doi.org/10.1016/j.ymssp.2020.106625.
  • Yamashita, R., M. Nishio, R. K. G. Do, and K. Togashi. 2018. “Convolutional Neural Networks: An Overview and Application in Radiology.” Insights into Imaging 9 (4): 611–629. https://doi.org/10.1007/s13244-018-0639-9.
  • Yaşar Çıklaçandır, F. G., S. Utku, and H. Özdemir. 2019. “A Survey on Woven Fabric Defects.” Annals of the University of Oradea Fascicle of Textiles, Leatherwork 20 (2): 113–118.
  • Yaşar Çıklaçandır, F. G., S. Utku, and H. Özdemir. 2021. “Fabric Defect Classification Using Combination of Deep Learning and Machine Learning.” Journal of Artificial Intelligence and Data Science (JAIDA) 1 (1): 22–27.
  • Yu, Z., L. Zhaoyang, and L. Jing 2009. “Fabric Defect Detection and Classification Using Gabor Filters and Gaussian Mixture Model.” In 9th Asian Conference on Computer Vision, edited by Z. Hongbin, T. Rin-Ichiro, and M. Stephen, 9, 635–644. Xi’an, China.
  • Zhao, S., L. Yin, J. Zhang, J. Wang, and R. Zhong. 2020. “Real-time fabric defect detection based on multi-scale convolutional neural network.” IET Collaborative Intelligent Manufacturing 2 (4): 189–196. https://doi.org/10.1049/iet-cim.2020.0062.
  • Zheng, D., H. Yu, and L. H. Jin. 2013. “A New Method for Classification of Woven Structure for Yarn-Dyed Fabric.” Textile Research Journal 84 (1): 78–95. https://doi.org/10.1177/0040517513483858.
  • Zhou, X., Y. Li, and W. Liang. 2021. “CNN-RNN Based Intelligent Recommendation for Online Medical Pre-Diagnosis Support.” IEEE/ACM Transactions on Computational Biology and Bioinformatics 18 (3): 912–921. https://doi.org/10.1109/TCBB.2020.2994780.