An automated approach for fibroblast cell confluency characterisation and sample handling using AIoT for bio-research and bio-manufacturing

Article: 2240087 | Received 13 Mar 2023, Accepted 19 Jul 2023, Published online: 02 Aug 2023

Abstract

Current methods for cell culture monitoring, characterisation and handling are manual, time-consuming and highly dependent on subjective observations made by human operators, resulting in inconsistent outcomes. This project focuses on developing an automated system for cell growth analysis, utilising Artificial Intelligence of Things (AIoT) for use in bio-manufacturing and bio-research. The proposed AIoT system applies a U-Net convolutional neural network (CNN) model for fibroblast cell segmentation to monitor confluency and incorporates a mechanical robotic arm for automated sample handling. An Intel Movidius Neural Compute Stick 2 (NCS2) and the OpenVINO Toolkit were used to enable standalone deployment on an UP2 Squared board and a Raspberry Pi board integrated with a digital microscope system. The robotic arm was programmed to pick, place and sort the cell samples within the working environment. The developed CNN model achieved an accuracy of 95% and an Intersection over Union (IoU) of 66%. The OpenVINO Toolkit reduced power consumption and accelerated segmentation, allowing a 2K image to be processed in less than 13 seconds. The AIoT cell detection and characterisation system is able to automatically analyse the cell culture while reducing manual sample handling by laboratory personnel. It is hoped that this AIoT automated cell detection and characterisation system will have a positive impact and contribute towards the implementation of the Industrial Revolution IR4.0 in bio-based research and industries.

1. Introduction

Cell culture is the process of cultivating cells in an artificial environment under certain conditions until they reach a specific growth rate. It is an essential process used in the bio-manufacturing industry to produce a variety of biologics, including vaccines, therapeutic proteins, and monoclonal antibodies (Kantardjieff & Zhou, Citation2014). In cell culture biology, the term confluency describes the percentage of cells covering the surface area of a culture flask or petri dish. When confluency reaches a certain level, the cells are required to be separated into new cell cultures (subculture) to enable further expansion and continuous growth of the cells (Greb, Citation2017).

However, traditional cell confluency characterisation methods have relied on manual techniques that use a microscope to analyse and monitor cell growth (Bleloch, Citation2021). The conventional approach depends heavily on subjective microscopic analysis, requiring trained medical laboratory personnel to observe cell cultures and estimate confluency levels.

The overall process requires special procedures to handle the samples and should only be carried out by trained practitioners, making it inefficient, time-consuming, and prone to human error. Daily inspection of cell cultures and monitoring of confluency typically requires 30 minutes to an hour, while the time needed for a cell culture to reach 100% confluency can vary from 2 to 7 days. Given the need for industrial transformation towards automation in the bio-manufacturing industry, there is a growing demand for smart applications that can improve efficiency and minimise risks in cell culture production (Moutsatsou et al., Citation2019).

This research focuses on analysing fibroblast cells, a type of mammalian cell widely used in human skin treatment (Cell Culture Basics Handbook, n.d.). The objectives of this project are to develop a standalone fibroblast cell confluency monitoring system using Artificial Intelligence of Things (AIoT) and deep learning algorithms, to simplify bio-related analysis procedures so as to save time and yield faster, more accurate results, and to enable an automated workflow for the bio-manufacturing process. This project aims to provide a substantial contribution to cell culture monitoring and handling through the implementation of AIoT applications in bio-research and bio-processing towards IR4.0.

In microscopic applications, cell detection and characterisation are difficult tasks due to abnormalities in microscopic images. Image processing techniques have therefore become essential for obtaining accurate cell classifications. Lohana and Rajalakshmi (Citation2020) discussed image processing techniques that are particularly useful for feature extraction, proposing image sharpening using kernel filters as well as the Canny edge detector, which performs edge detection accurately by manipulating the gradient value of each pixel. Meanwhile, Wu et al. (Citation2008) proposed image enhancement algorithms to improve the overall visualisation of an image: contrast stretching and histogram equalisation were used to adjust colour distribution, while a median filter was used to reduce noise and produce a smoother image. Furthermore, image segmentation algorithms have been used to detect edges, intensity discontinuities and homogeneous areas; by thresholding, foreground and background pixels are separated into two levels, each of which is assigned a different grey-level value (Forero & Hidalgo, Citation2011).

On the other hand, several CNN approaches have been exploited to analyse cell cultures and microscope images. ResNet152, Inception-v3 and Inception-ResNet-v2 are deep architectures that can extract deep features with high accuracy to classify cell types; however, there is noticeable variation in the results obtained from different datasets, indicating that performance depends on the image features (Nguyen et al., Citation2018). Meanwhile, a VGG-16 CNN model was developed to classify actin filament networks in cell cultures and to diagnose breast cancer. The experimental results showed that humans can identify actin filaments with an accuracy of 78%, whereas the model achieved 97% accuracy. Nevertheless, the model requires human supervision due to its insensitivity to cell type variation (Oei et al., Citation2019). A real-time system was developed using YOLO based on Darknet-53 to localise and quantify nuclei. Although the system performed well in detecting and counting small nuclei, some errors occurred when detecting defective nuclei (Su et al., Citation2020). Rettig et al. (Citation2019) developed a supervised learning computer vision (SLCV) system to analyse muscle fibres; the presented error analysis showed that supervised learning surpassed computer vision, and the combination of the two methods in a step-by-step implementation achieved accurate detection results. El Ariss et al. (Citation2016) presented object detection and classification using R-CNN to diagnose and count sickle cells in blood samples. The R-CNN method utilises a CNN for feature extraction and support vector machines (SVM) for classification, resulting in over 90% accuracy in detecting sickle cells. However, the authors recommended applying image enhancement, morphological operations and image segmentation to improve the process, as R-CNN could not accurately detect sickle cells among overlapping blood cells. Meanwhile, for cell characterisation, Limon-Cantu and Alarcon-Aquino (Citation2021) proposed an anomaly detection system that utilises Dendritic Cell Algorithms (DCA) and Multiresolution Analysis (MRA) to categorise signal data as normal or anomalous. The model achieved high accuracy in detecting anomalous behaviour indicating agents invading immune system cells; however, the approach is highly dependent on pre-defined parameters, which limits performance under high feature variation. Deshpande et al. (Citation2021) reviewed several methods, including image processing and supervised and unsupervised machine learning algorithms, for blood cell analysis. The review concluded that choosing a suitable method depends on the complexity of the analysis: simple tasks such as distinguishing white blood cells from red blood cells can be completed using image processing techniques or k-means clustering, whereas complex applications like leukaemia detection and blood group identification require advanced machine vision techniques and supervised machine learning approaches. In their extensive review, Malik et al. (Citation2023) evaluated the strengths and limitations of various image analysis techniques for mammalian cells; their findings indicated that deep learning outperforms traditional computer vision and machine learning methods.

For medical image segmentation, the U-Net architecture has shown accurate cell segmentation results despite being trained on limited datasets. U-Net's simple architecture makes training more efficient and requires less training time than other methods (Siddique et al., Citation2021). An enhanced U-Net model called Mc-Unet was proposed by Hu et al. (Citation2020), in which several modifications were applied to the original U-Net to achieve better performance on images with deformed cell shapes, high cell density and low contrast. A performance analysis of several CNN models used to segment lung tissue in COVID-19 patients was presented by Iyer et al. (Citation2021). Of the four models used (U-Net, SegNet, VGG UNet and HR-Net), HR-Net achieved the best accuracy, whereas U-Net achieved the best inference speed, three times faster than HR-Net.

Although many approaches have been developed to analyse microscope images, little research has focused specifically on cell culture confluency. One such system classifies confluency for iPS cells using deep learning techniques. The system is based on the ResNet-50 CNN architecture and categorises the status of cell cultures into low, medium and high growth; this approach scored an accuracy of 85% despite the limited training dataset (Chu et al., Citation2020). As for model optimisation, the Intel OpenVINO toolkit provides inference engine tools that can run deep learning model inference efficiently (van der Aalst et al., Citation2019). The reported performance results demonstrated that the OpenVINO inference engine can make prediction three times faster than Caffe-based implementations.

Meanwhile, to automate sample handling, a particularly useful control method is kinematic control. This method drives robotic motion based on coordinate geometry and can be achieved by manipulating joint angles and linkages (forward kinematics) or the X, Y, Z coordinates of the gripper (inverse kinematics) (Toquica et al., Citation2017). Procter and Lindo Secco (Citation2021) proposed a robotic arm with a biomimetic design that incorporates brushless DC (BLDC) motors and ODrive control for teleoperation and biomedical applications. The high-power BLDC motors provide the robot with high torque, while the ODrive controller keeps the speed at a manageable level. However, this approach does not provide high precision, since the gearboxes used prioritise speed and strength over precision. A cell culture handling platform called “StemCellFactory” was proposed by Doulgkeroglou et al. (Citation2020), which combines an automated platform with machine learning to assess cell cultures and allows for full automation. The system is equipped with a robotic arm that can move cell cultures between sample dispensers and a washing station. However, its drawback is that it is highly customised and requires a large deployment space, making it unsuitable for smaller research laboratories.

Razdan and Sharma (Citation2022) provided a summary of recent developments in the field of the Internet of Medical Things (IoMT), including advanced IoMT technologies and their potential applications in healthcare. The authors highlighted the benefits of IoMT applications, such as the development of personalised medication and reduced healthcare costs, and stressed the importance of these technologies in addressing potential inequalities and providing access to healthcare. Meanwhile, Mavrogiorgou et al. (Citation2023) highlighted the importance of generating data from sensors and IoMT devices and its significant impact on holistic health records. This enables a better health ecosystem that transforms data into actionable knowledge, opening possibilities for personalised medicine, disease prevention and better management of hospital readmissions. A smart healthcare framework proposed by Umer et al. (Citation2022) uses sensor-generated data to monitor patients’ heart conditions based on IoT and cloud technologies along with deep learning techniques. The framework also shares patients’ health records and the processing results with a medical professional who can provide emergency help, improving the survival rate of patients with heart disease. Yousaf et al. (Citation2022) addressed the problem of bugs in medical IoT devices, which may generate false information and result in severe consequences for patients with brain diseases. The authors designed a hybrid approach to classify software bugs using CNN and Harris Hawk Optimisation (HHO) along with Natural Language Processing (NLP) techniques, achieving 96% accuracy for IoT bug severity classification. According to Herath and Mittal (Citation2022), the adoption of smart solutions such as Artificial Intelligence (AI) and the Internet of Things (IoT) in healthcare systems has a significant influence on automation, reducing human error, data utilisation and optimising healthcare environments and management. AI models have become highly important for accurate illness diagnosis and infection and injury detection, along with contributions to immunology, drug discovery and patient history analysis.

Although many studies have utilised computer vision and deep learning CNN algorithms to automate microscopic analysis, none of them have focused on predicting confluency in fibroblast cell culture using CNN image segmentation. Similarly, while some automation systems are available, they often require significant customisation or are designed for specific applications, limiting their adaptability to other laboratory settings. Thus, there is a need for further research to explore the potential of CNN segmentation techniques to predict and monitor confluency in fibroblast cell culture, and to develop a standalone automated sample handling system that can be easily adapted to various laboratory environments and applications.

To address this gap, an AI deep learning technique was developed to enable accurate estimation of confluency from historical visual imagery data. This approach utilises the U-Net architecture for image segmentation to analyse the condition of the fibroblast cell culture, then triggers a robotic arm to replace the cell culture flask or petri dish beneath the digital microscope when confluency exceeds 80%, without the need for human intervention. Once a sample is ready for collection, the operator receives a notification on their phone.

Our prototype provides a fully automated solution for the production of fibroblast cells. However, the design is limited to low-volume production suitable for research laboratories and small bio-manufacturing facilities. The modular design can be easily integrated with laboratory equipment, which could significantly advance the implementation of automation in the industry. It provides a low-cost, high-efficiency solution that can be adapted to existing environments. Hence, this study addresses a previously unsolved problem in the field of biomedical research and bio-manufacturing.

2. Materials and methods

This methodology section provides a detailed description of the process used to develop the proposed prototype. In this research, the aim was to resolve the difficulties encountered in the cell culture analysis and sample handling of fibroblast cell cultures. To achieve this goal, a system was developed that combines image segmentation using a U-Net deep learning model with robotics automation to monitor and handle cell cultures automatically. Moreover, this section discusses the methods used to cultivate the fibroblast cells, obtain the fibroblast cell culture dataset, prepare the data for training, develop and train the U-Net model, and deploy the system using a Raspberry Pi and an Intel Movidius Neural Compute Stick 2 (NCS2). It also discusses the hardware system development process and the steps taken to ensure the accuracy and reliability of the results obtained. The diagram in Figure 1 shows the overall system design, beginning with placing a sample under the microscope, then analysing the sample images and triggering the robotic arm to replace confluent samples.

Figure 1. Diagram of the overall system.

2.1. Cell cultures

BJ cells (ATCC CRL-2522) are fibroblast cells derived from neonatal male foreskin. The cells were cultivated in Dulbecco’s Modified Eagle Medium (DMEM) containing 10% fetal calf serum, 1% glutamine, 1% penicillin, and 0.5% amphotericin B in a humidified atmosphere with 5% CO2 at 37°C. These conditions stimulate cell growth, and cultures are expected to reach 80% to 100% confluency within 2 to 7 days.

2.2. Fibroblast cells dataset

The fibroblast cell culture image dataset was obtained using an EVOS XL Core digital microscope. The dataset consists of 39 images at 10× magnification and 15 images at 4× magnification (Figure 2). Each image has a high resolution of 2048 × 1536 pixels and contains multiple cells representing the imagery features to be segmented.

Figure 2. Sample fibroblast cells images from the dataset with 10x and 4x magnifications.

2.3. Data preparation

All the images in the dataset were manually collected using a digital microscope to ensure their quality, so no pre-processing was required to remove noise, blur, or other artefacts. The images do, however, have low contrast levels, which can often negatively affect the performance of a deep learning model. Nevertheless, since the contrast levels vary across the dataset, histogram equalisation is unnecessary; in fact, it could hinder the model’s ability to generalise effectively. Instead, the model can learn to handle the varying contrast levels in the dataset, improving its performance.

To prepare the images for training, an image annotation process is essential to transform raw images into ground-truth images that the CNN model can learn from. The nature of the application requires binary semantic annotation, which was performed manually for each image on the Apeer cloud platform. This labels the pixels into a single class, allowing the system to locate and segment the individual cells in an image and treat them as one object.

Furthermore, image patching was used to divide the large images and their ground-truth masks into smaller images for model training. This method is mainly useful for avoiding out-of-memory errors that occur when large images are trained on regular graphics processing units (GPUs). In particular, it is important to ensure that the required memory does not exceed the memory allocated for the training process. Therefore, the following formula was used to calculate the required memory and determine a suitable image size:

(1) Minimum required memory = (batch size × features) / 1024² + (total parameters × 4) / 1024²

Considering a batch size of 16 images and a float32 floating-point format, the minimum required memory for the full image (2048 × 1536) is 53.91 GB. Since this value exceeds the maximum GPU memory provided by Google Colab (12 GB), image patching using the Patchify Python library was used to divide each image into 512 × 512 images, generating 12 patches from a single 2048 × 1536 image. This process reduces the required GPU memory to 4.5 GB, which can be handled by the Google Colab GPU. A minimal sketch of this step is shown below.
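As an illustration, the patching step can be reproduced in a few lines of Python. This is a minimal sketch assuming the Patchify library and an illustrative file name, not the authors’ exact script.

```python
import cv2
from patchify import patchify

# Illustrative file name; a 2048x1536 RGB microscope image.
image = cv2.imread("fibroblast_sample.png")   # shape: (1536, 2048, 3)

# Non-overlapping 512x512 patches: the step equals the patch size.
# (1536/512) x (2048/512) = 3 x 4 = 12 patches per image.
patches = patchify(image, (512, 512, 3), step=512)
print(patches.shape)                          # (3, 4, 1, 512, 512, 3)

# Flatten the patch grid into a batch for training.
batch = patches.reshape(-1, 512, 512, 3)
print(batch.shape)                            # (12, 512, 512, 3)
```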

2.4. CNN model development and training

Although several CNN architectures perform well in cell classification and detection, the U-Net architecture was chosen to train the deep learning model due to its functionality and computational complexity. This architecture is particularly suited to cell segmentation, which is needed to accurately determine the percentage of cells in an image. Based on the original U-Net architecture, a U-Net model was constructed and modified according to the fibroblast cell dataset (Figure 3). Following the same concept, the architecture is formed of a contracting (encoder) path and an expansive (decoder) path. In the encoder path, two convolutional operations with a 3 × 3 kernel and ReLU activation are performed on the input, followed by a 2 × 2 max-pooling layer that keeps the maximum values and reduces the image size, resulting in fewer parameters. This process is performed repeatedly along the encoder path. The decoder path is similar, except that 2 × 2 up-sampling and concatenation are used instead of max-pooling to reconstruct the image. Additionally, a dropout layer is added between all the convolutional operations to avoid overfitting.

Figure 3. U-Net architecture used for model training.

Overall, the developed U-Net model consists of 40 hidden layers, fully implemented in Python using Keras. The initial weights were drawn from a normal distribution using the Keras initialiser function HeNormal(). The activation function used in all layers was ReLU, except for the output layer, which uses a sigmoid activation function, since sigmoid is more sensitive to weight changes, which is essential for obtaining accurate binary output. Moreover, the Adam optimiser was used to optimise the learning parameters and provide an optimal learning rate. A condensed sketch of this pattern is shown below.
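For illustration, the following Keras sketch shows the encoder/decoder pattern described above, with HeNormal initialisation, ReLU activations, dropout between convolutions and a sigmoid output. It is a minimal two-level example, not the authors’ full 40-layer network; the filter counts are illustrative.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters, dropout=0.1):
    # Two 3x3 convolutions with ReLU, separated by dropout.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                      kernel_initializer="he_normal")(x)
    return x

inputs = layers.Input((512, 512, 3))
c1 = conv_block(inputs, 16)
p1 = layers.MaxPooling2D(2)(c1)              # encoder: 2x2 max-pooling
c2 = conv_block(p1, 32)                      # bottleneck (truncated here)
u1 = layers.UpSampling2D(2)(c2)              # decoder: 2x2 up-sampling
u1 = layers.concatenate([u1, c1])            # skip connection
c3 = conv_block(u1, 16)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # binary mask

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```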

As for model training, the data was split into 80% for training and 20% for testing. Accuracy was calculated using the Keras model.evaluate() function, which evaluates the model on the testing data. The loss function used was binary_crossentropy, since the predictions are obtained in binary form. Additionally, Intersection over Union (IoU) was used as a second evaluation metric, indicating the accuracy of pixel segmentation with respect to falsely classified segments. IoU is given by:

(2) IoU = |A ∩ B| / |A ∪ B|, where A is the predicted segmentation and B is the ground-truth mask, so the numerator is the intersection between prediction and mask and the denominator is the union between prediction and mask.
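Equation (2) can be computed directly on binary masks; the following NumPy sketch assumes prediction and mask arrays thresholded at 0.5.

```python
import numpy as np

def iou(pred, mask, threshold=0.5):
    a = pred >= threshold                     # A: predicted cell pixels
    b = mask >= threshold                     # B: ground-truth cell pixels
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union > 0 else 1.0
```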

2.5. Pixel values for confluency estimation

For confluency estimation, the number of pixels labelled as cells by the CNN model is calculated relative to the total number of pixels in the image. This approach can accurately predict the percentage of cells covering the surface of a cell culture. The following formula shows the calculation of confluency, where TP is the total number of pixels and CP is the number of pixels labelled as cell:

(3) Confluency = (CP / TP) × 100
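In code, Equation (3) reduces to a pixel count over a binary prediction mask; the sketch below assumes a NumPy array in which non-zero entries are cell pixels.

```python
import numpy as np

def confluency(binary_mask):
    cp = np.count_nonzero(binary_mask)        # CP: pixels labelled as cell
    tp = binary_mask.size                     # TP: total number of pixels
    return cp / tp * 100

# e.g. applied to a reconstructed 2048x1536 prediction mask, this
# returns the percentage of the culture surface covered by cells.
```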

2.6. Model deployment

A Raspberry Pi together with an Intel Movidius Neural Compute Stick 2 (NCS2) was used to implement this project as a standalone system. The Raspberry Pi acts as the brain of the system, whereas the NCS2 serves as the processing unit for the imaging system. The Intel OpenVINO toolkit was utilised to optimise the system and make it deployable, by converting the Keras HDF5 model into the Intermediate Representation (IR) format, which accelerates model inference. Furthermore, to segment the full 2048 × 1536 images, a patching-unpatching technique was used to divide the original image into 12 patches, perform predictions on the individual patches, and reconstruct the prediction for the original image (Figure 4). An alternative system using an UP2 Squared board instead of the Raspberry Pi was also trialled to compare performance.

Figure 4. Illustration of the patching-unpatching technique. (a) Original image (b) Patched images (c) Patched images with predictions and (d) Unpatched (reconstructed) image.

The process was developed in Python. Initially, the input image is captured from the microscope using the OpenCV package as an RGB image with a resolution of 2048 × 1536. The image is then divided into 512 × 512 patches, the standard shape of the input layer, using the Patchify library. The resulting patches are passed to the IR model in the OpenVINO Inference Engine package to generate predictions. Finally, the patch predictions are reassembled using Patchify so that the final prediction corresponds to the original 2048 × 1536 image. A condensed sketch of this pipeline is shown below.
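The following sketch illustrates the pipeline under stated assumptions: the legacy OpenVINO Inference Engine (IECore) API, the MYRIAD plugin for the NCS2, and illustrative file names for the converted IR model.

```python
import cv2
import numpy as np
from patchify import patchify, unpatchify
from openvino.inference_engine import IECore

# Load the IR model and target the NCS2 (MYRIAD plugin).
ie = IECore()
net = ie.read_network(model="unet.xml", weights="unet.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")
input_blob = next(iter(net.input_info))

image = cv2.imread("capture.png")                   # 2048x1536 RGB frame
patches = patchify(image, (512, 512, 3), step=512)  # 3x4 grid of patches

preds = np.zeros((3, 4, 512, 512), dtype=np.float32)
for i in range(patches.shape[0]):
    for j in range(patches.shape[1]):
        patch = patches[i, j, 0].transpose(2, 0, 1)[None]   # to NCHW
        result = exec_net.infer(inputs={input_blob: patch})
        preds[i, j] = next(iter(result.values())).squeeze()

# Reassemble the 12 patch predictions into the full-frame mask.
mask = unpatchify(preds, (1536, 2048))
```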

Once image segmentation is complete and 80% confluency is detected, a notification message is sent to the personnel via the Pushbullet platform, a cloud-based messaging service that allows users to send and receive notifications across multiple devices, reporting the confluency level and showing an image of the cell culture. This allows the user to monitor confluency levels remotely from any smart device with an internet connection. A minimal sketch of this step is shown below.
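As an illustration, the notification step can be implemented against Pushbullet’s public REST API as sketched below; the access token and message fields are placeholders, not the authors’ actual values.

```python
import requests

def notify_confluency(level, token):
    # Push a note to all devices linked to the Pushbullet account.
    resp = requests.post(
        "https://api.pushbullet.com/v2/pushes",
        headers={"Access-Token": token},
        json={"type": "note",
              "title": "Cell culture alert",
              "body": f"Confluency reached {level:.1f}% - sample ready."},
    )
    resp.raise_for_status()

# e.g. after segmentation:
# if confluency_value >= 80:
#     notify_confluency(confluency_value, "o.XXXXXXXX")  # placeholder token
```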

2.7. Hardware system development for automation

When the imaging system detects 80% confluency, it triggers a six-degree-of-freedom (6DoF) robotic arm to replace the sample. A 6DoF arm was chosen for greater flexibility when transferring the cell flasks. Furthermore, since the placement of the cell flasks is consistent, the robotic arm’s movement is repetitive between designated areas. The arm was integrated with the Raspberry Pi and moves the sample based on axis inputs. This requires simultaneous control of the servo motors to ensure a smooth sample handling process. A servo driver HAT was used to integrate the six servo motors and establish I2C communication with the Raspberry Pi board, while an AC-DC power supply powered the entire system.

The servo motors were controlled using pulse width modulation (PWM), where each 500 µs increment in pulse width corresponds to a 45° increment in servo angle, ranging from 500 µs at 0° to 2500 µs at 180° (Figure 5). The robotic arm has six revolute joints and five links. To determine the range of possible end-effector positions, the transformation matrix of the robotic arm was calculated. By specifying the servo positions in a loop function, the robotic arm can be set to perform the repetitive pick-and-place task; a sketch of the pulse-width mapping is shown below.
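The pulse-width mapping described above reduces to a linear function. The sketch below is a minimal illustration, with the servo driver call left as a placeholder for whichever HAT library is used.

```python
def angle_to_pulse_us(angle):
    # Linear map: 0 deg -> 500 us, 180 deg -> 2500 us (500 us per 45 deg).
    angle = max(0.0, min(180.0, angle))
    return 500 + (angle / 180.0) * 2000

def pulse_to_duty(pulse_us, period_us=20000):
    # Duty-cycle fraction at a 50 Hz servo frame (20 ms period).
    return pulse_us / period_us

# e.g. a repetitive pick-and-place cycle over pre-set joint angles:
# for pose in [HOME, PICK, LIFT, PLACE]:              # placeholder poses
#     for channel, angle in enumerate(pose):
#         hat.set_pulse(channel, angle_to_pulse_us(angle))  # hypothetical API
```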

Figure 5. Illustration of the robotic arm initial parameters.

3. Results

The automated cell detection and characterisation prototype using AIoT was developed to solve the confluency monitoring and handling issues commonly encountered by bio-research and bio-manufacturing laboratories. The deployed AIoT system, integrated with a digital microscope system, was tested and deployed at the Institute for Medical Research Malaysia (IMR), as shown in Figure 6. The process was successfully automated, covering confluency prediction, sample handling and communication with the laboratory practitioner.

Figure 6. The prototype of the developed automated cell detection and characterisation using AIoT.

3.1. CNN model prediction

The model’s performance was evaluated using accuracy and IoU metrics, which represent the proportion of correctly classified pixels and the overlap between predicted and ground-truth segmentation masks, respectively. As shown in Figure 7, the model was able to perform cell segmentation at different microscope magnifications with high accuracy. The evaluation is summarised in Table 1, which shows that the model achieved an accuracy of 95.66% and an IoU score of 66.76%. These results demonstrate the effectiveness of the model in accurately segmenting cells in microscope images. The evaluation was conducted on the testing set, with no pre-processing steps taken beforehand. These findings suggest that our CNN model has the potential to significantly improve cell segmentation in bio-research and bio-manufacturing laboratories.

Figure 7. (a) Prediction on 4x magnification image and (b) prediction on 10x magnification image.

Table 1. Evaluation of the CNN model

3.2. Confluency estimation

Confluency estimation depends directly on the CNN model’s performance, since it is calculated from the prediction images. Figure 8 shows some of the predictions obtained from images with different magnifications and different confluency levels. The results show that the imaging system performs better on lower-confluency images than on higher-confluency images, due to errors introduced by the image patching technique. Nevertheless, this approach can eliminate human error, as it can accurately predict the confluency level of a cell culture.

Figure 8. Results of confluency estimation for (a) 4x magnification and (b) 10x magnification.

3.3. Imaging system performance

As shown in Table 2, OpenVINO inferencing accelerates the prediction of the imaging system by up to 35%. Moreover, OpenVINO was also optimised for deployment on the Raspberry Pi and the UP2 Squared board running with the Movidius NCS2. As for power consumption, running the system on a single-board computer was far more power efficient than on a PC, reducing power consumption by up to 83%.

Table 2. Performance of the AIoT system on different platforms

3.4. System integration

To achieve full automation, the robotic arm was programmed to move in a cycle, transferring samples between the cell culture storage and the digital microscope. The positions were determined by pre-setting the PWM of the servo motors. When the CNN model detects 80% confluency, the robotic arm automatically replaces the sample under the microscope. Simultaneously, the internet-connected Raspberry Pi sends the confluency results via the Pushbullet API. The confluency results are then stored in an Amazon public cloud, enabling researchers to remotely monitor confluency from any smart device (Figure 9). This remote monitoring feature provides a convenient and efficient way for researchers to track the progress of their experiments and make timely decisions based on the confluency results without being physically present in the laboratory. Additionally, the system integration approach used in this study demonstrates the potential of combining AI and IoT technologies to achieve full automation and improve the efficiency of confluency monitoring.

Figure 9. Schematic diagram of the integrated AIoT system.

4. Discussion

The training plots for the U-Net model are shown in Figure 10. The fluctuations in the graphs indicate that the model struggles to learn after 30 epochs, due to the limited data and the complexity of the dataset. Moreover, a small separation between the training and validation curves can be noticed during the second 30 epochs. This occurs because dropout was used during training, meaning some neurons are ignored during the learning phase; during testing all neurons are used, hence the better accuracy.

Figure 10. Model training performance graphs.

Although the model evaluation shows high accuracy, the achieved IoU still needs improvement to reduce misclassified cell segmentations. Nevertheless, the IoU of 66.76% falls within acceptable segmentation performance. Overall, the results were satisfactory and show that the model was able to learn and generalise; however, increasing the size of the dataset and improving the data annotation are expected to further improve the segmentation results.

As for confluency estimation, minor errors can be observed in high-confluency images due to the patching-unpatching technique (Figure 11). Compared with ground-truth images, these errors affect the confluency estimate by approximately ±3%, which is not critical for confluency estimation. One possible solution is to apply pixel blending techniques; however, these were not applied, as they can be computationally expensive, especially for high-resolution images.

Figure 11. Errors caused by the patching-unpatching technique.

Furthermore, the imaging system requires a relatively long time to perform a prediction due to its processing requirements. However, since fibroblast cell cultures grow over extended periods of time, the achieved performance is efficient for this application. Meanwhile, using OpenVINO and the NCS2 reduced power consumption, which is ideal since the system may run continuously for days to monitor the cell culture conditions. Overall, the imaging system can effectively save analysis time and reduce human error.

On the other hand, the robotic arm operation was satisfactory. It can efficiently replace samples with accurate positioning based on PWM control. Meanwhile, the Pushbullet API was adequate for notifying users and providing feedback. However, because Pushbullet is a third-party application, remote monitoring is subject to limitations such as storage size and controllability; a dedicated database could be integrated to improve the IoT aspect. Furthermore, the integration allows for automated sample handling, reducing the workload for laboratory practitioners and improving the consistency of the results. The notification feature also provides a convenient way for researchers to track the progress of the cell culture and make timely decisions based on the confluency results. Despite the promising results, the developed AIoT system has several limitations that should be addressed in future research. One limitation is the reliance on specific hardware and software configurations: while the system was optimised for deployment on the Raspberry Pi and UP2 Squared board running with the Movidius NCS2, other hardware configurations may not be compatible or may require additional optimisation.

In future research, the developed AIoT system for automated confluency estimation and sample handling could be improved by increasing the size and diversity of the dataset used for training the CNN model, exploring different CNN architectures or incorporating other segmentation techniques, and applying the system to other cell types or other laboratory automation tasks. Future research should also explore the potential societal implications of AIoT systems for laboratory automation, including their impact on the workforce, scientific research, and ethical considerations.

5. Conclusion

In conclusion, fibroblast cell segmentation was performed using a deep learning model based on the U-Net architecture. This approach successfully predicts the percentage of cells in a sample and estimates confluency with 95% accuracy and a 66% IoU score. The system was accelerated and optimised using the OpenVINO Toolkit and the Movidius NCS2, allowing deployment on a Raspberry Pi board as a modular, standalone system that can be easily integrated into bio-research laboratories. For full automation, the system integrates a robotic arm programmed to replace confluent samples based on the results of the imaging system. The developed system saves time, produces accurate results and minimises human error. Furthermore, it effectively automates the workflow of the cell culture handling process.

The outcomes of this project are expected to provide a significant contribution and high relevance to the bio-research and bio-manufacturing industries moving towards IR4.0. Although this project focuses on fibroblast cell cultures, the same approach can be used to monitor confluency for various types of adherent cell cultures, defined as cell cultures that grow on a flat surface. Moreover, this project provides an automated, modular solution that can be implemented in any environment and integrated with existing laboratory equipment.

For future work, further improvements are expected to enhance the overall performance of the system, namely adding more images to the training dataset and improving the accuracy of the image annotations and patch blending. This will enhance the pixel-to-pixel segmentation results and make the confluency prediction more precise. Additionally, adding a feedback system to update the location and position of a cell culture sample under the microscope will improve the efficiency of the robotic arm operations.

Acknowledgments

We would like to acknowledge our research collaborators at the Institute for Medical Research who provided materials and equipment for data collection, and at UniKL for providing valuable information, dataset and guidance needed to complete this project.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Muaadh Shamhan

Muaadh Shamhan received his B.Eng (Hons) in Mechatronics Engineering from the International Islamic University Malaysia (IIUM). His studies mainly involved electronics and software programming for IoT, AI and robotics development. He is interested in machine learning, deep learning and machine vision applications for automation processes. His final year project applied a deep learning convolutional neural network to segment and analyse cell growth. Email: [email protected]

Ahmad Syahrin Idris

Ahmad Syahrin Idris is currently an Assistant Professor at the Department of Electrical and Electronic Engineering, University of Southampton Malaysia (UoSM). He received his B. Eng in Electrical and Electronics Engineering from University Technology Petronas, Malaysia and received his M.Phil from The University of Sheffield, UK in Electronic and Electrical Engineering. He later received his PhD in Opto-electronics from Kyushu University, Japan. Email: [email protected]

Siti Fauziah Toha

Siti Fauziah Toha is a Professor at the Department of Mechatronics Engineering, International Islamic University Malaysia (IIUM). She received B. Eng in Electrical and Electronics Engineering from University Technology Petronas and received MSc from Universiti Sains Malaysia in electrical engineering. She then completed her Ph.D in Automatic Control and Systems Engineering from The University of Sheffield. Her current research is in Control Algorithms and AI Optimisation, Assistive Bio-inspired robotics, and Renewable Energy. Email: [email protected]

Muhammad Fauzi Daud

Muhammad Fauzi Daud is a senior lecturer at the Institute of Medical Science Technology, Universiti Kuala Lumpur. He completed his Ph.D. at Kroto Research Institute, University of Sheffield, working on peripheral nerve engineering research. Before that, he received his undergraduate training in BSc. Biomedical Science from the University of Sheffield. His research interest includes bioscaffolds for peripheral nerve engineering, regenerative neurobiology, and microcarrier technology for cell biomanufacturing. He is the Head of Research and Innovation at the Institute of Medical Science Technology, Universiti Kuala Lumpur. Email: [email protected]

Izyan Mohd Idris

Izyan Mohd Idris received her medical degree from the Royal College of Surgeons in Ireland (RCSI) and MSc from The University of Sheffield, UK. She is currently undertaking a Ph.D in Medicine in the field of Tissue Engineering. She has experience working in a biochemistry diagnostic laboratory, the primary cell culture laboratory and bank for Inborn Errors of Metabolism (IEM). Email: [email protected]

Hafizi Malik

Hafizi Malik received his B.Eng (Hons) in Mechatronics Engineering and is currently pursuing his Master of Science in Mechatronics Engineering at the International Islamic University Malaysia. During his undergraduate studies, he was primarily involved in electronics and programming for robotic and IoT applications, and his final year project developed a clustering- and stochastic-based driving cycle prediction method along with a LoRaWAN-enabled electronic device for geo-fencing, tracking and communication. Email: [email protected]

References

  • Bleloch, J. (2021). Cell culture basics: Equipment, fundamentals and protocols.
  • Cell Culture Basics Handbook. (n.d.). www.invitrogen.com/cellculturebasics
  • Chu, S. L., Lin, L. Y., Tsai, M. D., Abe, K., Sudo, K., Nakamura, Y., & Yokota, H. (2020). CNN based iPS cell formation stage classifier for human iPS cell growth status prediction using time-lapse microscopy images. Proceedings - IEEE 20th International Conference on Bioinformatics and Bioengineering, BIBE 2020, 616–17. https://doi.org/10.1109/BIBE50027.2020.00105
  • Deshpande, N. M., Gite, S., & Aluvalu, R. (2021). A review of microscopic analysis of blood cells for disease detection with AI perspective. Peer Journal Computer Science, 7, 1–27. https://doi.org/10.7717/peerj-cs.460
  • Doulgkeroglou, M. N., DiNubila, A., Niessing, B., König, N., Schmitt, R. H., Damen, J., Szilvassy, S. J., Chang, W., Csontos, L., Louis, S., Kugelmeier, P., Ronfard, V., Bayon, Y., & Zeugolis, D. I. (2020). Automation, monitoring, and standardization of cell product manufacturing. Frontiers in Bioengineering and Biotechnology, 8. https://doi.org/10.3389/fbioe.2020.00811
  • El Ariss, A. B., Younes, M., Matar, J., & Berjaoui, Z. (2016). Prevalence of sickle cell trait in the Southern Suburb of Beirut, Lebanon. Mediterranean Journal of Hematology and Infectious Diseases, 8(1), 2016015. https://doi.org/10.4084/MJHID.2016.015
  • Forero, M. G., & Hidalgo, A. (2011). Image processing methods for automatic cell counting in vivo or in situ using 3D confocal microscopy. Advanced Biomedical Engineering. https://doi.org/10.5772/23147
  • Greb, C. (2017). Introduction to Mammalian Cell Culture _ Science Lab _ Leica Microsystems. https://www.leica-microsystems.com/science-lab/introduction-to-mammalian-cell-culture/
  • Herath, H. M. K. K. M. B., & Mittal, M. (2022). Adoption of artificial intelligence in smart cities: A comprehensive review. International Journal of Information Management Data Insights, 2(1), 100076. https://doi.org/10.1016/j.jjimei.2022.100076
  • Hu, H., Guan, Q., Chen, S., Ji, Z., & Lin, Y. (2020). Detection and recognition for life state of cell cancer using two-stage cascade CNNs. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 17(3), 887–898. https://doi.org/10.1109/TCBB.2017.2780842
  • Iyer, T. J., Raj, A. N. J., Ghildiyal, S., & Nersisson, R. (2021). Performance analysis of lightweight CNN models to segment infectious lung tissues of COVID-19 cases from tomographic images. Peer Journal Computer Science, 7, 1–20. https://doi.org/10.7717/PEERJ-CS.368
  • Kantardjieff, A., & Zhou, W. (2014). Mammalian cell cultures for biologics manufacturing. Advances in Biochemical Engineering/biotechnology, 139, 1–9. https://doi.org/10.1007/10_2013_255
  • Limon-Cantu, D., & Alarcon-Aquino, V. (2021). Multiresolution dendritic cell algorithm for network anomaly detection. Peer Journal Computer Science, 7, e749. https://doi.org/10.7717/PEERJ-CS.749
  • Lohana, P., & Rajalakshmi, P. (2020). Microscopic image processing. http://reports.ias.ac.in/report/19332/microscopic-image-processing
  • Malik, H., Idris, A. S., Toha, S. F., Mohd Idris, I., Daud, M. F., & Azmi, N. L. (2023). A review of open-source image analysis tools for mammalian cell culture: Algorithms, features and implementations. Peer Journal Computer Science, 9, e1364. https://doi.org/10.7717/peerj-cs.1364
  • Mavrogiorgou, A., Kiourtis, A., Manias, G., Symvoulidis, C., & Kyriazis, D. (2023). Batch and streaming data ingestion towards creating holistic health records. Emerging Science Journal, 7(2), 339–353. https://doi.org/10.28991/ESJ-2023-07-02-03
  • Moutsatsou, P., Ochs, J., Schmitt, R. H., Hewitt, C. J., & Hanga, M. P. (2019). Automation in cell and gene therapy manufacturing: From past to future. Biotechnology Letters, 41(11), 1245–1253. https://doi.org/10.1007/s10529-019-02732-z
  • Nguyen, L. D., Lin, D., Lin, Z., & Cao, J. (2018). Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. Proceedings - IEEE International Symposium on Circuits and Systems, 2018-May. https://doi.org/10.1109/ISCAS.2018.8351550
  • Oei, R. W., Hou, G., Liu, F., Zhong, J., Zhang, J., An, Z., Xu, L., Yang, Y., & Horvath, D. (2019). Convolutional neural network for cell classification using microscope images of intracellular actin networks. PLoS ONE, 14(3), e0213626. https://doi.org/10.1371/journal.pone.0213626
  • Procter, S., & Lindo Secco, E. (2021). Design of a Biomimetic BLDC Driven Robotic Arm for Teleoperation & Biomedical Applications. Journal of Human, Earth, and Future, 2(4), 345–354. https://doi.org/10.28991/HEF-2021-02-04-03
  • Razdan, S., & Sharma, S. (2022). Internet of Medical Things (IoMT): Overview, emerging technologies, and case studies. IETE Technical Review (Institution of Electronics and Telecommunication Engineers, India), 39(4), 775–788. https://doi.org/10.1080/02564602.2021.1927863
  • Rettig, A., Haase, T., Pletnyov, A., Kohl, B., Ertel, W., Von Kleist, M., & Sunkara, V. (2019). SLCV–a supervised learning—computer vision combined strategy for automated muscle fibre detection in cross-sectional images. PeerJ, 7(7), e7053. https://doi.org/10.7717/peerj.7053
  • Siddique, N., Paheding, S., Elkin, C. P., & Devabhaktuni, V. (2021). U-net and its variants for medical image segmentation: A review of theory and applications. IEEE Access, 9, 82031–82057. https://doi.org/10.1109/ACCESS.2021.3086020
  • Su, H. H., Pan, H. W., Lu, C. P., Chuang, J. J., & Yang, T. (2020). Automatic detection method for cancer cell nucleus image based on deep-learning analysis and color layer signature analysis algorithm. Sensors (Switzerland), 20(16), 4409–4419. https://doi.org/10.3390/s20164409
  • Toquica, A. L., Fernando Ortiz Martinez, L., Rodriguez, R., Fernando, A., & Chavarro, C. (2017). Kinematic modelling of a robotic arm manipulator using MATLAB. Journal of Engineering and Applied Sciences, 12(7). www.arpnjournals.com
  • Umer, M., Sadiq, S., Karamti, H., Karamti, W., Majeed, R., & Nappi, M. (2022). IoT based smart monitoring of patients’ with acute heart failure. Sensors, 22(7), 2431. https://doi.org/10.3390/s22072431
  • van der Aalst, W. M. P., Batagelj, V., Ignatov, D. I., Khachay, M., Kuskova, V., Kutuzov, A., Kuznetsov, S. O., Lomazova, I. A., Loukachevitch, N., Napoli, A., Pardalos, P. M., Pelillo, M., Savchenko, A. V., & Tutubalina, E., (Eds.) (2019). Analysis of images, social networks and texts (Vol. 11832). Springer International Publishing. https://doi.org/10.1007/978-3-030-37334-4
  • Wu, Q., Merchant, F. A., & Castleman, K. R. (2008). Microscope image processing. Elsevier/Academic Press.
  • Yousaf, I., Anwar, F., Imtiaz, S., Almadhor, A. S., Ishmanov, F., Kim, S. W., & Khalifa, F. (2022). An optimized hyperparameter of convolutional neural network algorithm for bug severity prediction in Alzheimer’s-based IoT system. Computational Intelligence and Neuroscience, 2022, 1–14. https://doi.org/10.1155/2022/7210928