UMYU Scientifica

A periodical of the Faculty of Natural and Applied Sciences, UMYU, Katsina

ISSN: 2955-1145 (print); 2955-1153 (online)


ORIGINAL RESEARCH ARTICLE

Development and Validation of AI-Integrated Web Application Software for Accurate Microscopic Detection and Quantification of Malaria Parasites in Blood Films

1*Yahuza, Kabir, 2Umar, Aliyu. M, 2Ebenezer, A. T., 1Baha’uddeen Salisu, 3Yusha’u E. S., 4Kaware, Musa Sani, and 5Samaila, Abdullahi

1Department of Microbiology, Umaru Musa Yar’adua University, PMB 2218, Katsina, Nigeria

2Department of Biological Sciences, Federal University Dutsinma, Katsina State, Nigeria

3Ministry of Environment, Katsina State, Nigeria

4Department of Community Medicine, Umaru Musa Yar’adua University, Katsina, Nigeria

5Department of Pharmacology, Umaru Musa Yar’adua University, Katsina, Nigeria

*Corresponding author: kabir.yahuza@umyu.edu.ng

ABSTRACT

Malaria, a mosquito-borne infectious disease caused by various Plasmodium parasite species, presents a significant global health challenge, with Plasmodium falciparum being notorious for its severe and life-threatening manifestations, which are particularly prevalent in Sub-Saharan Africa. This study aimed to develop and validate an AI-Integrated Web Application (AIWA) for enhanced microscopic detection and quantification of malaria parasites in blood films. Two hundred and fifty blood samples infected with Plasmodium falciparum were collected and processed, yielding a total of 450 images capturing morphological features across different parasite stages. The AIWA was built using the “You Only Look Once” (YOLO) V8 algorithm, which utilizes a single convolutional neural network (CNN), trained on pre-processed, annotated microscopic images for accurate detection and quantification of parasites, white blood cells, red blood cells, artifacts, and other blood-associated components. The model was validated for accuracy and precision and subsequently integrated into a user-friendly software application (built with the Python programming language and the Python Django framework) to enable easier access to the technology both online and offline. The AIWA achieved high precision (0.836) in identifying positive cases, and a comparative analysis with conventional microscopic methods using regression and Bland-Altman analyses revealed a robust correlation (R² > 94%) and strong agreement between the two approaches. In conclusion, the results demonstrate the AIWA's feasibility and efficiency in replacing conventional microscopy for accurately detecting and quantifying malaria parasites, offering a promising solution for enhanced diagnosis, especially in resource-limited settings.

Keywords: Malaria Diagnosis, Plasmodium falciparum, Artificial Intelligence (AI), Convolutional Neural Networks (CNNs), AI-Integrated Web Application, Digital Microscopy

INTRODUCTION

Malaria remains a significant global health issue, caused by multiple Plasmodium species, with Plasmodium falciparum being the most deadly and responsible for most severe cases and deaths, especially in sub-Saharan Africa (Zekar and Sharman, 2020). It spreads through bites of infected female Anopheles mosquitoes, and although major control efforts have lowered cases in some areas, the disease burden remains high: in 2022, there were an estimated 249 million cases worldwide, with about 230,000 deaths in sub-Saharan Africa alone, roughly 70% of which involved children under five (WHO, 2022). These numbers highlight the urgent need for better case detection and faster treatment in resource-limited settings.

Microscopic examination of Giemsa-stained blood films remains the gold standard for diagnosis because it enables the detection and identification of parasites and estimation of parasitemia. However, reliable microscopy depends on well-trained personnel, adequate laboratory infrastructure, and considerable time and controlled conditions, which are often unavailable in areas with the highest malaria burden (Wardhani et al., 2020). Human error, variability between observers, and the challenge of detecting low-density infections lead to false negatives and species misidentification, which delay proper treatment and sustain transmission (Mehanian et al., 2017; WHO, 2022). Therefore, there is an urgent need for diagnostic methods that minimize subjectivity, increase efficiency, and expand access in underserved communities.

Recent developments in digital imaging and artificial intelligence (AI) present promising alternatives to traditional microscopy (Yahuza et al., 2024a). Deep learning, especially Convolutional Neural Networks (CNNs), has demonstrated excellent results in image classification tasks that simulate expert microscopic analysis (Yang et al., 2019). Modern smartphones combine high-resolution imaging with substantial on-device processing power, creating portable, affordable platforms for image capture and automated analysis (McDermott, 2020). AI-driven image analysis has already proven useful beyond malaria, including tuberculosis and other neglected tropical diseases, and can offer quick, objective assessments in low-resource settings (Bharadwaj et al., 2021).

AI-Integrated Web Applications (AIWAs) utilize these technological advancements to develop field-ready diagnostic tools. By combining optimized image capture with trained machine-learning models, AIWAs can automate parasite detection and measurement, potentially decreasing dependence on limited microscopy expertise (Visser et al., 2021). Critical development steps include algorithm training and validation, image capture standardization, software and hardware integration, and thorough clinical testing against reference methods (Visser et al., 2021). Key performance indicators include sensitivity, specificity, predictive values, and consistency across different devices and epidemiological settings.

Despite promising proof-of-concept studies, several obstacles restrict the widespread adoption of AIWAs in low-income settings. These include inconsistent image quality due to different camera hardware, uneven staining and slide preparation, limited internet connectivity for cloud-based models, and the need for extensive external validation to ensure reliable performance across diverse field conditions (Hunt et al., 2021; Lou et al., 2023). Tackling these issues requires careful system design, balancing on-device processing with optional cloud support, and validation strategies that account for real-world variability.

Improved, accessible diagnostics are key to malaria control and elimination strategies. An AIWA that accurately detects P. falciparum could reduce diagnostic time, enable task-shifting to less specialized health workers, support remote consultations, and provide real-time data for surveillance programs (Alonso and Tanner, 2013; Delahunt et al., 2015). By reducing diagnostic subjectivity and expanding coverage to remote areas, such technologies may significantly reduce malaria-related illness and deaths and enhance public health responses (Landier et al., 2016).

This study, therefore, aims to develop and validate an AI-Integrated Web Application for accurate microscopic detection and quantification of Plasmodium falciparum in blood films. Specifically, we will (i) develop and validate a machine-learning model to detect P. falciparum from microscopic images, (ii) integrate the model into a user-friendly software application for automated detection and quantification of parasite morphological structures, and (iii) compare the AIWA’s performance with conventional microscopy using regression and Bland-Altman analyses. Through these steps, we seek to advance diagnostic capacity in resource-constrained settings and support more timely, accurate malaria case management.

MATERIALS AND METHODS

Ethical Clearance

The present study and the ethical aspects of the research methodology were thoroughly examined and approved by the Katsina State Health Research Ethics Committee of the Ministry of Health, Katsina State, Nigeria, with approval number MOH/ADM/SUB/1152/1/887.

Sample Collection

Two hundred and fifty (250) blood samples were used for this study. These comprised 200 blood samples containing Plasmodium species obtained from the laboratory units of Federal Teaching Hospital Katsina, General Hospital Katsina, Turai Umaru Musa Yar’adua Maternal and Children Hospital Katsina, and the National Obstetric Fistula Center Babar Ruga Katsina; and 50 prepared thick and thin blood films collected from the National Institute of Medical Research (NIMER), Yaba, Lagos. The blood samples were transported to the Microbiology Laboratory of Umaru Musa Yar’adua University, Katsina, and the Laboratory Department of the National Obstetric Fistula Center, Babar Ruga, Katsina, for photomicrography.

Sample Analysis

Preparation of a thick and a thin blood film

This part of the research was carried out at the Microbiology Laboratory of Umaru Musa Yar’adua University, Katsina, using Giemsa and Field Stains, according to the method described by Bashir et al. (2019).

For the thick blood films, a small drop of blood (about the size of a match head) was placed onto the center of a clean glass slide. Another slide was held at a 45-degree angle to spread the blood drop evenly across the slide surface, creating a thick blood film. The thick blood film was then allowed to air dry completely.

For the thin blood film, a small drop of blood was placed near one end of the clean microscope slide. The edge of another slide was used to spread the blood drop thinly along the slide, creating a thin blood film. The thin blood film was then allowed to air-dry completely and fixed with alcohol.

The thick and thin blood films were then stained with Giemsa and Field stains for clearer visualization of the parasites. Once both the thick and thin blood films were dried, immersion oil was applied, and a cover slip was placed over them. The prepared slides were then ready for microscopic examination to detect and identify Plasmodium parasites. Safety and hygiene measures were maintained throughout the process (Bashir et al., 2019).

Photomicrographs of Plasmodium falciparum

The prepared slides were placed on the Labomed Lx 500 (USA) microscope stage, and a systematic examination was initiated, beginning with low magnification (10x and 20x) to identify areas of interest. Subsequently, higher magnifications (40x and 100x oil immersion) were employed for more detailed imaging (Hase et al., 2021).

The focus was carefully adjusted to obtain clear images and count malaria parasites, with particular attention given to P. falciparum and its life stages (Bashir et al., 2019). Images were captured using both the microscope and mobile phone cameras to ensure the AI model's versatility (Plate 1). The camera was connected directly to the microscope to enable easy image capture and digitization. The digital images were then used to train and evaluate the AI model for detecting malaria parasites (Hase et al., 2021).

Plate 1: Labomed Lx 500 Microscope with built-in camera and a mobile phone attached.

Conventional Quantification of Plasmodium falciparum

In the traditional way of measuring P. falciparum in blood films, a carefully planned process was used to ensure accuracy and consistency in detecting parasitemia. This included systematically making both thick and thin smears, staining them with Giemsa and Field stains, drying the films, and applying immersion oil (Dave and Upla, 2017). The slides were then prepared carefully for microscopic examination, using both low and high magnifications to enable thorough parasite counting.

The counting process involved calculating the number of parasites per unit of red blood cells. Parasite density was expressed either as the number of parasites per field of the blood film or as a percentage of infected red blood cells (Adu-Gyasi et al., 2012).

Identification and counting were performed by assessing the prepared blood film, specifically focusing on infected red blood cells containing malaria parasites. The total number of infected red blood cells was then divided by the total number of observed red blood cells. The resulting fraction was multiplied by 100 to obtain the percentage of infected cells, providing a quantitative measure of parasitemia in the blood film as shown in eq. 1 (Mehanian et al., 2017).

Eq. 1 was used for both the conventional and AI-model quantification of Plasmodium falciparum parasitaemia.

\[Parasitemia\ (\%) = \frac{Number\ of\ Infected\ Red\ Blood\ Cells\ }{Total\ Number\ of\ Red\ Blood\ Cells}\ \times 100\%\ \ \ \ \ \ \ \ \ \ \ \ \ eq.\ 1\]
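For clarity, Eq. 1 can be expressed as a short function; the counts used below are illustrative only and are not taken from the study's data.

```python
def parasitemia_percent(infected_rbc: int, total_rbc: int) -> float:
    """Parasitemia (%) = infected RBCs / total RBCs x 100 (Eq. 1)."""
    if total_rbc <= 0:
        raise ValueError("total_rbc must be positive")
    return infected_rbc / total_rbc * 100.0

# e.g. 12 infected cells out of 480 counted red blood cells
print(parasitemia_percent(12, 480))  # 2.5
```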

Image Data Pre-Processing and Annotations

The goals of image pre-processing and annotation are to improve the quality of the images, to remove unwanted noise that can negatively impact the performance of the AI model, and to provide the AI model with labeled data that can be used to train it to identify and quantify the P. falciparum present in the images.

The image pre-processing followed the description of Kuzborskij et al. (2020). The method emphasized that image cropping removes irrelevant background information and focuses on the area of interest, in this case, the parasites. Additionally, the image was enhanced using techniques such as histogram equalization and contrast stretching to improve the visibility and clarity of the parasites. Furthermore, color normalization was performed to ensure consistent coloration of the parasites across all images.
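The cropping and contrast-stretching steps described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique under assumed 8-bit grayscale input, not the authors' actual pipeline, which also includes histogram equalization and color normalization.

```python
import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly rescale pixel intensities to span the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros_like(img)
    stretched = (img.astype(np.float32) - lo) / (hi - lo) * 255.0
    return stretched.astype(np.uint8)

def crop_to_box(img: np.ndarray, x0: int, y0: int, x1: int, y1: int) -> np.ndarray:
    """Crop a region of interest, e.g. around an annotated parasite."""
    return img[y0:y1, x0:x1]

# A dull 4x4 grayscale patch spanning 100..150 becomes full-range 0..255
patch = np.linspace(100, 150, 16, dtype=np.uint8).reshape(4, 4)
out = contrast_stretch(patch)
print(out.min(), out.max())  # 0 255
```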

The image annotation was carried out using the tools proposed by Kuzborskij et al. (2020). Specifically, “LabelImg” was used to draw bounding boxes around the parasites and classify them into different categories. The annotation process was carried out with the help of experts in parasitology and pictorial guides to malaria parasite identification, to ensure that the annotations were accurate and consistent across all images.
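LabelImg can export annotations in the YOLO text format, one object per line as `<class_id> <x_center> <y_center> <width> <height>` with box coordinates normalized to the image dimensions. A minimal parser for such a line might look as follows; the class order shown is an assumption for illustration, not the study's actual label map.

```python
def parse_yolo_label(line: str, class_names: list) -> tuple:
    """Parse one YOLO-format annotation line:
    '<class_id> <x_center> <y_center> <width> <height>',
    where the box coordinates are normalized to [0, 1]."""
    parts = line.split()
    cls = class_names[int(parts[0])]
    x, y, w, h = (float(v) for v in parts[1:])
    return cls, (x, y, w, h)

# Assumed class order for illustration only
classes = ["falciparum_parasite", "white_blood_cell", "artifact"]
cls, box = parse_yolo_label("0 0.512 0.430 0.061 0.058", classes)
print(cls)  # falciparum_parasite
```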

Building the AI Model

The AI model was developed using the You Only Look Once (YOLO) V8 architecture. YOLO is a real-time object detection algorithm that identifies and localizes objects within an image (Redmon et al., 2016). YOLO's architecture is based on a single convolutional neural network (CNN) trained end-to-end to predict class probabilities and bounding box coordinates for each object in an image. The algorithm divides an image into a grid of cells, with each cell responsible for predicting the object within it (Redmon et al., 2016). For identifying and counting malaria parasites, the CNN was trained on preprocessed, annotated images of the parasites. The model learns to detect and classify the parasites present in the images. The trained model was also capable of detecting and classifying new images of malaria parasites that were not included in the training dataset (Goyal et al., 2018).
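The training setup described above can be sketched as a dataset configuration of the kind the Ultralytics YOLOv8 tooling consumes. The paths and class order below are illustrative assumptions, not the authors' actual files.

```yaml
# Illustrative YOLOv8 dataset config (e.g. malaria.yaml).
# Paths and class order are assumptions; adapt to the actual dataset layout.
path: datasets/malaria        # dataset root
train: images/train           # pre-processed, annotated training images
val: images/val               # held-out validation images

# The three object classes reported in this study
names:
  0: falciparum_parasite
  1: white_blood_cell
  2: artifact
```

Training would then typically be launched with the Ultralytics CLI, e.g. `yolo detect train data=malaria.yaml model=yolov8n.pt epochs=100 imgsz=640`, after which the trained weights can be applied to new, unseen images.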

Model Performance Evaluation

To evaluate malaria parasite detection in this study, the precision-recall (PR) curve, average precision (AP), and mean average precision (mAP) were used as metrics. Precision measures accuracy in information retrieval and is often considered alongside recall: it is the ratio of relevant targets correctly identified in the results to the total number of targets returned for a query. The terms true positive (TP), true negative (TN), false positive (FP), and false negative (FN) describe classification outcomes: TP is the number of positive instances correctly predicted as positive, TN the number of negative instances correctly predicted as negative, FP the number of negative instances incorrectly predicted as positive, and FN the number of positive instances incorrectly predicted as negative (Park et al., 2020). Precision is defined in Eq. 2 as follows:

\[Precision\ = \ \frac{TP}{TP\ + \ FP}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ eq.\ 2\]

The recall rate, which measures the proportion of relevant targets among all the relevant targets, is defined in eq. 3 as follows:

\[Recall\ = \ \frac{TP}{TP + \ FN}\ \ \ \ \ \ \ \ \ \ \ \ \ \ eq.\ 3\]
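Eqs. 2 and 3 translate directly into code; the TP/FP/FN counts below are illustrative only.

```python
def precision(tp: int, fp: int) -> float:
    """Eq. 2: share of predicted positives that are truly positive."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Eq. 3: share of actual positives that were detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Illustrative counts: 83 parasites found, 17 false alarms, 15 missed
print(round(precision(83, 17), 3))  # 0.83
print(round(recall(83, 15), 3))     # 0.847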

In certain situations, specific values can provide a clearer representation of the test model's performance than a graph. The average precision (AP) is commonly employed as a metric for this purpose, and it is calculated as follows (eq. 4).

\[AP\ = \ \int_{0}^{1}{p(r)\,dr}\ \ \ \ \ \ \ \ \ \ \ \ eq.\ 4\]

In the formula above, 'p' represents precision and 'r' represents recall, with precision expressed as a function of recall. The average precision therefore corresponds to the area under the precision-recall (P-R) curve, and mAP (mean average precision) is the average of the AP values across all categories (Park and Kim, 2020).
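Since AP is the area under the sampled P-R curve, it can be approximated numerically with the trapezoidal rule, and mAP is simply the mean over classes. This is a generic sketch of the calculation, not the evaluation code used in the study.

```python
def average_precision(recalls, precisions):
    """Approximate AP: area under the sampled P-R curve (trapezoidal rule)."""
    pts = sorted(zip(recalls, precisions))
    ap = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        ap += (r1 - r0) * (p0 + p1) / 2.0
    return ap

def mean_average_precision(ap_values):
    """mAP: mean of the per-class AP values."""
    return sum(ap_values) / len(ap_values)

# A perfect detector keeps precision 1.0 at every recall level -> AP = 1.0
print(average_precision([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # 1.0
```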

Developing Non-Expert User-Friendly Interface

A user-friendly interface was developed using the Python programming language and the Django web framework. This enables medical practitioners and researchers to access the technology, both online and offline, for effective detection, identification, and quantification of malaria parasites.

Additional Pilot Testing and Comparison of the AI Model with the Conventional Method Using Regression and Bland-Altman Analyses

In our published pilot-testing study (Yahuza et al., 2024b), the developed model was pilot-tested and compared with the traditional approach to evaluate its detection accuracy and sensitivity relative to the gold-standard conventional microscopy.

RESULTS

Photomicrographs of prepared blood films

A total of 200 slides were prepared in both thick and thin smears. From the 200 slides, 450 images were captured using microscopy (Plate 2). The captured images contained morphological features of P. falciparum, with a focus on its different stages.

Of the 450 images, 315 showed the parasite in ring forms, and the remaining 135 showed the parasite in trophozoite form.

The successful analysis and documentation of 450 images significantly contribute to the creation of a comprehensive dataset. This dataset serves as a foundational resource for developing a robust machine-learning model capable of detecting and quantifying P. falciparum based on its morphological characteristics.

Plate 2: Photomicrographs: (a) Thick blood smear (b) Thin blood smear

Detection of Plasmodium falciparum using an AI model

The AI model's ability to recognize morphological traits enabled it to detect white blood cells and artifacts in blood films, in addition to malaria parasites. This expanded scope provides a thorough understanding of the model's functionality and potential uses in malaria diagnosis. Before examining the specific results, it is important to recognize the need to correctly detect the malaria parasite and the other components in blood films, as these factors affect the overall effectiveness and consistency of diagnostic processes.

Plates 3 and 4 show sample images demonstrating the detection and prediction capabilities of the developed App for P. falciparum based on its known features. The ring stage exhibits small, circular, ring-shaped structures with a single central nucleus (Plates 3-4).

The plates also indicate the App’s ability to detect artifacts, largely based on the detection of structures resembling parasites and on abnormal staining intensity/patterns, shapes, and sizes.

The App also distinguishes white blood cells from amoeboid trophozoites based on size, internal structure, and stain absorption (Plates 3 and 4).

Additionally, the App displays a confidence score for each parasite or artifact detection. The confidence scores for the detection of P. falciparum using the App ranged from 0.34 to 0.89 (Plate 3) and from 0.29 to 0.83 (Plate 4). Similarly, the confidence scores for the detection of artifacts ranged from 0.42 to 0.67 (Plate 3) and from 0.37 to 0.77 (Plate 4).

Plate 3: Pattern of detection of P. falciparum parasites, artifacts and white blood cells from a photomicrograph of a blood film, highlighting the ring of blue cytoplasm of falciparum trophozoites. The numbers indicate the accuracy of the detection of the parasites/artifacts/white blood cells. The closer to 0, the lower the accuracy; the closer to 1, the higher the accuracy.

Plate 4: Detection of P. falciparum parasites, white blood cells and artifacts from a photomicrograph of a blood smear, highlighting the ring stages. The numbers indicate the percentage accuracy of the detection of the parasites/artifacts/white blood cells. The closer to 0, the lower the accuracy; the closer to 1, the higher the accuracy.

Diagnostic Accuracy of the Developed AI-Model

Performance Metrics for YOLO V8 Model

To evaluate the YOLO V8 model's performance in detecting P. falciparum, a suite of performance metrics was used to conduct a thorough assessment. These metrics play a pivotal role in measuring the precision, recall, and overall accuracy of the model's detection capabilities. Among the metrics frequently employed for evaluating object-detection systems, the mean Average Precision (mAP) holds particular significance. Additionally, the F1 score, a commonly used measure of the effectiveness of machine-learning algorithms, was employed in this evaluation.

Precision and Recall Analysis

Precision, representing the accuracy of positive predictions, and recall (sensitivity), measuring the model's ability to capture all positive instances, were calculated. The YOLO V8 model achieved a high overall precision of 0.836 in identifying positive cases, while the precision for the "Falciparum Parasite" class at a 0.5 confidence threshold was 0.749 (Figure 1). This indicates that when the model predicts an instance as "Falciparum Parasite" with a confidence level of 0.5 or higher, it is correct about 74.9% of the time; in other words, fewer than 30% of positive parasite identifications were imprecise.

Figure 1: Precision-Recall Curve for the Detection of Artefacts, P. falciparum and White Blood Cells from the Images using the developed Application. The Numbers at top right indicate the precision.

F1 Score

The F1 score, a combination of precision and recall, was computed to provide a single measure of the model's performance. With an F1 score of 0.82, the model demonstrated effectiveness in achieving a harmonious balance between precision and recall.

Figure 2 shows the F1-confidence curve, which indicates the relationship between the F1 score (the harmonic mean of precision and recall) and confidence, another indicator of the balance between precision and recall. It shows that the F1 score across all classes (artifacts, P. falciparum, and WBCs) was 0.82, a relatively high value, at the 0.480 confidence level.

Figure 2: F1-Confidence Curve for the Detection of Artefacts, P. falciparum and White Blood Cells from the Images using the developed Application. Numbers at the top right, after the 'all classes' label, indicate the F1 score and the confidence threshold, respectively.
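The idea behind the F1-confidence curve, sweeping the confidence threshold and reading off where F1 peaks, can be sketched as follows. The sample points are invented for illustration and merely mimic a curve peaking near 0.48; they are not the study's measurements.

```python
def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0

def best_threshold(curve):
    """Return the confidence threshold with the highest F1 score.
    `curve` is a list of (confidence, precision, recall) samples."""
    return max(curve, key=lambda t: f1(t[1], t[2]))[0]

# Made-up samples of an F1-confidence curve peaking near 0.48
curve = [(0.20, 0.70, 0.90), (0.48, 0.84, 0.80), (0.90, 1.00, 0.45)]
print(best_threshold(curve))  # 0.48
```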

Precision-confidence curve

The precision-confidence curve, which shows precision at different confidence levels, is presented in Figure 3. It indicates that at a confidence level of 0.901, a precision of 1.00 was obtained, meaning that all predicted positives at such a high confidence level were true positives.

Figure 3: Precision-Confidence Curve for the Detection of Artefacts, P. falciparum and White Blood Cells from the Images using the developed Application. Numbers at the top right, after the 'all classes' label, indicate the precision and the confidence threshold, respectively.

Confusion Matrix Analysis

A detailed confusion matrix was constructed to delineate the model's true positives, true negatives, false positives, and false negatives. This comprehensive analysis facilitated a nuanced understanding of the model's class-wise performance.

Figure 4 shows the confusion matrix from the detection data, which indicates that instances of identifying a true P. falciparum as a false negative (i.e., as an artifact) were very low (1). This indicates a very good detection ability of the App and a very low chance of false-negative identification. Figure 5 presents an overview of the workflow behind this work.

Figure 4: Confusion matrix for the Detection of Artefacts, P. falciparum and White Blood Cells from the Images using the developed Application. Numbers within the cells indicate how many samples of the true class (x-axis) were assigned to each predicted class (y-axis).

How to Operate the Developed AI Model

To operate the developed AI model, the user should follow the steps below:

i. Obtain the microscopic image(s) of the sample to be analysed.

ii. Open a web browser and enter the URL of the application (https://kabir.mhinnov8.com.ng).

iii. Enter the sample ID and upload the captured image from conventional microscopy.

iv. Click the analyze button and wait for the result to display.

The result window will display the Sample ID, Percentage of Parasitemia, Date, and the processed image, as shown in Plate 6. A video guide on how to operate the model is available at https://youtu.be/4AKDSka2qCI

Plate 6: Screenshot of the result window showing the processed image and percentage parasitemia
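The fields shown in the result window can be modeled as a small record assembled from the model's red-blood-cell counts via Eq. 1. The record type, sample ID, and counts below are hypothetical illustrations, not the application's actual internals.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AnalysisResult:
    """Mirrors the fields shown in the result window."""
    sample_id: str
    parasitemia_pct: float
    analysed_on: date

def build_result(sample_id: str, infected_rbc: int, total_rbc: int) -> AnalysisResult:
    """Assemble a result record from the detected RBC counts (Eq. 1)."""
    pct = infected_rbc / total_rbc * 100.0
    return AnalysisResult(sample_id, round(pct, 2), date.today())

res = build_result("KT-0042", 9, 300)  # hypothetical sample ID and counts
print(res.sample_id, res.parasitemia_pct)  # KT-0042 3.0
```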

Figure 5: Overview of the Development and Validation of the Falciparum AI Quantification System

Additional Pilot Testing and Comparison of the AI Model with the Conventional Method Using Regression and Bland-Altman Analyses

Our published pilot study (Yahuza et al., 2024b) comparing the performance of an AI model with conventional microscopic methods revealed a robust correlation (R² > 94%) between the two approaches, with the slope of the Z-regression equation closely matching the anticipated value, indicating substantial concordance.

These findings highlight the potential of the AI model for routine application in the identification and quantification of malaria parasites in blood films, offering a straightforward and economical alternative.
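Both comparison statistics are straightforward to compute from paired readings: R² from a simple linear fit, and the Bland-Altman bias with 95% limits of agreement from the paired differences. The paired values below are invented for illustration and do not reproduce the study's data.

```python
from statistics import mean, stdev

def regression_r2(x, y):
    """R-squared of a simple linear fit y = a*x + b."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def bland_altman(x, y):
    """Bias (mean paired difference) and 95% limits of agreement."""
    d = [xi - yi for xi, yi in zip(x, y)]
    bias, sd = mean(d), stdev(d)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired parasitemia readings (%): microscopy vs. AIWA
micro = [1.0, 2.5, 4.0, 5.5, 8.0]
aiwa = [1.1, 2.4, 4.2, 5.3, 8.1]
print(regression_r2(micro, aiwa) > 0.99)  # True
bias, (low, high) = bland_altman(micro, aiwa)
```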

DISCUSSION

The present study aimed to design, develop, and validate an AI-Integrated Web Application (AIWA) capable of accurately detecting and quantifying Plasmodium falciparum on Giemsa-stained blood films. The results demonstrate that the AIWA, powered by a YOLOv8 deep-learning algorithm, achieved high diagnostic performance and showed strong agreement with conventional microscopy, underscoring its potential as a reliable and scalable diagnostic tool in resource-limited settings.

A key strength of the study was the creation of a comprehensive dataset of 450 high-quality photomicrographs representing multiple life stages of P. falciparum. Previous studies emphasize that AI performance improves significantly when trained on diverse morphological stages and staining variations (Poostchi et al., 2018; Maturana et al., 2022). This aligns with our findings, as the model effectively detected both ring forms and trophozoites, demonstrating robustness across typical clinical presentations.

The diagnostic metrics obtained in this study further support the reliability of the AIWA. The overall precision of 0.836 and class-specific precision of 0.749 for P. falciparum are consistent with performance benchmarks reported in similar deep-learning malaria detection tools (Yang et al., 2019; Mehanian et al., 2017). The strong F1 score (0.82) indicates balanced performance between sensitivity and specificity, which is important for minimizing false negatives, a critical concern in malaria diagnostics (Hawkes and Kain, 2007). The confusion matrix results demonstrated a very low rate of false-negative misclassification, comparable to that reported for established automated microscopy systems, such as those evaluated by Torres et al. (2018).

The regression analysis showed a correlation exceeding 94% between the AIWA’s quantification and conventional microscopy. This level of agreement corresponds with the performance of other AI-based malaria quantification systems, which report similarly strong concordance with expert microscopy (Bhowmick et al., 2021; Yoon et al., 2021). Furthermore, Bland–Altman analysis confirmed good agreement across methods, reinforcing the reliability of the AIWA as a viable complement or alternative to traditional microscopy (Haghayegh et al., 2020).

The use of YOLOv8 for object detection is well justified given its high speed and accuracy in detecting small biological objects in complex backgrounds. Prior studies show that YOLO-based architectures outperform traditional CNN classifiers in medical image tasks requiring real-time performance and localization (Lou et al., 2023; Redmon et al., 2016). Our findings are consistent with these observations, particularly in the model’s ability to distinguish parasites from white blood cells and staining artifacts, an ongoing challenge in automated hematological analysis (Rosado et al., 2016).

Importantly, the development of a user-friendly Python/Django-based interface enhances the system's practical utility. Mobile-based and offline-capable tools have been identified as essential innovations for malaria diagnosis in underserved and remote regions (Visser et al., 2021; Bharadwaj et al., 2021). By enabling users to upload images directly from standard microscopy setups and obtain near-instant automated detection and quantification, the AIWA addresses diagnostic delays and variability in operator expertise, persistent barriers to malaria control (Wardhani et al., 2020; Guinovart et al., 2006).

Despite these strengths, several limitations must be acknowledged. The sample size, although adequate for model development, remains limited for broad generalization. Variations in staining techniques, camera quality, and slide preparation across different settings may affect performance, a limitation noted in other AI-based malaria detection studies (Hunt et al., 2021). Additionally, the AIWA is currently trained primarily on P. falciparum, an important but insufficient focus for regions with mixed-species infections, such as parts of Southeast Asia and South America (Alonso and Tanner, 2013). Expanding species coverage would enhance its global applicability.

Overall, the findings of this study align with global trends emphasizing the integration of AI and smartphone-based technologies into malaria surveillance and case management frameworks (Landier et al., 2016; WHO, 2022). By reducing subjectivity, improving diagnostic speed, and increasing access in areas lacking expert microscopists, the AIWA demonstrates significant potential to contribute to malaria control and elimination efforts.

CONCLUSION

This study successfully developed and validated an AI-integrated web application software capable of accurately detecting and quantifying Plasmodium falciparum from microscopic images of blood films. The YOLOv8-based model achieved high overall precision, strong F1 performance, and excellent correlation with conventional microscopy, demonstrating its reliability and diagnostic value. The AIWA offers a practical, user-friendly, and cost-effective alternative to manual microscopy, with the potential to improve diagnostic accuracy, reduce human error, and expand access to malaria diagnosis in resource-limited settings.

By integrating advanced AI algorithms with accessible mobile and web-based technologies, this work contributes a promising tool for strengthening malaria control efforts. Future research should focus on large-scale field validation, optimizing detection for lower parasite densities, and expanding the system to recognize other Plasmodium species. With further refinement and deployment, the AIWA could play a vital role in accelerating timely diagnosis, guiding appropriate treatment, and supporting global malaria elimination strategies.

RECOMMENDATIONS

i. Widespread use and testing of the application in laboratories to confirm its diagnostic accuracy.

ii. Revisiting the model to further improve the precision and accuracy of detection.

iii. Extending the coverage of the application in future research to include other Plasmodium species.

REFERENCES

Adu-Gyasi, D., Adams, M., Amoako, S., Mahama, E., Nsoh, M., Amenga-Etego, S., Baiden, F., Asante, K.P., Newton, S., and Owusu-Agyei, S. (2012). Estimating malaria parasite density: Assumed white blood cell count of 10,000/μl of blood is appropriate measure in Central Ghana. Malaria Journal, 11(1), 238. [Crossref]

Alonso, P. L., and Tanner, M. (2013). Public health challenges and prospects for malaria control and elimination. Nature Medicine, 19(2), 150–155. [Crossref]

Bashir, M., Sunday, E., Mohammed, B., Ali, R., Isa, H., Sambo, K. H., and Ishaq, I. (2019). Evaluation of efficacy of Rapid Diagnostic Tests compared to microscopy in the diagnosis of malaria infection. Parasitology Research, 118(4), 121–129. [Crossref]

Bharadwaj, M., Bengtson, M., Golverdingen, M., Waling, L., and Dekker, C. (2021). Diagnosing point-of-care diagnostics for neglected tropical diseases. PLOS Neglected Tropical Diseases, 15(6), e0009405. [Crossref]

Bhowmick, I. P., Chutia, D., Chouhan, A., Nishant, N., Raju, P. L. N., Narain, K., ... and Chhibber-Goel, J. (2021). Validation of a mobile health technology platform (FeverTracker) for malaria surveillance in India: Development and usability study. JMIR Formative Research, 5(11), e28951. [Crossref]

Dave, I. R., and Upla, K. P. (2017, February). Computer aided diagnosis of malaria disease for thin and thick blood smear microscopic images. In 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN) (pp. 561–565). IEEE. [Crossref]

Delahunt, C. B., Mehanian, C., Hu, L., McGuire, S. K., Champlin, C. R., Horning, M. P., and Thompson, C. M. (2015). Automated microscopy and machine learning for expert-level malaria field diagnosis. In 2015 IEEE Global Humanitarian Technology Conference (GHTC) (pp. 393–399). IEEE. [Crossref]

Goyal, P., Pandey, S., and Jain, K. (2018). Deep learning for natural language processing. Apress.

Guinovart, C., Navia, M. M., Tanner, M., and Alonso, P. L. (2006). Malaria: burden of disease. Current Molecular Medicine, 6(2), 137–140. [Crossref]

Haghayegh, S., Kang, H. A., Khoshnevis, S., Smolensky, M. H., and Diller, K. R. (2020). A comprehensive guideline for Bland–Altman and intraclass correlation calculations to properly compare two methods of measurement and interpret findings. Physiological Measurement, 41(5), 055012. [Crossref]

Hase, F., Aldeghi, M., Hickman, R. J., Roch, L. M., Christensen, M., Liles, E., and Aspuru-Guzik, A. (2021). Olympus: a benchmarking framework for noisy optimization and experiment planning. Machine Learning: Science and Technology, 2(3), 035021. [Crossref]

Hawkes, M., and Kain, K. C. (2007). Advances in malaria diagnosis. Expert Review of Anti-infective Therapy, 5(3), 485–495. [Crossref]

Hunt, B., Ruiz, A. J., and Pogue, B. W. (2021). Smartphone-based imaging systems for medical applications: A critical review. Journal of Biomedical Optics, 26(4), 040902. [Crossref]

Kuzborskij, I., and Cesa-Bianchi, N. (2020). Locally-adaptive nonparametric online learning. In Advances in Neural Information Processing Systems (Vol. 33, pp. 1679–1689). Curran Associates, Inc.

Landier, J., Parker, D. M., Thu, A. M., Carrara, V. I., Lwin, K. M., Bonnington, C. A., ... and Nosten, F. H. (2016). The role of early detection and treatment in malaria elimination. Malaria Journal, 15, 363. [Crossref]

Lou, H., Duan, X., Guo, J., Liu, H., Gu, J., Bi, L., and Chen, H. (2023). DC-YOLOv8: Small-size object detection algorithm based on camera sensor. Electronics, 12(10), 2323. [Crossref]

Maturana, C. R., de Oliveira, A. D., Nadal, S., Bilalli, B., Serrat, F. Z., Soley, M. E., ... and Joseph-Munné, J. (2022). Advances and challenges in automated malaria diagnosis using digital microscopy imaging with artificial intelligence tools: A review. Frontiers in Microbiology, 13, 1006659. [Crossref]

McDermott, J. (2020). Convolutional Neural Networks, Image Classification w. Keras. LearnDataSci.

Mehanian, C., Jaiswal, M., Delahunt, C., Thompson, C., Horning, M., Hu, L., ... and Bell, D. (2017). Computer-automated malaria diagnosis and quantitation using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 116–125). [Crossref]

Park, I., and Kim, S. (2020). Performance indicator survey for object detection. In 2020 20th International Conference on Control, Automation and Systems (ICCAS) (pp. 284–288). IEEE. [Crossref]

Poostchi, M., Silamut, K., Maude, R. J., Jaeger, S., and Thoma, G. R. (2018). Image analysis and machine learning for detecting malaria. Translational Research, 194, 36–55. [Crossref]

Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779–788). IEEE. [Crossref]

Rosado, L., Correia da Costa, J. M., Elias, D., and Cardoso, J. S. (2016). A review of automatic malaria parasites detection and segmentation in microscopic images. Anti-Infective Agents, 14(1), 11–22. [Crossref]

Torres, K., Bachman, C. M., Delahunt, C. B., Alarcon Baldeon, J., Alava, F., Gamboa Vilela, D., and Bell, D. (2018). Automated microscopy for routine malaria diagnosis: A field comparison on Giemsa-stained blood films in Peru. Malaria Journal, 17, 339. [Crossref]

Visser, T., Ramachandra, S., Pothin, E., Jacobs, J., Cunningham, J., Menach, A. L., and Aidoo, M. (2021). A comparative evaluation of mobile medical apps (AIWAs) for reading and interpreting malaria rapid diagnostic tests. Malaria Journal, 20(1), 1–12. [Crossref]

Wardhani, P., Butarbutar, T. V., Adiatmaja, C. O., Betaubun, A. M., and Hamidah, N. (2020). Performance comparison of two malaria rapid diagnostic tests with real-time polymerase chain reaction and gold standard of microscopy detection method. Infectious Disease Reports, 12(S1), 8731. [Crossref]

World Health Organization. (2022). WHO malaria policy advisory group (MPAG) meeting, October 2022. [Link]

Yahuza, K., Umar, A. M., Ebenezer, A. T., Baha’uddeen, S. D., Yusha’u, E. S., and Kaware, M. S. (2024a). Comparative detection and quantification of parasitemia from blood films using conventional microscopy and AI model. Biosciences Journal FUDMA, 5(1), 12–21. ISSN: 9876-5432.

Yahuza, K., Aliyu, U. M., Salisu, B. D., Atalabi, E. T., Mukhtar, G. L., and Bashir, A. (2024b). Recent Advancements in Detection and Quantification of Malaria Using Artificial Intelligence. UMYU Journal of Microbiology Research, 9(2), 1–21. [Crossref]

Yang, F., Poostchi, M., Yu, H., Zhou, Z., Silamut, K., Yu, J., and Antani, S. (2019). Deep learning for smartphone-based malaria parasite detection in thick blood smears. IEEE Journal of Biomedical and Health Informatics, 24(5), 1427–1438. [Crossref]

Yoon, J., Jang, W. S., Nam, J., Mihn, D. C., and Lim, C. S. (2021). An automated microscopic malaria parasite detection system using digital image analysis. Diagnostics, 11(3), 527. [Crossref]

Zekar, L., and Sharman, T. (2020). Plasmodium falciparum malaria. In StatPearls. StatPearls Publishing. [Link]