Assessment of deep convolutional neural network models for species identification of forensically-important fly maggots based on images of posterior spiracles


At the third instar, the external morphology of larvae of these species is quite similar; therefore, morphological identification at the genus or species level generally relies on the cephalopharyngeal skeleton, the anterior spiracles, and the posterior spiracles. The morphology of the posterior spiracle is one of the key characteristics for identification. The typical morphology of the posterior spiracles of third-instar larvae is shown in Fig. 2. Under light microscopy, the posterior spiracles of M. domestica were clearly distinguishable from those of the other species, whereas the posterior spiracles of C. megacephala and C. rufifacies were quite similar. In C. rufifacies and C. megacephala, the peritreme, a structure encircling the three spiracular openings (slits), was incomplete and the slits were straight, as shown in Fig. 2A,B, respectively. A complete peritreme encircling the three slits was found in L. cuprina and M. domestica, as shown in Fig. 2C,D, respectively. However, only the slits of M. domestica were sinuous, resembling the letter M (Fig. 2D). The morphological characteristics observed in this study agreed with the descriptions in previous reports23,24,25.

Figure 2

Morphology of posterior spiracles of four different fly species after inverting the image colors; (A) Chrysomya (Achoetandrus) rufifacies, (B) Chrysomya megacephala, (C) Lucilia cuprina, (D) Musca domestica.

Full size image

For model training, all four CNN models used for species-level identification of fly maggots achieved 100% accuracy and 0% loss. The number of parameters (#Params), model speed, model size, macro precision, macro recall, F1-score, and support value are presented in Table 1. The AlexNet model provided the best performance across all indicators among the four models. AlexNet used the fewest parameters, whereas ResNet101 used the most; AlexNet was the fastest, whereas DenseNet161 was the slowest; and AlexNet had the smallest model size, whereas ResNet101 had the largest, corresponding to the numbers of parameters used. Macro precision, macro recall, F1-score, and support value were identical across all models.
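The macro-averaged metrics reported in Table 1 can be computed with scikit-learn; the sketch below uses toy label lists (not the study's data) to show the calculation for the four-class task.

```python
# Sketch: macro precision, recall, and F1 for a four-class maggot
# classifier. The label lists are illustrative, not the study's data.
from sklearn.metrics import precision_recall_fscore_support

SPECIES = ["C. megacephala", "C. rufifacies", "L. cuprina", "M. domestica"]

# Hypothetical ground-truth and predicted class indices (a perfect run,
# as reported for the training set).
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 1, 2, 2, 3, 3]

# average="macro" computes each metric per class, then takes the
# unweighted mean across the four classes.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"macro precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

With identical predictions and labels, all three macro metrics equal 1.00, matching the perfect training-set scores in Table 1.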

Table 1 Comparison of model size, speed, and performances of each studied model (The text in bold indicates the best value in each category).
Full size table

As shown in the training results in the supplementary data (Fig. S1), all models reached 100% accuracy and 0% loss in the early stage of training (< 10 epochs). This could be because training and testing were performed on specific portions of the fly images cropped by our custom object detection model. Moreover, all images came from laboratory strains, whose variation in morphological characteristics might be lower than that of their wild counterparts. As a result, training time was short and model accuracy was high. Of the four models tested, AlexNet demonstrated the best balance between performance and speed: it processed images the fastest and had the smallest model size. The speed and accuracy of AlexNet make it suitable for web-based and mobile applications that rely on both fast and reliable predictions. Speed is a factor in user satisfaction and will be important for future development such as video-based applications16. Therefore, we focused on the AlexNet results for the remainder of this article.
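Per-image latency of the kind compared in Table 1 is typically measured by timing repeated inference calls after a warm-up. The sketch below shows one way this might be done; `classify_image` is a hypothetical stand-in for a real model call, not the study's code.

```python
# Sketch: measuring mean per-image inference latency.
# classify_image is a hypothetical placeholder for model inference.
import statistics
import time

def classify_image(image):
    # Stand-in for a real CNN forward pass (hypothetical).
    return "M. domestica"

def mean_latency_ms(fn, image, warmup=5, runs=50):
    for _ in range(warmup):              # warm-up calls, excluded from timing
        fn(image)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(image)
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(times)

latency = mean_latency_ms(classify_image, image=None)
print(f"mean latency: {latency:.4f} ms")
```

Warm-up runs matter in practice because the first calls to a deep learning model often include one-off costs (memory allocation, kernel compilation) that would skew the average.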

Using tSNE visualization, the AlexNet model separated the species into distinct clusters based on the features extracted by the model, as shown in Fig. 3. All four species were separated, although the clusters of C. megacephala and C. rufifacies partially overlapped, likely because of the similarity of the morphological characteristics of these two species24. This result indicates that the performance of the AlexNet model was comparable to identification by human experts.
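The tSNE step reduces the high-dimensional penultimate-layer features to two dimensions for plotting. A minimal sketch, with synthetic 64-D clusters standing in for the 4096-D AlexNet features (one cluster per species):

```python
# Sketch: tSNE dimensionality reduction of per-image feature vectors.
# Synthetic clusters replace the real penultimate-layer features.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_per_class, dim = 20, 64

# Four synthetic clusters standing in for per-species feature vectors.
features = np.vstack([
    rng.normal(loc=4.0 * k, scale=0.5, size=(n_per_class, dim))
    for k in range(4)
])
labels = np.repeat(np.arange(4), n_per_class)

# perplexity must be smaller than the number of samples (80 here).
embedding = TSNE(n_components=2, perplexity=10, init="pca",
                 random_state=0).fit_transform(features)
print(embedding.shape)  # one 2-D point per image, ready to scatter-plot
```

Each row of `embedding` is then plotted and colored by class label, producing a figure like Fig. 3; well-separated species form distinct clusters, while similar species (such as C. megacephala and C. rufifacies) can overlap.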

Figure 3

tSNE visualization of the AlexNet model by dimensionality reduction of the penultimate-layer features (test data are shown in colors by class: yellow, Musca domestica; green, Lucilia cuprina; greenish blue, Chrysomya megacephala; dark purple, Chrysomya (Achoetandrus) rufifacies).

Full size image

The five hidden convolutional layers of AlexNet were visualized on four example images of posterior spiracles, as shown in Fig. 4. For each layer, one posterior spiracle image per species was randomly selected for visualization, and color images were generated to show the patterns clearly. The multiple hidden layers extracted a rich hierarchy of patterns from the input image, spanning observable low-, middle-, and high-level features. The lower layers extracted detailed local patterns, such as textures and margins; the complexity and variation of the visualized patterns increased in the middle layers; and the higher layers extracted highly specific pattern features.
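The kind of low-level pattern the early layers pick up can be illustrated with a single hand-written filter. The sketch below applies a Sobel kernel (not one of AlexNet's learned kernels) to a toy image and shows it responding only where a vertical edge, i.e. a margin, is present.

```python
# Sketch: a hand-written edge filter as a stand-in for the learned
# low-level kernels that respond to textures and margins.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, as used in CNN conv layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half (one vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
print(response.max())       # strong response at the edge columns
print(response[:, :2].max())  # zero response in the flat region
```

A convolutional layer applies many such filters in parallel, and deeper layers combine their responses into the progressively more complex patterns seen in Fig. 4.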

Figure 4

Visualization of hidden convolutional layers in AlexNet for four example images (color images were generated to show the patterns clearly; (A) Chrysomya (Achoetandrus) rufifacies, (B) Chrysomya megacephala, (C) Lucilia cuprina, (D) Musca domestica).

Full size image

The classification results (validation and test) for each image are displayed in the confusion matrices (Fig. 5), which show how the predicted species (columns) correspond to the actual species (rows). Values along the diagonal indicate the number of correct predictions, whereas off-diagonal values indicate misclassifications. Interestingly, no misclassification was found when the model was evaluated on the validation images (Fig. 5A), indicating that the predictions of the AlexNet model matched the classifications of taxonomic experts. On the test dataset, the confusion matrix showed misclassification between C. megacephala and C. rufifacies (Fig. 5B), corresponding to the tSNE results. The classification accuracy on the test images was 94.94%, 98.02%, 98.35%, and 100% for C. megacephala, C. rufifacies, L. cuprina, and M. domestica, respectively (Fig. 5B). The heatmap results showed that the prediction accuracy of the model remained high (98.70–100%), depending on image conditions (Fig. 6).
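Per-class accuracy of the kind quoted above is the row-normalised diagonal of the confusion matrix. A minimal sketch with hypothetical counts (not the study's actual matrix), in which one C. megacephala image is confused as C. rufifacies:

```python
# Sketch: per-class accuracy from a confusion matrix.
# The labels below are illustrative, not the study's data.
import numpy as np
from sklearn.metrics import confusion_matrix

SPECIES = ["C. megacephala", "C. rufifacies", "L. cuprina", "M. domestica"]

# Hypothetical labels: one C. megacephala image misread as C. rufifacies.
y_true = [0] * 5 + [1] * 5 + [2] * 5 + [3] * 5
y_pred = [0] * 4 + [1] + [1] * 5 + [2] * 5 + [3] * 5

cm = confusion_matrix(y_true, y_pred)  # rows: actual, columns: predicted

# Diagonal = correct predictions; row sums = images per actual class.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(SPECIES, per_class_acc):
    print(f"{name}: {acc:.0%}")
```

The single off-diagonal count lowers only the C. megacephala row to 80% here, mirroring how the C. megacephala/C. rufifacies confusion in Fig. 5B pulls those two accuracies below 100%.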

Figure 5

The confusion matrices achieved by the AlexNet model: (A) validation dataset, (B) test dataset.

Full size image
Figure 6

Attention heatmaps of AlexNet on example images, showing the prediction accuracy (98.70–100%) of the model for classification of each fly species under different image conditions.

Full size image

The framework of the AlexNet model is shown in Fig. 7. The model consists of five convolutional layers (Conv1, Conv2, Conv3, Conv4, and Conv5) and three fully connected layers (FC6, FC7, and FC8)19. The convolutional layers extract features, and the fully connected layers then combine those features to produce the outputs. Example feature maps from all five convolutional layers are shown in Fig. 4.

Figure 7

Framework of the proposed interpretation architecture for the deep learning model, AlexNet, in this study. The AlexNet model contains eight learned layers with weights (five convolutional and three fully connected). Conv1 accepts an input image tensor (224 × 224 × 3) and performs convolution to obtain the position and strength of the input image properties, producing a 55 × 55 × 96 output tensor. Conv2 generates a 27 × 27 × 256 output tensor; Conv3 and Conv4 each generate a 13 × 13 × 384 output tensor; and Conv5 generates a 13 × 13 × 256 output tensor. FC6 is a fully connected layer (4096 units) that flattens the Conv5 output into a single vector, producing a 4096 × 1 output tensor; FC7 performs the same operation as FC6, also generating a 4096 × 1 output tensor; and FC8 is a fully connected layer that generates the 1000 × 1 prediction output, its 1000 neurons corresponding to the 1000 natural image classes.

Full size image
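The spatial sizes quoted for Fig. 7 can be checked with the standard convolution/pooling output-size formula, out = floor((in + 2·pad − kernel) / stride) + 1. The kernel sizes, strides, and paddings below follow the common AlexNet implementation (e.g. torchvision's) and are stated here as assumptions.

```python
# Sketch: verifying the 224 -> 55 -> 27 -> 13 spatial sizes in Fig. 7
# with the convolution/pooling output-size formula.
def out_size(in_size, kernel, stride, padding=0):
    """floor((in + 2*pad - kernel) / stride) + 1"""
    return (in_size + 2 * padding - kernel) // stride + 1

s = 224                       # input image: 224 x 224 x 3
s = out_size(s, 11, 4, 2)     # Conv1 (96 filters, 11x11, stride 4, pad 2)
assert s == 55                # -> 55 x 55 x 96
s = out_size(s, 3, 2)         # 3x3 max pool, stride 2
s = out_size(s, 5, 1, 2)      # Conv2 (256 filters, 5x5, pad 2)
assert s == 27                # -> 27 x 27 x 256
s = out_size(s, 3, 2)         # 3x3 max pool, stride 2
s = out_size(s, 3, 1, 1)      # Conv3/Conv4/Conv5 (3x3, pad 1) keep 13 x 13
assert s == 13
print("spatial sizes check out: 224 -> 55 -> 27 -> 13")
```

The 3×3, pad-1, stride-1 convolutions in Conv3 through Conv5 preserve the 13 × 13 grid, which is why only the channel counts change between those layers.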

Previously, CNNs have been used successfully to identify different cells or species11,12,13,14,15,16,17,18,26,27. This study also confirmed the efficiency of CNNs in identifying fly species.

Finally, we created a web application called "The Fly" using our classification model for identifying species of fly maggots; it is available at https://thefly.ai. Users can identify the species of fly maggots by uploading images of posterior spiracles, and the result is then shown with an associated probability percentage. The web application can be accessed from both desktop and mobile browsers. In terms of limitations, it was designed to identify only four species of fly maggots from images of posterior spiracles. This web application is a first step toward automatic species identification for flies in the Order Diptera, and more images of these four species and of other species must be studied in the future. In addition, the results of this study will be applied to develop an identification feature, as a microservice, for a mobile application called iParasites, which is currently available on the App Store and Google Play. Nonetheless, we emphasize that taxonomic experts remain important and critical to the development of this AI-based automatic image identification system, as noted in a previous report11.


Source: Ecology - nature.com
