
Integrating drone-borne thermal imaging with artificial intelligence to locate bird nests on agricultural land

Ethics statement

The work was carried out under ethical permit number VARELY/711/2016, issued by the regional environmental centre of Finland. Every possible effort was made to minimise disturbance.

Field system set-up

The study was performed in a farmland area in southern Finland, near Lammi Biological Station (61° N, 25° E). We selected the lapwing, a common breeder in the study area, as the study species. This ground-nesting, medium-sized farmland bird is listed by the IUCN as Near Threatened globally and as Endangered within Europe17. The main threat to the species is agricultural intensification, particularly the destruction of nests on arable land by large-scale mechanical operations such as sowing17. In Finland, as in many other countries, a fraction of lapwing nests on arable land are located by volunteers and thereby protected. However, locating these nests is highly time-consuming and challenging, and most nests are left unprotected, and likely destroyed, every spring18.

We selected nine field parcels with a total of 22 lapwing nests in 2016, and three field parcels with nine nests in 2017. Study field parcels were intensively searched for nests by experienced observers during the week preceding the start of the recording work (see below), and individual nest locations were recorded with a hand-held GPS (Garmin GPSmap 62s, position accuracy ~ 3 m). Overall, five of the field parcels were ploughed fields (with irregular substrate owing to the upper soil being turned over) and seven were unploughed fields (direct sowing or winter cereals). Flight routes covering approximately 0.5–1 ha around nests were constructed and saved in MapPilot 1.5.1 (Drones Made Easy, San Diego, USA; see Fig. 4). The drone cruised along the pre-programmed route, each flight lasting approximately 6–10 min (flight time was constrained by the drone battery). Flights were conducted between 5 and 25 May 2016 and between 16 and 18 May 2017, at altitudes of 15 or 25 m above ground level. The distance between transects was approximately 7 m for flights at 15 m altitude and 12 m for flights at 25 m altitude. We aimed to complete at least one flight over each field at each altitude (15 and 25 m) and in each period of the day (07:00–11:00 and 14:00–20:00). Overall, 54 flights were completed in 2016 and 19 in 2017. Thermal images were acquired with a FLIR Tau 2 thermal camera (336 × 256 pixels, 9 mm lens) mounted on a DJI Phantom 3 Advanced quadcopter. To ensure quality, thermal images were continuously saved during flights directly to a USB stick using a ThermalCapture unit (TeAx Technology GmbH, Wilnsdorf, Germany). Images were acquired with the camera pointing at nadir. Temperature, cloud cover, air humidity and wind speed data for each flight were retrieved afterwards from the Airdata.com service. In all cases, the incubating adult left the nest as we approached the edge of the field, before the start of the drone flight.
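As an illustration of how transect spacing relates to image footprint, the ground swath of a nadir-pointing camera can be computed from flight altitude and the lens field of view. The R sketch below assumes a nominal 35° horizontal field of view for the 9 mm lens; this value is an assumption for illustration, not a figure given in the text.

```r
# Illustrative only: ground swath of a nadir-pointing camera as a function
# of altitude, assuming a nominal 35-degree horizontal field of view
# (an assumed value, not taken from the text).
swath_width <- function(altitude_m, hfov_deg = 35) {
  2 * altitude_m * tan(hfov_deg / 2 * pi / 180)
}

swath_width(15)  # ~9.5 m, so ~7 m transect spacing gives overlapping swaths
swath_width(25)  # ~15.8 m, so ~12 m spacing gives a similar overlap
```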

Figure 4

Schematic representation of the key steps of this study, from extensive nest search at selected fields (A), to flying the drone carrying the thermal sensor along a pre-programmed route over the field (B), preparing the thermal images by extracting the coordinates of the box drawn around the nest (C), and finally applying a neural network deep learning algorithm to classify images as having or not having a nest (D). The satellite image in (A) and (B) is taken from Google maps (www.google.it/maps, map data: Google). Figure created in Microsoft Office PowerPoint 2016 (www.microsoft.com).


Deep learning algorithm for nest identification

To automatically classify images as having or not having a nest, we trained a deep learning model (more details in the extended methods in the Supplementary information) to detect nests while minimising the occurrence of false positives (hereafter false presences) and false negatives (hereafter false absences). In doing so, we chose a convolutional neural network (CNN) approach19 and the YOLOv3 training program20, as done in a similar study3.

We selected all images with nests in them (positive images, n = 1,969) from each flight, and a larger number of randomly selected images without a nest (negative images, n = 3,469). We manually labelled all positive images with bounding boxes using ImageJ 1.8.021; the box coordinates were then indexed by the corresponding image names for use by YOLOv3. Next, we split the set of positive images into a training set and a test set by randomly selecting 70% of the flights (and all associated images) for training, and using the images from the remaining 30% of flights for testing. Selection was done at the flight rather than the image level to prevent similar images from the same flight being used in both training and testing, which would positively bias the testing results.
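The following R sketch illustrates these two preparation steps under assumed data structures (the object and column names positive_images and flight_id are hypothetical): converting a pixel-space bounding box to the normalised label format YOLOv3 expects, and splitting at the flight level.

```r
# Convert a pixel-space bounding box to the normalised
# "class x_center y_center width height" label line expected by YOLOv3.
to_yolo_label <- function(xmin, ymin, xmax, ymax, img_w = 336, img_h = 256) {
  sprintf("0 %.6f %.6f %.6f %.6f",
          (xmin + xmax) / 2 / img_w,  # x centre, normalised
          (ymin + ymax) / 2 / img_h,  # y centre, normalised
          (xmax - xmin) / img_w,      # box width, normalised
          (ymax - ymin) / img_h)      # box height, normalised
}

# Split at the flight level so that images from the same flight never occur
# in both the training and the test set.
set.seed(1)
flights       <- unique(positive_images$flight_id)
train_flights <- sample(flights, round(0.7 * length(flights)))
train_set     <- subset(positive_images, flight_id %in% train_flights)
test_set      <- subset(positive_images, !(flight_id %in% train_flights))
```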

Training was executed for 800 epochs, where an epoch is defined as one full pass of the entire dataset forward and backward through the neural network (training program available at http://github.com/ultralytics/yolov3). After training was completed, a weights file was generated, containing the final value stored by each neuron as a series of floating-point numbers. We then developed a program that read the YOLOv3 configuration file and constructed the network according to the specified architecture in a PyTorch implementation. We developed our own program to gain the flexibility to extend functionality without having to work through unfamiliar code. The program read the trained weights file in series and assigned the values to the proper neurons, yielding a deep learning model built specifically for nest detection. This model was then used to validate and test all the trained weight files in order to select the best model.

Through testing, an output value was produced representing the confidence that an image included a nest. Values close to zero indicate with high confidence that an image contained no nest, while values close to one indicate with high confidence that it included a nest. Where multiple spots appeared as possible nests, the overall confidence for that image was taken from the spot with the highest confidence value.
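This aggregation rule reduces to taking a per-image maximum; a minimal R sketch, assuming a detections table with one row per candidate spot (image_id and conf are hypothetical column names):

```r
# Image-level confidence = maximum confidence over all candidate spots
# detected in that image.
image_conf <- aggregate(conf ~ image_id, data = detections, FUN = max)
```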

Performance of the neural network in discriminating images with and without a nest was assessed by calculating four standard evaluation metrics for these types of systems: precision and recall (the propensity to avoid false positives and false negatives, respectively), accuracy (overall discrimination accuracy) and F1 score (the harmonic mean of precision and recall; more details in the extended methods in the Supplementary information). Performance was evaluated separately for the training and testing parts of the system.
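For reference, the four metrics follow directly from a confusion matrix obtained by thresholding the image-level confidence; the 0.5 cut-off and the truth vector below are assumptions for illustration.

```r
# truth: 0/1 ground-truth labels per image (assumed); pred: thresholded output.
pred <- as.integer(image_conf$conf >= 0.5)

tp <- sum(pred == 1 & truth == 1)  # true positives
fp <- sum(pred == 1 & truth == 0)  # false positives (false presences)
fn <- sum(pred == 0 & truth == 1)  # false negatives (false absences)
tn <- sum(pred == 0 & truth == 0)  # true negatives

precision <- tp / (tp + fp)
recall    <- tp / (tp + fn)
accuracy  <- (tp + tn) / (tp + fp + fn + tn)
f1        <- 2 * precision * recall / (precision + recall)
```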

Statistical analyses

We quantified the effect of environmental factors on the performance of the deep learning algorithm. We ran two separate analyses: one focused on factors affecting the occurrence of false presences (i.e. the algorithm erroneously identifying an image without a nest as having a nest), and one on factors affecting the occurrence of false absences (the algorithm erroneously identifying an image with a nest as not having one). As the response variable of the former model we used the confidence given by the algorithm for all test images known to have no nest in them (values close to one indicate a high probability of a false presence). This model used images without a nest from all available flights (n = 3,469 images from 73 flights), as these were not used for training. The response variable of the false absence model was the confidence that test images with a nest (n = 497 images from the 30% of flights withheld from training, n = 14 flights) contained no nest (values close to one indicate a high probability of a false absence). Because the response variable is a proportion, we used beta regression with a logit link. In each model we also included flight identity as a random factor to account for pseudo-replication stemming from multiple images per flight.

Models were fitted using the glmmTMB package22 in R version 3.6.123. The covariates considered were: air temperature (hereafter temperature), cloud cover, wind speed, air humidity (hereafter humidity), drone height (15 or 25 m above ground level) and substrate type (two classes: ploughed or unploughed). These variables are potentially relevant to nest visibility in thermal images, and consequently to the performance of the deep learning algorithm. Temperature, humidity, cloud cover and wind speed may reduce the thermal contrast between a (warm) nest and the surrounding landscape, and may create a heterogeneous thermal landscape that hampers nest detection24. Moreover, ploughing alters the physical structure of the substrate, making it rougher and more heterogeneous, with many physical objects that may resemble a nest in shape, thereby affecting nest detection accuracy. Drone height may affect detection accuracy because a nest appears smaller and less sharp at higher altitudes.
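A minimal glmmTMB sketch of the false presence model as described above; the data frame and column names are assumptions for illustration, not the authors' code:

```r
library(glmmTMB)

# Beta regression with logit link and a random intercept per flight.
# "negatives" is an assumed data frame of test images without a nest,
# with the algorithm's confidence as the response. Humidity is omitted
# here; see the collinearity screening described below.
false_presence_mod <- glmmTMB(
  confidence ~ temperature + cloud_cover + wind_speed + height + substrate +
    (1 | flight_id),
  family    = beta_family(link = "logit"),
  data      = negatives,
  na.action = na.fail  # needed later by MuMIn::dredge()
)
summary(false_presence_mod)
```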

Prior to the analyses, we checked for outliers and ran variance inflation factor (VIF) analyses to assess collinearity between covariates. For the false presence model, humidity had a high VIF of 3.9, being strongly correlated with temperature (R = −0.7), and was excluded from subsequent analyses. For the false absence model, humidity and wind speed had high VIFs (28.8 and 2.3; both correlated with temperature, R = −0.9 and 0.7, respectively) and were excluded. The remaining covariates had VIF < 2, indicating low collinearity25. We then constructed a full model for each of the two response variables and performed model selection and multimodel averaging over the set of best-supported models (the 95% confidence set based on Akaike weights26) using the MuMIn package27.
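The screening and averaging steps might look as follows in R; check_collinearity() from the performance package is one possible VIF implementation (an assumption, as the paper does not name the function it used):

```r
library(MuMIn)
library(performance)

# One possible VIF check on the full model; covariates with high VIF
# are then dropped and the full model refitted.
check_collinearity(false_presence_mod)

# All-subsets model selection on the full model (fitted with
# na.action = na.fail, as dredge() requires), then averaging over the
# 95% confidence set of models ranked by Akaike weights.
cand <- dredge(false_presence_mod)
avg  <- model.avg(cand, subset = cumsum(weight) <= 0.95)
summary(avg)
```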

