
Neoptile feathers contribute to outline concealment of precocial chicks

Experiment 1: proof of principle

As a proof of principle, we designed the first experiment to test whether appendages can help to conceal an object's outline. Using Adobe InDesign CS6 version 8.030, we created an image of a uniformly light grey circular object with a circumference of 2950 pixels (px)/250.0 mm and a radius of 470 px/39.8 mm on a dark grey background. The initial setup had no appendages added to the outline (Fig. 3a, '0'). We then added object-coloured appendages (lines of 1 Pt/4 px/0.4 mm thickness and 118 px/10.0 mm length), resembling protruding neoptile chick feathers, orthogonally to the object outline at regular intervals ('Basic Scenario', Fig. 3a). The first image with appendages had 32 appendages added to the outline (Fig. 3a, '32'). We then doubled the number of appendages stepwise, creating ever more densely spaced appendages, until the extended outline was completely filled (Fig. 3a, 'full circle'). For the vision of a simulated predator in the Basic Scenario, we used the spatial acuity of humans (Homo sapiens, 72 cycles per degree, cpd)36,45,46. Full details of the parameters are provided in Supplementary Table S1 (a–g).
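For illustration, the geometry of the Basic Scenario can also be reproduced programmatically. The following R sketch uses the dimensions given above (radius 470 px, appendage length 118 px, 32 appendages doubled stepwise); the grey levels, canvas size and the final 'full circle' stage are illustrative, and this is not the original InDesign workflow.

draw_stimulus <- function(n_appendages, radius = 470, app_length = 118,
                          app_thickness = 4, canvas = 1500) {
  # dark grey background with a uniformly light grey circular object
  plot.new()
  plot.window(xlim = c(0, canvas), ylim = c(0, canvas), asp = 1)
  rect(0, 0, canvas, canvas, col = grey(0.3), border = NA)
  centre <- canvas / 2
  symbols(centre, centre, circles = radius, inches = FALSE, add = TRUE,
          bg = grey(0.7), fg = grey(0.7))
  if (n_appendages > 0) {
    # object-coloured lines placed orthogonally to the outline at regular intervals
    angles <- seq(0, 2 * pi, length.out = n_appendages + 1)[-1]
    segments(centre + radius * cos(angles), centre + radius * sin(angles),
             centre + (radius + app_length) * cos(angles),
             centre + (radius + app_length) * sin(angles),
             col = grey(0.7), lwd = app_thickness)  # lwd only approximates px thickness
  }
}

# stages of the Basic Scenario: 0, 32, then doubling (final count illustrative)
for (n in c(0, 32, 64, 128, 256, 512)) draw_stimulus(n)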

Figure 3

(a) Basic Scenario: seven stages of the artificial chick setup with a varying number of thin, non-transparent appendages, all of the same length. (b) Scenario 1: varying appendage thickness applied to the Basic Scenario. (c) Scenario 2: varying appendage transparency applied to the Basic Scenario. (d) Scenario 3: varying appendage length heterogeneity applied to the Basic Scenario. (e) Scenario 4: varying background complexity with chessboard backgrounds. (f) Scenario 5: high, medium and low spatial acuity applied to the Basic Scenario. (a–f) The analysed region of interest (ROI) is highlighted in red for clarification only. The figure was produced in Adobe Photoshop29 and InDesign30.


To further explore the mechanism, we altered the appendage characteristics, the background and the spatial acuity of the predator. First, we increased appendage thickness to 2 Pt/8 px/0.7 mm (Scenario 1a) and 3 Pt/12 px/1.1 mm (Scenario 1b), resulting in decreased inter-appendage intervals (Fig. 3b and Supplementary Table S1, h–u). Second, we changed appendage transparency to 25% (Scenario 2a) and 50% (Scenario 2b) (Fig. 3c and Supplementary Table S1, v–ai). Third, we varied the appendage length heterogeneity: in Scenario 3a, half of the appendages had 50% of the original length; in Scenario 3b, half had 25% and one quarter had 50% of the original length (Fig. 3d and Supplementary Table S1, aj–aw). Fourth, we investigated the effect of background complexity on the detectability of the outline, using a chessboard background with large squares (346 px/29.3 mm, Scenario 4a) or small squares (86 px/7.3 mm, Scenario 4b) (Fig. 3e and Supplementary Table S1, ax–bk). Fifth, we altered the spatial acuity to test whether and how the visual systems of different predators would affect detectability. We simulated the spatial acuity of a corvid predator (30 cpd, Scenario 5a) and a canid predator (10 cpd, Scenario 5b) (Fig. 3f and Supplementary Table S1, bl–by), the two most common predators of ground-nesting plovers16,47,48. This range also covered other potential predators (Supplementary Table S2).

We did not account for differences in colour vision among predators, as the setup consists mostly of greyscale images that differ predominantly in luminance. Note that in many animals, visual acuity is greater for achromatic than for chromatic stimuli34,49.

We conducted visual modelling and visual analysis using the Quantitative Colour Pattern Analysis (QCPA) framework27 integrated into the Multispectral Image Analysis and Calibration (MICA) toolbox50 for ImageJ version 1.52a51. We converted the generated images into multispectral images containing the red, green and blue channels in a stack and further transformed them into 32-bit/channel cone-catch images based on the human visual system, as required by the framework. To create the luminance channel, we averaged the longwave and mediumwave channels, which is thought to be representative of human vision52. We modelled spatial acuity with Gaussian Acuity Control at a viewing distance of 1300 mm and a minimum resolvable angle (MRA) of 0.01389°. To increase biological accuracy, we applied a Receptor Noise Limited (RNL) filter, which reduces noise and reconstructs edges in the image. The RNL filter used the Weber fractions "Human 0.05" provided by the framework (longwave 0.05, mediumwave 0.07071, shortwave 0.1657), luminance 0.1, 5 iterations, a radius of 5 pixels and a falloff of 3 pixels, as specified in van den Berg et al.27 (Supplementary Fig. S1).
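For reference, the MRA corresponds to the reciprocal of the spatial acuity; the short R calculation below (a sketch assuming MRA = 1/acuity, expressed in degrees) also gives the smallest resolvable spatial period at the 1300 mm viewing distance for the three acuities used here.

acuity_cpd <- c(human = 72, corvid = 30, canid = 10)   # Basic Scenario, 5a, 5b
mra_deg <- 1 / acuity_cpd                              # human: ~0.01389 degrees
viewing_distance_mm <- 1300

# smallest resolvable spatial period on the image plane at that distance
resolvable_mm <- 2 * viewing_distance_mm * tan((mra_deg / 2) * pi / 180)
round(resolvable_mm, 3)   # human ~0.315 mm, corvid ~0.756 mm, canid ~2.269 mm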

To test the detectability of the outline, we used Local Edge Intensity Analysis (LEIA)27, which is conceptually similar to boundary strength analysis34. Boundary strength analysis requires an image with clearly delineated (clustered) colour and luminance pattern elements; however, a large amount of subthreshold detail, which may still be perceived by the viewer, is lost in the clustering process. LEIA has the advantage of not requiring such a clustered input and can therefore be applied directly to RNL-filtered images. LEIA measures the edge intensity (i.e. the luminance contrast) locally at each position in the image. The output image displays ΔS values in a 32-bit stack of four slices, where each slice shows the values measured along a different orientation (horizontal, vertical and the two diagonals; for more details, see van den Berg et al.27).
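For intuition only, local edge intensity along four orientations can be sketched as below; this is not the QCPA/LEIA implementation, and LEIA reports ΔS in receptor-noise-scaled units rather than the plain absolute luminance differences used here.

directional_contrast <- function(lum) {
  # lum: numeric matrix of pixel luminance; returns a stack of four matrices,
  # one per orientation, with contrasts for the interior pixels
  nr <- nrow(lum); nc <- ncol(lum)
  out <- array(NA_real_, dim = c(nr, nc, 4),
               dimnames = list(NULL, NULL,
                               c("horizontal", "vertical", "diag1", "diag2")))
  i <- 2:(nr - 1)   # interior rows
  j <- 2:(nc - 1)   # interior columns
  out[i, j, "horizontal"] <- abs(lum[i, j + 1] - lum[i, j - 1])
  out[i, j, "vertical"]   <- abs(lum[i + 1, j] - lum[i - 1, j])
  out[i, j, "diag1"]      <- abs(lum[i + 1, j + 1] - lum[i - 1, j - 1])
  out[i, j, "diag2"]      <- abs(lum[i + 1, j - 1] - lum[i - 1, j + 1])
  out
}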

We ran LEIA on the chosen region of interest (ROI) with the same Weber fractions used for the RNL filter. The ROI was a 180-pixel-wide band covering the area of the appendages extended by 30 pixels towards the object interior and by 30 pixels towards the background (Fig. 3a). We log-transformed the ΔS values, as recommended for natural scenes53, to make the results comparable to the natural background images used in Experiment 2 (see below). To test whether the size of the ROI affected our results, we ran an additional analysis using a 1500 × 1500 pixel region surrounding the object as the ROI, which included a larger area of the background and the entire object (Supplementary Fig. S2).

We extracted the luminance ΔS values from the four slices of the output image stack in ImageJ and stored them in separate matrices for further analysis in R version 3.5.328. ImageJ assigned a value of zero to pixels outside the chosen ROI, so we first discarded all values of zero. We then set to zero all negative values, which arose as artefacts in areas without any edges, to make them biologically meaningful. Finally, we computed the parallel maximum (R function pmax()) of the four interrelated direction matrices and transferred these values to a new matrix.
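These post-processing steps can be summarised in a short R function; slice_h, slice_v, slice_d1 and slice_d2 are illustrative names for the four exported luminance ΔS matrices.

combine_leia_slices <- function(slice_h, slice_v, slice_d1, slice_d2) {
  slices <- list(slice_h, slice_v, slice_d1, slice_d2)
  # pixels outside the ROI are exported as exact zeros; discard them
  slices <- lapply(slices, function(m) { m[m == 0] <- NA; m })
  # negative values are artefacts in edge-free areas; set them to zero
  slices <- lapply(slices, function(m) { m[!is.na(m) & m < 0] <- 0; m })
  # parallel (element-wise) maximum across the four orientations
  do.call(pmax, c(slices, list(na.rm = TRUE)))
}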

High luminance and colour contrasts imply high conspicuousness34. Consequently, lower luminance contrast leads to lower conspicuousness and therefore better camouflage. As the outline is an important cue for predators locating and identifying a prey item7, we assumed that low contrasts along the outline of an object are particularly important for camouflage. Thus, a reduction of the edge intensity in the object outline by the appendages indicates a camouflage improvement. To test whether the object outline became less detectable, we compared the edge intensity of the outline pixels in the basic scenario without appendages (Supplementary Table S1, a) with that of corresponding pixels from the other scenarios. The outline pixels were characterised by high edge intensity and constituted a prominent peak, comprising 1.59% of all pixels in the analysis focused on the contour region (see "Results", Fig. 1a). For all scenarios, we calculated the mean edge intensity of these high edge intensity (HEI) pixels and examined how it changed with parameter variation. Unless otherwise stated, we used R28 to produce graphs and panels.
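One possible reading of this HEI summary is sketched below: the threshold is taken as the top 1.59% of contour-region pixels in the no-appendage image and then applied to each scenario; all object names are illustrative.

# edge_no_appendages: edge intensities of the scenario without appendages
# (Supplementary Table S1, a); edge_scenario: edge intensities of any other scenario
hei_threshold <- function(edge_no_appendages) {
  quantile(edge_no_appendages, probs = 1 - 0.0159, na.rm = TRUE)
}
mean_hei <- function(edge_scenario, threshold) {
  mean(edge_scenario[edge_scenario >= threshold], na.rm = TRUE)
}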

As an alternative mechanism, we tested whether appendages create a transition zone of intermediate luminance around the object (Mean Luminance Comparison (MLC), Supplementary material). We calculated the mean luminance of the object interior up to the border (object region), the area covered by appendages (appendage region) and the background (background region). We predicted that the appendage region would be characterised by a luminance intermediate between those of the object and the background and would therefore provide a luminance transition zone that conceals the object outline.
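A minimal sketch of this comparison, assuming lum is a luminance matrix and region labels each pixel as "object", "appendage" or "background" (both names are illustrative):

mlc <- function(lum, region) {
  # mean luminance per labelled region
  m <- tapply(lum, region, mean, na.rm = TRUE)
  # prediction: the appendage region lies between object and background
  transition <- m["appendage"] > min(m[c("object", "background")]) &&
                m["appendage"] < max(m[c("object", "background")])
  list(means = m, transition_zone = unname(transition))
}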

Experiment 2: chick photographs

Using pictures of young snowy plover chicks hiding when approached by a simulated predator, we tested whether protruding neoptile feathers help to conceal the chicks' outline and thereby improve their camouflage.

We studied snowy plovers in their natural environment at Bahía de Ceuta, Sinaloa, Mexico. Fieldwork permits were granted by the Secretaría de Medio Ambiente y Recursos Naturales (SEMARNAT), and all field activities were performed in accordance with the ethical guidelines approved by SEMARNAT. The breeding site consists of sparsely vegetated salt flats surrounded by mangroves54. The predators of chicks are not well described but are likely similar to the egg predator community, which includes several mammalian predators such as raccoon, opossum, coyote and bobcat, avian predators such as the crested caracara Caracara cheriway, and reptiles17. General field methodology is provided elsewhere55,56. In 2017, we took photographs of young (one to three days old) chicks that had already left the nest scrape and were hiding on the ground. To photograph the chicks, two observers approached free-roaming families with two mobile hides within the period from one hour after sunrise to one hour before sunset. At a distance of 100–200 m, one observer acted as the 'predator', left the hide and openly approached the brood while the second observer kept watching the chicks. The chicks responded by crouching to the ground and staying motionless while the parents gave alarm calls. The second observer directed the 'predator' to the approximate hiding place. When searching for the chicks, we took great care to minimise the number of steps to avoid modifying the ground with our tracks.

Once the first chick had been found, the second observer joined the 'predator' and took the chick photographs. We used a Nikon D7000 camera converted to full spectrum including the UV range (Optic Makario GmbH, Germany) and a Nikkor macro 105 mm lens that transmits light at short wavelengths. This equipment was chosen because calibration data were available for the combination50. Each hiding background was photographed with and without the chick, using a UV pass filter for the UV spectrum and a UV/IR blocking filter ("IR-Neutralisationsfilter NG", Optic Makario GmbH, Germany) for the visible spectrum. The camera was set to an aperture of f/8 and ISO 400, and the pictures were stored in RAW file format. We used exposure bracketing to produce three images, ensuring that at least one picture was neither over- nor underexposed. A 25% reflectance standard (Zenith Polymer Diffuse Reflectance Standard provided by SPHEREOPTICS, Germany) placed in the corner of each picture enabled subsequent standardisation of light conditions.

In total, we took pictures of 32 chicks from 15 families. For 21 chicks we obtained photographs suitable for further analysis, with an unobstructed view of the entire chick and only one chick per photograph. Of these, we randomly selected pictures of 15 chicks. Unfortunately, it was not possible to properly align the visible-spectrum and UV pictures in ImageJ, as either the chick or the camera moved slightly during the break between changing filters for the two settings. We therefore restricted our analyses to human colour vision and discarded the UV pictures.

In each picture, we manually selected the chick outline and the feather-boundary as a basis for the ROIs (Fig. 2a–c). The chick outline included the bill, legs, rings and all areas densely covered by feathers without background shining through. We then marked the feather-boundary, i.e. the smoothed line created by the protruding neoptile feather tips. In the next step, we cropped each chick either at the feather-boundary (with protruding feathers) or at the chick outline (without protruding feathers) and inserted it into a uniform or the natural background. First, we cropped the chick without protruding feathers and transferred it onto a uniform black background. Second, we cropped the chick including all feathers and inserted it at exactly the same hiding spot on the picture of the natural background (Fig. 2b). Third, we cropped the chick excluding the protruding feathers and transferred it onto the natural background (Fig. 2c).

We then proceeded with LEIA following the protocol of Experiment 1, with the following changes. The selected ROI was again the contour region, ranging from the chick outline extended by 30 pixels towards the chick interior to the feather-boundary extended by 30 pixels towards the outside. We excluded all areas of the ROI that showed a shadow of the chick, as the chick's shadow was missing on the empty natural background images to which the cropped chicks were transferred (Fig. 2a–c). We used the images of the cropped chicks on the black background to determine the threshold of the HEI pixels for each chick separately, following the protocol of Experiment 1. For each cropped chick transferred to the picture with the natural background, we compared the mean edge intensity of the HEI pixels provided by LEIA with and without protruding feathers (Fig. 2b,c) using a two-sided paired t-test.
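The final comparison reduces to a paired t-test across chicks; a minimal sketch, assuming with_feathers and without_feathers are per-chick vectors (n = 15) of the mean HEI edge intensity from the natural-background images (names are illustrative):

t.test(with_feathers, without_feathers, paired = TRUE, alternative = "two.sided")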

We also calculated mean luminance differences for chick photographs. Details for this MLC are given in the supplementary material.

