Sticky Pi is a high-frequency smart trap that enables the study of insect circadian activity under natural conditions [1]
Quentin Geissmann, Department of Microbiology and Immunology, The University of British Columbia, Vancouver, British Columbia; Michael Smith Laboratories; Faculty of Land and Food Systems; Unceded xʷməθkʷəy̓əm (Musqueam) Territory
Date: 2022-07
In the face of severe environmental crises that threaten insect biodiversity, new technologies are imperative to monitor both the identity and ecology of insect species. Traditionally, insect surveys rely on manual collection of traps, which provide abundance data but mask the large intra- and interday variations in insect activity, an important facet of their ecology. Although laboratory studies have shown that circadian processes are central to insects’ biological functions, from feeding to reproduction, we lack the high-frequency monitoring tools to study insect circadian biology in the field. To address these issues, we developed the Sticky Pi, a novel, autonomous, open-source insect trap that acquires images of sticky cards every 20 minutes. Using custom deep learning algorithms, we automatically and accurately scored where, when, and which insects were captured. First, we validated our device in controlled laboratory conditions with a classic chronobiological model organism, Drosophila melanogaster. Then, we deployed an array of Sticky Pis in the field to characterise the daily activity of an agricultural pest, Drosophila suzukii, and its parasitoid wasps. Finally, we demonstrate the wide scope of our smart trap by describing the sympatric arrangement of insect temporal niches in a community, without targeting particular taxa a priori. Together, the automatic identification and high sampling rate of our tool provide biologists with unique data that impacts research far beyond chronobiology, with applications to biodiversity monitoring and pest control as well as fundamental implications for phenology, behavioural ecology, and ecophysiology. We released the Sticky Pi project as an open community resource on
https://doc.sticky-pi.com .
Funding: Q.G. was funded by the International Human Frontier Science Program Organization (LT000325/2019). This research (funding to P.K.A., C.H.H. and J.C.) is part of Organic Science Cluster 3, led by the Organic Federation of Canada in collaboration with the Organic Agriculture Centre of Canada at Dalhousie University, supported by Agriculture and Agri-Food Canada’s Canadian Agricultural Partnership - AgriScience Program. P.K.A. was supported by funding from Agriculture and Agri-Food Canada. This work was also supported by a Seeding Food Innovation grant from George Weston Ltd. to C.H.H. and J.C., and a Canada Research Chair award to C.H.H. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Here, we present and validate the Sticky Pi, an open-source generalist automatic trap to study insect chronobiology in the field. Our unique framework both automatises insect surveying and adds a novel temporal and behavioural dimension to the study of biodiversity. This work paves the way for insect community chronoecology: the organisation, interaction, and diversity of organisms’ biological rhythms within an ecological community.
While chronobiology requires a physiological and behavioural time scale (i.e., seconds to hours), insect surveys have primarily focused on the phenological scale (i.e., days to months). Compared to bird and mammal studies, where methodological breakthroughs in animal tracking devices have enabled the ecological study of the timing of behaviours, similar tools for invertebrates are lacking [11] or limited to specific cases [12–14]. Promisingly, portable electronics and machine learning are beginning to reach insect ecology and monitoring [15]. For example, "smart traps" can now automatise traditional insect capture and identification [16]. In particular, camera-based traps can passively monitor insects and use deep learning to identify multiple species. However, such tools are often designed for applications on a single focal species and, due to the large amount of data they generate as well as the complexity of the downstream analysis, camera-based traps have typically been limited to daily monitoring and have not previously been used to study insect circadian behaviours.
The emerging field of chronoecology has begun to integrate chronobiological and ecological questions to reveal important phenomena [4,5]. For example, certain prey can respond to predators by altering their diel activity [6], parasites may manipulate their host’s clock to increase their transmission [7], foraging behaviours are guided by the circadian clock [8], and, over evolutionary timescales, differences in diel activities may drive speciation [9]. However, because nearly all studies to date have been conducted on isolated individuals in laboratory microcosms, the ecological and evolutionary implications of circadian clocks in natural environments remain largely unknown [10].
In order to fully characterise ecological communities, we must go beyond mere species inventories and integrate functional aspects such as interspecific interactions and organisms’ behaviours through space and time [1,2]. Chronobiology, the study of biological rhythms, has shown that circadian (i.e., internal) clocks play ubiquitous and pivotal physiological roles, and that the daily timing of most behaviours matters enormously [3]. Therefore, understanding not only which species are present, but also when they are active adds a crucial, functional, layer to community ecology.
We asked to what extent the presence of previously captured insects on a trap impacted its subsequent captures (e.g., whether a trap became saturated with insects). We first observed that the rate at which insects accumulated on the traps (all insects, all traps) did not appear to change over time (S6A Fig). Then, to statistically address this question for individual taxa, we reasoned that, if the capture rate was constant, the number of insects captured in the first 3 days should be 50% of the total over the 6 full days, i.e., N[0,3] days/N[0,6] days = 1/2. Thus, we tested whether the final, total number of insects (from all taxa) explained the proportion N[0,3] days/N[0,6] days of a given taxon captured in the first half of the experiment. We found that the proportion of insects captured in the first half of each trial was not different from 1/2 (intercept) and that the total number of insects captured did not explain a taxon’s capture rate (slope) (S6B Fig; linear models, p-value > 0.05 for all taxa, t tests on model coefficients).
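As an illustration of this test, the following Python sketch (not the original analysis code; the data frame and its column names are hypothetical) regresses each taxon's first-half proportion on the total number of insects per trap and performs t tests on the intercept (against 1/2) and the slope (against 0):

```python
# Minimal sketch of the saturation test described above (assumed column names).
import pandas as pd
import statsmodels.api as sm

def saturation_test(df: pd.DataFrame) -> dict:
    """df: one row per (trap, taxon) with columns 'taxon', 'n_first_half'
    (captures in days 0-3), 'n_total_taxon' (captures in days 0-6), and
    'n_total_all_insects' (all insects on that trap after 6 days)."""
    results = {}
    for taxon, sub in df.groupby("taxon"):
        proportion = sub["n_first_half"] / sub["n_total_taxon"]
        y = proportion - 0.5                      # centre on the null value 1/2
        X = sm.add_constant(sub["n_total_all_insects"])
        fit = sm.OLS(y, X).fit()
        results[taxon] = fit.pvalues              # t tests: intercept vs 1/2, slope vs 0
    return results
```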
In order to assess the activity pattern of a diverse community, we deployed 10 Sticky Pis in a raspberry field for 4 weeks, replacing sticky cards weekly. This figure shows a subset of abundant insect taxa that were detected by our algorithm with high precision (see Supporting information for the full dataset). (A) Average capture rate over the time of day (note that time was transformed to compensate for changes in day length and onset—i.e., Warped ZT: 0 h and 12 h represent sunrise and sunset, respectively. See Methods section). (B) MDS of the populations shown in A. Similarity is based on the Pearson correlation between the average hourly activity of any 2 populations. Small points are individual bootstrap replicates, and ellipses are 95% confidence intervals (see Methods section). Insect taxa partition according to their temporal activity pattern (e.g., nocturnal, diurnal, or crepuscular). Error bars in A show standard errors across replicates (device×week). Individual facets in A were manually laid out to match the topology of B. The underlying data for this figure can be found on figshare [20]. MDS, multidimensional scaling; ZT, Zeitgeber time.
Berry fields are inhabited by a variety of insects for which we aimed to capture proof-of-concept community chronoecological data. In a separate trial, we placed 10 Sticky Pis in a raspberry field and monitored the average daily capture rate of 8 selected taxa (Fig 6A) over 4 weeks—we selected these 8 taxa based on the number of individuals, performance of the classifier (Fig 3), and taxonomic distinctness (S4 Fig shows the other classified taxa). We then defined a dissimilarity score and applied multidimensional scaling (MDS) to represent temporal niche proximity in 2 dimensions (see Methods section). We show that multiple taxa can be monitored simultaneously and statistically partitioned according to their temporal niche (Fig 6B). Specifically, as shown in Fig 6A, sweat bees (Lasioglossum laevissimum), large flies (Calyptratae), and hoverflies (Syrphidae) show a clear diurnal activity pattern with a capture peak at solar noon (Warped Zeitgeber time [WZT] = 6 h, see Methods section for WZT). Sciaridae gnats were also diurnal, but their capture rate was skewed towards the afternoon, with a peak around WZT = 7 h. The Typhlocybinae leafhopper was vespertine, with a single sharp activity peak at sunset (WZT = 11 h). The Psychodidae were crepuscular, exhibiting 2 peaks of activity, at dusk and dawn. Both mosquitoes (Culicidae) and moths (Lepidoptera) were nocturnal.
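The sketch below illustrates this kind of correlation-based ordination in Python (an illustration only: the paper defines its own dissimilarity score and bootstrap procedure; here 1 minus the Pearson correlation is used, and the input matrix `hourly` is assumed to hold one row of average hourly capture rates per taxon):

```python
import numpy as np
from sklearn.manifold import MDS

def temporal_niche_mds(hourly: np.ndarray, random_state: int = 0) -> np.ndarray:
    """hourly: (n_taxa, 24) array of average capture rates per Warped ZT hour.
    Returns a 2D embedding in which taxa with similar diel activity are close."""
    corr = np.corrcoef(hourly)          # pairwise Pearson correlation between taxa
    dissimilarity = 1.0 - corr          # simple correlation-based dissimilarity
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=random_state)
    return mds.fit_transform(dissimilarity)
```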
Our results corroborate a distinctive crepuscular activity pattern for male D. suzukii and other putative drosophilids. For example, 68.0% (CI 95% = [63.9, 71.3], 10,000 bootstrap replicates) of D. suzukii and 57.8% (CI 95% = [53.2, 61.6], 10,000 bootstrap replicates) of the other drosophilid captures occurred either in the 4 hours around dusk (WZT ∈ [10, 14] h) or dawn (WZT < 2 h or WZT > 22 h); under a time-uniform capture null hypothesis, we would expect only 33% of captures in these 8 hours. In contrast, Figitidae wasps were exclusively diurnal, with 83.0% (CI 95% = [79.9, 85.4], 10,000 bootstrap replicates) of all captures occurring during the day (WZT < 12 h), where we would expect only 50% by chance.
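For illustration, the proportion of crepuscular captures and its bootstrap confidence interval could be computed as in the following sketch (not the original code; it assumes an array of capture times expressed in Warped ZT hours for a single taxon):

```python
import numpy as np

def crepuscular_proportion_ci(wzt_hours: np.ndarray, n_boot: int = 10_000, seed: int = 0):
    """wzt_hours: capture times (Warped ZT, hours) for one taxon.
    Returns the observed proportion of captures within the 4 h around dusk
    (10 <= WZT <= 14) or dawn (WZT < 2 or WZT > 22), with a bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    in_window = ((wzt_hours >= 10) & (wzt_hours <= 14)) | (wzt_hours < 2) | (wzt_hours > 22)
    observed = in_window.mean()
    boot = np.array([rng.choice(in_window, size=in_window.size, replace=True).mean()
                     for _ in range(n_boot)])
    low, high = np.percentile(boot, [2.5, 97.5])
    # Under a time-uniform null hypothesis, the expected proportion is 8/24, i.e., about 33%.
    return observed, (low, high)
```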
We deployed 10 Sticky Pis in a blackberry field for 7 weeks and attached apple cider vinegar baits to half of them (baited in blue versus unbaited control in red). This figure specifically shows male Drosophila suzukii, the other Drosophilidae flies, and the Figitidae wasps. (A) Capture rate over time, averaged per day, showing the seasonal occurrence of insect populations. (B) Average capture rate over the time of day (note that time was transformed to compensate for changes in day length and onset—i.e., Warped ZT: 0 h and 12 h represent sunrise and sunset, respectively, see Methods section). Both male D. suzukii and the other drosophilids were trapped predominantly on the baited devices. Both populations exhibit crepuscular activity. In contrast, Figitidae wasps have a diurnal activity pattern and are unaffected by the bait. Error bars show standard errors across replicates (device×week). The underlying data for this figure can be found on figshare [20]. ZT, Zeitgeber time.
To test the potential of the Sticky Pis to monitor wild populations of free-moving insects in the field, we deployed 10 traps in a blackberry field inhabited by the well-studied and important pest species D. suzukii (see Methods section). Like D. melanogaster, D. suzukii has been characterised as crepuscular both in the laboratory [21] and, with manual observations, in the field [22]. Since capture rates can be very low without attractants [22], we baited half (5) of our traps with apple cider vinegar (see Methods section). In addition to D. suzukii, we wanted to simultaneously describe the activity of lesser-known species in the same community. In particular, D. suzukii and other closely related Drosophila are attacked by parasitoid wasps [Hymenoptera: Figitidae], 2 of which (Leptopilina japonica and Ganaspis brasiliensis) have recently arrived in our study region [23]. Their diel activity has not yet been described. In Fig 5, we show the capture rate of male D. suzukii, other putative Drosophilidae, and parasitoid wasps throughout the 7-week trial (Fig 5A) and throughout an average day (Fig 5B).
As also hypothesised, the fly populations held in constant light (LL) showed no detectable behavioural rhythm and had a constant average capture rate of 1.6 h−1 (SD = 0.62) (Fig 4D and 4F). The average autocorrelation with a 24-h lag for the 6 LL series was 0.03 and was not significantly different from zero (p-value > 0.24, t test), indicating the absence of a detectable 24-h rhythm. In contrast, the 5 series of the LD internal control had a large and significant autocorrelation value of 0.42 (p-value < 10−4, t test). Collectively, these observations indicate that Sticky Pis have the potential to capture circadian behaviour in a free-flying insect population.
Vinegar flies, Drosophila melanogaster, were held in a large cage with a Sticky Pi. We conducted 2 experiments to show the effect of constant light (Light:Light; red; A, C, E) and constant dark (Dark:Dark; blue; B, D, F) light regimes on capture rate. Each was compared to a control population that remained in the entrainment conditions: Light:Dark, 12:12 h cycles (black). (A, B) Cumulative number of insects captured over time. Columns of the panels correspond to independent full replicates. We used 2 devices per condition in each full replicate. (C, D) Capture rates over circadian time. As expected, capture rates in LD and DD show a clear crepuscular activity, but there is no activity peak in constant light. (E, F) Autocorrelation (ACF) of capture rates. Each thin line represents a series (i.e., one device in one full replicate), and the thick line is the average autocorrelogram. The green dotted line shows the expectation under the hypothesis that there is no periodic pattern in capture rate. The underlying data for this figure can be found on figshare [20]. ACF, AutoCorrelation Function.
To test whether the capture rate on a sticky card could describe the circadian activity of an insect population, we conducted a laboratory experiment on vinegar flies, Drosophila melanogaster [Diptera: Drosophilidae], held either in constant light (LL) or constant dark (DD), each compared to control populations held in 12:12 h Light:Dark cycles (LD) (Fig 4). From the extensive literature on D. melanogaster, we predicted crepuscular activity in LD and DD (flies are free-running in DD), but no rhythm in LL [19]. We placed groups of flies in a large cage that contained a single Sticky Pi (simplified for the laboratory and using infrared light; see Methods section). The DD and LL experiments were performed independently, and each was compared to its own internal LD control. The use of infrared optics and lighting resulted in lower quality images (i.e., reduced sharpness). However, in this simplified scenario, there were no occlusions, and the classification was binary (fly versus background). Therefore, we used a direct approach: We trained and applied an independent Mask R-CNN to segment flies from their background. Then, rather than tracking insects (using our SIM), we extracted the raw counts from each frame and applied a low-pass filter (see Methods section). Consistent with previous studies on the circadian behaviour of D. melanogaster, populations in both LD and DD conditions exhibited strongly rhythmic capture rates, with an approximate period of 24 h: 23.8 h and 23.67 h, respectively. For example, their overall capture rate was approximately 0.6 h−1 between ZT22 and ZT23 h, but peaked at 9.5 h−1 between ZT01 and ZT02 h (Fig 4C and 4E). The average autocorrelations (a measure of rhythmicity) at a 24-h lag for the DD populations and their internal control were high and significant: 0.34 (p-value < 10−3, N = 6, t test) and 0.35 (p-value = 2×10−3, N = 6, t test), respectively.
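As a rough illustration of this analysis step, the sketch below low-pass filters raw per-frame counts and computes the autocorrelation at a 24-h lag (the exact filter used in the study may differ; a second-order Butterworth filter is assumed here for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def autocorrelation_24h(counts: np.ndarray, dt_minutes: float = 20.0,
                        cutoff_hours: float = 4.0) -> float:
    """counts: raw fly counts, one value per frame acquired every `dt_minutes`.
    Returns the autocorrelation of the low-pass-filtered series at a 24-h lag."""
    fs = 60.0 / dt_minutes                                # samples per hour
    b, a = butter(2, (1.0 / cutoff_hours) / (fs / 2.0))   # low-pass Butterworth
    smooth = filtfilt(b, a, counts.astype(float))
    lag = int(round(24 * fs))                             # number of samples in 24 h
    return float(np.corrcoef(smooth[:-lag], smooth[lag:])[0, 1])
```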
The overall accuracy (i.e., the proportion of correct predictions) is 78.4%. Our dataset contained a large proportion of both “Background objects” and “Undefined insects” (16.2% and 22.4%, respectively). When merging these 2 less informative labels, we reached an overall accuracy of 83.1% on the remaining 17 classes. Precision (i.e., the proportion of correct predictions given a predicted label) and recall (i.e., the proportion of correct predictions given an actual label) were high for the Typhlocybinae (leafhoppers) morphospecies (92% and 94%). For Drosophila suzukii [Diptera: Drosophilidae] (spotted-wing drosophila), an important berry pest, we labelled males as a separate class due to their distinctive dark spots and also reached a high precision (86%) and recall (91%)—see details in S3 Table. These results show that performance can be high for small, but abundant and visually distinct, taxa.
(A) Algorithm to classify insect tuboids. The first image, as well as 5 others randomly selected within the first day of data, is used as input. Each image is scaled and processed by a ResNet50 network to generate an output feature vector per frame. Each vector is augmented with the original scale of the object, and the element-wise median over the 6 frames is computed. The resulting median feature vector is processed by a final, fully connected layer with an output of 18 labels. (B) Representative examples of the 18 different classes. Note that we show only 1 image, but input tuboids have multiple frames. All images were rescaled and padded to 224×224 px squares: the input dimensions for the ResNet50. The added blue scale bar, on the bottom left of each tile, represents a length of 2 mm (i.e., 31 px). (C) Classification performance, showing precision, recall, and F1-score (the harmonic mean of precision and recall) for each label. Row numbers match labels in B. See S3 Table for the full confusion matrix. Abbreviated rows in C are Macropsis fuscula (3), Drosophila suzukii males (4), drosophilids that are not male D. suzukii (5), Anthonomus rubi (11), Psyllobora vigintimaculata (12), Coleoptera that do not belong to any above subgroup (14), and Lasioglossum laevissimum (16).
To classify multiframe insect representations (i.e., tuboids), we based the Insect Tuboid Classifier (Fig 3) on a Residual Neural Network (ResNet) architecture [18] with 2 important modifications: (i) we explicitly included the size of the putative insect as an input variable to the fully connected layer, as size may be important for classification and our images have a consistent scale; and (ii) since tuboid frames provide nonredundant information for classification (stuck insects often still move, and illumination changes), we applied the convolution layers to 6 frames sampled in the first 24 h and combined their outputs into a single prediction (Fig 3A). In this study, we trained our classifier on a dataset of 2,896 insect tuboids, trapped in 2 berry fields in the same location and season (see next result sections and Methods section). We defined 18 taxonomic labels, described in S1 Table, using a combination of visual identification and DNA barcoding of insects sampled from the traps after they were collected from the field (S2 Table and Methods sections). Fig 3B and 3C show representative insect images corresponding to these 18 labels (i.e., only 1 frame from a whole multiframe tuboid) and summary statistics on the validation dataset (982 tuboids). S3 Table presents the whole confusion matrix for the 18 labels.
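A minimal PyTorch sketch of this architecture is shown below (an illustration under the description above, not the released implementation): per-frame ResNet50 features are augmented with the object scale, median-pooled across the 6 sampled frames, and mapped by one fully connected layer to the 18 labels.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TuboidClassifier(nn.Module):
    """Multi-frame classifier sketch: ResNet50 features per frame, object scale
    appended, element-wise median across frames, one fully connected output layer."""
    def __init__(self, n_classes: int = 18):
        super().__init__()
        backbone = resnet50(weights=None)         # pretrained weights could be used
        backbone.fc = nn.Identity()               # keep the 2048-d feature vector
        self.backbone = backbone
        self.fc = nn.Linear(2048 + 1, n_classes)  # +1 for the per-frame scale

    def forward(self, frames: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, 3, 224, 224); scales: (batch, n_frames)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        feats = torch.cat([feats, scales.unsqueeze(-1)], dim=-1)   # append scale
        pooled = feats.median(dim=1).values        # element-wise median over frames
        return self.fc(pooled)                     # logits over the 18 labels
```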
In order to track insects through multiple frames, we built a directed graph for each series of images, connecting instances on the basis of a matching metric, which we computed using a custom Siamese Neural Network (S3A Fig and Methods section). We used this metric to track insects in a 3-pass process (S3B Fig and Methods section). This step resulted in multiframe representations of insects through their respective series, which we call “tuboids.” S3 Video shows a time-lapse video of a series where each insect tuboid is boxed and labelled with a unique number.
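A strongly simplified sketch of such cross-frame matching is shown below (an illustration only: the actual Siamese Insect Matcher uses a learned similarity and a 3-pass procedure; here `match_score` stands in as a hypothetical callable and a single threshold pass is used):

```python
import itertools
import networkx as nx

def build_tuboids(detections, match_score, threshold: float = 0.5):
    """detections: list (one entry per frame) of lists of detected instances.
    match_score(a, b): similarity in [0, 1] between instances of nearby frames.
    Returns lists of (frame_index, instance_index) nodes, one list per tuboid."""
    g = nx.DiGraph()
    for f, frame in enumerate(detections):
        for i, _ in enumerate(frame):
            g.add_node((f, i))
    # Connect instances in consecutive frames when they match well enough
    for f in range(len(detections) - 1):
        pairs = itertools.product(enumerate(detections[f]), enumerate(detections[f + 1]))
        for (i, a), (j, b) in pairs:
            score = match_score(a, b)
            if score >= threshold:
                g.add_edge((f, i), (f + 1, j), weight=score)
    # Treat each weakly connected component as one candidate tuboid
    return [sorted(component) for component in nx.weakly_connected_components(g)]
```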
To segment insects from their background, we based the Universal Insect Detector on Mask R-CNN [17] and trained it on 240 hand-annotated images from Sticky Pis as well as 120 “foreign” images acquired with different devices (see Methods section). On the validation dataset, our algorithm had an overall 82.9% recall and 91.2% precision (S2 Fig). Noticeably, recall increased to 90.5% when excluding the 25% smallest objects (area < 1,000 px, i.e., 2.12 mm²), indicating that the smallest insect instances are ambiguous. When performing validation on the foreign dataset of 20 images acquired with the Raspberry Pi camera HQ, we obtained a precision of 96.4% and a recall of 92.2%, indicating that newly available optics may largely increase segmentation performance (although all experimental data in this study were obtained with the original camera, before the HQ became available).
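For readers who want a starting point, a two-class Mask R-CNN of this kind can be instantiated with torchvision as sketched below (a generic setup following the standard torchvision pattern, not the authors' training code):

```python
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_insect_detector(num_classes: int = 2):
    """Two classes: background (0) and insect (1)."""
    model = maskrcnn_resnet50_fpn(weights=None)   # COCO-pretrained weights could be used
    # Replace the box and mask heads so they predict the desired number of classes
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

# Inference on one RGB image tensor scaled to [0, 1]:
# model.eval(); predictions = model([image])[0]   # 'boxes', 'labels', 'scores', 'masks'
```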
Devices acquire images approximately every 20 minutes, which results in an approximately 500-image-long series per device per week. Rows in the figure represent consecutive images in a series. Series are analysed by 3 main algorithms (left to right). First, the Universal Insect Detector applies a 2-class Mask R-CNN to segment insect instances versus background (blue). Second, the SIM applies a custom Siamese network–based algorithm to track instances throughout the series (red arrows), which results in multiple frames for the same insect instance, i.e., “insect tuboids.” Last, the Insect Tuboid Classifier uses an enhanced ResNet50 architecture to predict insect taxonomy from multiple photographs. SIM, Siamese Insect Matcher.
In order to classify captured insects, we developed a novel analysis pipeline, which we validated on a combination of still photographs of standard sticky traps and series of images from 10 Sticky Pis deployed in 2 berry fields for 11 weeks (see Methods section and next result sections). We noticed trapped insects often move, escape, are predated, become transiently occluded, or otherwise decay (S2 Video). Therefore, we used cross-frame information rather than independently segmenting and classifying insects frame by frame. Our pipeline operates in 3 steps (summarised below and in Fig 2): (i) the Universal Insect Detector segments insect instances in independent images assuming a 2-class problem: insect versus background; (ii) the Siamese Insect Matcher (SIM) tracks insect instances between frames, using visual similarity and displacement; and (iii) the Insect Tuboid Classifier uses information from multiple frames to make a single taxonomic prediction on each tracked insect instance.
(A, B) Assembled Sticky Pi. The device dimensions are 326×203×182 mm (d×w×h). (C) Exploded view, showing the main hardware components. Devices are open source, affordable, and can be built with off-the-shelf electronics and a 3D printer. Each Sticky Pi takes an image every 20 minutes using an LED backlit flash. (D) Full-scale image as acquired by a Sticky Pi (originally 1944×2592 px, 126×126 mm). (E) Magnification of the 500×500 px region shown in D.
We built the Sticky Pi (Fig 1A–1C), a device that captures insects on a sticky card and images them every 20 minutes. Compared to other methods, our device acquires high-quality images at high frequency, hence providing a fine temporal resolution on insect captures. Devices are equipped with a temperature and humidity sensor and have 2 weeks of autonomy (without solar panels). Sticky Pis are open source, 3D printed, and inexpensive (<200 USD). Sticky Pis can be fitted with cages to prevent small vertebrates from predating trapped insects. Another unique feature is their camera-triggered backlit flashlight, which enhances the contrast, reduces glare, and allows for nighttime imaging. Most sticky cards available on the market are thin and translucent, which allows for the transmission of light. White light was chosen for its versatility: Sticky cards with different absorption spectra can be used. For outdoor use, the camera’s built-in infrared-cut filter was not removed. Such filters, which remove infrared light, are standard in photography as they reduce chromatic aberrations. As a result, we can discern 3 mm-long insects on a total visible surface of 215 cm² (Fig 1D and 1E), which is sufficient to identify many taxa. In order to centralise, analyse, and visualise the data from multiple devices, we developed a scalable platform (S1 Fig), which includes a suite of services: an Application Programming Interface (API), a database, and an interactive web application (S1 Video). Deployment and maintenance instructions are detailed in our documentation (
https://doc.sticky-pi.com/web-server.html).
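The device firmware and build instructions are part of the open-source project linked above; purely as an illustration of the 20-minute capture cadence (and not of the actual firmware, which uses a hardware timer, power management, and a GPIO-controlled flash), a capture loop on a Raspberry Pi could look like this sketch using the legacy picamera API:

```python
import time
from datetime import datetime
from picamera import PiCamera   # legacy Raspberry Pi camera API

CAPTURE_INTERVAL_S = 20 * 60    # one image every 20 minutes

def capture_loop(output_dir: str = "/var/sticky_pi/images") -> None:
    # 2592x1944 px matches the full-scale image size reported in Fig 1.
    with PiCamera(resolution=(2592, 1944)) as camera:
        while True:
            # The real device switches the backlit LED flash on around each
            # capture and powers down between images; both are omitted here.
            filename = f"{output_dir}/{datetime.utcnow():%Y%m%d_%H%M%S}.jpg"
            camera.capture(filename)
            time.sleep(CAPTURE_INTERVAL_S)
```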
Discussion
We have developed the Sticky Pi, a generalist and versatile insect smart trap that is open source, documented, and affordable (Fig 1). Uniquely, Sticky Pis acquire frequent images to finely describe when specific insects are captured. Since the main limitation to insect chronoecology is the lack of high-frequency population monitoring technologies [11], our innovation promises to spark discoveries at the frontier between 2 important domains: chronobiology and biodiversity monitoring. Furthermore, taking multiple images of the same specimen may improve classification performance. To adapt our tool to big data problems, we designed a suite of web services (S1 Fig) that supports multiple concurrent users, can communicate with distributed resources, and may interoperate with other biodiversity monitoring projects and community science platforms [24]. Compared to other camera-based automatic insect traps [25], we opted for a decentralised solution. Our platform is explicitly designed to handle multiple concurrent traps: with unique identifiers for devices and images (and relevant metadata), an efficient means of retrieving data wirelessly, and a dedicated database, with an API to store, query, and analyse the results. These features, together with a low device cost (<200 USD), will facilitate scaling to the landscape level.
Our device’s main limitation is image quality (Fig 1D and 1E). Indeed, high-performance segmentation and classification of insects were limited to specimens larger than 3 millimetres (S2 Fig), hence reducing the taxonomic resolution for small insects. We found that segmentation was globally very precise (>90%) and sensitive (recall > 90% for objects larger than 2 mm²). Furthermore, the Insect Tuboid Classifier in our machine learning pipeline (Fig 2) showed a high overall accuracy (83.1% on average, when merging background and undefined insects; see Fig 3). Camera technology is quickly improving, and our segmentation results with the new Raspberry Pi camera HQ (12.3 Mpx, CS mount) are promising, with preliminary results showing both precision and recall greater than 90% overall on the segmentation task. Some of the inaccuracy in segmentation results from transient occlusion or changes in image quality. Therefore, tracking (using the SIM) likely improves recall, as insects that are missed in some frames may be detected in previous or subsequent frames.
Another potentially limiting feature of our device is the frequency at which images are taken (every 20 minutes). Depending on their context and questions, users can programme the hardware timer with a different interval. However, we judged 3 images per hour to be a good compromise between time resolution—an hourly resolution being necessary to study chronobiology or the impact of fast weather variations—and battery and data storage efficiency. Furthermore, a more frequent use of the flashlight (e.g., every minute) may be more of a disturbance to wildlife [26].
In this respect, our image time-lapse approach contrasts with continuous, lighter-weight systems such as sensor-based traps, which are suited for high-frequency sampling and were recently employed to study the diel activity of a single species [13,27]. However, sensor-based traps are often limited to scenarios with a priori targeted species that respond to certain specific olfactory or visual baits—which considerably narrows their applicability [28]. In contrast, camera-based traps are more generalist as they can passively monitor insects, using machine learning to identify multiple species. Importantly, our system captures and keeps individual insect specimens. While this destructive process comes with limitations, it is also essential in naive contexts, where we do not know a priori which insects may be present. Indeed, physical specimens are needed for visual or DNA-based taxonomic characterisation, in particular when working on diverse or undescribed communities [29]. Keeping individual insects would not be possible if animals were released or kept in a common container. Furthermore, trapping insects permanently greatly reduces the risk of recapturing the same individuals several times.
An important consideration when using sticky cards is their potential to become saturated with insects. In our study, we replaced traps weekly to limit this possibility. Furthermore, we found no statistical effect of the number of insects already captured on the probability of capture for a given taxon (S6 Fig). However, we advise users to replace sticky cards often enough to limit this risk.
We corroborated circadian results that had historically been obtained on individually housed insects, using heterogeneous populations in large flight cages (Fig 4). This suggests that Sticky Pis could be an alternative tool for laboratory experiments on mixed populations of interacting insects. In the field, we monitored both the seasonal and diel activity of a well-studied pest species: spotted-wing drosophila (D. suzukii). Like others before [22], we concluded that wild D. suzukii was crepuscular (Fig 5). In the process, we also found that Figitidae wasps—natural enemies of D. suzukii—were distinctly diurnal and were most often detected later in the season. Finally, we characterised the diel activity of the flying insect community in a raspberry field, without targeting taxa a priori (Fig 6). With only 10 devices, over 4 weeks, we were able to reveal the diversity of temporal niches, showing coexisting insects with a wide spectrum of diel activity patterns—e.g., diurnal, crepuscular versus nocturnal; bimodal versus unimodal. An additional and noteworthy advantage of time-lapse photography is the incidental description of unexpected behaviours such as insect predation (S4 Video) and the acquisition of specimens that eventually escape from traps (S5 Video).
Importantly, any insect trap only gives a biased estimate of the number and activity of insects at a given time. For sticky cards, the probability of capturing an insect depends, at a minimum, on the flying activity, the size of the population, and the trap attractiveness (and the “conversion rate”—i.e., the probability of trapping an insect given that it is attracted). Importantly, Sticky Pis only capture mobile adult insects and, therefore, cannot explicitly quantify the timing of important behaviours such as mating, feeding, egg laying, emergence, and quiescence. However, in many species, locomotion and dispersal are a prerequisite to other activities. Therefore, capture rate implicitly encapsulates a larger portion of the behavioural spectrum. In most cases, the daily variation of population size is likely negligible. In contrast, several factors may render trap attractiveness and conversion rates variable during the day. First, visual cues—which impact a trap’s capture rates [30,31]—vary for any capture substrate (e.g., light intensity and spectral qualities inevitably fluctuate). Second, insects’ ability to detect, avoid, or escape traps may temporarily differ. Last, the preference for certain trap features could itself be a circadian trait. For example, the responses of certain insects to colours [32], allelochemicals [33,34], and semiochemicals [35–37] are time modulated. While inconsistencies in trap attractiveness may, in some cases, narrow the scope of the conclusions that can be made with our tool, they also pave the way for research on the diel time budget of many insects. Indeed, studying the contrast in trapping rates between different, ecologically relevant, trap features (e.g., baits, colour, and location) could help to develop new and improved trapping methodologies while bridging chronobiology and behavioural ecology.
In addition to insect capture rates, Sticky Pis also monitor humidity and temperature, which are both crucial for most insects’ behaviour and demography [38,39]. This study was performed at the level of a small agricultural field, where the local spatial abiotic variations were small compared to the interday variations, and hourly temperatures were mostly confounded with the time of day (see Methods section and reported abiotic conditions in S7 Fig). In this context, it was, therefore, difficult to statistically separate the individual effects of time of day and abiotic variables (temperature and humidity) on capture rate. We are confident that Sticky Pis, scaled to the landscape level across explicitly different microclimates, could help address the interplay between abiotic variables and circadian processes.
In the last few years, we have seen applications of chronobiology to fields such as learning [40] and medicine [41]. We argue that chronobiological considerations could be equally important to biodiversity conservation and precision agriculture [42–44]. For example, plants’ defences [45,46] and insecticide efficiency [47,48] may change during the day, implying that agricultural practices could be chronobiologically targeted. In addition, modern agriculture is increasingly relying on fine-scale pest monitoring and the use of naturally occurring biological pest control [49,50]. Studying insect community chronoecology could help predict the strength of interactions between a pest and its natural enemies or measure the temporal patterns of recruitment of beneficial natural enemies and pollinators. Monitoring insect behaviours at high temporal resolution is critical for understanding, forecasting, and controlling emerging insect pests in agriculture and, more broadly, for comprehending how anthropogenic activities impact the behaviour and biodiversity of insect populations.
---
[1] https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001689
Published by PLOS under a Creative Commons Attribution 4.0 (CC BY 4.0) license.