See Elegans: Simple-to-use, accurate, and automatic 3D detection of neural activity from densely packed neurons [1]
Authors: Enrico Lanza (Center for Life Nano- & Neuro-Science, Sapienza; Istituto Italiano di Tecnologia (IIT), Rome), Valeria Lucente (D-Tails s.r.l.), Martina Nicoletti (Department of Engineering)
Date: 2024-04
In the emerging field of whole-brain imaging at single-cell resolution, which represents one of the new frontiers to investigate the link between brain activity and behavior, the nematode Caenorhabditis elegans offers one of the most characterized models for systems neuroscience. Whole-brain recordings consist of 3D time series of volumes that need to be processed to obtain neuronal traces. Current solutions for this task are either computationally demanding or limited to specific acquisition setups. Here, we propose See Elegans, a direct programming algorithm that combines different techniques for automatic neuron segmentation and tracking without the need for the RFP channel, and we compare it with other available algorithms. While outperforming them in most cases, our solution offers a novel method to guide the identification of a subset of head neurons based on position and activity. The built-in interface allows the user to follow and manually curate each of the processing steps. See Elegans is thus a simple-to-use interface aimed at speeding up the post-processing of volumetric calcium imaging recordings while maintaining a high level of accuracy and low computational demands. (Contact: [email protected])
Competing interests: Viola Folli is an employee and scientific advisor of D-Tails s.r.l. Valeria Lucente and Ilaria Cavallo are employees of D-Tails s.r.l. However, this does not alter our adherence to PLOS ONE policies on sharing data and materials.
Funding: This work was also supported by the National Institutes of Health with grant number R35 GM145319 ("Pan-neuronal functional imaging and anesthesia"). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Copyright: © 2024 Lanza et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
We introduce See Elegans, an algorithm for automatic detection, tracking, and identification of C. elegans neurons, incorporating both sequence-dependent (as in [12, 14, 28]) and sequence-independent methods (as in [13, 29]) to track cells undergoing limited arrangement deformations. It provides advantages over existing software like TrackMate [30] and ROIedit3D [14], as it uses the position of detected cells to locate neurons below the detection threshold and eliminates the need for the RFP channel for tracking. See Elegans performs automatic identification of about 20 neurons based on soma position, neuronal activity, and coherence. The GFP channel is sufficient for neuron identification, eliminating the need for further genetic modifications and complex optical setups. See Elegans outperformed other publicly available algorithms and achieved high accuracy in identifying neurons that promote backward locomotion, with promising results on those promoting forward locomotion. While the algorithm automates the core processes of detection, tracking, and neuron identification, it also includes a human-in-the-loop approach for initial parameter settings, thereby enhancing its adaptability to varied data sets and experimental conditions. Its user-friendly interface allows for efficient supervision, parameter adaptation, and result curation, significantly simplifying the process of extracting neuronal traces from calcium imaging recordings of C. elegans. The integration of these features makes See Elegans an easy-to-use and accurate tool, optimizing both the precision and efficiency of calcium imaging data processing.
Neuron tracking and segmentation challenges are addressed with methods ranging from traditional 3D blob segmentation to deep learning, each with its own limitations [12-15]. Interframe neuron tracking can be either sequence-dependent, which is prone to accumulating errors, or sequence-independent, whose effectiveness hinges on the accuracy of the underlying model assumptions.
Neuroscience seeks to unravel the relationship between neural dynamics and behavior, greatly aided by advanced imaging techniques that allow for single-cell resolution of brain activity [1-4]. The nematode Caenorhabditis elegans, with its fully mapped yet not entirely understood nervous system, is an ideal model for these studies [5-7]. Although its neurons produce mainly graded potentials, the transparency and genetic tractability of C. elegans contribute to its effectiveness as a model organism [8-10], particularly useful for studying human diseases and their effects on neuronal dynamics [11].
Of the 302 neurons in C. elegans, about a third are located in the head. See Elegans allows identifying a subset of key head neurons based on their position and signal correlation. As described in [28], neuronal calcium signals of constrained C. elegans recorded over ten-minute-long acquisitions show a stereotyped activity that can be reduced in dimensionality through PCA. In this decomposition, three groups of neurons dominate over the rest in the absence of stimuli: neurons whose activation is associated with backward movement, neurons associated with forward movement, and neurons that activate during turns. Although recent works showed how neural correlations may change from constrained to unconstrained animals, some of the neuronal correlation properties still persist in moving ones [32] and can thus be useful to help with neuronal identification. Such stereotyped activity has been reported in multiple works [28, 33-37], in some of which identification was also performed on the basis of location and fluorescence signals. The identification step only requires the user to specify the anterior direction (i.e., the position of the nose tip in the recording), which the algorithm uses to initially identify six candidates for the AVA, AVE, and AIB pairs. These neurons are expected to be close, symmetrically arranged [5], and highly correlated with one another [28]. Moreover, they are usually clearly visible and easy to recognize by their activity and position. The algorithm thus looks for a set of neurons that best fits these requirements and, once it finds it, creates a new coordinate system for visualization and further processing. In this system, the x, y, and z axes follow the anterior, dorsal, and left directions, respectively. All coordinates are divided by the average distance among the six identified neurons to compensate for scale differences, and the origin of the coordinate system is set to the mean point of these six neurons. The neuronal arrangement thus obtained is compared with a model distribution based on [5] in a coordinate system with the same scaling. Neuronal spots of the model are assigned to the observed ones by again treating the task as a linear assignment problem. This time, however, the cost function between two neurons takes into account not only their position but also their correlation, along with additional rules derived from previous experiments and reported observations: following the work of [28], the six correlated neurons may generally be identified as the AVA, AVE, and AIB pairs [4, 38-41]. Other identifiable neurons correlated with these are the RIM pairs [40] and the VA and DA motor neurons [42-44]. The activity of all these neurons has been reported to be correlated with backward locomotion [38, 39, 42]. Another identifiable set of correlated neurons, anti-correlated with the first group, includes the RME [4], RID [45], RIB [46], AVB [4], VB [38], and DB [38, 43, 44] neuronal classes, whose activity is associated with forward locomotion. With these identification rules, the algorithm makes a guess on the identity of the neurons, which can then be modified by the user. It is worth noticing that, in conditions under which the normal neuronal correlations are hindered, the user has to manually assign the identities of the affected neurons.
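To make this assignment step concrete, the following Python sketch builds a cost matrix from normalized positions and activity correlations and solves it as a linear assignment problem. It is a simplified illustration: the function name, the weights, and the use of a single reference trace are our assumptions, and the actual cost function of See Elegans combines more rules than shown here.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_identities(model_xyz, obs_xyz, obs_traces, ref_trace,
                      w_pos=1.0, w_corr=1.0):
    # model_xyz: (M, 3) atlas positions based on [5]; obs_xyz: (K, 3)
    # observed spots, both rescaled by the mean distance among the six
    # seed neurons. obs_traces: (K, T) calcium traces; ref_trace: (T,)
    # trace of a seed neuron (e.g., AVA), used here as a single
    # correlation reference -- a simplification of the paper's rules.
    d = np.linalg.norm(model_xyz[:, None, :] - obs_xyz[None, :, :], axis=-1)
    corr = np.array([np.corrcoef(t, ref_trace)[0, 1] for t in obs_traces])
    # Low cost for spots close to a model neuron and, for the
    # backward-locomotion group, strongly correlated with the reference.
    cost = w_pos * d - w_corr * corr[None, :]
    rows, cols = linear_sum_assignment(cost)   # Hungarian-style LAP solver
    return dict(zip(rows, cols))               # model index -> spot index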
Additionally, this process strongly depends on the identification of the six neurons of the AVA, AVE, and AIB pairs co-activating more than once during the video. Failing this initial step may result in a high rate of erroneous identifications. However, the interface allows the 3D visualization of subgroups of neurons showing high or low correlation (or anti-correlation) with respect to a neuron of choice, thus providing visual help to the user during manual annotation. Once a few neurons are identified, they may be used as reference points for the comparison with the model distribution of neurons, yielding a new guess about the class identity of the spots.
Fig 1. Temporal progression (t1-t3) of an ROI from data set 2. Neuron x is visible at the center at t1 and t3, but is not detected at t2; its position is inferred from its nearest neighbors (a-h) in previous frames. The white bar represents 10 μm.
In this way, the position of the neuron in the frame where it is missing is inferred as the sum of the average position of 20 of its closest neighbors and the average offset between the missing neuron and those neighbors in the previous frame. Fig 1 shows an example of the position reconstruction based on a few neighboring spots and on Eq 1. See Elegans also uses this position-inference technique for the manual addition of neurons that cannot be automatically detected. This step occurs after the tracking process because it relies on full traces to reconstruct the trace of a missed spot. The user is asked to inspect the result of the tracking process and to annotate by hand the position of each undetected neuron at any time point of the recording where it is relatively well visible. The algorithm treats the added spot as a track with a single time point and reconstructs it following the same steps described above. It is worth noticing that, in its current form, the proposed algorithm can account for small deformations in the arrangement of the neurons but fails to properly track spots in the presence of significant mutual-position changes, as when the nematode strongly bends its nose. In particular, when a neuron undergoes movement exceeding the displacement threshold set in the second step and simultaneously falls below the detection threshold, the algorithm faces a potential ambiguity upon the neuron's re-emergence. In such instances, the neuron might be erroneously registered as a novel entity, or it could inadvertently impact the tracking of another neuron in its vicinity. Conversely, if a neuron reappears within the threshold, or if it travels beyond the threshold but remains within detectable limits, the algorithm merges the respective tracking segments. This approach is particularly advantageous for reconstructing neuron positions, as it exploits spatial information relative to adjacent neurons, thereby enhancing the accuracy of the neural trace, especially when signal detection is intermittent.
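A minimal numpy sketch of this inference rule (Eq 1; the function and variable names are ours, not from the paper):

import numpy as np

def infer_position(neighbors_now, neighbors_prev, missing_prev):
    # neighbors_now / neighbors_prev: (N, 3) positions of the N closest
    # neighbors at frames i and i-1; missing_prev: (3,) position of the
    # missing neuron at frame i-1. Returns its inferred (3,) position.
    mean_now = neighbors_now.mean(axis=0)  # average neighbor position at frame i
    mean_offset = (missing_prev - neighbors_prev).mean(axis=0)  # average offset at frame i-1
    return mean_now + mean_offset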
The tracking process is divided into two steps. The first step allows the user to apply the Runge-Kutta algorithm to solve the linking problem, representing it as a linear assignment problem (LAP), as in [30, 31]. A simple interface allows the user to select a time crop of the acquisition to test the parameters for the LAP tracking: maximum distance, maximum time gap between spots, and non-linking costs. Once the parameters are set, the user can run the tracking step on the whole video. Unlike TrackMate, See Elegans also retrieves segment links based on the relative distances between segments across all frames (see Methods), independently of their order. This first part of the tracking process is suited to compensate for rigid-body movements, with a sequence-dependent method (the Runge-Kutta algorithm) and a sequence-independent one (based on relative distances). However, when neurons fall under the detection threshold, they cannot be linked. To compensate for such situations, the second step calculates the position of the missing neuron, taking into account its position in the previous frame with respect to 20 of its closest neighbors, similarly to [28]. So, if neuron A is missing in frame i but visible in frame i - 1, and N is the set of its closest neighbors, then its position in frame i, $\mathbf{x}_A^{(i)}$, is calculated according to the following equation:

$$\mathbf{x}_A^{(i)} = \frac{1}{|N|} \sum_{n \in N} \mathbf{x}_n^{(i)} + \frac{1}{|N|} \sum_{n \in N} \left( \mathbf{x}_A^{(i-1)} - \mathbf{x}_n^{(i-1)} \right) \qquad (1)$$
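The following Python sketch illustrates the LAP linking idea with the parameters named above (maximum distance and non-linking cost). It is a simplified illustration in the spirit of [30, 31]; the augmented-matrix construction and the default values are our assumptions, not the exact implementation of See Elegans.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(spots_a, spots_b, max_dist=5.0, nonlink_cost=6.0):
    # spots_a: (A, 3) and spots_b: (B, 3) spot centers in two consecutive
    # frames (same units as max_dist). Returns a list of (i, j) links.
    A, B = len(spots_a), len(spots_b)
    d = np.linalg.norm(spots_a[:, None, :] - spots_b[None, :, :], axis=-1)
    d[d > max_dist] = 1e9                      # forbid implausibly long links
    # Augmented cost matrix: every spot may instead pay the non-linking
    # cost, i.e., end its track (rows) or start a new one (columns).
    cost = np.full((A + B, A + B), 1e9)
    cost[:A, :B] = d
    cost[np.arange(A), B + np.arange(A)] = nonlink_cost   # track ends
    cost[A + np.arange(B), np.arange(B)] = nonlink_cost   # track starts
    cost[A:, B:] = 0.0                          # dummy-dummy block
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols)
            if i < A and j < B and d[i, j] <= max_dist]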
The detection process allows the user to locate neuronal spots by convolving each volume of the acquisition with a 3D Laplacian of Gaussian filter (LoG filter), as in TrackMate, based on [31], and then by thresholding the image resulting from the element-wise product between the filtered image and the original one. The threshold, the size of the LoG filter, and the variance of its Gaussian are user-defined. As visual feedback, a simple interface shows a real-time preview of the ROI centers found for the chosen parameter set.
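As an illustration, a minimal Python sketch of this detection scheme could look as follows (the parameter values and the local-maximum window are illustrative assumptions, not the defaults of See Elegans):

import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_spots(volume, sigma=2.0, threshold=0.1, size=3):
    # volume: 3D numpy array (z, y, x). Returns (n, 3) spot centers.
    log = -gaussian_laplace(volume.astype(float), sigma)  # bright blobs -> positive peaks
    score = log * volume               # element-wise product with the raw image
    peaks = (score == maximum_filter(score, size=size)) & (score > threshold)
    return np.argwhere(peaks)          # voxel coordinates of ROI centers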
To obtain activity traces and neuronal identities from raw images, See Elegans proceeds in three steps: (1) cell segmentation, (2) tracking, and (3) identification. In the cell segmentation step, the algorithm detects cells in the image based on their size, shape, and intensity. Next, in the tracking step, the algorithm tracks the cells across time frames to obtain activity traces for each cell. Finally, in the identification step, the algorithm assigns identities to the cells based on their positions and activity patterns. These three steps may be run separately, and their outputs can be used as inputs for downstream analysis. Further details can be found in the Materials and Methods section. The main characteristics of each of these steps are described in the dedicated sections.
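Under the assumptions of the sketches above, the three stages could be chained as follows, with each stage's output equally usable on its own for downstream analysis:

def run_pipeline(volumes):
    # volumes: sequence of 3D arrays, one per time point.
    detections = [detect_spots(v) for v in volumes]        # (1) segmentation
    links = [link_frames(a, b)                             # (2) tracking
             for a, b in zip(detections[:-1], detections[1:])]
    # (3) identification then runs on the assembled tracks and traces
    # (see the assignment sketch in the identification section above).
    return detections, links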
Results
To measure the performance of the previously described steps, we applied See Elegans to three data sets and compared its output with other available tracking software. See Elegans can handle a variety of image types, including 2D and 3D stacks. The time intervals between frames can vary, and the system can process images of animals that are relatively stationary, with minimal movement; this is the typical case in many publications [28, 47-49]. To highlight the robustness and versatility of the proposed method, the data sets were recorded with different instruments and techniques (see Methods) in three laboratories: the first two data sets were acquired through spinning-disk confocal microscopy using different acquisition devices, while data set 3 was obtained through light-sheet microscopy applied to a nematode previously treated with tetramisole but not confined in a microfluidic chip to block movement. The latter data set is characterized by a lower signal-to-noise ratio in the focal plane compared to the first two, representing a challenging scenario. Regarding animal movement, it is important to notice that it may impair the ability of the algorithm to track neurons across time. In particular, in the case of non-paralyzed nematodes, it is crucial to specify, in the tracking step, the maximum distance that a neuron can travel from one frame to the next, to mitigate cross-frame linking errors.
Tracking

Fig 5 highlights the advantages of the tracking step in our proposed algorithm by presenting a representative crop from data set 1. This crop features a neuron that falls below the detection threshold, a scenario that often poses challenges to tracking algorithms. The figure compares the results obtained from See Elegans with those from two other algorithms, TrackMate and ROIedit3D. This particular neuron was selected to exemplify the efficacy of See Elegans in restoring tracks of neurons that slip beneath the detection threshold. The reported ROIs are from times t1, t2, and t3, corresponding to seconds 15.0, 33.0, and 73.33 from the start of the recording, respectively. At times t1 and t3, neuron 30 has a relatively good level of activation (above threshold), while at time t2 it shows a lower signal. Because of this, it is hardly detectable at t2 and is therefore difficult to track throughout the indicated times. As the figure shows, while See Elegans is able to track the neuron, TrackMate separates its trajectory into two segments, losing the information in the intervening time gap and assigning two different IDs to the same neuron. ROIedit3D, instead, is able to follow the neuron lost by TrackMate but is affected by the noise, and thus its performance seems to be compromised, as is clearly evident from the corresponding trajectories reported in Fig 6. As a result, the fluorescent trace obtained by manually linking the incomplete segments associated with the same neuron in TrackMate (using different IDs) still has a gap, while the trace obtained with ROIedit3D is uninterrupted but affected by artifacts. It is worth noting that the specific track of neuron 30 may be retrieved in TrackMate with a different set of parameters, for example by lowering the detection threshold, extending the time gap for spot linking, or changing the size of the filter. However, these changes would affect the overall detection and tracking performance, not necessarily for the better. In fact, a lower threshold would result in more false positives, while a longer time gap for segment linking may result in a higher number of wrong linkages between tracks of different neurons. In the given case, the parameters used in TrackMate were manually optimized to obtain the best results for the whole recording. In addition to the resulting fluorescent traces (panel A) for the recording shown in Fig 5, Fig 6 reports the absolute displacement of the central neuron (panel B), together with the absolute distance variation in time for 20 of its closest neighbors (panel C). In particular, panel B shows that neuron 30 covers a distance comparable to its own size in less than two minutes. Since neuron 30 is located in a densely packed area, such a movement is sufficient to produce artifacts. However, the mutual-distance inference method allows the code to keep track of this neuron despite its significant displacement. Panel C reports the time variation of the distance between neuron 30 and the twenty closest neighbors used to infer its position. As the color map shows, some neighbors move closer while others move away from the untracked neuron (displacements up to 2 μm in 100 seconds). The presence of such elastic deformations in the neuronal arrangement may further hinder the tracking process, making the use of position-inference techniques, such as the one implemented in See Elegans, crucial.
Fig 5. Tracking of ROIs falling under the detection threshold, displayed in temporal order. The top row shows raw data, while the following rows report the tracking results for See Elegans, TrackMate, and ROIedit3D. At t1, neuron 30 is visible in the ROI center, and See Elegans tracks it up to t3, while TrackMate loses it at t2 and ROIedit3D at t1. The black bar represents 10 μm.
https://doi.org/10.1371/journal.pone.0300628.g005
Fig 6. Resulting traces of neuron 30, falling below the detection threshold. Panel A shows the traces for the ground truth (GT), See Elegans (SE), TrackMate (TM), and ROIedit3D (RE3D). See Elegans captures the dynamics of the GT, while TrackMate and ROIedit3D present gaps and/or artifacts; TrackMate assigns two different IDs to the same neuron, hence the different colors. Panel B reports the absolute displacement of neuron 30 from t1 onwards. Panel C reports the difference between the distance of neuron 30 from 20 of its closest neighbors at t1 and at subsequent times. The plot reveals some neurons moving closer and some moving away, with an excursion of up to 2 μm (e.g., rows 6 and 16).
https://doi.org/10.1371/journal.pone.0300628.g006
---
[1] URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0300628