(C) PLOS One
This story was originally published by PLOS One and is unaltered.



Synaptic reorganization of synchronized neuronal networks with synaptic weight and structural plasticity [1]

Authors: Kanishk Chauhan, Alexander B. Neiman, Peter A. Tass (Department of Physics and Astronomy and Neuroscience Program, Ohio University, Athens, Ohio, United States of America)

Date: 2024-08

Abnormally strong neural synchronization may impair brain function, as observed in several brain disorders. We computationally study how neuronal dynamics, synaptic weights, and network structure co-emerge, in particular, during (de)synchronization processes and how they are affected by external perturbation. To investigate the impact of different types of plasticity mechanisms, we combine a network of excitatory integrate-and-fire neurons with different synaptic weight and/or structural plasticity mechanisms: (i) only spike-timing-dependent plasticity (STDP), (ii) only homeostatic structural plasticity (hSP), i.e., without weight-dependent pruning and without STDP, (iii) a combination of STDP and hSP, i.e., without weight-dependent pruning, and (iv) a combination of STDP and structural plasticity (SP) that includes hSP and weight-dependent pruning. To accommodate the diverse time scales of neuronal firing, STDP, and SP, we introduce a simple stochastic SP model, enabling detailed numerical analyses. With tools from network theory, we reveal that structural reorganization may remarkably enhance the network's level of synchrony. When weaker contacts are preferentially eliminated by weight-dependent pruning, synchrony is achieved with significantly sparser connections than in randomly structured networks in the STDP-only model. In particular, the strengthening of contacts from neurons with higher natural firing rates to those with lower rates and the weakening of contacts in the opposite direction, followed by selective removal of weak contacts, allows for strong synchrony with fewer connections. This activity-led network reorganization results in the emergence of degree-frequency and degree-degree correlations and mixed degree assortativity. We compare the stimulation-induced desynchronization of synchronized states in the STDP-only model (i) with the desynchronization of models (iii) and (iv).
The latter require stimuli of significantly higher intensity to achieve long-term desynchronization. These findings may inform future pre-clinical and clinical studies with invasive or non-invasive stimulus modalities aiming at inducing long-lasting relief of symptoms, e.g., in Parkinson’s disease.

Synaptic weight and structural plasticity of neuronal networks determine their behavior, and abnormalities therein may underlie disordered states. Studying how different plasticity mechanisms govern network dynamics, particularly during (de)synchronization processes, holds clinical importance concerning, e.g., Parkinson’s disease. The marked difference between the timescales at which neuronal spiking activity (milliseconds), synaptic weight modifications (minutes-hours), and structural changes (hours-days) occur in the brain makes plastic network models computationally expensive, which may limit the scope of studies. Here, we present a leaky integrate-and-fire (LIF) neuron network model with a standard spike-timing-dependent plasticity (STDP) rule for weight plasticity and a stochastic structural plasticity (SP) method. The model is computationally efficient, allowing for detailed numerical analyses of network dynamics and structure. Combining the model with tools from network science, we show that structural reorganization resulting from SP can optimize the network for synchronization, elevating the level of synchrony while concurrently reducing overall network connections. This leads to the emergence of structural correlations between the natural firing rates of neurons and the number of their pre- and post-synaptic partners. Additionally, we demonstrate that synchronized networks that evolved with SP can be more robust against desynchronization stimulation.

Competing interests: PAT works as consultant for Boston Scientific Neuromodulation and is inventor on a number of patents for invasive and non-invasive neuromodulation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding: PAT gratefully acknowledges funding support by the Vibrotactile Therapy Research Fund, by the John A. Blume Foundation, by the Alda Parkinson’s Research Fund, and by the Foundation for OCD Research (New Venture Fund 011665-2020-08-01, url: https://www.ffor.org/ ). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability: All relevant data are within the paper and its Supporting information file. The data and the codes for generating the data and figures are available on GitHub at https://github.com/kanishk-chauhan/SynapticReorganization-LIF-STDP-SP- .

Copyright: © 2024 Chauhan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

Neurons form networks that are plastic in nature: the spiking dynamics of neurons, the weights (transmission efficiencies) of synaptic contacts, and the structure of the networks can change with time [1–4]. The plastic nature of neuronal networks enables learning and memory [5, 6], stabilization of networks in spontaneous and experience-induced conditions [7, 8], and recovery and rehabilitation after stroke and injuries [9–11]. Alterations in neural plasticity may support pathological conditions such as Parkinson's disease (PD) [12, 13] and epilepsy [14].

Models of plastic neuronal networks are used to study the functioning of specific brain areas in healthy and disordered conditions and to develop therapeutic stimulation methods [15–20]. Plastic networks of leaky integrate-and-fire (LIF) model neurons have been used to design and validate effective stimulation methods such as coordinated and random reset (CR and RR) stimulation [21–24], which leverage synaptic weight plasticity to induce therapeutic effects.

The weight of a synaptic contact can change depending on the exact timing of the spikes of the pre- and post-synaptic neurons [25, 26]. This mechanism is termed STDP [26–28]. Networks of LIF neurons with STDP can display bistability by residing in either a synchronized or a desynchronized state [21–24], which mimic the pathological and physiological states, respectively, in the subthalamic nucleus (STN) and basal ganglia of patients with PD [29–31]. Stimulation can be employed to counteract abnormal synchrony and induce long-term desynchronization [24, 32–34]. In the case of Alzheimer's disease (AD), desynchronized spiking and decoupling of neurons are observed during disease progression [35, 36]. Re-synchronization of fast-spiking interneurons restores gamma oscillations in the hippocampus, reducing the impairment of cognitive function in AD patients [37].

SP is another prominent form of plasticity: besides the sprouting and reshaping of synaptic elements, it reorganizes the network structure via the addition and elimination (pruning) of synaptic contacts depending on the activity of the neurons, a process referred to as synaptic reorganization [4, 6]. The addition and pruning of synaptic contacts may depend on several factors, such as the synaptic weight, the firing rates of the neurons, and the physical distance between neurons [4, 38, 39]. In particular, weaker synaptic contacts are more prone to pruning than strong ones [38, 40–42]. The form of SP that adds or removes synaptic contacts to maintain a homeostatic set-point of the neurons' firing rate is termed homeostatic SP [4, 6].

Neurons extend their neurites (axons and dendrites), which may reach each other to form potential synaptic contacts [43, 44]. In principle, the axons and dendrites of any two neurons may form potential synaptic contacts at multiple locations, some of which may turn into actual synaptic contacts as the synaptic elements (axonal boutons and dendritic spines) bridge the gap between the axons and the dendrites [43, 45]. On the one hand, this allows for the formation of multiple contacts between a pair of neurons [43, 46, 47]. On the other hand, it makes nearby neurons more likely to connect, producing distance-dependent connectivity [48]. Structural changes in a network occur on a much longer timescale than synaptic weight changes (due to STDP) and neuronal spiking dynamics [4, 6].

Activity-dependent changes in network structure have been implemented in several studies that aimed to reproduce experimentally observed network behaviors and connectivity statistics, besides identifying the mechanisms underlying the synaptic reorganization of networks [6, 46, 47, 49]. The synchrony of a network of non-identical oscillators can be enhanced by specific alterations in the network structure [50, 51], and in networks of FitzHugh-Nagumo model neurons, synchrony can be enhanced by a combination of STDP and homeostatic SP (hSP) [52]. In networks of oscillators that may represent certain oscillatory neuronal networks, synaptic reorganization can significantly affect the network dynamics and its response to stimulation compared to networks with fixed structure [53].

The structural properties of networks have been studied using tools from network science, such as node degree distribution and correlation, clustering coefficient, average path length, and assortativity [54–56]. Networks of oscillators that evolve with SP show degree-frequency correlations, while such correlations may be absent in random graphs [50, 53]. Assortativity is linked to certain network properties, e.g., stability, robustness, and information content [56, 57]; real-world networks, such as neural networks and co-authorship networks, are either assortative or disassortative [56]. In practice, multiple measures are often used together, as one metric may not suffice to describe and distinguish the structural properties of different networks. In the present study, we employ degree distribution, degree-frequency and degree-degree correlations, and degree assortativity to fully describe the network structure and to distinguish the structure that emerges from synaptic reorganization in the presence of SP from that of random networks.

The timescales of neuronal spiking and of changes in synaptic weights and network structure are rather distinct. Whereas spiking activity occurs on a sub-second timescale, synaptic weight and structural changes may take minutes to hours [58] and hours to days [59, 60], respectively. Incorporating all three in a computational study thus poses the challenge of maintaining such distinct timescales while keeping the network model computationally inexpensive enough that a detailed analysis of the network dynamics can be conducted. Here, we combine the LIF neuron model with a standard additive STDP rule and introduce a stochastic SP rule that adds and eliminates synaptic contacts based on the firing rate of the postsynaptic neuron and the weight of the contacts. The network model maintains the distinction between the timescales of neuronal activity and the plasticity mechanisms. The stochastic SP method is computationally fast, and we compare it with a prevailing method introduced in Ref. [61], referred to as Butz and van Ooyen SP (BvOSP) in this study, which is neuroscientifically informed but computationally costlier [20]. We show that our stochastic SP method produces network dynamics similar to those produced by BvOSP. We aim to study the effect of the two distinct plasticity mechanisms (synaptic weight and structural) on the dynamical states of the network and to understand the co-evolution of network activity and structure (synaptic reorganization).

As shown computationally, stimulus responses of neural networks with STDP may differ significantly from those of networks with fixed coupling strength, i.e., fixed synaptic weights [21, 23, 24, 33, 62, 63]. STDP may cause multistability [64, 65], and properly designed stimuli may move networks from one attractor to another, qualitatively different attractor, thereby causing long-term stimulus effects that persist after cessation of stimulation [21, 23, 24, 33, 62, 63]. In general, stimulus-induced reshaping of adaptive systems may have applications in various fields [66]. For instance, in a clinical context, fundamental predictions derived from stimulated networks with STDP were key for the development of novel therapies: Deep brain stimulation (DBS) is an established treatment for Parkinson's disease [67, 68]. While being the gold standard for the treatment of medically refractory Parkinson's disease, DBS still has therapeutic limitations and may cause significant side effects [68]. For DBS, electrical stimuli are permanently and periodically delivered to specific target areas in the brain at rates greater than 100 Hz [67–69]. To specifically counteract Parkinson's-related abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a patterned multi-channel stimulation technique, was developed computationally [70]. To overcome the need for permanent stimulation, computational studies in neural networks with STDP suggested using CR stimulation to cause long-lasting desynchronization through an unlearning of abnormal synaptic connectivity [33, 62, 63].

Non-trivial qualitative stimulus-response predictions, e.g., the emergence of cumulative and long-term effects [33, 63], were key to the development of the corresponding pre-clinical and clinical experimental and study protocols in the context of Parkinson's, epilepsy, and binge alcohol consumption [71–73]. Furthermore, these stimulus responses enabled the development of non-invasive CR stimulation techniques, e.g., for the vibrotactile treatment of Parkinson's [74, 75] or the acoustic treatment of chronic subjective tinnitus [76–78].

To account for additional plasticity mechanisms, in this study we also focus on the effect of SP on the response of neuronal networks to stimulation by comparing the desynchronization of random networks with frozen structures with that of networks that undergo synaptic reorganization to reach a steady synchronized state. To this end, we use our model of plastic neuronal networks to assess the effectiveness of a stimulation protocol named Uncorrelated Multichannel Random Stimulation (UMRS), a technique similar to previously developed stimulation methods belonging to the CR family [21, 33, 79], in driving the network out of a pathological model state, thereby inducing long-term desynchronization.

Model and methods

We use excitatory networks of conductance-based LIF neurons developed in [21, 79] to model synchronized and desynchronized states in the subthalamic nucleus. The unconnected neurons fire periodically with natural firing rates randomly distributed around the mean f 0 . The standard deviation of the natural firing rates, σ f , serves as a measure of heterogeneity or diversity and is used here as one of the control parameters. When the neurons are randomly connected and the synaptic weights are governed by an additive STDP rule, the network may settle in either a synchronous or asynchronous state, depending on the initial distribution of synaptic weights. We include SP in the network model, which allows time-dependent changes in the network structure according to the neuronal activity and the synaptic weights. We develop a stochastic model of SP in which synaptic contacts are modeled with a birth-death process (governing addition or pruning of contacts) with corresponding rates that depend on the firing rates of neurons and on the synaptic weights of the contacts. The maximal probabilities of addition and pruning are other control parameters of our model. We use an adiabatic approach for the SP dynamics to overcome the challenge of the diverse time scales of neuronal firing, STDP-induced weight dynamics, and extremely slow SP. In this approach, the network is assumed to reach a meta-steady state in between consecutive structural updates.
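The adiabatic birth-death scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the meta-steady-state network simulation between consecutive structural updates is replaced by a placeholder firing-rate vector, and the distance and weight dependence of the addition/pruning probabilities is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sp_step(A, rates, f_T, p_add=0.05, p_prune=0.05):
    """One birth-death update of the adjacency matrix A (a sketch).
    Contacts onto neurons firing below the target rate f_T are added;
    contacts onto neurons firing above f_T are pruned."""
    N = A.shape[0]
    for i in range(N):                  # i: postsynaptic neuron
        if rates[i] < f_T:              # under-active: grow incoming contacts
            for j in range(N):
                if i != j and A[i, j] == 0 and rng.random() < p_add:
                    A[i, j] = 1
        elif rates[i] > f_T:            # over-active: prune incoming contacts
            for j in range(N):
                if A[i, j] == 1 and rng.random() < p_prune:
                    A[i, j] = 0
    return A

# Adiabatic loop: the network would be simulated to a meta-steady state
# between structural updates; here that simulation is a placeholder that
# returns fixed sub-target firing rates, so the network only grows.
N, f_T = 10, 4.5
A = np.zeros((N, N), dtype=int)
rates = np.full(N, 2.0)                 # all neurons below target rate
for epoch in range(50):
    A = stochastic_sp_step(A, rates, f_T)
density = A.sum() / (N * (N - 1))       # fraction of possible contacts
```

With all neurons below the target rate, repeated updates drive the connection density toward 1, illustrating the homeostatic growth pressure; a full simulation would feed back the resulting firing rates and stop the growth.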

Network model

We place N = m × m neurons on a regular square lattice of size L with spacing h = L/(m − 1). The spatial coordinates of a given neuron i that lies at lattice index (i x , i y ) are given by Eq (1), which includes small random jitters in the x- and y-coordinates of the neurons. The neurons are enumerated as i = i x + (i y − 1)m, where i = 1, …, N. The membrane potential of the i-th neuron is governed by [21] (2) where C is the membrane capacitance, g leak,i is the leak conductance, and V rest is the resting membrane potential. The time-varying synaptic conductance, g syn,i (t), and the reversal potential, V syn , determine the time-varying synaptic inputs from the presynaptic partners. I stim,i (t, l i,r ) is the stimulation current received by neuron i at a distance l i,r from the r-th stimulation electrode [detailed in the 'Stimulation' section], which can be set to 0 for all i to study the steady-state dynamics of the network. Lastly, I noise,i represents noisy inputs from other sources, e.g., neuronal populations not included in the model. A neuron fires a spike whenever its membrane potential crosses the dynamic threshold, V th,i , governed by (3) where τ th is the threshold time constant and V th,rest is the resting threshold potential. Once the neuron generates a spike, its membrane potential, V i (t), is kept at V spike and V th,i (t) is kept at V th,spike for a duration of τ spike . After that, the membrane potential is reset to V reset . The synaptic conductance, g syn,i , follows (4) where τ syn is the synaptic time constant, κ is the maximal coupling strength, t j,μ is the timing of the μ-th spike of the j-th presynaptic neuron, and t d is the synaptic time delay.
In general, multiple contacts can exist from neuron j to neuron i; their weights are given by the elements, w i,j,k (t), of the weight matrix, W(t), where index k refers to the specific contact from neuron j to neuron i, up to the maximum number of contacts permitted between any pair of pre- and post-synaptic partners. The sum over k in Eq 4 runs over the A i,j contacts, where A i,j (t) is an element of the N × N adjacency matrix, A(t). The adjacency matrix determines the network structure such that A i,j (t) assumes a value >0 if synaptic contacts from neuron j to i exist, and 0 otherwise. Therefore, index i refers to a postsynaptic neuron and j to its presynaptic partner. No self-loops are allowed, i.e., A i,i (t) = 0. The time dependencies of the weight and adjacency matrices correspond to weight and structural plasticity, respectively. The noisy input from other sources is modeled as an independent Poisson spike train with constant rate f noise , and is given by (5) where the synaptic noise conductance, g noise,i (t), follows (6) In Eq 6, the second term represents Poisson noise with noise intensity κ noise . The summation is over Poisson spikes, where t i,μ is the timing of the μ-th Poisson spike fed to neuron i.
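The lattice placement and neuron enumeration can be sketched as below. The jitter amplitude of 0.1·h is an assumed illustrative value, not the paper's parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Place N = m x m neurons on a regular square lattice of size L with
# spacing h = L/(m - 1), plus small random jitters in x and y.
m, L = 10, 1.0
h = L / (m - 1)
ix, iy = np.meshgrid(np.arange(1, m + 1), np.arange(1, m + 1), indexing="ij")
x = (ix - 1) * h + 0.1 * h * rng.uniform(-1, 1, (m, m))  # jitter: assumption
y = (iy - 1) * h + 0.1 * h * rng.uniform(-1, 1, (m, m))

# Enumeration i = i_x + (i_y - 1)*m maps lattice indices to i = 1..N.
N = m * m
idx = ix + (iy - 1) * m
```

The enumeration gives each neuron a unique index from 1 to N, which is the indexing used for the weight and adjacency matrices.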

STDP

We use the additive STDP rule as in Ref. [21]. The change in the weight of the synaptic contacts from a presynaptic neuron j to its postsynaptic partner i is given by (7) Thus, w i,j,k (t) → w i,j,k + δw i,j (q) ∀k. Here, q = t i − t j − t d is the time lag between the spikes of neurons i and j. η ≪ 1 ensures a longer timescale of synaptic weight change compared to the neuronal membrane potential dynamics. τ + is the long-term potentiation (LTP) time constant and τ R scales the long-term depression (LTD) time constant relative to τ + . The total synaptic weight change, accumulated over all possible values of the time lag, is controlled by b, which determines the asymmetry of the STDP rule: b < 1 makes STDP potentiation dominant, b > 1 makes it depression dominant, and b = 1 makes it balanced. The parameters of the STDP rule, including the time delay t d , are kept the same as in Refs. [21, 79] and are given in Table 1. Eq 7 is implemented as a set of differential equations for the weight matrix, W(t), and the traces χ(t), ψ(t) for the pre- and post-synaptic spike trains [80], (8) where t i,μ is the timing of the μ-th spike of the postsynaptic neuron and t j,μ is that of its presynaptic partner. t i and t j represent the latest spike times that trigger potentiation and depression, respectively.


Table 1. Parameters of the network model and stimulus. https://doi.org/10.1371/journal.pcbi.1012261.t001
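The additive, asymmetric STDP kernel described above can be sketched as follows. The parameter values here are placeholders rather than the Table 1 values, and the exact normalization of the depression branch is our reading of the roles of b and τ R (depression amplitude scaled by b/τ R , time constant τ R τ + , so that b = 1 balances the integrated LTP and LTD).

```python
import numpy as np

def stdp_dw(q, eta=0.02, tau_plus=10.0, tau_R=4.0, b=1.0):
    """Additive STDP kernel (a sketch with placeholder parameters).
    q = t_post - t_pre - t_d is the spike-time lag: positive lags
    (pre before post) potentiate, negative lags depress."""
    if q >= 0:
        return eta * np.exp(-q / tau_plus)                        # LTP
    return -eta * (b / tau_R) * np.exp(q / (tau_R * tau_plus))    # LTD

def update_weight(w, q):
    """Additive update with hard bounds [0, 1] on the weight."""
    return float(np.clip(w + stdp_dw(q), 0.0, 1.0))

w = 0.5
w_after_ltp = update_weight(w, 2.0)    # pre leads post -> strengthen
w_after_ltd = update_weight(w, -2.0)   # post leads pre -> weaken
```

With b = 1, the areas under the potentiation and depression branches are equal (η τ + each), matching the "balanced" case described in the text.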

SP

Computational models of SP may include all aspects, from neurite outgrowth and retraction [81] to the generation and deletion of synaptic elements and the probabilistic formation of synaptic contacts [61]. BvOSP [49, 61] is a model that generates and deletes synaptic elements based on the average activity level of neurons and makes them available for synapse formation, without modeling the activity-dependent outgrowth or retraction of neurites; it allows for multiple contacts between a pair of pre- and post-synaptic neurons, i.e., adjacency matrix elements A i,j ∈ {0, 1, 2, …}. Its algorithm is presented in detail in the S1 Text and Refs. [49, 61, 82]. It has been used to spontaneously generate networks from completely unconnected neurons, to reproduce experimentally observed network reorganization after lesion [49, 82], and to explain clinically observed therapeutic effects of CR [20], transcranial direct current stimulation [83], and transcranial magnetic stimulation [84]. In a recent study, BvOSP was used to show that hSP is responsible for biphasic changes in the connectivity of pyramidal neurons in mouse anterior cingulate cortex 24 and 48 hours after optogenetic stimulation [85]. Including BvOSP in computational studies that investigate the evolution of networks with both weight and structural plasticity over long periods of time may require considerably long computation times, which could limit the extent to which the impact of SP and variations in its control parameters on the network dynamics and properties can be studied. Establishing a simpler model of SP that captures the essential mechanisms governing changes in brain networks may allow for a reliable and more detailed numerical analysis.
We propose a simple model, stochastic SP (described below), which directly builds or eliminates synaptic contacts between neurons without separately modeling the neurite outgrowth/retraction or the generation/deletion of synaptic elements. We further simplify the model by allowing only a single synaptic contact from a given presynaptic neuron to its postsynaptic partner, which accounts for its overall effect on the postsynaptic one, i.e., A i,j ∈ {0, 1}, although the model can easily be extended to include multiple synaptic contacts between a pair of pre- and post-synaptic neurons. We validate the stochastic SP method by comparing our results with those obtained using BvOSP, where we allow for multiple contacts so that A i,j ∈ {0, 1, …, 10}. We base our stochastic SP model on the experimental evidence stated above. The probability of the addition of a synaptic contact between two neurons, P add , depends on the Euclidean distance between them and the firing rate of the postsynaptic neuron. Primarily, postsynaptic neurons with firing rates below the homeostatic set-point (target) firing rate develop synaptic contacts with nearby neurons. Synaptic contacts are pruned depending on both the synaptic weight and the firing rate of the postsynaptic neuron, which together determine the pruning probability, P prn . In particular, synaptic contacts that are weaker or that deliver inputs to neurons with a firing rate above the target firing rate are more likely to be pruned, consistent with experimental observations [41, 42]. The addition of a synaptic contact from a neuron j to i changes the adjacency matrix element A i,j from 0 to 1, while pruning changes A i,j from 1 to 0. Newly added contacts are given a small random weight, following experimental evidence [41]. The dependence of the pruning and addition probabilities on the postsynaptic neuron's firing rate models a homeostatic process whereby the neuron's activity is maintained at a target firing rate, f T .
The probability of the addition of incoming contacts increases as the neuron's firing rate decreases below f T , while the pruning probability increases as the firing rate increases above f T . The firing rate of the i-th neuron is calculated by low-pass filtering its spike train as (9) where t i,μ is the timing of the μ-th spike of the i-th neuron and τ slow is the time constant. The process of addition is purely homeostatic, i.e., weight-independent. The probability of addition is given by (10) where P h is the maximal probability of the homeostatic SP (both addition and pruning), l 0 is the decay constant for the distance dependence, and l i,j is the Euclidean distance between neurons i and j. Since the neurons are arranged on an L × L square lattice, the network structure becomes distance-independent for sufficiently large l 0 . The firing rate dependence function, G(.), is a logistic function defined as (11) where the parameters Ω 0 and ν determine the midpoint and the slope, respectively. In our model, a synaptic contact can be pruned independently by either the homeostatic or the weight-dependent process. The homeostatic component of the pruning is given by (12) We assume non-zero probabilities of addition and pruning when the firing rate of a post-synaptic neuron matches its target rate, f T . The steepness of the logistic function is characterized by the parameter Δf, which fixes the midpoints and slopes of the pruning and addition probabilities relative to f T . The weight-dependent component of pruning is of the form (13) where w min ≪ 1 and P w is the maximal probability of weight-dependent pruning. The total probability of pruning is (14) Fig 1 exemplifies the SP probabilities. Since only one contact can be built from neuron j to i in our stochastic SP model, the subscript k in Eqs 13 and 14 can be dropped.


Fig 1. SP probabilities for the typical parameter values. A: Probabilities of homeostatic SP versus the firing rate of a post-synaptic neuron. The vertical dashed line marks the target firing rate, f T . B: Total SP probabilities versus synaptic weight according to Eqs 10 and 14 when the firing rate of the neuron equals the target rate. The parameters are: Δf = 1 Hz, f T = 4.5 Hz, w min = 0.001, l = 0.1, l 0 = 0.5; P h = 1 for A and P h = 0.01, P w = 1 for B. https://doi.org/10.1371/journal.pcbi.1012261.g001
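The qualitative interplay of the addition and pruning probabilities can be sketched as below, using the Fig 1 parameter values. The logistic dependence follows the Eq 11 form, but the exact midpoint/slope placement relative to f T and Δf, the functional form of the weight-dependent term, and the combination rule for the two independent pruning processes are assumptions for illustration, not quoted formulas.

```python
import numpy as np

def G(f, omega0, nu):
    """Logistic dependence on the postsynaptic firing rate (Eq 11 form)."""
    return 1.0 / (1.0 + np.exp(-(f - omega0) / nu))

# Fig 1 parameter values; functional forms below are our sketch.
f_T, delta_f = 4.5, 1.0
P_h, P_w, w_min, l0 = 0.01, 1.0, 0.001, 0.5

def p_add(f, dist):
    # addition grows as the rate drops below f_T; decays with distance
    return P_h * G(-f, -f_T, delta_f) * np.exp(-dist / l0)

def p_prn_homeostatic(f):
    # homeostatic pruning grows as the rate rises above f_T
    return P_h * G(f, f_T, delta_f)

def p_prn_weight(w):
    # assumed form: weaker contacts are pruned with higher probability,
    # reaching the maximum P_w as w -> 0
    return P_w * w_min / (w + w_min)

def p_prn(f, w):
    # assumed combination of two independent pruning processes
    ph, pw = p_prn_homeostatic(f), p_prn_weight(w)
    return ph + pw - ph * pw
```

At f = f T both homeostatic probabilities are non-zero (half their maximum here), matching the stated assumption, and weak contacts remain prunable through the weight-dependent term.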

Measures

The degree of synchrony of a network is measured using the Kuramoto order parameter [21, 86], (15) where ϕ i (t′) is the phase of the i-th neuron, calculated using the timings of two consecutive spikes, t i,μ and t i,μ+1 . The order parameter ranges from 0 for a complete absence of synchrony to 1 for perfect synchrony. We calculate the network-averaged firing rate, 〈f〉(t) = (1/N)∑ i f i (t), and the coefficient of variation, CV(t), of the firing rate, defined as the ratio of the standard deviation of the firing rate, σ f (t), to the mean, 〈f〉(t): CV(t) = σ f (t)/〈f〉(t). The combination of the order parameter and the CV of the firing rate can be used as an indicator of synchrony and frequency locking for non-identical oscillators. CV = 0 combined with an order parameter close to 1 indicates a perfectly frequency-locked state. Non-zero CV values indicate non-identical frequencies, e.g., due to incomplete synchronization and/or cluster states with different frequencies. The synaptic weights are studied using the average incoming synaptic weight, W i (t), of individual neurons and the network-averaged synaptic weight, 〈W〉(t) (16) 〈W〉(t) ∈ [0, 1], where 0 indicates an uncoupled network and 1 a strongly coupled one. 〈W〉(t) = 0 (1) may indicate desynchrony (synchrony). W i → 1 (0) if neuron i is strongly (weakly) driven by its presynaptic partners via its incoming contacts. We also study the distribution of the synaptic weights of individual contacts in the steady synchronized states of the network. To analyze the structure of the network, we use the incoming and outgoing node degree densities (in-NDD and out-NDD) of individual neurons, given by (17) and the network-averaged NDD, β(t). β(t) can range from 0 for a fully unconnected network to 1 for an all-to-all connected network. For ease of use, we drop the superscript for the in-NDD hereafter. We employ the Pearson correlation coefficient to characterize the assortativity of the networks [56, 87, 88].
In (dis)assortative networks, the nodes (here, neurons) tend to connect to other nodes with (dis)similar properties on average. We determine the (dis)assortativity of a network for the in- and out-degrees of the neurons as follows [89]. Let ϵ, υ ∈ {in, out} be the degree types of the pre- and post-synaptic neurons, respectively, connected by the e-th edge (contact). The Pearson correlation coefficient is given by (18) where E = βN(N − 1) is the total number of contacts and the averages and standard deviations of the degrees are taken over all E edges. We calculate the Pearson correlation coefficient between the in-NDDs of the pre- and post-synaptic neurons (in-in), between the out-NDDs (out-out), and between the in-NDD of presynaptic neurons and the out-NDD of postsynaptic neurons (in-out) and vice versa (out-in) [89–91]. Negligible time variation of the network-averaged measures characterizes the steady states. We use the network-averaged firing rate, 〈f〉, and synaptic weight, 〈W〉, to determine the approach to a steady state as follows. We integrate the model equations in time intervals of T = 60 s until both 〈W〉 and 〈f〉 converge with a given relative accuracy, Υ, (19) where n = 1, 2, …, n max counts the convergence intervals (iterations), n min is the minimum number of intervals, and n max is the maximum number of iterations required to achieve the relative accuracy of Υ = 10−3.
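The two synchrony indicators can be computed straightforwardly. Below is a sketch of the Kuramoto order parameter (Eq 15 form, R = |⟨exp(iϕ)⟩|) and the CV of the firing rates, assuming phases have already been extracted from consecutive spike times.

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter R = |<exp(i*phi)>|:
    0 for a complete absence of synchrony, 1 for perfect synchrony."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

def cv_rate(rates):
    """Coefficient of variation of the firing rates across the network."""
    rates = np.asarray(rates, dtype=float)
    return rates.std() / rates.mean()

# Perfectly synchronized: identical phases -> R = 1.
R_sync = order_parameter(np.zeros(100))
# Incoherent: phases spread uniformly on the circle -> R near 0.
R_async = order_parameter(2 * np.pi * np.arange(100) / 100)
```

As the text notes, R close to 1 together with CV = 0 indicates a perfectly frequency-locked state, while non-zero CV flags non-identical frequencies even when R is moderate.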

Stimulation

Stimulation is used to induce therapeutic effects in pathological conditions, such as epilepsy [92–94] and Parkinson's disease [72, 75, 95, 96], where a synchronized state is associated with pathology while incoherent spiking of neurons is observed in the healthy state. We use a multichannel stimulation protocol, UMRS (illustrated in Fig 2), where N s electrodes at fixed locations deliver stimulation independently. Each stimulation site, r, receives uncorrelated stimuli at random times with exponentially distributed inter-pulse intervals, similar to the temporal randomness of RR stimulation [21, 22]. The stimulus from the r-th electrode received by neuron i at a distance l i,r from the electrode is given by a s I 0 X r (t)D(l i,r ), where X r (t) is the charge-balanced stimulation current of the r-th electrode. The dimensionless parameter a s ∈ [0, 1] scales the magnitude, I 0 , of the stimulus, and D(l i,r ) determines the spatial drop in the stimulus with distance from the electrode. The total amount of stimulus received by the i-th neuron is (20)


Fig 2. UMRS stimulus administered at N s = 4 sites. The sites of stimulation are chosen to be the centers of the 4 quadrants of the square network plane. The left panel shows an example stimulus pattern, X(t), with F s = 100 Hz for a 50 ms duration. Each stimulation site receives an independent stimulus, and the time interval between stimulus events (each event comprising a positive pulse followed by a negative pulse after a short gap) is exponentially distributed. The right panel shows the spatial variation of the stimulus with distance from the electrodes, controlled by D(l i,r ), for all 4 electrodes placed at the centers of the four quadrants. Neurons are marked with crosses at their positions. https://doi.org/10.1371/journal.pcbi.1012261.g002

The time-dependent charge-balanced stimulus current, X r (t), is a random sequence of stimulus events consisting of positive and negative pulses. Each event comprises a rectangular positive pulse of amplitude 1, followed by a short gap and then a rectangular negative pulse that balances the delivered charge. The magnitude I 0 is a fixed stimulus parameter (Table 1). Intervals between excitatory pulses are exponentially distributed with mean inter-pulse interval τ UMRS . We impose a minimum interval between subsequent stimulus events of τ Λ = 7.692 ms, corresponding to a maximum stimulation frequency of 130 Hz [21, 22]. Thus, the mean stimulation frequency is F s = (τ UMRS + τ Λ )−1. For scaling the stimulus with distance, we use a Gaussian function, D(l i,r ), for simplicity. The parameter σ s determines the area of stimulus spread and can be associated with the number of neurons that effectively receive the stimulus. For a fraction, γ, of neurons or of the network area that we intend to stimulate with each electrode, the value of σ s can be determined as follows: assume that below 1% of the maximal intensity the stimulus is not effective, i.e., it does not affect the spiking of neurons.
The area covered by the stimulus from each electrode is π(3σ s )2 ≈ γL2, where L is the network size, since D(l i,r ) drops to ≈ 0.01 at l i,r = 3σ s . Therefore, σ s ≈ (L/3)√(γ/π) (Eq 21).
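The UMRS event timing and the stimulus spread can be sketched as follows. The shifted-exponential interval generation and the σ s relation follow directly from the formulas above; the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(2)

# Event times: exponentially distributed inter-pulse intervals with a
# hard minimum tau_lambda, so the mean frequency is
# F_s = 1 / (tau_umrs + tau_lambda) and the rate never exceeds 130 Hz.
tau_lambda = 7.692e-3                 # s, caps the rate at 130 Hz
F_s = 100.0                           # Hz, target mean stimulation frequency
tau_umrs = 1.0 / F_s - tau_lambda     # mean of the exponential part
intervals = tau_lambda + rng.exponential(tau_umrs, size=200_000)
event_times = np.cumsum(intervals)

# Spatial scale from the coverage relation pi*(3*sigma_s)^2 = gamma*L^2,
# i.e., the stimulus is deemed ineffective below 1% of its peak.
def sigma_s(gamma, L):
    return (L / 3.0) * np.sqrt(gamma / np.pi)
```

For example, stimulating a quarter of the network area (γ = 0.25) with L = 1 gives σ s ≈ 0.094, and the sampled intervals average 1/F s = 10 ms as required.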

[END]
---
[1] Url: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012261
