Mathematical framework for place coding in the auditory system
Alex D. Reyes, Center for Neural Science, New York University, New York, United States of America
Abstract

In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed in order to define clearly the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a purely place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
Author summary

One way of encoding sensory information in the brain is with a so-called place code. In the auditory system, tones of increasing frequencies activate sets of neurons at progressively different locations along an axis. The goal of this study is to elucidate the mathematical principles for representing tone frequency and intensity in neural networks. The rigorous, formal process ensures that the conditions for a place code and the associated computations are defined precisely. This mathematical approach offers new insights into experimental data and a framework for constructing network models.
Citation: Reyes AD (2021) Mathematical framework for place coding in the auditory system. PLoS Comput Biol 17(8): e1009251. https://doi.org/10.1371/journal.pcbi.1009251
Editor: Lyle J. Graham, Université Paris Descartes, Centre National de la Recherche Scientifique, FRANCE
Received: July 14, 2020; Accepted: July 6, 2021; Published: August 2, 2021
Copyright: © 2021 Alex D. Reyes. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The code for the simulations can be downloaded at https://github.com/AlexDReyes/ReyesPlosComp.git.
Funding: The author received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction

Many sensory systems are organized topographically so that adjacent neurons have small differences in their receptive fields. As a result, minute changes in sensory features cause an incremental shift in the spatial distribution of active neurons. This has led to the notion of a place code, where the location of the active neurons provides information about sensory attributes. In the auditory system, the substrate for a place code is tonotopy, where the preferred frequency of each neuron varies systematically along one axis [1]. Tonotopy originates in the cochlea [2, 3] and is inherited by progressively higher order structures along the auditory pathway [4]. The importance of a place code [5] is underscored by the fact that cochlear implants, arguably the most successful brain-machine interface, enable deaf patients to discriminate tone pitch simply by delivering brief electrical pulses to points of the cochlea corresponding to specific frequencies [6, 7].

Although frequency and intensity may be encoded in several ways [8], there are regimes where place coding seems advantageous. Humans are able to discriminate small differences in frequency and intensity even for stimuli as brief as 5–10 ms [9–13], so the major computations must already have taken place within a few milliseconds. This is significant because in such a short interval neurons can fire only bursts of 1–2 action potentials [14, 15], indicating that they essentially function as binary units. It therefore seems unlikely that either frequency or intensity is encoded in the firing rate of individual cells, since the dynamic range would be severely limited. Similarly, coding schemes based on temporal or 'volley' mechanisms are difficult to implement at the level of cortex because neurons can phase-lock only to low frequency sounds [16–18]. However, a purely place code cannot represent dynamically complex sound; indeed, coding and perception are enhanced significantly when temporal and rate cues are factored in [8, 12, 19–22] and when longer duration stimuli are used [9–12].

There are several challenges in implementing a purely place coding scheme. First, the optimal architecture for representing frequency is not well-defined. Possible functional units include individual neurons, cortical columns [23, 24], or overlapping neuron clusters [25]. The dimension of each unit ultimately determines the range and resolution at which frequencies and intensities can be represented and discriminated. Second, how both frequency and intensity can be encoded with a place code is unclear, particularly for brief stimuli when cells function mostly as binary units. Third, the rules for combining multiple stimuli are lacking. Physiological sounds are composed of pure tones with differing frequencies and intensities, resulting in potentially complex spatial activity patterns in networks. Finally, the role of inhibition in a place coding scheme has not been established.

Here, a mathematical model is developed in order to gain insights into: 1) the functional organization of the auditory system that supports a place coding scheme for frequency and intensity; and 2) the computations that can be performed in networks. To simplify the analyses and to reveal the inherent advantages and limitations, the model focuses on how simple tones are represented and combined with a pure place code, and excludes the dynamic variables that mediate temporally complex sounds.
The approach is to use mathematical principles to construct the acoustic and neural spaces, find a mapping between the spaces, and then develop the algebraic operations. The predictions of the mathematical model are then tested with simulations. With this formal approach, the variables that are important for a place coding scheme are defined precisely.
Application to loudness summation

In the auditory system, the perceived loudness of band-limited noise or simultaneously presented tones depends on whether the frequency components fall within the so-called critical band (CB) of frequencies [34–36]. An important property is that increasing the bandwidth of the noise does not increase the perceived loudness until the bandwidth exceeds the CB, after which loudness increases linearly [37]. Moreover, this property is maintained at different sound intensities, indicating that the CB does not change. The origin of the CB is unclear and there is debate as to whether it is peripheral, involving mainly excitatory processes [38, 39], or central, which may also recruit inhibition [40–42]. The tonotopic axis is often divided into 24 CBs, each uniquely identified by its center frequency [35]. In the following, the algebraic operations are used to describe features of loudness summation and to suggest network mechanisms.

A band-limited noise stimulus, or more generally a complex stimulus with multiple tones, may be expressed, after discretization, as a set of increasing frequencies: F_m = {f_1, f_2, …, f_n}. The 'bandwidth' is defined as the difference between the highest and lowest frequency components (BW = f_n − f_1). In neural space, the stimulus results in an interval that is the union of the individual excitatory intervals generated by each tonal component: H_m = h_1 ∪ h_2 ∪ … ∪ h_n, where λ is the length of each interval and is the same for all intervals. The model assumes that for a multi-tone stimulus, one of the tones is dominant and generates inhibitory intervals (h_Iα and h_Iβ) that abut an excitatory interval h_d with no overlap (h_Iα ∩ h_d = h_Iβ ∩ h_d = ∅), as in a so-called lateral inhibitory configuration (see S1 Appendix for formal definitions). Physiologically, the dominant tone may correspond to the tone at the center of a CB [35] or to the tone with the lowest frequency, which has been shown to mask higher frequency components [43]. The union of these 3 intervals is defined to be the critical interval: H_CI = h_Iα ∪ h_d ∪ h_Iβ. The boxed inset in Fig 9 shows the relationship between H_m (gray), h_d (blue), and the two inhibitory intervals (red). The length of the interval h_l that results from the interaction of these intervals is given by |h_l| = |(h_Iα + h_Iβ) ⋅ H_m| = |H_m \ (h_Iα ∪ h_Iβ)| and is taken to be a proxy for loudness perception. As shown in S1 Appendix, |h_l| is equal to |h_d| as long as H_m ⊂ H_CI.
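To make the interval bookkeeping concrete, the following sketch implements the construction on discretized intervals (Python; the variable names and specific lengths are illustrative assumptions, not taken from the paper's repository):

```python
# Sketch of the loudness-summation construction (illustrative values).
# Half-open intervals [x, x + length) on the tonotopic axis are represented
# as sets of unit cells; addition is union, multiplication is set minus.

def interval(x, length):
    return set(range(x, x + length))

def add(h_a, h_b):          # h_a + h_b = h_a U h_b
    return h_a | h_b

def mul(h_a, h_b):          # h_a . h_b = h_b \ h_a
    return h_b - h_a

lam = 10                                    # interval length, same for all tones
H_m = interval(40, lam) | interval(45, lam) | interval(50, lam)  # 3 components

h_d = interval(45, lam)                     # dominant tone's excitatory interval
h_I = interval(45 - lam, lam) | interval(45 + lam, lam)  # abutting side bands
H_CI = h_I | h_d                            # critical interval

h_l = mul(h_I, H_m)                         # inhibition cancels the flanks of H_m
print(len(h_l) == len(h_d))                 # True while H_m is inside H_CI
```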
Fig 9. Algebra of loudness summation. Predicted interval lengths resulting from the interaction of a multi-tone stimulus delivered simultaneously. Boxed inset: overlapping synaptic intervals (H_m, gray) generated by a stimulus with 3 frequency components. Tic marks show the locations of the interval centers (x_α, x_d, x_β) along the tonotopic axis. The dominant tone (blue) also generates two inhibitory side bands (red). Plot shows the resultant length (|h_l|) after the operations (see text) as the number of intervals in H_m is increased (abscissa). Green bars in insets show the portion of H_m that was not cancelled by inhibition. Dotted vertical line marks the deviation of the curves from a constant value. Compare with Fig 9 of [37].
https://doi.org/10.1371/journal.pcbi.1009251.g009

Fig 9 shows the result graphically when H_m is lengthened by adding more tones to the stimulus. |h_l| is constant (= |h_d|) until the number of components is such that H_m exceeds the boundaries of the critical interval. In this example, the deviation occurs when the number of intervals, and hence the number of frequency components, exceeds 9 (dotted vertical line). The CB is then f_9 − f_1. Increasing the intensity of each component of F_m causes an increase in the length of the interval components of H_m. As shown in S1 Appendix, the CB will not change provided that the lateral inhibitory configuration is maintained and the lengths of the inhibitory intervals are constant. Under this condition, |H_m| and |H_CI| increase equally (compare lower and upper curves in Fig 9). Because |h_d| increases, there is an increase in baseline (upward shift of the curves) without a change in the CB. An all-excitatory version without inhibition would not reproduce the data: the critical interval would then be H_CI = h_d, and since h_l = H_m, |h_l| would exceed |h_d| whenever H_m has more than one component and would grow with an increasing number of tonal components. Unlike the data, the curves would have no flat region. The operations also describe a related experiment in which, instead of noise, the stimuli consisted of 4 tones whose frequency separations were varied systematically [37] (Fig B of S1 Appendix). The above analysis elucidates the general requirements for loudness summation. While there is some evidence for a dominant tone [43] and inhibitory processes [42], the extent of the inhibitory intervals is less clear and is likely to reflect the combined effect of the individual excitatory and inhibitory intervals generated by the other tones in the stimulus. The precise mechanisms need to be explored systematically with more detailed analyses, simulations, and experiments.
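A sketch of this prediction, under the same assumptions as above (the side-band lengths and the 5-cell component spacing are chosen only to show the plateau, not to match Fig 9):

```python
# Sketch of the Fig 9 prediction: |h_l| stays at |h_d| until H_m outgrows
# the critical interval H_CI, then grows with the number of components.
def interval(x, length):
    return set(range(x, x + length))

lam, x_d = 10, 100
h_d = interval(x_d, lam)                     # dominant excitatory interval
h_I = interval(x_d - 30, 30) | interval(x_d + lam, 30)  # assumed side bands
H_CI = h_I | h_d                             # critical interval

for n in range(1, 16):                       # number of tonal components
    locs = [x_d + 5 * (i - n // 2) for i in range(n)]   # 5-cell spacing
    H_m = set().union(*(interval(x, lam) for x in locs))
    h_l = H_m - h_I                          # multiplication by the side bands
    print(n, len(h_l), H_m <= H_CI)          # |h_l| flat at 10 while H_m in H_CI
```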
Discussion

The aim of this study was to develop a mathematical framework for a place code and to derive the underlying principles for how tones of varying frequencies and intensities are represented, assembled, and modulated in networks of excitatory and inhibitory neurons. The analyses are not intended to replicate the detailed aspects of biological networks and dynamic behavior but rather to clarify the minimal conditions that must be met for a viable place coding scheme, to aid in the interpretation of experimental data, and to provide a blueprint for developing computational models. The advantage of this formal approach is that it ensures that the terms and the advantages/limitations of a purely place-coding model are defined precisely, providing a foundation for examining the role of other auditory cues that enhance coding and perception (see below). In addition, the mathematical rules effectively constrain the computations that may be performed with a purely place code.

Place code framework in auditory processing: Evidence and implications

The model has several implications with regard to auditory processing. In this section, the advantages of the place coding framework are discussed and experimental data are interpreted within the context of the mathematical framework.

Representation of frequency and sound pressure. A key feature of the model is that the 'functional unit' of neural space is a set of contiguous neurons with flexible borders. The associated mathematical architecture is a collection of half-open intervals of varying lengths. The model provides a framework for encoding both frequency and intensity (or sound pressure) with a purely place-coding scheme. This is advantageous for brief stimuli, where firing rate and spike timing [8, 12, 19] may not be available (see Introduction). Some information may be carried by single spike latency [20]; however, spike latencies depend on variables other than frequency and do not appear to have the dynamic range to represent the full range of audible sound pressure levels [44]. Frequency and intensity discrimination does improve with stimulus duration, suggesting that these other variables play complementary roles in improving coding and perception [9–12, 22].

A network with flexible functional units is also advantageous for maintaining high resolution in both the frequency and pressure representations. This can be appreciated by comparing the resolutions attainable with the classical columnar organization [23, 25] (the stimulus is assumed brief so that firing rate information is unavailable; see Introduction). In this scheme, the neural space is divided into non-overlapping columns with fixed dimensions and distinct borders. The frequency of a stimulus is encoded by the location of the active column and sound pressure by the number of active neurons within the column (i.e., a population rate code). The relation between the maximum numbers of achievable frequency (n_f) and sound pressure (n_p) levels is then n_f ⋅ n_p = N, where N is the number of neurons along the tonotopic axis (see S1 Appendix). Intuitively, to maximize the number of frequency levels, the columns should be as small as possible so that more can fit along the tonotopic axis; however, this reduces the number of pressure levels that can be encoded because there are fewer neurons within a column. In contrast, for a network with flexible borders, the relation is: n_f + n_p = N. Fewer neurons (N = n_f + n_p) are needed to represent the full range of frequency and pressure levels as compared to columns (N = n_f ⋅ n_p).
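As a numeric illustration (using the relations as reconstructed above; the specific level counts are hypothetical):

```python
# Neurons required along the tonotopic axis to encode n_f frequency levels
# and n_p pressure levels (relations as reconstructed above).
n_f, n_p = 100, 100
N_columns = n_f * n_p     # fixed columns: 100 columns x 100 cells = 10,000 cells
N_flexible = n_f + n_p    # flexible borders: 100 locations + 100 lengths = 200 cells
print(N_columns, N_flexible)
```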
The advantage of a columnar organization is that the components of a multi-frequency stimulus remain separated in neural space. With flexible units, two intervals generated by tones with small frequency differences and/or high intensities can fuse into a single interval and hence be perceived as a single tone. As discussed below, ambiguities in the perception of complex stimuli are more consistent with a flexible-unit organization.

Relation between Δf and frequency difference limen. In the model, the acoustic space is discretized to reflect the resolution limits on frequency and intensity perception imposed by a neural space composed of neurons. The number of frequency levels and Δf are determined by the number of intervals that can be contained within the neural space (Eq 2). Though the model was introduced with Δx equivalent to the diameter of a cell in a single layer (Fig 2), Δx (and hence Δf) can be much smaller if several layers of neurons are considered (Fig A of S1 Appendix). The frequency difference limen (Δf_DL) gives the smallest difference in frequency between two tones that can be discriminated by subjects. The measured Δf_DL does not have a fixed value but depends on a number of stimulus parameters including duration, intensity, and test frequencies [10, 45]. Moreover, Δf_DL, which is related to the psychophysical measure of sensitivity ('d-prime', [46, 47]), is affected by unspecified sources of internal noise within subjects, such as trial-to-trial variability in pitch perception [48]. For these reasons, Δf_DL is likely to be larger than Δf. Thus, Δf may be viewed as the lower bound on Δf_DL for a purely place-coding scheme, which would be realized under optimal, noiseless conditions.

Addition operation. The addition operation applied to synaptic intervals is defined as their union: h_α + h_β = h_α ∪ h_β. An important consequence is that if the intervals overlap, they fuse into a single, longer interval. Under physiological conditions, this would occur if the tones of a multi-frequency stimulus have small differences in frequency. This is in line with psychophysical experiments, which show that subjects perceive tones with small differences in frequency as a single tone [43, 49] and have difficulty distinguishing the individual components of a multi-frequency stimulus [50, 51]. Another consequence is that addition of two overlapping non-empty intervals is sublinear: |h_α| + |h_β| > |h_α + h_β|. If one interval is also a subset of the other (h_α ⊂ h_β), then the sum is equal to the larger of the two intervals: |h_α + h_β| = |h_β|. This scenario would occur when binaural inputs converge onto a common site. Consistent with this prediction, electrophysiological recordings from neurons in the inferior colliculus show that the frequency response areas (FRAs, assumed to be representative of activity spread; see below) evoked binaurally are equal to the larger of the two responses evoked monaurally [52]. Similarly, assuming that loudness perception is linked to the length of the interval, a possible psychophysical analog is that a tone presented binaurally is perceived to be less than twice as loud as with monaural stimulation [53]. The apparent sublinear effects can be explained by the properties of the addition operation, though inhibitory processes may also contribute.
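These consequences can be checked directly on discretized intervals (a minimal sketch with hypothetical interval positions):

```python
# Sketch: fusion, sublinearity, and absorption under interval addition (union).
def interval(x, length):
    return set(range(x, x + length))

h_a = interval(10, 8)                        # [10, 18)
h_b = interval(14, 8)                        # [14, 22), overlaps h_a
assert len(h_a) + len(h_b) > len(h_a | h_b)  # 16 > 12: addition is sublinear

h_sub = interval(11, 4)                      # [11, 15), a subset of h_a
assert h_sub | h_a == h_a                    # sum equals the larger interval
```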
Multiplication operation and distributive properties. Multiplication of two synaptic intervals is defined as the set minus operation: h_α ⋅ h_β = h_β \ h_α. The operation removes from the multiplicand (h_β) the elements it has in common with the multiplier (h_α), thereby shortening it. The effect of inhibition can be inferred from the FRAs of neurons: applying GABA blockers causes the FRAs to widen [54, 55]. If the FRA can be used as a proxy for the spatial extent of activated neurons (see below), then this result is consistent with inhibition shortening the synaptic intervals.

The manner in which multiplication distributes over addition has important implications for combining information from multiple sources. In auditory cortex, excitatory pyramidal neurons receive convergent afferent inputs from the thalamus and other pyramidal cells [56, 57]. The two afferents also appear to innervate a common set of local inhibitory neurons [33, 57]. The fact that multiplication is left distributive (Eq 7) means that the effect can be estimated by measuring the effects of inhibition (h_I) on each excitatory input (h_α, h_β) separately and then summing the results: h_I ⋅ (h_α + h_β) = h_I ⋅ h_α + h_I ⋅ h_β. However, because multiplication is not right distributive (Eq 8), a similar approach cannot be used to examine two sources of inhibition acting on a single excitatory interval. The analyses suggest, for example, that the combined effects of two types of inhibitory neurons on excitatory cells [31] should be examined by activating both interneuron types simultaneously rather than separately. More generally, the representation of complex sound with a place coding scheme cannot be predicted by combining the representations of the individual components if the inhibitory intervals generated by the components interact. As shown in Eq 10 and Fig 4C (bottom), the response to two tones presented simultaneously is not a simple combination of the responses to each tone separately. It should be emphasized that this conclusion was derived mathematically from the distributive properties; it is not trivially related to non-linearities contributed by, e.g., inhibitory conductances or voltage-gated channels, since the model has no biophysical variables.
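The distributive properties can likewise be verified on discretized intervals (a sketch with hypothetical intervals; h_I2 stands in for a second inhibitory source):

```python
# Sketch: multiplication (set minus) is left- but not right-distributive.
def interval(x, length):
    return set(range(x, x + length))

def mul(h_a, h_b):                           # h_a . h_b = h_b \ h_a
    return h_b - h_a

h_I, h_I2 = interval(20, 10), interval(28, 10)   # two inhibitory intervals
h_E1, h_E2 = interval(15, 10), interval(25, 10)  # two excitatory inputs

# Left distributive: one inhibitory source acting on two excitatory inputs
# can be measured on each input separately and the results summed.
assert mul(h_I, h_E1 | h_E2) == mul(h_I, h_E1) | mul(h_I, h_E2)

# Not right distributive: two inhibitory sources acting on one excitatory
# interval must be applied together; separate measurements overestimate.
assert mul(h_I | h_I2, h_E1) != mul(h_I, h_E1) | mul(h_I2, h_E1)
```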
Assumptions and limitations

As evidenced by cochlear implants, at least rudimentary pitch perception can be achieved with a purely place code [6, 7]. However, extracting auditory features completely requires additional cues. Firing rate and spike timing information have been shown to enhance coding and perception [8, 12, 19–22]. Indeed, some neurons are specialized for extracting precise temporal information [16, 58]. Moreover, frequency and intensity discrimination improve with stimulus duration [9–12], indicating the contribution of dynamic processes at the synaptic [59] and network [32] levels. Sound localization [60] and beat generation [5], both of which use phase information, cannot be implemented with a purely place code. Perception of a fundamental frequency that is absent from a harmonic complex (the missing fundamental [61]) also cannot be explained with a place code, as the model predicts that only intervals generated by sound can be perceived. Finally, variables that affect the intervals and the operations on intervals, such as non-linearities due to the biophysical properties of cells (Figs 6 and 7, see below) and of the cochlea [62], are absent from the model. The formal approach used here can in principle be extended to incorporate these variables, with the place-coding framework as a starting point.

The mathematical model is based on two salient features of the auditory system. One is that the neural space is organized tonotopically. Tonotopy has been described in most neural structures in the auditory pathway, from the cochlea and auditory nerve [2, 3, 63, 64] to brainstem areas [4, 65, 66] to at least layer 4 of primary auditory cortex. Whether tonotopy is maintained throughout the cortical layers is controversial, with some studies (all in mice) showing clear tonotopy [67–70] and others showing a more 'salt-and-pepper' organization [70–72]. A salt-and-pepper organization suggests that the incoming afferents are distributed widely in the neural space rather than confined to a small area. The model needs a relatively prominent tonotopy to satisfy the requirement that synaptic intervals encompass a contiguous set of cells.

A second requirement is that the size of the synaptic interval and activated area increase with the intensity of the sound. Intensity-related expansion of response areas occurs in the cochlea [27, 28, 73] and can also be inferred from the excitatory frequency-response areas (FRAs) of individual neurons. The excitatory FRAs, which document the firing of cells to tones of varying frequencies and intensities, are typically 'V-shaped'. At low intensities, a neuron fires only when the tone frequency is near its preferred frequency (the tip of the V). At higher intensities, the range of frequencies that evoke firing increases substantially [68, 74]. If adjacent neurons have comparably shaped FRAs but slightly different preferred frequencies, an increase in intensity translates to an increase in the spatial extent of activated neurons.

For mathematical convenience, the location of a synaptic interval was identified by the leftmost point (closed end) of the interval, with increases in intensity signaled by a lengthening of the interval in the rightward (high frequency) direction. Similar behavior has been observed in the cochlea, albeit in the opposite direction: an increase in intensity causes the response area to expand towards the low frequency region of the basilar membrane while the high frequency cutoff remains fixed [3, 28, 73]. The choice of the leftmost point to tag the interval is arbitrary; any point in the interval will suffice provided an analogous point can be identified uniquely in each interval in the set. Experimentally, using the center of mass of the active neurons as the identifier might be more practical.
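For instance, a center-of-mass tag could be computed from the activity pattern as in the sketch below (hypothetical data; any comparable summary statistic would serve):

```python
# Sketch: tagging an activated interval by the center of mass of the active
# neurons; the interval's length serves as the intensity proxy.
import numpy as np

positions = np.arange(200)                       # cell positions on tonotopic axis
active = (positions >= 80) & (positions < 120)   # activated area from one tone
tag = positions[active].mean()                   # 99.5: unique identifier
length = int(active.sum())                       # 40 cells: intensity proxy
print(tag, length)
```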
For simplicity, both Δf and Δp are kept constant along the tonotopic axis. This is inaccurate because the resolution of frequency and sound pressure perception changes with frequency and sound pressure level; to represent the full ranges, frequency and pressure can be transformed into octave and decibel scales prior to mapping to neural space.

The algebraic operations were derived from set theoretic operations, so the magnitudes of the underlying synaptic inputs were irrelevant. Under biological conditions, the input magnitude determines the degree to which biophysical, synaptic, and network processes become engaged, which will affect the lengths of the synaptic intervals and activated areas. Not surprisingly, the results of the network simulations deviated quantitatively from the mathematical predictions in some regimes (compare Fig 4 to Figs 6 and 7). Most of the discrepancies arose because the magnitudes of the synaptic inputs were Gaussian distributed along the tonotopic axis. In biological networks, the discrepancies may be exacerbated by the presence of threshold processes such as regenerative events [75, 76]. The underlying algebraic operations may be obscured in such regimes.

The model also assumes, incorrectly, that inhibition is sufficiently strong to cancel excitation fully. This assumption facilitated the analysis because the effect of multiplication then depends solely on the overlap between the multiplicand and the multiplier. As the simulations with the feedforward network showed, excitation cannot be fully canceled by inhibition owing to the synaptic delay. Moreover, the balance may be spatially non-homogeneous: in center-surround suppression, excitation dominates at the preferred frequency, with inhibition becoming more prominent at non-preferred frequencies [54, 55, 74]. To apply multiplication to biological systems, it may be necessary to define empirically an 'effective' inhibitory field that accounts for E-I imbalances.

For convenience, the simulations used to test the analytical predictions were based on a network model of cortical circuits in which the properties of the cells and the patterns of connections between E and I cells have been fully characterized [32, 33]. However, the results should generalize to other network types provided the stimuli are brief (50 ms) so that cells fire only a single action potential. The mathematical model treats neurons as binary units, so only the first action potential matters. Hence, if the stimulus is brief and suprathreshold, the results obtained with a network consisting of, for example, repetitively firing cortical neurons [15, 33] or transiently firing bushy cells [58] will be qualitatively similar. The results are likely to differ with longer duration stimuli, which would allow various time- and voltage-dependent channels to become active and engage recurrent connections. It would also be important to confirm the operations for combining tones using cochlear/auditory nerve models that implement a tonotopy derived directly from the basilar membrane [77, 78].
Methods

Simulations were performed with a modified version of a network model used previously [32]. Briefly, the model is a 200 x 200 cell network composed of 75% excitatory (E) and 25% inhibitory (I) neurons. The connection architecture, synaptic amplitudes/dynamics, and intrinsic properties of the neurons were based on experimental data obtained from paired whole-cell recordings of excitatory pyramidal neurons and inhibitory fast-spiking and low threshold spiking interneurons [33]. For this study, the low-threshold spiking interneurons and the recurrent connections between the different cell types were removed, leaving only the inhibitory connections from fast-spiking interneurons to pyramidal neurons. The connection probability between the inhibitory fast-spiking cells and the excitatory pyramidal cells was Gaussian distributed with a standard deviation of 75 μm and a peak of 0.4 [33].

Both E and I cells received excitatory synaptic barrages from an external source. The synaptic barrages to each cell (50 ms duration) represented the activity of a specified number of presynaptic neurons. The average number (n_in(x, y)) of inputs that each neuron at location (x, y) received followed a Gaussian curve, so that cells at the center of the network received more inputs (Fig 5A, bottom). For each run, the number was randomized by drawing from a Gaussian distribution with mean n_in(x, y) and standard deviation 0.25 n_in(x, y), so that the synaptic fields and activated areas varied from trial to trial. Excitatory synaptic currents were evoked in the E and I cell populations, and inhibitory synaptic currents in the E cell population after the I cells fired (insets in Fig 5A). The spatial extents of the synaptic inputs were varied by changing the standard deviations of the external drive. In some simulations, the E and I cell populations were uncoupled and received separate inputs that could be varied independently of each other. The neurons are adaptive exponential integrate-and-fire units with parameters adjusted to replicate pyramidal and fast-spiking inhibitory neuron firing (see [32] for the parameter values).

The synaptic field was defined as the area of the network where the net synaptic current to the cells exceeded rheobase, the minimum current needed to evoke an action potential in the E cells (I_Rh; inset in Fig 5B, bottom panel). I_Rh was estimated by calculating the net synaptic current near firing threshold (V_θ): I_net = g_exc (V_θ − E_exc) + g_inh (V_θ − E_inh), where g_exc and g_inh are the excitatory and inhibitory conductances, respectively, and E_exc = 0 mV and E_inh = −80 mV are the reversal potentials. For the E cells, rheobase is approximately −0.27 nA.

The spatial extent of the synaptic field or activated area was quantified as the diameter of a circle fitted to the outermost points (maroon circles in Fig 5B). In simulations with multiple components, the spatial extents were quantified as the total length of the projection onto the tonotopic axis (orange bar in Fig 5B, bottom panel). The diameters and lengths have units of cell number but can be converted to microns by multiplying by 7.5 μm, the distance between E cells in the network. For all plots, the data points are plotted as mean ± standard deviation compiled from 20–100 sweeps.
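The synaptic-field criterion above can be illustrated in a few lines (the equation, reversal potentials, and rheobase are from the Methods; the threshold and conductance values are assumptions for illustration):

```python
# Sketch of the synaptic-field criterion: a cell belongs to the synaptic
# field if its net synaptic current near threshold exceeds rheobase.
V_theta = -50e-3                # firing threshold (V), assumed value
E_exc, E_inh = 0.0, -80e-3      # reversal potentials (V), from Methods
I_Rh = -0.27e-9                 # rheobase of E cells (A), from Methods

def net_current(g_exc, g_inh):
    # I_net = g_exc*(V_theta - E_exc) + g_inh*(V_theta - E_inh)
    return g_exc * (V_theta - E_exc) + g_inh * (V_theta - E_inh)

g_exc, g_inh = 8e-9, 2e-9       # example conductances (S), assumed
I_net = net_current(g_exc, g_inh)
in_field = I_net <= I_Rh        # inward currents are negative by convention
print(I_net, in_field)          # -3.4e-10 A exceeds rheobase: cell is in field
```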
Supporting information

S1 Appendix. Detailed description of mathematical analyses and proofs. Fig A: Projection of multiple layers of staggered neurons onto the tonotopic axis decreases Δx. Fig B: Algebra of loudness summation applied to stimuli consisting of 4 tones with equally spaced frequencies.
https://doi.org/10.1371/journal.pcbi.1009251.s001 (PDF)
Acknowledgments

I thank L-S Young for her insightful critiques and A. Bose for commenting on an early version of the manuscript.