
Monday, December 12, 2022

#70. How the Cerebellum May Adjust the Gains of Reflexes [neuroscience]



Red, theory; black, fact




The cerebellum is a part of the brain involved in ensuring accuracy in the rate, range, and force of movements and is well known for its regular matrix-like structure and the many theories it has spawned. I myself spent years working on one such theory in a basement, without much to show for it. The present theory occurred to me decades later on the way home from a conference on brain-mind relationships at which many stimulating posters were presented.

Background on the cerebellum

The sensory inputs to the cerebellum are the mossy fibers, which drive the granule cells of the cerebellar cortex, whose axons are the parallel fibers. The spatial arrangement of the parallel fibers suggests a bundle of raw spaghetti or the bristles of a paint brush. They synapse on Purkinje cells, and these synapses are probably plastic and thus capable of storing information. The Purkinje (pur-kin-gee) cells are the output cells of the cerebellar cortex. Thus, the synaptic inputs to these cells are a kind of watershed at which stimulus data becomes response data. The granule-cell axons are T-shaped: one arm of the T goes medial (toward the midplane of the body) and the other arm goes lateral (the opposite). Both arms are called parallel fibers. Parallel fibers are noteworthy for not being myelinated; the progress of nerve impulses through them is therefore steady and not by jumps. The parallel fibers thus resemble a tapped delay line, as Desmond and Moore proposed in 1988.

The space-time graph of one granule-cell impulse entering the parallel-fiber array is thus V-shaped, and the omnibus graph is a lattice, or trellis, of intersecting Vs.

The cerebellar cortex is also innervated by climbing fibers, which are the axons of neurons in the inferior olive of the brainstem. These carry motion error signals and play a teacher role, teaching the Purkinje cells to avoid the error in future. Many error signals over time install specifications for physical performances in the cerebellar cortex. The inferior olivary neurons are all electrically connected by gap junctions, which allows rhythmic waves of excitation to roll through the entire structure. The climbing fibers only fire on the crests of these waves. Thus, the spacetime view of the cortical activity features climbing fiber impulses that cluster into diagonal bands. I am not sure what all this adds up to, but what would be cute?


A space-time theory

Cute would be to have the climbing-fiber diagonals parallel to half of the parallel-fiber diagonals and partly coinciding with the half with the same slope. Two distinct motor programs could therefore be stored in the same cortex depending on the direction of travel of the olivary waves. This makes sense, because each action you make has to be undone later, but not necessarily at the same speed or force. The same region of cortex might therefore store an action and its recovery.


The delay-line theory revisited

As the parallel-fiber impulses roll along, they pass various Purkinje cells in order. If the response of a given Purkinje cell to the parallel-fiber action potential is either to fire or not fire one action potential, then the timing of delivery of inhibition to the deep cerebellar neurons could be controlled very precisely by the delay-line effect. (The Purkinje cells are inhibitory.) The output of the cerebellum comes from relatively small structures called the deep cerebellar nuclei, and there is a great convergence of Purkinje-cell axons on them, which are individually connected by powerful multiple synapses. If the inhibition serves to curtail a burst of action potentials in the deep-nucleus neuron triggered by a mossy-fiber collateral, then the number of action potentials in the burst could be accurately controlled. Therefore, the gain of a single-impulse reflex loop passing through the deep cerebellar nucleus could be accurately controlled. Accuracy in gains would plausibly be observed as accuracy in the rate, range, and force of movements, thus explaining how the cerebellum contributes to the control of movement. (Accuracy in the ranges of ballistic motions may depend on the accuracy of a ratio of gains in the reflexes ending in agonist vs. antagonist muscles.)
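To make the proposed gain mechanism concrete, here is a minimal numerical sketch. The firing rate, burst duration, and delays are hypothetical placeholders; the point is only that the arrival time of Purkinje inhibition, set by position along the parallel-fiber delay line, fixes the spike count of the deep-nucleus burst and hence the gain of the reflex loop.

```python
# Minimal sketch of the proposed mechanism; all numbers are hypothetical.
# A mossy-fiber collateral triggers a burst in a deep-nucleus neuron, and the
# burst is cut short when inhibition from a Purkinje cell arrives. The Purkinje
# cell's position along the parallel-fiber delay line sets that arrival time,
# so it also sets the spike count of the burst, i.e., the reflex gain.

def burst_spike_count(inhibition_arrival_ms, burst_rate_hz=300.0, max_burst_ms=50.0):
    effective_ms = min(inhibition_arrival_ms, max_burst_ms)
    return int(effective_ms * burst_rate_hz / 1000.0)

for delay_ms in (5, 10, 20, 40):
    print(f"inhibition at {delay_ms:2d} ms -> {burst_spike_count(delay_ms)} spikes")
```

Earlier inhibition means a shorter burst and a lower gain; later inhibition means a longer burst and a higher gain, up to the uninhibited maximum.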


Control of the learning process

If a Purkinje cell fires too soon, the burst in the deep-nucleus neuron will be curtailed too soon, and the gain of the reflex loop will therefore be too low. The firing of the Purkinje cell will also disinhibit a spot in the inferior olive due to inhibitory feedback from the deep nucleus to the olive. I conjecture that if a movement error is subsequently detected somewhere in the brain, this results in a burst of synaptic release of some monoamine neuromodulator into the inferior olive, which potentiates the firing of any recently-disinhibited olivary cell. On the next repetition of the faulty reflex, that olivary cell reliably fires, causing long-term depression of concurrently active parallel fiber synapses. Thus, the erroneous Purkinje cell firing is not repeated. However, if the firing of some other Purkinje cell hits the sweet spot, this success is detected somewhere in the brain and relayed via monoamine inputs to the cerebellar cortex where the signal potentiates the recently-active parallel-fiber synapse responsible, making the postsynaptic Purkinje cell more likely to fire in the same context in future. Purkinje cell firings that are too late are of lesser concern, because their effect on the deep nucleus neuron is censored by prior inhibition. Such post-optimum firings occurring early in learning will be mistaken for the optimum and thus consolidated, but these consolidations can be allowed to accumulate randomly until the optimum is hit.
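The learning logic proposed above can be sketched as a toy update rule. The target time, tolerance window, and step sizes below are invented for illustration: too-early firings get their active parallel-fiber synapses depressed on the next error-flagged trial, on-time firings are potentiated by the success signal, and too-late firings are left alone because their effect was censored.

```python
# Toy sketch of the conjectured plasticity logic (all names and values hypothetical).

def update_weight(w, fired_at_ms, target_ms, tolerance_ms=2.0,
                  ltd_step=0.1, ltp_step=0.05):
    if fired_at_ms < target_ms - tolerance_ms:
        # Too early: reflex gain came out too low; climbing-fiber error -> LTD.
        return max(0.0, w - ltd_step)
    if abs(fired_at_ms - target_ms) <= tolerance_ms:
        # Hit the sweet spot: success-related monoamine signal -> LTP.
        return min(1.0, w + ltp_step)
    # Too late: effect censored by prior inhibition; leave the weight unchanged.
    return w

w = 0.5
for trial_time in (12.0, 25.0, 19.5, 40.0):   # target assumed at 20 ms
    w = update_weight(w, trial_time, target_ms=20.0)
    print(f"fired at {trial_time:5.1f} ms -> w = {w:.2f}")
```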


Role of other motor structures

The Laplace transform was previously considered in this blog to be a neural code, and its output is a complex number giving both gain and phase information. To convert a Laplace transform stored as poles (points where gain goes to infinity) in the cerebral cortex into actionable time-domain motor instructions, the eigenfunctions corresponding to the poles, which may be implemented by damped spinal rhythm generators, must be combined with gains and phases. If the gains are stored in the cerebellum as postulated above, where do the phases come from? The most likely source appears to be the basal ganglia. These structures are here postulated to comprise a vast array of delay elements along the lines of 555 timer chips. However, a delay is not a phase unless it is scaled to the period of an oscillation. This implies that each oscillation frequency maps in the basal ganglia to an array of time delays, of which none are longer than the period. These time delays would be applied individually to each cycle of an oscillation. Such an operation would be simplified if each cycle of the oscillation were represented schematically by one action potential.
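The delay-to-phase scaling described here is simple enough to state exactly; the sketch below just makes it explicit. The function name and interface are mine, not anything established.

```python
import math

# A delay only becomes a phase once it is scaled to the period of the oscillation
# it is applied to; per the postulate above, no stored delay exceeds one period.

def delay_to_phase(delay_s, frequency_hz):
    period_s = 1.0 / frequency_hz
    assert delay_s <= period_s, "postulate: stored delays never exceed one period"
    return 2.0 * math.pi * delay_s / period_s   # phase in radians

print(delay_to_phase(0.025, 10.0))   # a 25 ms delay at 10 Hz = ~1.571 rad, a quarter cycle
```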


Photo by Robina Weermeijer on Unsplash


Monday, August 15, 2016

#13. The Neural Code, Part II: the Thalamus [neuroscience, engineering]


Red, theory; black, fact

Schematic of thalamus circuits


Thalamic processing as Laplace transform

The thalamus may perform a Laplace transform (LT). All the connections shown are established anatomical facts and are based on the summary diagram of lateral geniculate nucleus circuitry of Steriade et al. (Steriade, M., Jones, E. G., and McCormick, D. A. (1997) Thalamus, 2 vols. Amsterdam: Elsevier). I have added feedback from cortex as a context-sensitive enabling signal for the analytical process.

Thalamic electrophysiology

The thalamic low-threshold spike (LTS) is a slow calcium spike that triggers further spiking that appears in extracellular recordings as a distinctive cluster of four or five sodium spikes. The thalamus also has an alternative response mode consisting of fast single spikes, which is observed at relatively depolarized membrane potentials.

The thalamic low-threshold spike as triggered by a hyperpolarization due to a negative electric current pulse injected into the neuron through the recording electrode. ACSF, normal conditions; TTX, sodium spikes deleted pharmacologically. From my thesis, page 167.

Network relationships of the thalamus

Depolarizing input to thalamus from cortex is conjectured to be a further requirement for the LTS-burst complex. This depolarization is conjectured to take the form of a pattern of spots, each representing a mask to detect a specific pole of the stimulus that the attentional system is looking for in that context.

The complex frequency plane is where LTs are graphed, usually as a collection of points. Some of these are "poles," where gain goes to infinity, and others are "zeroes," where gain goes to zero. I assume that the cerebral cortex-thalamus system takes care of the poles, while the superior and inferior colliculi take care of the zeroes. 

If this stimulus is found, the pattern of poles must still be recognized. This may be accomplished through a cortical AND-element wired up on Hebbian principles among cortical neurons. These neurons synapse on each other by extensive recurrent collaterals, which might be the anatomical substrate of the conjectured AND-elements. Explosive activation of the AND network would then be the signal that the expected stimulus has been recognized, as Hebb proposed long ago, and the signal would then be sent forward in the brain via white matter tracts to the motor cortex, which would output a collection of excitation spots representing the LT of the desired response.

Presumably, a reverse LT is then applied, possibly by the spinal grey, which I have long considered theoretically underemployed in light of its considerable volume. If we assume that the cerebral cortex is highly specialized for representing LTs, then motor outputs from cerebellum and internal globus pallidus would also have to be transformed to enable the cortex to represent them. In agreement with this, the motor cortex is innervated by prominent motor thalami, the ventrolateral (for cerebellar inputs) and the ventroanterior (for pallidal inputs).

Brain representation of Laplace transforms

The difficulty is to see how a two-dimensional complex plane can be represented on a two-dimensional cerebral cortex without contradicting the results of receptive field studies, which clearly show that the two long dimensions of the cortex represent space in egocentric coordinates. This just leaves the depth dimension for representing the two dimensions of complex frequency.

A simple solution is that the complex frequency plane is tiled by the catchment basins of discrete, canonical poles, and all poles in a catchment basin are represented approximately by the nearest canonical pole. It then becomes possible to distinguish the canonical poles in the cerebral cortex by the labelled-line mechanism (i.e., by employing different cell-surface adhesion molecules to control synapse formation.)
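As an illustration of the catchment-basin idea (the grid spacing here is arbitrary, not a claim about cortical resolution), a measured pole would simply be snapped to the nearest canonical pole, and it is the canonical pole's labelled line that the cortex represents.

```python
# Sketch of the catchment-basin idea: every measured pole is quantized to the
# nearest canonical pole on a discrete grid. Grid spacing is a placeholder.

def canonical_pole(pole, sigma_step=1.0, omega_step=1.0):
    """Quantize a pole s = sigma + j*omega to a discrete canonical grid."""
    sigma = round(pole.real / sigma_step) * sigma_step
    omega = round(pole.imag / omega_step) * omega_step
    return complex(sigma, omega)

print(canonical_pole(complex(-1.3, 4.6)))   # -> (-1+5j), the label of its catchment basin
```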

Layer 1 of cortex is mostly processes, which leaves us with five cortical cell layers not yet assigned to functions. Four of them might correspond to the four quadrants of the complex frequency plane, which differ qualitatively in the motions they represent. The two granule-cell layers 2 and 4 are interleaved with the two pyramidal-cell layers 3 and 5. The two granule layers might represent the top and bottom halves of the left half-plane, which represent decaying, stabilized motions. The two pyramidal layers might represent the top and bottom halves of the right half-plane, which represent dangerously growing, unstable motions. Since the latter represent emergency conditions, their signals must be processed especially fast, requiring fast, large-diameter axons. Producing and maintaining such axons requires correspondingly large cell bodies. This is why I assign the relatively large pyramidal cells to the right half-plane.
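Purely to make the conjectured quadrant-to-layer mapping explicit: the text does not say which granule layer takes the upper versus lower half of the left half-plane, nor which pyramidal layer takes which half of the right half-plane, so the particular assignments in this sketch are arbitrary placeholders.

```python
# Conjectured mapping, illustration only. Left half-plane (decaying, stabilized
# motion) -> granule layers 2/4; right half-plane (growing, unstable motion) ->
# pyramidal layers 3/5. Which layer gets which half of each half-plane is an
# arbitrary choice here.

def conjectured_layer(pole):
    if pole.real < 0:
        return 2 if pole.imag >= 0 else 4
    return 3 if pole.imag >= 0 else 5

for s in (complex(-2, 3), complex(-2, -3), complex(1, 3), complex(1, -3)):
    print(s, "-> layer", conjectured_layer(s))
```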

Intra-thalamic operations

The brain may calculate the Laplace transform just the way it is defined: the integral of the product of the input time-domain function and an exponentially decaying or growing sinusoid (eigenfunction). A pole would be recognized after a finite integration time as the integral rising above a threshold. This thresholding is plausibly done in cortical layer 4, against a background of elevated inhibition controlled by the recurrent layer-6 collaterals that blocks intermediate calculation results from propagating further into the cortex. The direct projections from layer 6 down to thalamus would serve to trigger the analysis and rescale eigenfunction tempo to compensate for changes in behavioural tempo. Alternation of LTS-bursting activity between thalamic reticular neurons and thalamic principal neurons would be the basis of the oscillatory activity involved in implementing the eigenfunctions. This alternation or ping-ponging is precedented by the spindling mechanism of light sleep, and thalamic oscillation is further precedented by the phenomenon of Parkinsonian tremor cells, which are in the thalamus. Furthermore, when you are recording it in real time, the thalamic LTS has a bouncy look.
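A numerical sketch of "computing the transform as defined": correlate the input with the eigenfunction exp(-st) over a finite window and threshold the size of the result. The input signal, the probe values, and the window length are all invented for illustration.

```python
import numpy as np

# The finite-time Laplace integral is largest when the probe s sits on a pole of
# the input. Here the input is a decaying 3 Hz oscillation, whose transform has
# a pole at s0 = -0.5 + j*2*pi*3; an off-frequency probe mostly cancels, and a
# probe whose kernel decays the product away stays small. (All values illustrative.)

T, dt = 5.0, 1e-3
t = np.arange(0.0, T, dt)
s0 = complex(-0.5, 2 * np.pi * 3.0)
f = np.exp(s0 * t).real                     # e^(-0.5 t) * cos(2*pi*3*t)

def finite_time_laplace(f, t, s, dt):
    return np.sum(f * np.exp(-s * t)) * dt  # integral of f(t) * e^(-s t) over [0, T]

probes = {"on the pole": s0,
          "wrong frequency": complex(-0.5, 2 * np.pi * 5.0),
          "over-damped probe": complex(1.0, 2 * np.pi * 3.0)}
for name, s in probes.items():
    print(f"{name:18s} |integral| = {abs(finite_time_laplace(f, t, s, dt)):.3f}")
```

Only the on-pole probe keeps accumulating as the window lengthens, which is what the proposed layer-4 threshold would detect.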

Mutual inhibition of reticular thalamic neurons (established but not shown in my schematic) would be the basis of the integrator, and multiplication of functions would be done by silent inhibition in the triadic synapses via the known disinhibitory pathway from the reticular thalamus through the thalamic inhibitory interneurons. These interneurons are not shown in the schematic. (Still unaccounted for is direct participation of the reticular thalamic afferents in triadic synapses, which my schematic shows.)

A negative feedback system will be necessary to dynamically rejig the thalamus so that the same pole maps to the same spot despite changes in tempo. Some of the corticothalamic cells (layer 6) could be part of this system (layer 6 cells are of two quite different types).

Consequences for object recognition

The foregoing system could be used to extract objects from the incoming data by in effect assuming that the elements or features of an object always share the same motion and therefore will be represented by the same set of poles. An automatic process of object extraction may therefore be implemented as a tendency for Hebbian plasticity to involve the same canonical pole at two different cortical locations that are connected by recurrent axon collaterals.

Sunday, August 7, 2016

#12. The Neural Code, Part I: the Hippocampus [neuroscience, engineering]


Red, theory; black, fact


Santiago Ramón y Cajal (1911) [1909] Histologie du Système nerveux de l'Homme et des Vertébrés, Paris: A. Maloine. The French copyright expired in 2004.


"Context information" is often invoked in neuroscience theory as an address for storing more specific data in memory, such as whatever climbing fibers carry into the cerebellar cortex (Marr theory), but what exactly is context, as a practical matter?

Requirements for a Learning Context Signal

First, it must change on a much longer timescale than whatever it addresses. Second, it must be accessible to a moving organism that follows habitual, repetitive pathways in patrolling its territory.

The Fourier Transform as Context Signal

Consideration of the mainstream theory that the hippocampus (see illustration) prepares a cognitive map of the organism's spatial environment suggests that context is a set of landmarks. It seems that a landmark will be any stimulus that appears repetitively. Since only rhythmically repeating functions have a classical discrete-frequency Fourier transform, the attempt to calculate such a transform could be considered a filter for extracting rhythmic signals from the sensory input. 

From Repeating to Rhythmic 

However, this is not enough for a landmark extractor because landmark signals are only repetitive, not rhythmic. Let us suppose, however, that variations in the intervals between arrivals at a given landmark are due entirely to programmed, adaptive variations in the overall tempo of the organism's behavior. A tempo increase will upscale all incoming frequencies by the same factor, and a tempo decrease will downscale them all by the same factor. Since these variations originate within the organism, the organism could send a "tempo efference copy" to the neuronal device that calculates the discrete Fourier transform, to slide the frequency axis left or right to compensate for tempo variations. 

Thus, the same landmark will always transform to the same set of activated spots in the frequency-amplitude-phase volume. 
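A toy version of the tempo-compensated transform (the route, bin counts, and tempo factors are all made up): dividing encounter times by the internally known tempo factor before the discrete Fourier transform puts the same landmark in the same frequency bin at any running speed.

```python
import numpy as np

# Sketch of the tempo-efference-copy idea: rescale time by the known tempo,
# bin landmark encounters into a spike-train-like signal, and take its DFT.

def landmark_spectrum(encounter_times_s, tempo_factor, n_bins=64, duration_s=60.0):
    rescaled = np.asarray(encounter_times_s) / tempo_factor
    signal = np.zeros(n_bins)
    idx = (rescaled / duration_s * n_bins).astype(int)
    signal[idx[idx < n_bins]] = 1.0
    return np.abs(np.fft.rfft(signal))

slow = np.arange(5, 60, 10.0)          # landmark reached every 10 s at tempo 1.0
fast = slow / 2.0                      # same route patrolled twice as fast
print(np.argmax(landmark_spectrum(slow, tempo_factor=1.0)[1:]) + 1)
print(np.argmax(landmark_spectrum(fast, tempo_factor=0.5)[1:]) + 1)   # same bin
```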

A Possible Neuronal Mechanism

I conjecture that the hippocampus calculates a discrete-frequency Fourier transform of all incoming sensory data, with the lowest frequency represented ventrally and the highest dorsally, and with a linear temporal spectrum represented in between. 

Tempo Compensation 

The negative feedback device that compensates tempo variations would be the loop through medial septum. The septum is the central hub of the network in which the EEG theta rhythm can be detected. This rhythm may be a central clock of unvarying frequency that serves as a reference for measuring tempo variations, possibly by a beat-frequency principle. 

Fourier Transform by Re-entrant Calculation 

The hippocampus could calculate the Fourier transform by exploiting the mathematical fact that a sinusoidal function differentiated four times in succession gives back exactly the starting function if its frequency is numerically equal to one. This could be done by the five-synapse loop from dentate gyrus (DG) to hippocampal CA3 to hippocampal CA1 to subiculum (sub) to entorhinal cortex (EC), and back to dentate gyrus. The dentate gyrus looks anatomically unlike the others and may be the input site where amplitude standardization operations are performed, while the other four stages would be the actual differentiators. 

Differentiation would occur by the mechanism of a parallel shunt pathway through slowly-responding inhibitory interneurons, known to be present throughout the hippocampus.

The two long spatial dimensions of the hippocampus would represent frequency and amplitude by setting up gradients in the gain of the differentiators. A given spot in the hippocampal array would map the input function to itself only for one particular combination of frequency and amplitude. 
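Here is the frequency-selective part of that loop made explicit. Each differentiation of a sinusoid multiplies its amplitude by the frequency and advances its phase by a quarter cycle, so four stages with gain g return the input scaled by (g*w)^4; only the frequency with g*w = 1 maps onto itself and can build up by positive feedback. The gains below are arbitrary, and the amplitude selectivity described above would need a further, nonlinear step such as the amplitude standardization conjectured for the dentate gyrus.

```python
# One pass around the conjectured four-differentiator loop (gains hypothetical).
# Differentiating A*sin(w*t + p) gives A*w*sin(w*t + p + pi/2), so after four
# stages, each with gain g, the phase has advanced a full cycle and the
# amplitude has been multiplied by (g*w)**4. The spot maps the input onto
# itself, and can resonate, only for the frequency w = 1/g.

def amplitude_after_one_loop(amplitude, w, stage_gain, n_stages=4):
    for _ in range(n_stages):
        amplitude *= stage_gain * w
    return amplitude

g = 1.0        # a spot whose stage gain tunes it to w = 1/g = 1
for w in (0.5, 1.0, 2.0):
    print(f"w = {w}: amplitude after one loop = {amplitude_after_one_loop(1.0, w, g):.4f}")
# Only w = 1.0 comes back unchanged; lower frequencies fade, higher ones blow up.
```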

The self-mapping sets up a positive feedback around the loop that makes the spot stand out functionally. All the concurrently active spots would constitute the context. This context could in principle reach the entire cerebral cortex via the fimbria-fornix, mammillary bodies, and tuberomammillary nucleus of the hypothalamus, the latter being histaminergic.

Learning and Novelty 

The cortex may contain a novelty-detection function, source of the well-documented mismatch negativity found in oddball evoked-potential experiments. A stimulus found to be novel would go into a short term memory store in cortex. If a crisis develops while it is there, it is changed into a flash memory and wired up to the amygdala, which mediates visceral fear responses. In this way, a conditioned fear stimulus could be created. If a reward registers while the stimulus is in short term memory, it could be converted to a conditioned appetitive stimulus by a similar mechanism.

It Gets Bigger

I conjecture that all a person's declarative and episodic memories together are nothing more or less than the context data that were instrumental in conferring conditioned status on particular stimuli.

Context is Required for Novelty

To become such a memory, a stimulus must first be found to be novel, and this is much less likely in the absence of a context signal; to put it another way, it is the combination of the context signal and the sensory stimulus that is found to be novel. Absent the context, almost no simple stimulus will be novel. This may be the reason why at least one hippocampus must be functioning if declarative or episodic memories are to be formed.

Saturday, June 18, 2016

#5. Why We Dream [neuroscience]


Red, theory; black, fact

The Conjunction of Jupiter and Venus


We Dream Because We Learn

Operant conditioning is the learning process at the root of all voluntary behaviour. The process was discovered in lab animals such as rats and pigeons by B.F. Skinner beginning in the 1930s and can briefly be stated as "If the ends are achieved, the means will be repeated." (Gandhi said something similar about revolutionary governments.)

Learning in the Produce Aisle

Operant conditioning is just trial-and-error, like evolution itself, only faster. Notice how it must begin: with trying moves randomly--behavioral mutations. However, the process is not really random like a DNA mutation.  Clearly, common sense plays a role in getting the self-sticky polyethylene bag open for the first time, but any STEM-educated person will want to know just what this "common sense" is and how you would program it. Ideally, you want the  creativity and genius of pure randomness, AND the assurance of not doing anything crazy or even lethal just because some random-move generator suggested it. You vet those suggestions.

How Dreams Help Learning

That, in a nutshell, is dreaming: vetting random moves against our accumulated better judgment to see if they are safe--stocking the brain with pre-vetted random moves for use the next day when we are stuck. This is why the emotions associated with dreaming are more often unpleasant than pleasant: there are more ways to go wrong than to go right. The vetting is best done in advance (e.g., while we sleep) because there's no time in the heat of the action the next day, and trial-and-error with certified-safe "random" moves is already time-consuming without having to do the vetting on the spot as well.  

A Possible Neurobiological Mechanism

Dreams are loosely associated with brain electrical events called "PGO waves," which begin with a burst of action potentials ("nerve impulses") in a few small brainstem neuron clusters, then spread to the visual thalamus, then to the primary visual cortex. I theorize that each PGO wave creates a new random move that is installed by default in memory in cerebral cortex, and is then tested in the inner theatre of dreaming to see what the consequences would be. In the event of a disaster foreseen, the move would be scrubbed from memory, or better yet, added as a "don't do" to the store of accumulated wisdom. Repeat all night.

If memory is organized like an AI knowledge base, then each random move would actually be a connection from a randomly-selected but known stimulus to a randomly-selected but known response, amounting to adding a novel if-then rule to the knowledge base.
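A toy version of that idea, with every stimulus, response, and "wisdom" entry invented for illustration: each simulated PGO wave proposes a random if-then rule, the dream vets it against stored don't-do knowledge, and only survivors are kept for the next day.

```python
import random

# Toy sketch: a random move is a new if-then rule, vetted in simulation before
# it is allowed to stay in memory. All contents here are made up.

stimuli   = ["door ajar", "fruit on table", "stranger approaches"]
responses = ["push", "grab", "flee", "freeze"]
knowledge_base = []                                       # (stimulus, response) rules
accumulated_wisdom = {("stranger approaches", "grab")}    # known "don't do" pairs

def dream_one_pgo_wave():
    candidate = (random.choice(stimuli), random.choice(responses))
    if candidate in accumulated_wisdom:
        return None                       # disaster foreseen: scrub the move
    knowledge_base.append(candidate)      # pre-vetted move, ready for tomorrow
    return candidate

for _ in range(5):
    print(dream_one_pgo_wave())
```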

Support For a Requirement for Vetting 

In "Evolution in Four Dimensions" [1st ed.] Jablonka and Lamb make the point that epigenetic, cultural, and symbolic processes can come up with something much better than purely random mutations: variation that has been subjected to a variety of screening processes.

An Observation and an Exegesis

Oddly, my nightmares happen just after a turn of good fortune for me. However, in our evolutionary past, my kind of good fortune may have meant bad fortune for someone else, and that someone else will now be highly motivated to kill me in my sleep. Unless I have a nightmare and thus sleep poorly or with comforting others. The dream that warned the Wise Men not to return to Herod may have been just such a nightmare, which they were wise enough to interpret correctly. The content was probably not an angelic vision, but more like Ezekiel's valley of dry bones vision in reverse.