Monday, August 29, 2016

#15. The Insurance of the Heart [evolutionary psychology]

Red, theory; black, fact.

8-29-2016
We live in an uncertain world, which is the best reason to buy insurance while you can. Insurance is too good a trick for evolution to have missed. When food is plentiful, as it now is in my country, people get obese, as they are now doing here, so that they can live on their fat during possible future hard times. They don't do this consciously; it's in their genes.

However, eating has only an additive effect on your footprint on society's demand for resources; how many kids you have affects your footprint multiplicatively. Thus, the effectiveness of biological insurance taken out in children foregone during times of plenty would be greater than that taken out in food consumed. Such a recourse exists (See Deprecated, Part 8); how well, and for how long, the family name you bequeath to your children is remembered affects your footprint exponentially. (I assume that a good or bad "name" affects the reproductive success of all your descendants having that name until you are finally forgotten.) Compared to exponential returns, everything else is chump change. ("Who steals my purse steals trash." - Shakespeare)
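The additive/multiplicative/exponential contrast can be made concrete with a toy calculation. All numbers and function names below are invented for illustration, not measurements:

```python
# Toy comparison of the three scaling regimes: extra food is additive,
# children are multiplicative, and a long-remembered family "name" acting
# on every descendant is exponential in the number of generations.
# All quantities are illustrative assumptions.

def extra_food_footprint(units_per_year, years):
    """Additive: overeating adds a fixed amount per year."""
    return units_per_year * years

def children_footprint(consumption_per_person, n_children):
    """Multiplicative: each child multiplies the family's consumption."""
    return consumption_per_person * n_children

def name_footprint(consumption_per_person, children_per_parent, generations):
    """Exponential: a remembered name influences every descendant, and
    the descendants multiply geometrically each generation."""
    return sum(consumption_per_person * children_per_parent ** g
               for g in range(1, generations + 1))

print(extra_food_footprint(2, 50))    # 100: additive
print(children_footprint(100, 3))     # 300: multiplicative
print(name_footprint(100, 3, 5))      # 36300: exponential dwarfs the rest
```

After only five generations, the exponential term is two orders of magnitude larger than the others, which is the sense in which "everything else is chump change."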

There remains the problem of food going to waste during times of plenty because social forces prevent a quick population increase. I conjecture that the extra energy available is invested by society in contests of various sorts (think of the Circus Maximus during the heyday of ancient Rome) that act as a proxy to evolutionary selection pressure, whereby the society accelerates its own evolution. Although natural selection pressure is maximal during the hard times, relying on these to do all your evolving for you can make you extinct; better to do some "preventative evolution" ahead of time.

Postscript 3
Since future environmental demands are partly unforeseeable, a good strategy would be to accelerate one's evolution in multiple directions, keeping many irons in the fire. Indeed, in the Olympics just concluded, thirty-nine sports were represented.

The power of these contests is maximized by using the outcomes as unconditioned stimuli that are associated with the family names of the winners and losers: the conditioned stimuli. In this way, one acquires a good or bad "name" that will affect the reproductive success of all who inherit it, an exponential effect. To ground this discussion biologically, it must be assumed that the contests are effective in isolating carriers of good or bad genes (technically, alleles), and that the resulting "name" is an effective proxy for natural selection in altering the frequency of said genes. To keep the population density stable during all this, winners must be balanced by losers. The winners are determined and branded in places like the ball diamond, and the losers are determined and branded in the courts.

Tuesday, August 16, 2016

#14. Three Stages of Abiogenesis [evolution, chemistry]

The iconic Miller experiment on the origin of life

Abiogenesis chemistry outside the box

EV    CH    
Red, theory; black, fact

Repair, growth, reproduction

"Abiogenesis" is the term for life originating from non-life.
Self-repair processes will be important in abiogenesis because life is made of metastable molecules that spontaneously break down and have to be continually repaired, which results in continuous energy dissipation. I will assume that self-repair in non-reproducing molecules is what eventually evolved into self-replication and life.

I also assume that the self-repair process was fallible, so that it occasionally introduced a mutation. Favorable mutations would have increased the longevity of the self-repairing molecules. Nevertheless, a given cohort of these molecules would relentlessly decrease in numbers, although the population would have been continuously replenished in the juvenile form by undirected chemistry on the early Earth. Eventually, at least one of them was able to morph self-repair into self-replication, and life began. I call this process of refinement of non-reproducing molecules "longitudinal evolution," by analogy to a longitudinal cohort study in medical science. The process bears an interesting resemblance to carcinogenesis, where an accumulation of mutations in long-lived cells also leads to an ability to self-replicate autonomously. Carcinogenesis is difficult to prevent, and so must be considered a facile process, suggesting that longitudinal evolution to the threshold of life was also facile.
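A minimal Monte Carlo sketch of this "longitudinal evolution" idea follows. The decay, mutation, benefit, and influx rates are all invented for illustration; the point is only that a decaying, non-reproducing cohort that is continually replenished can still accumulate mutations in its longest-lived members:

```python
import random

# Monte Carlo sketch of "longitudinal evolution": a cohort of
# non-reproducing, self-repairing molecules decays, is replenished by
# undirected chemistry, and occasionally picks up repair errors
# (mutations) that extend longevity. All rates are assumptions.

random.seed(1)

DECAY = 0.10      # baseline chance a molecule degrades per time step
MUTATION = 0.02   # chance that a repair event introduces a mutation
BENEFIT = 0.01    # each favorable mutation lowers the decay chance
INFLUX = 20       # fresh juvenile molecules supplied per time step

population = [0] * 100        # each entry = mutation count of one molecule
for step in range(1000):
    survivors = []
    for mutations in population:
        if random.random() < MUTATION:
            mutations += 1    # fallible self-repair
        if random.random() > max(0.01, DECAY - BENEFIT * mutations):
            survivors.append(mutations)
    population = survivors + [0] * INFLUX   # abiotic replenishment

# The oldest molecules carry the most mutations; in the conjecture, one
# of them eventually crosses the threshold to self-replication.
print(max(population))
```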

A simple self-repairing molecule

The "enzyme ring" shown above is an example of a possible self-repairing molecule that I dreamt up. It is a ring of covalently-bonded monomers that are individually large enough to have good potential for catalyzing reactions, like globular proteins, but are small enough to be present in multiple copies like the standardized building blocks that one wants for templated synthesis.

If the covalent bond between a given pair of monomers breaks, the ring is held together by multiple, parallel secondary valence forces and hydrophobic interactions, until the break can be repaired by the ring's catalytic members. With continuing lack of repair, the ring eventually opens completely, and effectively "dies." To bring the necessary catalysts to the break site reliably, no matter where it is, I assume that multiple copies of the repair enzyme are present in the ring, and are randomly distributed. I also assume a temperature cycle like that of polymerase chain reaction (PCR) technology that repeatedly makes the ring single-stranded during the warm phase and allows it to collapse into a self-adhering, linear, double-stranded form during the cool phase. This could simply be driven by the day-night cycle. In the linear form, the catalytic sites are brought close to the covalent bond sites, and can repair any that are broken using small-molecule condensing agents such as cyanogen, which are arguably present on the early Earth under Miller-Urey assumptions. When the ring collapses, it does so at randomly selected fold diameters, so that only a few catalytic monomers are needed, since each will eventually land next to all covalent bonds in the ring except those nearby, which it cannot reach because of steric hindrance and/or bond angle restrictions. The other catalytic monomers in the ring will take care of these.
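The claim that a few randomly distributed catalysts suffice to reach every bond can be checked with a small simulation. The ring size, steric radius, and catalyst count below are illustrative assumptions:

```python
import random

# Tests the claim that a few randomly placed catalytic monomers can
# service every covalent bond in the ring: a catalyst repairs any bond
# EXCEPT those within a small ring distance of itself (steric hindrance
# and/or bond-angle restrictions). All sizes are assumptions.

random.seed(0)

RING = 60     # number of monomers (and inter-monomer bonds) in the ring
STERIC = 3    # a catalyst cannot reach bonds within this ring distance

def fully_repairable(catalysts):
    """True if every bond is reachable by at least one catalyst."""
    for bond in range(RING):
        reachable = any(
            min(abs(bond - c), RING - abs(bond - c)) > STERIC
            for c in catalysts
        )
        if not reachable:
            return False
    return True

# Fraction of random 3-catalyst placements that leave no bond orphaned.
trials = [fully_repairable(random.sample(range(RING), 3))
          for _ in range(1000)]
print(sum(trials) / len(trials))
```

Even with only three catalysts on a sixty-member ring, the vast majority of random placements leave every bond within reach of at least one catalyst, because a placement fails only when all three catalysts happen to cluster in one short arc.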

How it would grow

The mutation process of the enzyme ring could result from random ring-expansion and ring-contraction events, the net effect being to replace one kind of monomer with another. Expansion would most likely begin with intercalation of a free monomer between the bound ones at the high-curvature regions at the ends of the linear conformation. The new monomer would be held in place by the multiple, weak parallel bonds alluded to above. It could become incorporated into the ring if it intercalates at a site where the covalent bond is broken. Two bond-repair events would then suffice to sew it into the ring. The ring-contraction process would be the time-reversed version of this.

In addition, an ability to undergo ring expansion allows the enzyme ring to start small and grow larger. This is important because, on entropy grounds, a long polymer is very unlikely to spontaneously cyclize. The energy-requiring repair process will bias the system to favor net ring expansion. Thus, we see how easily self-repair can become growth.
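The repair-driven bias toward net expansion can be sketched as a biased random walk in ring size. The bias value, starting size, and step count are assumptions:

```python
import random

# Biased random walk in ring size: if energy-requiring repair makes
# incorporation of an intercalated monomer (expansion) slightly more
# likely than the time-reversed loss (contraction), small rings drift
# larger. P_EXPAND and the other parameters are assumptions.

random.seed(42)

P_EXPAND = 0.55   # assumed repair-driven bias toward expansion

def final_size(start=5, steps=500):
    size = start
    for _ in range(steps):
        size += 1 if random.random() < P_EXPAND else -1
        size = max(size, 3)   # a ring needs at least three members
    return size

sizes = [final_size() for _ in range(200)]
print(sum(sizes) / len(sizes))   # mean final size, well above the start of 5
```

Even a modest bias produces steady net growth, which is the sense in which self-repair "easily" becomes growth: no new machinery is needed, only an asymmetry between the forward and reverse processes.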

How it would reproduce

If large rings can split in two while in the linear conformation, the result is reproduction, without even a requirement for templated synthesis. Thus, we see how easily growth can become reproduction.

Onward to the bacterium

To get from reproduction-competent enzyme rings to something like a bacterium, the sequence of steps might have been multiplication, coacervate droplet formation, cooperation within the confines of the droplet, and specialization. The first specialist subtypes may have been archivists, forerunners of the circular genome of bacteria; and gatekeepers, forerunners of the plasma membrane with its sensory and transporter sites. Under these assumptions, DNA would not have evolved from RNA; both would represent independently originated lines of evolution, but forced to develop many chemical similarities by the demands of templated information transfer.

Back to chemistry

During the classic experiment in abiogenesis, the Miller-Urey experiment, amino acids were formed in solution, but no one has been able to show how these could subsequently have polymerized to functional protein catalysts. The origin of the monomers in my enzyme ring thus needs to be explained. However, the formation of relatively large amounts of insoluble, dark-colored "tars" is apparently facile under the Miller-Urey reaction conditions. The carbon in this tar is not necessarily lost to the system forever, like a coal deposit. In present-day anoxic environments relevant to the early Earth, at least three-quarters of modern biomass returns to the atmosphere as marsh gas. The driving force for these reactions seems to be not enthalpy reduction, but entropy increase.
Seen in the library of the University of Ottawa

Retrofractive synthesis

I therefore propose that if you wait long enough, and a diversity of trace-metal ions is present, then the abiogenesis tar will largely break down again, releasing large, prefab molecular chunks into solution. Reasoning from what is known of coal chemistry, these chunks may look something like asphaltenes, illustrated above, but relatively enriched in hydrophilic functional groups to make them water soluble. Hydrolysis reactions, for example, can simultaneously depolymerize a big network and introduce such groups (e.g., carboxylic acid groups). I propose that these asphaltene analogs are the optimally-sized monomers needed to form the enzyme ring.

Monday, August 15, 2016

#13. The Neural Code, Part II: the Thalamus [neuroscience, engineering]

A hypothetical scheme of the thalamus, a central part of your brain.

EN     NE     
Red, theory; black, fact.

Thalamic processing as Laplace transform

More in Deprecated, Part 1. I postulate that the thalamus performs a Laplace transform (LT). All the connections shown are established anatomical facts, and are based on the summary diagram of lateral geniculate nucleus circuitry of Steriade et al. (Steriade, M., Jones, E. G., and McCormick, D. A. (1997) Thalamus, 2 vols. Amsterdam: Elsevier). What I have added is feedback from cortex as a context-sensitive enabling signal for the analytical process. I originally guessed that the triadic synapses are differentiators, but now I think that they are function multipliers.

Thalamic electrophysiology

The thalamic low-threshold spike (LTS) is a slow calcium spike that triggers further spiking that appears in extracellular recordings as a distinctive cluster of four or five sodium spikes. The thalamus also has an alternative response mode consisting of fast single spikes, which is observed at relatively depolarized membrane potentials.

The thalamic low-threshold spike as triggered by a hyperpolarization due to an electric current pulse injected into the neuron through the recording electrode. ACSF, normal conditions; TTX, sodium spikes deleted pharmacologically. From my thesis, page 167.

Network relationships of the thalamus

Depolarizing input to thalamus from cortex is conjectured to be a further requirement for the LTS-burst complex. This depolarization is conjectured to take the form of a pattern of spots, each representing a mask to detect a specific pole of the stimulus that the attentional system is looking for in that context.

The complex frequency plane is where LTs are graphed, usually as a collection of points. Some of these are "poles," where gain goes to infinity, and others are "zeroes," where gain goes to zero. I assume that the cerebral cortex-thalamus system takes care of the poles, while the superior and inferior colliculi take care of the zeroes. 
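For readers unfamiliar with the complex frequency plane, a minimal numerical sketch: the gain of a rational transfer function grows without bound as s approaches a pole and falls to zero as s approaches a zero. The pole and zero locations below are arbitrary examples:

```python
import numpy as np

# Gain of a rational transfer function H(s) on the complex frequency
# plane: |H(s)| is large near poles and small near zeros.
# Pole/zero locations here are arbitrary illustrations.

poles = [complex(-1, 2), complex(-1, -2)]   # left half-plane: decaying motion
zeros = [complex(0, 0)]

def gain(s):
    num = np.prod([s - z for z in zeros])
    den = np.prod([s - p for p in poles])
    return abs(num / den)

print(gain(complex(-1, 1.99)))   # close to a pole: large
print(gain(complex(0.001, 0)))   # close to the zero: tiny
```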

If this stimulus is found, the pattern of poles must still be recognized. This may be accomplished through a cortical AND-element wired up on Hebbian principles among cortical neurons. These neurons synapse on each other by extensive recurrent collaterals, which might be the anatomical substrate of the conjectured AND-elements. Explosive activation of the AND network would then be the signal that the expected stimulus has been recognized, as Hebb proposed long ago, and the signal would then be sent forward in the brain via white matter tracts to the motor cortex, which would output a collection of excitation spots representing the LT of the desired response.

Presumably, a reverse LT is then applied, possibly by the spinal grey, which I have long considered theoretically underemployed in light of its considerable volume. If we assume that the cerebral cortex is highly specialized for representing LTs, then motor outputs from cerebellum and internal globus pallidus would also have to be transformed to enable the cortex to represent them. In agreement with this, the motor cortex is innervated by prominent motor thalami, the ventrolateral (for cerebellar inputs) and the ventroanterior (for pallidal inputs).

Brain representation of Laplace transforms

The difficulty is to see how a two-dimensional complex plane can be represented on a two-dimensional cerebral cortex without contradicting the results of receptive field studies, which clearly show that the two long dimensions of the cortex represent space in egocentric coordinates. This just leaves the depth dimension for representing the two dimensions of complex frequency.

03-01-2020:
A simple solution is that the complex frequency plane is tiled by the catchment basins of discrete, canonical poles, and all poles in a catchment basin are represented approximately by the nearest canonical pole. It then becomes possible to distinguish the canonical poles in the cerebral cortex by the labelled-line mechanism (i.e., by employing different cell-surface adhesion molecules to control synapse formation.)
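The catchment-basin idea amounts to quantization of the plane. A minimal sketch, assuming a square grid of canonical poles with an arbitrary spacing:

```python
# Nearest-canonical-pole quantization: tile the complex frequency plane
# with a square grid of canonical poles and snap any measured pole to
# the nearest grid point, so that a discrete set of labelled lines can
# stand in for a continuous plane. The grid spacing is an assumption.

SPACING = 0.5

def canonical(pole, spacing=SPACING):
    """Snap a complex pole to the centre of its catchment basin."""
    return complex(round(pole.real / spacing) * spacing,
                   round(pole.imag / spacing) * spacing)

print(canonical(complex(-1.13, 2.31)))   # -> (-1+2.5j)
# Two nearby poles fall in the same catchment basin:
print(canonical(complex(-1.13, 2.31)) == canonical(complex(-1.20, 2.40)))  # True
```

Any two poles closer together than half the grid spacing map to the same labelled line, which is exactly the approximation the text proposes.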

Recalling that layer 1 of cortex is mostly processes, this leaves us with five cortical cell layers not yet assigned to functions. Four of them might correspond to the four quadrants of the complex frequency plane, which differ qualitatively in the motions they represent. The two granule-cell layers 2 and 4 are interleaved with the two pyramidal-cell layers 3 and 5. The two granule layers might be the top and bottom halves of the left half-plane, which represents decaying, stabilized motions. The two pyramidal layers might represent the top and bottom halves of the right half-plane, which represents dangerously growing, unstable motions. Since the latter represent emergency conditions, the signal must be processed especially fast, requiring fast, large-diameter axons. Producing and maintaining such axons requires correspondingly large cell bodies. This is why I assign the relatively large pyramidal cells to the right half-plane.
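The proposed quadrant-to-layer assignment can be stated as a tiny classifier. The pairing of upper versus lower half-plane with the specific layer numbers is my own assumption, made only for concreteness:

```python
# Toy classifier for the speculative layer assignment: granule-cell
# layers (2 and 4) take the stable left half-plane, pyramidal-cell
# layers (3 and 5) take the unstable right half-plane. The pairing of
# upper/lower half-plane with specific layer numbers is an assumption.

def layer_for_pole(pole):
    stable = pole.real < 0        # left half-plane: decaying motion
    upper = pole.imag >= 0
    if stable:
        return 2 if upper else 4  # granule-cell layers
    return 3 if upper else 5      # pyramidal-cell layers

print(layer_for_pole(complex(-1, 2)))    # stable, upper half-plane -> layer 2
print(layer_for_pole(complex(0.5, -1)))  # unstable, lower half-plane -> layer 5
```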

Intra-thalamic operations

It is beginning to look like the thalamus computes the Laplace transform just the way it is defined: the integral of the product of the input time-domain function and an exponentially decaying or growing sinusoid (eigenfunction). A pole would be recognized after a finite integration time as the integrand rising above a threshold. This thresholding is plausibly done in cortical layer 4, against a background of elevated inhibition controlled by the recurrent layer-6 collaterals that blocks intermediate calculation results from propagating further into the cortex. The direct projections from layer 6 down to thalamus would serve to trigger the analysis and rescale eigenfunction tempo to compensate for changes in behavioral tempo. Reverberation of LTS-bursting activity between thalamic reticular neurons and thalamic principal neurons would be the basis of the oscillatory activity involved in implementing the eigenfunctions. This is precedented by the spindling mechanism and the phenomenon of Parkinsonian tremor cells.
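Taking the definition literally, pole detection by thresholded finite-window integration can be sketched numerically. The input signal, window length, and threshold below are arbitrary choices:

```python
import numpy as np

# The transform computed "just the way it is defined": multiply the
# input by the eigenfunction e^(-st) for a candidate pole s and
# integrate over a finite window. On a pole of the input the integrand
# has a nonzero mean, so the integral keeps growing; off the pole it
# averages out. Signal, window, and threshold are arbitrary choices.

dt = 0.001
t = np.arange(0, 5, dt)
signal = np.exp(-0.5 * t) * np.cos(3 * t)   # input with poles at -0.5 +/- 3j

def integrate_against(s):
    """Real part of the finite-window integral of signal * e^(-st)."""
    return float(np.real(np.sum(signal * np.exp(-s * t)) * dt))

at_pole = integrate_against(complex(-0.5, 3))    # integrand averages 1/2
off_pole = integrate_against(complex(-0.5, 6))   # integrand averages to ~0

THRESHOLD = 1.0   # assumed detection threshold
print(at_pole > THRESHOLD, off_pole > THRESHOLD)   # True False
```

Note that the at-pole integral grows roughly linearly with the window length, so a longer integration time sharpens the detection, consistent with the idea of a finite integration time preceding the threshold crossing.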

Mutual inhibition of reticular thalamic neurons would be the basis of the integrator, and multiplication of functions would be done by silent inhibition in the triadic synapses (here no longer considered to be differentiators) via the known disinhibitory pathway from the reticular thalamus.

A negative feedback system will be necessary to dynamically rejig the thalamus so that the same pole maps to the same spot despite changes in tempo. Some of the corticothalamic cells (layer 6) could be part of this system (layer 6 cells are of two quite different types), as well as the prominent cholinergic projections to the RT.

Consequences for object recognition

The foregoing system could be used to extract objects from the incoming data by in effect assuming that the elements or features of an object always share the same motion and therefore will be represented by the same set of poles. An automatic process of object extraction may therefore be implemented as a tendency for Hebbian plasticity to involve the same canonical pole at two different cortical locations that are connected by recurrent axon collaterals.

Sunday, August 7, 2016

#12. The Neural Code, Part I: the Hippocampus [neuroscience, engineering]

EN    NE    
Red, theory; black, fact.

"Context information" is often invoked in neuroscience theory as an address for storing more specific data in memory, such as whatever climbing fibers carry into the cerebellar cortex (Marr theory), but what exactly is context, as a practical matter?

First, it must change on a much longer timescale than whatever it addresses. Second, it must also be accessible to a moving organism that follows habitual, repetitive pathways in patrolling its territory. Consideration of the mainstream theory that the hippocampus prepares a cognitive map of the organism's spatial environment suggests that context is a set of landmarks. It seems that a landmark will be any stimulus that appears repetitively. Since only rhythmically repeating functions have a classical discrete-frequency Fourier transform, the attempt to calculate such a transform could be considered a filter for extracting rhythmic signals from the sensory input. 

However, this is not enough for a landmark extractor because landmark signals are only repetitive, not rhythmic. Let us suppose, however, that variations in the intervals between arrivals at a given landmark are due entirely to programmed, adaptive variations in the overall tempo of the organism's behavior. A tempo increase will upscale all incoming frequencies by the same factor, and a tempo decrease will downscale them all by the same factor. Since these variations originate within the organism, the organism could send a "tempo efference copy" to the neuronal device that calculates the discrete Fourier transform, to slide the frequency axis left or right to compensate for tempo variations. 
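The tempo-compensation idea can be sketched numerically: a known tempo factor rescales every incoming frequency, so dividing the measured peak frequency by that factor restores the landmark to its canonical frequency bin. All numbers below are illustrative:

```python
import numpy as np

# Sketch of the "tempo efference copy": a landmark recurs at some base
# rate; speeding up the behavior by a known factor scales every
# incoming frequency by that factor, so dividing the measured peak
# frequency by the factor puts the landmark back in its canonical bin.

def peak_frequency(signal, dt):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

dt = 0.01
t = np.arange(0, 20, dt)
base = np.cos(2 * np.pi * 1.0 * t)   # landmark passed once per second
fast = np.cos(2 * np.pi * 1.5 * t)   # same route patrolled at 1.5x tempo

tempo_factor = 1.5                   # known internally: the efference copy
print(peak_frequency(base, dt))                  # 1.0 Hz
print(peak_frequency(fast, dt) / tempo_factor)   # 1.0 Hz again
```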

Thus, the same landmark will always transform to the same set of activated spots in the frequency-amplitude-phase volume. I conjecture that the hippocampus calculates a discrete-frequency Fourier transform of all incoming sensory data, with lowest frequency represented ventrally and highest dorsally, and with a linear temporal spectrum represented between.

The negative feedback device that compensates tempo variations would be the loop through medial septum. The septum is the central hub of the network in which the EEG theta rhythm can be detected. This rhythm may be a central clock of unvarying frequency that serves as a reference for measuring tempo variations, possibly by a beat-frequency principle. 

The hippocampus could calculate the Fourier transform by exploiting the mathematical fact that a sinusoidal function differentiated four times in succession gives back the starting function multiplied by the fourth power of its angular frequency, and hence exactly the starting function when that frequency is numerically equal to one. This could be done by the five-synapse loop from dentate gyrus to hippocampal CA3 to hippocampal CA1 to subiculum to entorhinal cortex, and back to dentate gyrus. The dentate gyrus looks anatomically unlike the others and may be the input site where amplitude standardization operations are performed, while the other four stages would be the actual differentiators.
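The underlying identity is easy to verify: the derivative cycle sin → cos → −sin → −cos → sin returns the starting function after four steps, with the amplitude multiplied by the angular frequency at each step:

```python
import math

# The derivative cycle sin -> cos -> -sin -> -cos -> sin returns the
# starting function after four differentiations; each differentiation
# also multiplies the amplitude by the angular frequency w, so the
# function maps exactly onto itself when w equals one.

def fourth_derivative_of_sin(amplitude, w, t):
    # d^4/dt^4 [A sin(w t)] = A * w**4 * sin(w t)
    return amplitude * w ** 4 * math.sin(w * t)

t = 0.7
print(fourth_derivative_of_sin(1.0, 1.0, t) == math.sin(t))      # True
print(fourth_derivative_of_sin(1.0, 2.0, t) == math.sin(2 * t))  # False (gain 16)
```

A spot in the array therefore self-maps only when the combined gain of its four differentiator stages times the fourth power of the input frequency equals one, which is what lets gain gradients across the structure select for particular frequency-amplitude combinations.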

Differentiation would occur by the mechanism of a parallel shunt pathway through slowly-responding inhibitory interneurons, known to be present throughout the hippocampus. The other two spatial dimensions of the hippocampus would represent frequency and amplitude by setting up gradients in the gain of the differentiators. A given spot in the array maps the input function to itself only for one particular combination of frequency and transformed (i.e., output) amplitude. 

The self-mapping sets up a reverberation around the loop that makes the spot stand out functionally. All the concurrently active spots would constitute the context. This context could in principle reach the entire cerebral cortex via the fimbria-fornix, mammillary bodies, and tuberomammillary nucleus of the hypothalamus, the latter being histaminergic.

The cortex may contain a novelty-detection function, source of the well-documented mismatch negativity found in oddball evoked-potential experiments. A stimulus found to be novel would go into a short-term memory store in cortex. If a crisis develops while it is there, it is changed into a flashbulb memory and wired up to the amygdala, which mediates visceral fear responses. In this way, a conditioned fear stimulus could be created. If a reward registers while the stimulus is in short-term memory, it could be converted to a conditioned appetitive stimulus by a similar mechanism.

 I conjecture that all a person's declarative and episodic memories together are nothing more nor less than those that confer conditioned status on particular stimuli.

To become such a memory, a stimulus must first be found to be novel, and this is much less likely in the absence of a context signal; to put it another way, it is the combination of the context signal and the sensory stimulus that is found to be novel. Absent the context, almost no simple stimulus will be novel. This may be the reason why at least one hippocampus must be functioning if declarative or episodic memories are to be formed.

Wednesday, August 3, 2016

#11. Revised Motor Scheme [neuroscience]


How skilled behavior may be generated, on the assumption that it is acquired by an experimentation-like process.

Red, theory; black, fact.

Please find above a revised version of the motor control theory presented in the last post. The revision was necessitated by the fact that there is no logical reason why a motor command cannot go to both sides of the body at once to produce a midline-symmetrical movement. The prediction is that midline-symmetrical movements are acquired one side at a time whenever the controlling corticofugal pathway allows the two sides to move independently.