Monday, August 29, 2016

#15. The Insurance of the Heart [evolutionary psychology]

Red, theory; black, fact.

8-29-2016
We live in an uncertain world, which is the best reason to buy insurance while you can. Insurance is too good a trick for evolution to have missed. When food is plentiful, as it now is in my country, people get obese, as they are now doing here, so that they can live on their fat during possible future hard times. They don't do this consciously; it's in their genes.

However, eating has only an additive effect on your footprint, that is, on your demand for society's resources; how many kids you have affects your footprint multiplicatively. Thus, biological insurance taken out in children foregone during times of plenty would be more effective than insurance taken out in food consumed. Such a recourse exists (see Deprecated, Part 8). Beyond that, how well, and for how long, the family name you bequeath to your children is remembered affects your footprint exponentially. (I assume that a good or bad "name" affects the reproductive success of all your descendants having that name until you are finally forgotten.) Compared to exponential returns, everything else is chump change. ("Who steals my purse steals trash." - Shakespeare)
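To make the additive/multiplicative/exponential distinction concrete, here is a toy calculation. All of the numbers are illustrative assumptions, not measurements.

```python
# Toy comparison of the three footprint regimes described above.
# All numbers are illustrative assumptions, not measurements.

def extra_food_footprint(units_per_year: float, years: float) -> float:
    """Additive: overeating adds a fixed resource demand per unit time."""
    return units_per_year * years

def extra_child_footprint(base_units: float, children: int) -> float:
    """Multiplicative: each child multiplies the family's resource demand."""
    return base_units * children

def name_effect_footprint(base_units: float, fitness_multiplier: float,
                          generations: int) -> float:
    """Exponential: a remembered 'name' that changes every descendant's
    reproductive success compounds once per generation until forgotten."""
    return base_units * fitness_multiplier ** generations

# Even a modest per-generation effect dwarfs the other two over ten generations:
print(extra_food_footprint(1.0, 10),        # 10.0
      extra_child_footprint(10.0, 3),       # 30.0
      name_effect_footprint(10.0, 1.5, 10)) # ~576.7
```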

There remains the problem of food going to waste during times of plenty because social forces prevent a quick population increase. I conjecture that the extra energy available is invested by society in contests of various sorts (think of the Circus Maximus during the heyday of ancient Rome) that act as a proxy for evolutionary selection pressure, whereby the society accelerates its own evolution. Although natural selection pressure is maximal during the hard times, relying on these to do all your evolving for you can make you extinct; better to do some "preventive evolution" ahead of time.

Postscript 3
Since future environmental demands are partly unforeseeable, a good strategy would be to accelerate one's evolution in multiple directions, keeping many irons in the fire. Indeed, in the Olympics just concluded, thirty-nine sports were represented.

The power of these contests is maximized by using the outcomes as unconditioned stimuli that are associated with the family names of the winners and losers: the conditioned stimuli. In this way, one acquires a good or bad "name" that will affect the reproductive success of all who inherit it, an exponential effect. To ground this discussion biologically, it must be assumed that the contests are effective in isolating carriers of good or bad genes (technically, alleles), and that the resulting "name" is an effective proxy for natural selection in altering the frequency of said genes. To keep the population density stable during all this, winners must be balanced by losers. The winners are determined and branded in places like the ball diamond, and the losers are determined and branded in the courts.

Tuesday, August 16, 2016

#14. Three Stages of Abiogenesis [evolution, chemistry]

The iconic Miller experiment on the origin of life

Abiogenesis chemistry outside the box

Red, theory; black, fact.

Repair, growth, reproduction

"Abiogenesis" is the term for life originating from non-life.
Self-repair processes will be important in abiogenesis because life is made of metastable molecules that spontaneously break down and have to be continually repaired, which results in continuous energy dissipation. I will assume that self-repair in non-reproducing molecules is what eventually evolved into self-replication and life.

I also assume that the self-repair process was fallible, so that it occasionally introduced a mutation. Favorable mutations would have increased the longevity of the self-repairing molecules. Nevertheless, a given cohort of these molecules would relentlessly decrease in numbers, although the population would have been continuously replenished in juvenile form by undirected chemistry on the early Earth. Eventually, at least one of them was able to morph self-repair into self-replication, and life began. I call this process of refinement of non-reproducing molecules "longitudinal evolution," by analogy to a longitudinal cohort study in medical science. The process bears an interesting resemblance to carcinogenesis, where an accumulation of mutations in long-lived cells also leads to an ability to self-replicate autonomously. Carcinogenesis is difficult to prevent, and so must be considered a facile process, suggesting that longitudinal evolution to the threshold of life was also facile.
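The longitudinal-evolution idea can be caricatured in a few lines of code. This is a toy model under my own assumptions (the decay probabilities, mutation rate, and cohort size are all arbitrary), meant only to show that fallible self-repair plus replenishment lets long-lived variants accumulate in a non-reproducing population.

```python
import random

def longitudinal_evolution(steps=2000, cohort_size=50, seed=1):
    """Toy model of 'longitudinal evolution': non-reproducing molecules decay,
    juveniles are replenished by undirected chemistry, and rare repair errors
    (mutations) can change a lineage's decay rate. All rates are arbitrary
    assumptions. Returns the best (lowest) decay rate ever seen."""
    random.seed(seed)
    cohort = [0.10] * cohort_size        # per-step breakdown probability
    best = min(cohort)
    for _ in range(steps):
        survivors = []
        for p in cohort:
            if random.random() < p:              # molecule breaks down
                continue
            if random.random() < 0.01:           # fallible self-repair: mutation
                p = max(0.001, p * random.uniform(0.5, 1.5))
            survivors.append(p)
        survivors += [0.10] * (cohort_size - len(survivors))  # replenishment
        cohort = survivors
        best = min(best, min(cohort))
    return best

# Over many steps, the best decay rate typically drifts below the initial 0.10.
```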

A simple self-repairing molecule

The "enzyme ring" shown above is an example of a possible self-repairing molecule that I dreamt up. It is a ring of covalently-bonded monomers that are individually large enough to have good potential for catalyzing reactions, like globular proteins, but are small enough to be present in multiple copies like the standardized building blocks that one wants for templated synthesis.

If the covalent bond between a given pair of monomers breaks, the ring is held together by multiple, parallel secondary valence forces and hydrophobic interactions, until the break can be repaired by the ring's catalytic members. With continuing lack of repair, the ring eventually opens completely, and effectively "dies." To bring the necessary catalysts to the break site reliably, no matter where it is, I assume that multiple copies of the repair enzyme are present in the ring, and are randomly distributed. I also assume a temperature cycle like that of polymerase chain reaction (PCR) technology that repeatedly makes the ring single-stranded during the warm phase and allows it to collapse into a self-adhering, linear, double-stranded form during the cool phase. This could simply be driven by the day-night cycle. In the linear form, the catalytic sites are brought close to the covalent bond sites, and can repair any that are broken using small-molecule condensing agents such as cyanogen, which are arguably present on the early Earth under Miller-Urey assumptions. When the ring collapses, it does so at randomly selected fold diameters, so that only a few catalytic monomers are needed, since each will eventually land next to all covalent bonds in the ring except those nearby, which it cannot reach because of steric hindrance and/or bond angle restrictions. The other catalytic monomers in the ring will take care of these.

How it would grow

The mutation process of the enzyme ring could result from random ring-expansion and ring-contraction events, the net effect being to replace one kind of monomer with another. Expansion would most likely begin with intercalation of a free monomer between the bound ones at the high-curvature regions at the ends of the linear conformation. The new monomer would be held in place by the multiple, weak parallel bonds alluded to above. It could become incorporated into the ring if it intercalates at a site where the covalent bond is broken. Two bond-repair events would then suffice to sew it into the ring. The ring-contraction process would be the time-reversed version of this.

In addition, an ability to undergo ring expansion allows the enzyme ring to start small and grow larger. This is important because, on entropy grounds, a long polymer is very unlikely to spontaneously cyclize. The energy-requiring repair process will bias the system to favor net ring expansion. Thus, we see how easily self-repair can become growth.

How it would reproduce

If large rings can split in two while in the linear conformation, the result is reproduction, without even a requirement for templated synthesis. Thus, we see how easily growth can become reproduction.

Onward to the bacterium

To get from reproduction-competent enzyme rings to something like a bacterium, the sequence of steps might have been multiplication, coacervate droplet formation, cooperation within the confines of the droplet, and specialization. The first specialist subtypes may have been archivists, forerunners of the circular genome of bacteria; and gatekeepers, forerunners of the plasma membrane with its sensory and transporter sites. Under these assumptions, DNA would not have evolved from RNA; both would represent independently originated lines of evolution, forced to develop many chemical similarities by the demands of templated information transfer.

Back to chemistry

During the classic experiment in abiogenesis, the Miller-Urey experiment, amino acids were formed in solution, but no one has been able to show how these could subsequently have polymerized to functional protein catalysts. The origin of the monomers in my enzyme ring thus needs to be explained. However, the formation of relatively large amounts of insoluble, dark-colored "tars" is apparently facile under the Miller-Urey reaction conditions. The carbon in this tar is not necessarily lost to the system forever, like a coal deposit. In present-day anoxic environments relevant to the early Earth, at least three-quarters of modern biomass returns to the atmosphere as marsh gas. The driving force for these reactions seems to be not enthalpy reduction, but entropy increase.
Seen in the library of the University of Ottawa

Retrofractive synthesis

I therefore propose that if you wait long enough, and a diversity of trace-metal ions is present, then the abiogenesis tar will largely break down again, releasing large, prefab molecular chunks into solution. Reasoning from what is known of coal chemistry, these chunks may look something like asphaltenes, illustrated above, but relatively enriched in hydrophilic functional groups to make them water soluble. Hydrolysis reactions, for example, can simultaneously depolymerize a big network and introduce such groups (e.g., carboxylic acid groups). I propose that these asphaltene analogs are the optimally-sized monomers needed to form the enzyme ring.

Monday, August 15, 2016

#13. The Neural Code, Part II: the Thalamus [neuroscience, engineering]

A hypothetical scheme of the thalamus, a central part of your brain.

Red, theory; black, fact.

Thalamic processing as Laplace transform

More in Deprecated, Part 1. I postulate that the thalamus performs a Laplace transform (LT). All the connections shown are established anatomical facts, and are based on the summary diagram of lateral geniculate nucleus circuitry of Steriade et al. (Steriade, M., Jones, E. G., and McCormick, D. A. (1997) Thalamus, 2 Vols. Amsterdam: Elsevier). What I have added is feedback from cortex as a context-sensitive enabling signal for the analytical process. I originally guessed that the triadic synapses are differentiators, but now I think that they are function multipliers.

Thalamic electrophysiology

The thalamic low-threshold spike (LTS) is a slow calcium spike that triggers further spiking that appears in extracellular recordings as a distinctive cluster of four or five sodium spikes. The thalamus also has an alternative response mode consisting of fast single spikes, which is observed at relatively depolarized membrane potentials.

The thalamic low-threshold spike as triggered by a hyperpolarization due to an electric current pulse injected into the neuron through the recording electrode. ACSF, normal conditions; TTX, sodium spikes deleted pharmacologically. From my thesis, page 167.

Network relationships of the thalamus

Depolarizing input to thalamus from cortex is conjectured to be a further requirement for the LTS-burst complex. This depolarization is conjectured to take the form of a pattern of spots, each representing a mask to detect a specific pole of the stimulus that the attentional system is looking for in that context.

The complex frequency plane is where LTs are graphed, usually as a collection of points. Some of these are "poles," where gain goes to infinity, and others are "zeroes," where gain goes to zero. I assume that the cerebral cortex-thalamus system takes care of the poles, while the superior and inferior colliculi take care of the zeroes. 

If this stimulus is found, the pattern of poles must still be recognized. This may be accomplished through a cortical AND-element wired up on Hebbian principles among cortical neurons. These neurons synapse on each other by extensive recurrent collaterals, which might be the anatomical substrate of the conjectured AND-elements. Explosive activation of the AND network would then be the signal that the expected stimulus has been recognized, as Hebb proposed long ago, and the signal would then be sent forward in the brain via white matter tracts to the motor cortex, which would output a collection of excitation spots representing the LT of the desired response.

Presumably, a reverse LT is then applied, possibly by the spinal grey, which I have long considered theoretically underemployed in light of its considerable volume. If we assume that the cerebral cortex is highly specialized for representing LTs, then motor outputs from cerebellum and internal globus pallidus would also have to be transformed to enable the cortex to represent them. In agreement with this, the motor cortex is innervated by prominent motor thalami, the ventrolateral (for cerebellar inputs) and the ventroanterior (for pallidal inputs).

Brain representation of Laplace transforms

The difficulty is to see how a two-dimensional complex plane can be represented on a two-dimensional cerebral cortex without contradicting the results of receptive field studies, which clearly show that the two long dimensions of the cortex represent space in egocentric coordinates. This just leaves the depth dimension for representing the two dimensions of complex frequency.

03-01-2020:
A simple solution is that the complex frequency plane is tiled by the catchment basins of discrete, canonical poles, and all poles in a catchment basin are represented approximately by the nearest canonical pole. It then becomes possible to distinguish the canonical poles in the cerebral cortex by the labelled-line mechanism (i.e., by employing different cell-surface adhesion molecules to control synapse formation).
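A minimal sketch of the catchment-basin idea, assuming a hypothetical square grid of canonical poles: representing a measured pole by the nearest canonical one is just nearest-neighbor quantization.

```python
# Sketch of the catchment-basin idea, under my assumption that a fixed,
# discrete grid of canonical poles tiles the complex frequency plane and
# every measured pole is snapped to the nearest one (one labelled line
# per grid point). The grid spacing and extent are arbitrary.

def canonical_grid(sigma_step=1.0, omega_step=1.0, extent=3):
    """A square grid of canonical poles covering part of the s-plane."""
    return [complex(s * sigma_step, w * omega_step)
            for s in range(-extent, extent + 1)
            for w in range(-extent, extent + 1)]

def snap_to_canonical(pole, grid):
    """Represent an arbitrary pole by its nearest canonical pole."""
    return min(grid, key=lambda c: abs(c - pole))

grid = canonical_grid()
# Two slightly different measured poles fall in the same catchment basin:
print(snap_to_canonical(-1.2 + 2.3j, grid),   # (-1+2j)
      snap_to_canonical(-0.9 + 1.8j, grid))   # (-1+2j)
```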

Recalling that layer 1 of cortex is mostly processes, this leaves us with five cortical cell layers not yet assigned to functions. Four of them might correspond to the four quadrants of the complex frequency plane, which differ qualitatively in the motions they represent. The two granule-cell layers 2 and 4 are interleaved with the two pyramidal-cell layers 3 and 5. The two granule layers might be the top and bottom halves of the left half-plane, which represents decaying, stabilized motions. The two pyramidal layers might represent the top and bottom halves of the right half-plane, which represents dangerously growing, unstable motions. Since the latter represent emergency conditions, the signal must be processed especially fast, requiring fast, large-diameter axons. Producing and maintaining such axons requires correspondingly large cell bodies. This is why I assign the relatively large pyramidal cells to the right half-plane.

Intra-thalamic operations

It is beginning to look like the thalamus computes the Laplace transform just the way it is defined: the integral of the product of the input time-domain function and an exponentially decaying or growing sinusoid (eigenfunction). A pole would be recognized after a finite integration time as the integrand rising above a threshold. This thresholding is plausibly done in cortical layer 4, against a background of elevated inhibition controlled by the recurrent layer-6 collaterals that blocks intermediate calculation results from propagating further into the cortex. The direct projections from layer 6 down to thalamus would serve to trigger the analysis and rescale eigenfunction tempo to compensate for changes in behavioral tempo. Reverberation of LTS-bursting activity between thalamic reticular neurons and thalamic principal neurons would be the basis of the oscillatory activity involved in implementing the eigenfunctions. This is precedented by the spindling mechanism and the phenomenon of Parkinsonian tremor cells.
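The pole-detection scheme can be illustrated numerically. Here the "input" is a decaying sinusoid and the "eigenfunction" is the corresponding growing sinusoid; when the probe matches the input's pole, the finite-time integral grows roughly linearly with integration time and crosses a threshold, while a mismatched probe stays small. The threshold value and signal parameters are arbitrary illustrations, not claims about real thalamic constants.

```python
import math

def pole_evidence(a, w, probe_a, probe_w, T=20.0, n=20000):
    """Trapezoid-rule integral of input * probe eigenfunction over [0, T].
    Input: exp(-a t) sin(w t); probe: exp(+probe_a t) sin(probe_w t).
    A matched probe cancels the decay, so the integrand stops shrinking
    and the integral grows roughly linearly with T (evidence for a pole)."""
    dt = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        val = (math.exp((probe_a - a) * t)
               * math.sin(w * t) * math.sin(probe_w * t))
        total += val * dt * (0.5 if i in (0, n) else 1.0)
    return total

matched = pole_evidence(0.5, 3.0, 0.5, 3.0)     # probe aimed at the pole
mismatched = pole_evidence(0.5, 3.0, 0.5, 5.0)  # wrong frequency

THRESHOLD = 5.0  # stands in for the conjectured layer-4 threshold
# The matched integral approaches T/2 = 10, well above the threshold,
# while the mismatched one oscillates near zero.
```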

Mutual inhibition of reticular thalamic neurons would be the basis of the integrator, and multiplication of functions would be done by silent inhibition in the triadic synapses (here no longer considered to be differentiators) via the known disinhibitory pathway from the reticular thalamus.

A negative feedback system will be necessary to dynamically rejig the thalamus so that the same pole maps to the same spot despite changes in tempo. Some of the corticothalamic cells (layer 6) could be part of this system (layer 6 cells are of two quite different types), as well as the prominent cholinergic projections to the RT.

Consequences for object recognition

The foregoing system could be used to extract objects from the incoming data by in effect assuming that the elements or features of an object always share the same motion and therefore will be represented by the same set of poles. An automatic process of object extraction may therefore be implemented as a tendency for Hebbian plasticity to involve the same canonical pole at two different cortical locations that are connected by recurrent axon collaterals.

Sunday, August 7, 2016

#12. The Neural Code, Part I: the Hippocampus [neuroscience, engineering]

Red, theory; black, fact.

"Context information" is often invoked in neuroscience theory as an address for storing more specific data in memory, such as whatever climbing fibers carry into the cerebellar cortex (Marr theory), but what exactly is context, as a practical matter?

First, it must change on a much longer timescale than whatever it addresses. Second, it must also be accessible to a moving organism that follows habitual, repetitive pathways in patrolling its territory. Consideration of the mainstream theory that the hippocampus prepares a cognitive map of the organism's spatial environment suggests that context is a set of landmarks. It seems that a landmark will be any stimulus that appears repetitively. Since only rhythmically repeating functions have a classical discrete-frequency Fourier transform, the attempt to calculate such a transform could be considered a filter for extracting rhythmic signals from the sensory input. 

However, this is not enough for a landmark extractor because landmark signals are only repetitive, not rhythmic. Let us suppose, however, that variations in the intervals between arrivals at a given landmark are due entirely to programmed, adaptive variations in the overall tempo of the organism's behavior. A tempo increase will upscale all incoming frequencies by the same factor, and a tempo decrease will downscale them all by the same factor. Since these variations originate within the organism, the organism could send a "tempo efference copy" to the neuronal device that calculates the discrete Fourier transform, to slide the frequency axis left or right to compensate for tempo variations. 
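A sketch of the tempo-compensation idea: dividing each incoming frequency by the efference-copy tempo factor before binning it guarantees that the same landmark activates the same frequency bin at any tempo. The bin layout and numbers are my own illustrative assumptions.

```python
# Sketch of tempo compensation via an efference copy: all landmark arrival
# frequencies scale with behavioral tempo, so dividing by the tempo factor
# before binning maps the same landmark to the same bin. The bin count and
# frequency range are arbitrary assumptions.

def landmark_bin(arrival_freq_hz, tempo, n_bins=64, f_max=8.0):
    """Divide out the tempo (the efference copy), then bin the frequency."""
    compensated = arrival_freq_hz / tempo
    return min(n_bins - 1, int(compensated / f_max * n_bins))

# The same landmark patrolled at normal and doubled tempo lands in one bin:
print(landmark_bin(0.5, tempo=1.0), landmark_bin(1.0, tempo=2.0))  # 4 4
```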

Thus, the same landmark will always transform to the same set of activated spots in the frequency-amplitude-phase volume. I conjecture that the hippocampus calculates a discrete-frequency Fourier transform of all incoming sensory data, with the lowest frequency represented ventrally and the highest dorsally, and with a linear frequency spectrum represented between them.

The negative feedback device that compensates tempo variations would be the loop through medial septum. The septum is the central hub of the network in which the EEG theta rhythm can be detected. This rhythm may be a central clock of unvarying frequency that serves as a reference for measuring tempo variations, possibly by a beat-frequency principle. 

The hippocampus could calculate the Fourier transform by exploiting the mathematical fact that a sinusoidal function differentiated four times in succession gives exactly the starting function, if its amplitude and frequency are both numerically equal to one. This could be done by the five-synapse loop from dentate gyrus to hippocampal CA3 to hippocampal CA1 to subiculum to entorhinal cortex, and back to dentate gyrus. The dentate gyrus looks anatomically unlike the others and may be the input site where amplitude standardization operations are performed, while the other four stages would be the actual differentiators. 
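The mathematical fact underlying this loop is easy to check numerically: the fourth derivative of sin(t) is sin(t) itself, while any other frequency w comes back scaled by w to the fourth power, so only the matched signal maps to itself and can reverberate. A finite-difference check:

```python
import math

def d4(f, t, h=1e-2):
    """Fourth derivative by the standard central finite-difference stencil."""
    return (f(t - 2*h) - 4*f(t - h) + 6*f(t) - 4*f(t + h) + f(t + 2*h)) / h**4

# Unit amplitude and unit frequency: four differentiations give back sin(t).
for t in (0.3, 1.0, 2.5):
    assert abs(d4(math.sin, t) - math.sin(t)) < 1e-3

# Any other frequency w returns scaled by w**4, so only the w == 1 component
# maps to itself around the conjectured five-synapse loop.
w, t = 2.0, 1.0
assert abs(d4(lambda u: math.sin(w * u), t) - w**4 * math.sin(w * t)) < 1e-2
```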

Differentiation would occur by the mechanism of a parallel shunt pathway through slowly-responding inhibitory interneurons, known to be present throughout the hippocampus. The other two spatial dimensions of the hippocampus would represent frequency and amplitude by setting up gradients in the gain of the differentiators. A given spot in the array maps the input function to itself only for one particular combination of frequency and transformed (i.e., output) amplitude. 

The self-mapping sets up a reverberation around the loop that makes the spot stand out functionally. All the concurrently active spots would constitute the context. This context could in principle reach the entire cerebral cortex via the fimbria-fornix, the mammillary bodies, and the tuberomammillary nucleus of the hypothalamus, the latter being histaminergic.

The cortex may contain a novelty-detection function, the source of the well-documented mismatch negativity found in oddball evoked-potential experiments. A stimulus found to be novel would go into a short-term memory store in cortex. If a crisis develops while it is there, it is converted into a flashbulb memory and wired up to the amygdala, which mediates visceral fear responses. In this way, a conditioned fear stimulus could be created. If a reward registers while the stimulus is in short-term memory, it could be converted to a conditioned appetitive stimulus by a similar mechanism.

I conjecture that all a person's declarative and episodic memories together are nothing more nor less than the memories that confer conditioned status on particular stimuli.

To become such a memory, a stimulus must first be found to be novel, and this is much less likely in the absence of a context signal; to put it another way, it is the combination of the context signal and the sensory stimulus that is found to be novel. Absent the context, almost no simple stimulus will be novel. This may be the reason why at least one hippocampus must be functioning if declarative or episodic memories are to be formed.

Wednesday, August 3, 2016

#11. Revised Motor Scheme [neuroscience]


How skilled behavior may be generated, on the assumption that it is acquired by an experimentation-like process.

Red, theory; black, fact.

Please find above a revised version of the motor control theory presented in the last post. The revision was necessitated by the fact that there is no logical reason why a motor command cannot go to both sides of the body at once to produce a midline-symmetrical movement. The prediction is that midline-symmetrical movements are acquired one side at a time whenever the controlling corticofugal pathway allows the two sides to move independently.

Saturday, July 30, 2016

#10. The Two–test-tube Experiment: Part II [neuroscience]

Red, theory; black, fact.

At this point we have a problem. The experimenting-brain theory predicts zero hard-wired asymmetries between the hemispheres. However, the accepted theory of hemispheric dominance postulates that this arrangement allows us to do two things at once, one task with the left hemisphere and the other task with the right. The accepted theory is basically a parsimony argument. However, this argument predicts huge differences between the hemispheres, not the subtle ones actually found.

My solution is that hard-wired hemispheric dominance must be seen as an imperfection of symmetry within the framework of the experimenting brain, caused by the human brain still being in the process of evolving, combined with the hypothesis that brain-expanding mutations individually produce small, asymmetric expansions. (See Post 45.) Our left-hemispheric speech apparatus is the most asymmetric part of our brain, and these ideas predict that we are due for another mutation that will expand the right side, matching up the two sides and improving the efficiency of operant conditioning of speech behavior.

These ideas also explain why speech defects such as lisping and stuttering are so common and slow to resolve, even in children, who are supposed to be geniuses at speech acquisition.

This is how the brain would have to work if fragments of skilled behaviors are randomly stored in memory on the left or right side, reflecting the possibility that the two hemispheres play experiment versus control, respectively, during learning.
The illustration shows the theory of motor control I was driven to by the implications of the theory of the dichotomously experimenting brain already outlined. It shows how hemispheric dominance can be reversed independently of the side of the body that should perform the movement specified by the applicable rule of conduct in the controlling hemisphere. The triangular device is a summer that converges the motor outputs of both hemispheres into a common output stream that is subsequently gated into the appropriate side of the body. This arrangement cannot create contention because at any given time, only one hemisphere is active. Anatomically, and from stroke studies, it certainly appears that the outputs of the hemispheres must be crossed, with the left hemisphere only controlling the right body and vice-versa.

However, my theory predicts that in healthy individuals, either hemisphere can control either side of the body, and the laterality of control can switch freely and rapidly during skilled performance so as to always use the best rule of conduct at any given time, regardless of the hemisphere in which it was originally created during REM sleep.

Two bits of routing information are implied: which hemisphere is dominant, and which side of the body moves. The first bit is calculated and stored in the basal ganglia. It would be output from the reticular substantia nigra (SNr) and gate sensory input to thalamus to favor one hemisphere or the other, by means of actions at the reticular thalamus and the intermediate grey of the superior colliculus. The second bit would be stored in the cerebellar hemispheres and gate motor output to one side of the body or the other, at the red nucleus. Conceivably, the two parts of the red nucleus, the parvocellular and the magnocellular, correspond to the adder and switch, respectively, that are shown in the illustration.

Under these assumptions, the corpus callosum is needed only to distribute priming signals from the motor/premotor cortices to activate the rule that will be next to fire, without regard for which side that rule happens to be on. The callosum would never be required to carry signals forward from sensory to motor areas. I see that forward transfer as the time-critical step, and it would never depend on getting signals through the corpus callosum, which is considered to be a signaling bottleneck.

How would the basal ganglia identify the "best" rule of conduct in a given context? I see the dopaminergic compact substantia nigra (SNc) as the most likely place for a hemisphere-specific "goodness" value to be calculated after each rule firing, using hypothalamic servo-error signals processed through the habenula as the main input for this. The half of the SNc located in the inactive hemisphere would be shut down by inhibitory GABAergic inputs from the adjacent SNr. The dopaminergic nigrostriatal projection would permanently potentiate simultaneously-active corticostriatal inputs (carrying context information) to medium spiny neurons (MSNs) of enkephalin type via a crossed projection, and to MSNs of substance-P type via uncrossed projections. The former MSN type innervates the external globus pallidus (GPe), and the latter type innervates the SNr. These latter two nuclei are inhibitory and innervate each other.

I conjecture that this arrangement sets up a winner-take-all kind of competition between GPe and SNr, with choice of the winner being exquisitely sensitive to small historical differences in dopaminergic tone between hemispheres. The "winner" is the side of the SNr that shuts down sensory input to the hemisphere on that side. The mutually inhibitory arrangement could also plausibly implement hysteresis, which means that once one hemisphere is shut down, it stays shut down without the need for an ongoing signal from the striatum to keep it shut down.
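The winner-take-all-with-hysteresis conjecture can be caricatured as two mutually inhibitory units with self-excitation. This is my own abstraction, not anatomy; the gains, time step, and biases are arbitrary.

```python
# Toy winner-take-all with hysteresis: two mutually inhibitory units with
# self-excitation, standing in for the conjectured GPe/SNr competition.
# All parameters are arbitrary illustrative choices.

def settle(bias_left, bias_right, state=(0.0, 0.0),
           self_exc=1.5, inhibition=2.0, dt=0.1, steps=200):
    clip = lambda x: min(1.0, max(0.0, x))
    left, right = state
    for _ in range(steps):
        left += dt * (-left + clip(self_exc * left + bias_left
                                   - inhibition * right))
        right += dt * (-right + clip(self_exc * right + bias_right
                                     - inhibition * left))
    return left, right

# A small historical difference in "dopaminergic tone" decides the winner:
left, right = settle(bias_left=0.51, bias_right=0.50)

# Hysteresis: with the bias removed, the settled state persists on its own,
# so no ongoing striatal signal is needed to keep the loser shut down.
left2, right2 = settle(0.0, 0.0, state=(left, right))
```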

Each time the cerebral cortex outputs a motor command, a copy goes to the subthalamic nucleus (STN) and could plausibly serve as the timing signal for a "refresh" of the hemispheric dominance decision based on the latest context information from cortex. The STN signal presumably removes the hysteresis mentioned above, very temporarily, then lets the system settle down again into possibly a new state.

We now need a system that decides that something is wrong, and that the time to experiment has arrived. This could plausibly be the role of the large, cholinergic interneurons of the striatum. They have a diverse array of inputs that could potentially signal trouble with the status quo, and could implement a decision to experiment simply by reversing the hemispheric dominance prevailing at the time. Presumably, they would do this by a cholinergic action on the surrounding MSNs of both types.

Finally, there is the second main output of the basal ganglia to consider, the internal pallidal segment (GPi). This structure is well developed in primates such as humans but is rudimentary in rodents and even in the cat, a carnivore. It sends its output forward, to motor thalamus. I conjecture that its role is to organize the brain's knowledge base to resemble block-structured programs. All the instructions in a block would be simultaneously primed by this projection. The block identifier may be some hash of the corticostriatal context information. A small group of cells just outside the striatum called the claustrum seems to have the connections necessary for preparing this hash. Jump rules, that is, rules of conduct for jumping between blocks, would not output motor commands, but block identifiers, which would be maintained online by hysteresis effects in the basal ganglia.

The cortical representation of jump rules would likely be located in medial areas, such as Brodmann 23, 24, 31, and 32. BA23-24 is classed as limbic system, and BA31-32 is situated between this and neocortex. This arrangement suggests that, seen as a computer, the brain is capable of executing programs with three levels of indentation, not counting whatever levels may be encoded as chromatin marks in the serotonergic neurons. Dynamic changes in hemispheric dominance might have to occur independently in neocortex, medial cortex, and limbic system.

Sunday, July 24, 2016

#9. The Two–test-tube Experiment: Part I [neuroscience]

Your Brain is Like This.

Red, theory; black, fact.

The motivating challenge of this post is to explain the hemispheric organization of the human brain; that is, why we seem to have two very similar brains in our heads, a left side and a right side.

Systems that rely on the principle of trial-and-error must experiment. The genetic intelligence mentioned in previous posts would have to experiment by mutation/natural selection. The synaptic intelligence would have to experiment by operant conditioning. I propose that both these experimentation processes long ago evolved into something slick and simplified that can be compared to the two–test-tube experiment beloved of lab devils everywhere.

Remember that an experiment must have a control, because "everything is relative." Therefore, the simplest and fastest experiment in chemistry that has any generality is the two-test-tube experiment: one tube for the "intervention," and one tube for the control. Put mixture c in both tubes, and add chemical x to only the intervention tube. Run the reaction, then hold the two test tubes up to the light and compare the contents visually. (Remember that, ultimately, the visual system only detects contrasts.) Draw your conclusions.

My theory is basically this: the two hemispheres of the brain are like the two test tubes. Moreover, the two copies of a given chromosome in a diploid cell are also like the two test tubes. In both systems, which is which varies randomly from experiment to experiment, to prevent phenomena analogous to screen burn. The hemisphere that is dominant for a particular action is the last one that produced an improved result when control passed to it from the other. The allele that is dominant is the last one that produced an improved result when it got control from the other. Chromosomes and hemispheres will mutually inhibit to produce winner-take-all dynamics in which at any given time only one is exposed to selection, but it is fully exposed. 

These flip-flops do not necessarily involve the whole system, but may be happening independently in each sub-region of a hemisphere or chromosome (e.g., Brodmann areas, alleles). Some objective function, expressing the goodness of the organism's overall adaptation, must be recalculated after each flip-flop, and additional flip-flopping suppressed if an improvement is found when the new value is compared to a copy of the old value held in memory. In case of a worsening of the objective function, you quickly flip back to the allele or hemisphere that formerly had control, then suppress further flip-flopping for a while, as before.
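The whole flip-flop scheme amounts to a randomized hill-climb over two variants per region. A toy sketch, with an arbitrary objective function standing in for the goodness of overall adaptation; the region count, trial count, and objective are all hypothetical:

```python
import random

def two_tube_search(objective, n_regions=8, trials=400, seed=0):
    """Toy version of the scheme above: each region holds two variants (0/1),
    one of which is 'in control'. After a random flip, the new variant keeps
    control only if the remembered objective value improves; otherwise we
    quickly flip back. The structure is my abstraction, not anatomy."""
    random.seed(seed)
    control = [random.randint(0, 1) for _ in range(n_regions)]
    best = objective(control)        # copy of the old value held in memory
    for _ in range(trials):
        region = random.randrange(n_regions)
        control[region] ^= 1         # flip-flop: expose the other variant
        new = objective(control)
        if new > best:
            best = new               # improvement: new variant keeps control
        else:
            control[region] ^= 1     # worsening: flip back at once
    return control, best

# Hypothetical objective: overall adaptation is best with all regions at 1.
variants, score = two_tube_search(lambda bits: sum(bits))
```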

The foregoing implies multiple sub-functions, and these ideas will not be compelling unless I specify a brain structure that could plausibly carry out each sub-function. For example, the process of comparing values of the objective function achieved by left and right hemispheres in the same context would be mediated by the nigrostriatal projection, which has a crossed component as well as an uncrossed component. More on this next post.