Friday, November 25, 2016

#20. The Two-clock Universe [physics]

Red, theory; black, fact.

The arrow of time is thought to be thermodynamic in origin, namely the direction in which entropy (disorder of an isolated system) increases. Entropy is one of the two main extensive variables of thermodynamics, the other being volume. I would like to propose that since we live in an expanding universe, the direction of cosmological volume increase makes sense as a second arrow of time; it's just not our arrow of time.

One of the outstanding problems of cosmology is the nature of dark energy, thought to be responsible for the recently discovered acceleration of the Hubble expansion. Another problem is the nature of the inflationary era that occurred just after the Big Bang (BB), introduced to explain why the distribution of matter in the universe is smoother than predicted by the original version of the BB.

Suppose that the entropy of the universe slowly oscillates between a maximal value and a minimal value, like a mass oscillating up and down on the end of a spring, whereas the volume of the universe always smoothly increases. Thus, entropy would trace out a sinusoidal wave when plotted against volume.

If the speed of light is only constant against the entropic clock, then the cosmological acceleration is explainable as an illusion due to the slowing of the entropic increase that occurs when nearing the top of the entropy oscillation, just before it reverses and starts down again. The cosmological volume increase will look faster when measured by a slower clock.
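The two-clock illusion can be checked numerically. The sketch below is entirely a toy model of my own devising: volume V grows smoothly with some absolute parameter t, entropy S slowly oscillates, and the "observed" expansion rate is dV/dS, i.e., volume change measured against the entropic clock. The functional forms and constants are arbitrary assumptions.

```python
import numpy as np

# Toy model (all forms assumed): smooth volume increase, oscillating entropy.
t = np.linspace(0.01, 6.2, 10000)   # one rising half-cycle of the oscillation
V = t                               # smooth, featureless volume increase
S = 1.0 - np.cos(0.5 * t)           # entropy: bottom near t = 0, top near t = 2*pi

# An observer whose only clock is entropic measures expansion as dV/dS.
rate = np.gradient(V, t) / np.gradient(S, t)

# dS/dt vanishes at both turning points, so dV/dS diverges there: near the
# bottom of the cycle this mimics inflation, and near the top it mimics
# dark-energy acceleration, even though V(t) itself never accelerates.
```

The divergence at both ends of the half-cycle is the whole point: one oscillation supplies both "inflation" and "acceleration" without any change in the true expansion.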

The immensely rapid cosmological expansion imputed to the inflationary era would originate analogously, as an illusion caused by the slowness of the entropy oscillation when it is near the bottom of its cycle, just after having started upward again.

These ideas imply that entropy at the cosmological scale has properties analogous to those of a mass-and-spring system, namely inertia (ability to store energy in movement) and stiffness (ability to store energy in fields). The only place it could get these properties appears to be from the subatomic particles of the universe and their fields. Thus, there has to be a hidden network of relationships among all the particles in the universe to create and maintain this correspondence. Is this the meaning of quantum-mechanical entanglement and quantum-mechanical conservation of information? However, if the universe is closed, properties of the whole universe, such as a long circumnavigation time at the speed of light, could produce the bounce.

These ideas also imply the apocalyptic conclusion that all structures in the present universe will be disassembled in the next half-period of the entropy oscillation. The detailed mechanism of this may be an endothermic, resonant absorption of infrared and microwave photons that have circumnavigated a closed universe and returned to their starting point. Enormous amounts of phase information would have to be preserved in intergalactic space for billions of years to make this happen, and here is where I depend heavily on quantum mechanical results. I have not figured out how to factor in the redshift due to volume expansion.

Sunday, October 30, 2016

#19. Explaining Science-religion Antipathy also Explains Religion [evolutionary psychology]

Red, theory; black, fact.

I will be arguing here that the Darwinian selective advantage to humans of having a propensity for religion is that it regulates the pace of introduction of new technology, which is necessitated by the disruptive side effects of new technology.

If this sounds like a weak argument, perhaps people have been chronically underestimating the costs to society of the harmful side effects of new technology, ever since there have been people. Take the downside of the taming of fire, for instance. You can bet that the first use would have been military, just as in the case of nuclear energy. Remember that everything was covered in forests in those days; there must have been an appalling time of fire until kin selection slowly put a stop to it. The lake-bottom charcoal deposits will still be there, if anyone cares to look for them. (Shades of Asimov's story "Nightfall.")

The sedimentary record does not seem to support the idea that the smoke from such a time of fire caused a planetary cooling event sufficient to trigger the last ice age. However, the mere possibility helps to drive home the point, namely that prehistoric, evolutionary-milieu technology was not necessarily too feckless to produce enough disruption to constitute a source of selection pressure.

Natural selection could have built a rate-of-innovation controller by exaggerating people's pleasure at discovering a new, unexplored phenomenon, until they bog down in rapture at that moment and never progress to the next step of actually experimenting or exploring. The latter activities would be just upstream of the nominally controlled process, the introduction of new technology. People's tendency for "rapture capture" would be causally linked via genetically specified neural pathways to the kinds of hardships caused by technological side effects, thereby completing a negative feedback loop that would work like a steam engine governor.
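The governor analogy can be made concrete as a discrete-time negative feedback loop. Everything below—the variables, gains, and update rules—is invented for illustration; nothing is fitted to data.

```python
def rapture_governor(drive=2.0, beta=1.0, alpha=0.2, steps=500):
    """Toy loop: technological side effects ("hardship") slowly prime
    "rapture", which throttles the exploration that introduces technology.
    All quantities and gains are hypothetical."""
    rapture, innovation = 0.0, drive
    for _ in range(steps):
        hardship = beta * innovation             # side effects track innovation
        rapture += alpha * (hardship - rapture)  # slow priming of the emotion
        innovation = drive / (1.0 + rapture)     # rapture stalls exploration
    return innovation, rapture

innovation, rapture = rapture_governor()
# The loop settles at a reduced but steady rate of innovation,
# like a steam-engine governor holding a set speed.
```

With these particular gains the loop converges to innovation = 1.0, half the unregulated drive; the interesting qualitative point is only that the equilibrium is stable, as a governor's must be.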

I conjecture that all present-day religions are built on this phenomenon of "rapture capture." This may explain why the most innovative country, the USA, is also the most religiose, according to Dawkins, writing in "The God Delusion." An Einsteinian sense of wonder at the cosmos that, according to Dawkins, most scientists feel, could be a mild, non-capturing version of the same thing. The unlikely traits attributed to God, omnipotence, omni this and that, could have instrumental value in intensifying the rapture.

Another possible name for what I have been calling rapture could be "arcanaphilia." A basic insight for me here was that religion is fundamentally hedonistic. I do not seem to be straying too far from Marx's statement that "Religion is the opiate of the people."

These ideas help to explain why some sciences such as astronomy and chemistry began as inefficient protosciences (e.g., astrology, alchemy): they were inhibited from the start by an excessive sense of wonder, until harder heads eventually prevailed (Galileo, Lavoisier). Seen as a protoscience, the Abrahamic religions could originally have been sparked by evidence that "someone is looking out for us" found in records of historical events such as those the ancient Israelites compiled (of which the Dead Sea Scrolls are a surviving example). That "someone" would in reality be various forms of generation-time compensation, one of which I have been calling the "intermind" in these pages. Perhaps when the subject of study is in reality a powerful aspect of ourselves as populations, the stimulus for rapture capture will be especially effective, explaining why religion has yet to become an experimental science.

By the way, there is usually no insurmountable difficulty in experimenting on humans so long as the provisions of the Declaration of Helsinki are observed: volunteer basis only; controlled, randomized, double-blind study; experiment thoroughly explained to volunteers before enrollment; written consent obtained from all volunteers before enrollment; approval of the experimental design obtained in advance from the appropriate institutional ethics committee; and the experiment registered online with the appropriate registry.

Religions seem to be characterized by an unmistakable style made up of little touches that exaggerate the practitioner's sense of wonder and mystery, thus, their arcanaphilic "high." I refer to unnecessarily high ceilings in places of worship, use of enigmatic symbols, putting gold leaf on things, songs with Italian phrases in the score, such as "maestoso," wearing colorful costumes, etc. I shall refer to all the elements of this style collectively as "bractea," Latin for tinsel or gold leaf. I propose the presence of bractea as a field mark for recognizing religions in the wild. By this criterion, psychiatry is not a religion, but science fiction is.

It seems to me that bractea use can easily graduate into the creation of formal works of art, such as canticles, stained glass windows, statues of the Buddha, and the covers of science fiction magazines. Exposure to concentrations of excessive creativity in places of worship can be expected to drive down the creativity of the worshipers by a negative feedback process evolved to regulate the diversity of the species memeplex, already alluded to in my post titled, "The Intermind: Engine of History?"

This effect should indirectly reduce the rate of introduction of new technology, thereby feeding into the biological mission of religion. Religion could be the epi-evolutionary solution, and the artistic feedback could be the evolutionary solution, to the disorders caused by creativity. Bractea would represent a synergy between the two.

Sunday, September 25, 2016

#17. Hell's Kitchen [evolutionary psychology]

Red, theory; black, fact.

Ever since the assassination of JFK in '63, people of my generation have been wondering why the Americans kill off their best and brightest. It's not just the Americans, of course. The same thing happened to Gandhi and Our Savior no less.

I think a homey kitchen metaphor nails it. Once you have emptied the milk carton of all its milk, you can use it to dispose of the grease. That is, by the logic of "The Insurance of the Heart," once tremendous acclaim has been conferred on someone's name, the physical person no longer matters for the purposes of enhancing the name their descendants will inherit, and so can safely be used to draw the fire of the genetic undesirables; the resulting tremendous indignation will confer bad odor on the name of said undesirable for quite long enough to eradicate their meh genes in all copies.

Thus, Booth's genes were eradicated to make way for Lincoln's, and Oswald's genes were eradicated to make way for Kennedy's, without overall change in population density.

If the intermind could be said to have thoughts, this is what they would be like. Clearly, it's not God.

Wednesday, September 21, 2016

#16. The Intermind, Engine of History? [evolutionary psychology]

Red, theory; black, fact.

9-21-2016
This post is a further development of the ideas in the post, "What is intelligence? DNA as knowledge base." It was originally published 9-21-2016 and extensively edited 10-09-2016 with references added 10-11-2016 and 10-30-2016. Last modified: 10-30-2016.

In "AviApics 101" and "The Insurance of the Heart," I seem to be venturing into human sociobiology, which one early critic called "An outbreak of neatness." With the momentum left over from "Insurance," I felt up for a complete human sociobiological theory, to be created from the two posts mentioned.

However, what I wrote about the "genetic intelligence" suggests that this intelligence constructs our sociobiology in an ad hoc fashion, by rearranging a knowledge base, or construction kit, of "rules of conduct" into algorithm-like assemblages. This rearrangement is blindingly fast (see Deprecated, Part 7) by the standards of classical Darwinian evolution, which only provides the construction kit itself, and presumably some further, special rules equivalent to a definition of an objective function to be optimized. The ordinary rules translate experiences into the priming of certain emotions, not the emotions themselves.

Thus, my two sociobiological posts are best read as case studies of the products of the genetic intelligence. I have named this part the intermind, because it is intermediate in speed between classical evolution and learning by operant conditioning. (All three depend on trial and error.) The name is also appropriate in that the intermind is a distributed intelligence, acting over continental, or at least national, areas. If we want neatness, we must focus on its objective function, which is simply whatever produces survival. It will be explicitly encoded into the genes specifying the intermind. (For more on multi-tier, biological control systems with division of labor according to time scale, see "Sociobiology: the New Synthesis," E. O. Wilson, 1975 & 2000, chapter 7.)

Let us assume that the intermind accounts for evil, and that this is because it is only concerned with survival of the entire species and not with the welfare of individuals. Therefore, it will have been created by group selection of species. (Higher taxonomic units such as genus or family will scarcely evolve because the units that must die out to permit this are unlikely to do so, because they comprise relatively great genetic and geographical diversity.* However, we can expect adaptations that facilitate speciation. Imprinted genes may be one such adaptation, which might enforce species barriers by a lock-and-key mechanism that kills the embryo if any imprinted gene is present in either two or zero active copies.) Species group selection need act only on the objective function used by epigenetic trial-and-error processes.

In these Oncelerian times, we know very well that species survival is imperiled by loss of range and by loss of genetic diversity. Thus, the objective function will tend to produce range expansion and optimization of genetic diversity. My post "The Insurance of the Heart" concluded with a discussion of "preventative evolution," which was all about increasing genetic diversity. My post "AviApics 101" was all about placing population density under a rigid, negative feedback control, which would force excess population to migrate to less-populated areas, thereby expanding range. Here we see how my case studies support the existence of an intermind with an objective function as described above.

However, all this is insufficient to explain the tremendous cultural creativity of humans, starting at the end of the last ice age with cave paintings, followed shortly thereafter by the momentous invention of agriculture. The hardships of the ice age must have selected genes for a third, novel component, or pillar, of the species objective function, namely optimization of memetic diversity. Controlled diversification of the species memeplex may have been the starting point for cultural creativity and the invention of all kinds of aids to survival. Art forms may represent the sensor of a feedback servomechanism by which a society measures its own memeplex diversity, measurement being necessary to control.

A plausible reason for evolving an intermind is that it permits larger body size, which leads to more internal degrees of freedom and therefore access to previously impossible adaptations. For example, eukaryotes can phagocytose their food; prokaryotes cannot. However, larger body size comes at the expense of longer generation time, which reduces evolvability. A band of high frequencies in the spectrum of environmental fluctuations therefore develops where the large organism has relinquished evolvability, opening it to being outcompeted by its smaller rivals.

The intermind is a proxy for classical evolution that fills the gap, but it needs an objective function to provide it with its ultimate gold standard of goodness of adaptations. Species-replacement group selection makes sure the objective function is close to optimal. This group selection process takes place at enormously lower frequencies than those the intermind is adapting to, because if the timescales were too similar, chaos would result. For example, in model predictive control, the model is updated on a much longer cycle than are the predictions derived from it.
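The two-timescale point can be illustrated with a toy model-predictive setup (the plant, gains, and update schedule below are all invented): a fast loop acts on every step using a model of the plant, while a slow outer loop re-estimates the model itself only occasionally.

```python
# Toy two-timescale controller. The "plant" responds with true_gain; the
# fast loop acts every step using model_gain; the slow loop corrects the
# model only once per 100 steps, from observed data.
true_gain, model_gain = 2.0, 1.0    # plant vs. the controller's (wrong) model
x, target = 0.0, 1.0
for step in range(300):
    u = (target - x) / model_gain        # fast loop: act using current model
    x_next = x + true_gain * u           # plant responds with its true gain
    if step % 100 == 99 and u != 0.0:    # slow loop: rarely update the model
        model_gain = (x_next - x) / u    # re-estimate gain from observation
    x = x_next
# Before the first model update the loop oscillates (wrong model); after it,
# control becomes exact: x reaches the target and stays there.
```

If the model were updated every step from the same noisy transient it is steering, the two loops would fight each other; the long outer cycle is what keeps the arrangement stable, which is the analogy drawn in the text.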

12-25-2016
Today, when I was checking to see if I was using the word "cathexis" correctly (I wasn't), I discovered the Jungian term "collective unconscious," which sounds close to my "intermind" concept.

* 3-12-2018
I now question this argument. Why can't there be as many kinds of group selection as taxonomic levels? Admittedly, the higher-level processes would be mind-boggling in their slowness, but in evolution, there are no deadlines.

Monday, August 29, 2016

#15. The Insurance of the Heart [evolutionary psychology]

Red, theory; black, fact.

8-29-2016
We live in an uncertain world, the best reason to buy insurance while you can. Insurance is too good a trick for evolution to have missed. When food is plentiful, as it now is in my country, people get obese, as they are now doing here, so that they can live on their fat during possible future hard times. They don't do this consciously; it's in their genes.

However, eating has only an additive effect on your footprint on society's demand for resources; how many kids you have affects your footprint multiplicatively. Thus, the effectiveness of biological insurance taken out in children foregone during times of plenty would be greater than that taken out in food consumed. Such a recourse exists (see Deprecated, Part 8): how well, and for how long, the family name you bequeath to your children is remembered affects your footprint exponentially. (I assume that a good or bad "name" affects the reproductive success of all your descendants having that name until you are finally forgotten.) Compared to exponential returns, everything else is chump change. ("Who steals my purse steals trash." - Shakespeare)
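The three growth laws can be compared with invented numbers (the 10%, the family size, and the "name factor" below are all arbitrary illustrations):

```python
base = 100.0                 # one person's lifetime resource demand (arbitrary units)

# Additive: overeating by 10% adds a fixed increment, once.
overeating = base * 1.10

# Multiplicative: each child multiplies the lineage's demand per generation.
children = 3
one_generation = base * children

# Exponential: a remembered "name" modulates success in every generation.
generations, name_factor = 5, 1.2    # a good name: +20% success per generation
with_name = base * (children * name_factor) ** generations
without_name = base * children ** generations
# After five generations the name alone contributes a factor of
# name_factor ** generations (about 2.49): it compounds, unlike the other two.
```

The arithmetic shows why, on this argument, a bequeathed "name" dominates: the additive and multiplicative effects are paid once or once per generation, while the name's factor multiplies into every generation until it is forgotten.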

There remains the problem of food going to waste during times of plenty because social forces prevent a quick population increase. I conjecture that the extra energy available is invested by society in contests of various sorts (think of the Circus Maximus during the heyday of ancient Rome) that act as a proxy to evolutionary selection pressure, whereby the society accelerates its own evolution. Although natural selection pressure is maximal during the hard times, relying on these to do all your evolving for you can make you extinct; better to do some "preventative evolution" ahead of time.

Postscript 3
Since future environmental demands are partly unforeseeable, a good strategy would be to accelerate one's evolution in multiple directions, keeping many irons in the fire. Indeed, in the Olympics just concluded, thirty-nine sports were represented.

The power of these contests is maximized by using the outcomes as unconditioned stimuli that are associated with the family names of the winners and losers: the conditioned stimuli. In this way, one acquires a good or bad "name" that will affect the reproductive success of all who inherit it, an exponential effect. To ground this discussion biologically, it must be assumed that the contests are effective in isolating carriers of good or bad genes (technically, alleles), and that the resulting "name" is an effective proxy for natural selection in altering the frequency of said genes. To keep the population density stable during all this, winners must be balanced by losers. The winners are determined and branded in places like the ball diamond, and the losers are determined and branded in the courts.

Tuesday, August 16, 2016

#14. Three Stages of Abiogenesis [evolution, chemistry]

The iconic Miller experiment on the origin of life

Abiogenesis chemistry outside the box

Red, theory; black, fact.

Repair, growth, reproduction

"Abiogenesis" is the term for life originating from non-life.
Self-repair processes will be important in abiogenesis because life is made of metastable molecules that spontaneously break down and have to be continually repaired, which results in continuous energy dissipation. I will assume that self-repair in non-reproducing molecules is what eventually evolved into self-replication and life.

I also assume that the self repair process was fallible, so that it occasionally introduced a mutation. Favorable mutations would have increased the longevity of the self-repairing molecules. Nevertheless, a given cohort of these molecules would relentlessly decrease in numbers, but they would have been continuously replenished in the juvenile form by undirected chemistry on the early Earth. Eventually, at least one of them was able to morph self-repair into self-replication, and life began. I call this process of refinement of non-reproducing molecules "longitudinal evolution" by analogy to a longitudinal cohort study in medical science. The process bears an interesting resemblance to carcinogenesis, where an accumulation of mutations in long-lived cells also leads to an ability to self-replicate autonomously. Carcinogenesis is difficult to prevent, and so must be considered a facile process, suggesting that longitudinal evolution to the threshold of life was also facile.
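Longitudinal evolution as described above can be caricatured in a few lines of Monte Carlo. All the rates below are invented; the only point of the sketch is that molecules which never reproduce can still accumulate favorable "repair mutations" in their long-lived tail, purely through differential survival plus replenishment of juveniles.

```python
import random
random.seed(1)

pool = []                                  # each entry: one molecule's durability
for _ in range(2000):                      # time steps
    pool.extend([1.0] * 5)                 # undirected chemistry adds juveniles
    survivors = []
    for d in pool:
        if random.random() < 0.02 / d:     # unrepaired breakdown: "death"
            continue
        if random.random() < 0.01:         # fallible repair: mutate durability
            d *= random.uniform(0.5, 2.0)
        survivors.append(d)
    pool = survivors

veterans = sorted(pool)[-10:]
# The most durable survivors end well above the juvenile durability of 1.0,
# with no self-replication anywhere in the model: longitudinal evolution.
```

The cohort's size is held roughly steady by the balance of influx and death, while the durability distribution of the old-timers drifts upward, which is the refinement the text argues would eventually cross the threshold to self-replication.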

A simple self-repairing molecule

The "enzyme ring" shown above is an example of a possible self-repairing molecule that I dreamt up. It is a ring of covalently-bonded monomers that are individually large enough to have good potential for catalyzing reactions, like globular proteins, but are small enough to be present in multiple copies like the standardized building blocks that one wants for templated synthesis.

If the covalent bond between a given pair of monomers breaks, the ring is held together by multiple, parallel secondary valence forces and hydrophobic interactions, until the break can be repaired by the ring's catalytic members. With continuing lack of repair, the ring eventually opens completely, and effectively "dies." To bring the necessary catalysts to the break site reliably, no matter where it is, I assume that multiple copies of the repair enzyme are present in the ring, and are randomly distributed. I also assume a temperature cycle like that of polymerase chain reaction (PCR) technology that repeatedly makes the ring single-stranded during the warm phase and allows it to collapse into a self-adhering, linear, double-stranded form during the cool phase. This could simply be driven by the day-night cycle. In the linear form, the catalytic sites are brought close to the covalent bond sites, and can repair any that are broken using small-molecule condensing agents such as cyanogen, which are arguably present on the early Earth under Miller-Urey assumptions. When the ring collapses, it does so at randomly selected fold diameters, so that only a few catalytic monomers are needed, since each will eventually land next to all covalent bonds in the ring except those nearby, which it cannot reach because of steric hindrance and/or bond angle restrictions. The other catalytic monomers in the ring will take care of these.

How it would grow

The mutation process of the enzyme ring could result from random ring-expansion and ring-contraction events, the net effect being to replace one kind of monomer with another. Expansion would most likely begin with intercalation of a free monomer between the bound ones at the high-curvature regions at the ends of the linear conformation. The new monomer would be held in place by the multiple, weak parallel bonds alluded to above. It could become incorporated into the ring if it intercalates at a site where the covalent bond is broken. Two bond-repair events would then suffice to sew it into the ring. The ring-contraction process would be the time-reversed version of this.

In addition, an ability to undergo ring expansion allows the enzyme ring to start small and grow larger. This is important because, on entropy grounds, a long polymer is very unlikely to spontaneously cyclize. The energy-requiring repair process will bias the system to favor net ring expansion. Thus, we see how easily self-repair can become growth.

How it would reproduce

If large rings can split in two while in the linear conformation, the result is reproduction, without even a requirement for templated synthesis. Thus, we see how easily growth can become reproduction.

Onward to the bacterium

To get from reproduction-competent enzyme rings to something like a bacterium, the sequence of steps might have been multiplication, coacervate droplet formation, cooperation within the confines of the droplet, and specialization. The first specialist subtypes may have been archivists, forerunners of the circular genome of bacteria; and gatekeepers, forerunners of the plasma membrane with its sensory and transporter sites. Under these assumptions, DNA would not have evolved from RNA; both would represent independently originated lines of evolution, but forced to develop many chemical similarities by the demands of templated information transfer.

Back to chemistry

During the classic experiment in abiogenesis, the Miller-Urey experiment, amino acids were formed in solution, but no one has been able to show how these could subsequently have polymerized to functional protein catalysts. The origin of the monomers in my enzyme ring thus needs to be explained. However, the formation of relatively large amounts of insoluble, dark-colored "tars" is apparently facile under the Miller-Urey reaction conditions. The carbon in this tar is not necessarily lost to the system forever, like a coal deposit. In present-day anoxic environments relevant to the early Earth, at least three-quarters of modern biomass returns to the atmosphere as marsh gas. The driving force for these reactions seems to be not enthalpy reduction, but entropy increase.
Seen in the library of the University of Ottawa

Retrofractive synthesis

I therefore propose that if you wait long enough, and a diversity of trace-metal ions is present, then the abiogenesis tar will largely break down again, releasing large, prefab molecular chunks into solution. Reasoning from what is known of coal chemistry, these chunks may look something like asphaltenes, illustrated above, but relatively enriched in hydrophilic functional groups to make them water soluble. Hydrolysis reactions, for example, can simultaneously depolymerize a big network and introduce such groups (e.g., carboxylic acid groups). I propose that these asphaltene analogs are the optimally-sized monomers needed to form the enzyme ring.

Monday, August 15, 2016

#13. The Neural Code, Part II: the Thalamus [neuroscience, engineering]

A hypothetical scheme of the thalamus, a central part of your brain.

Red, theory; black, fact.

Thalamic processing as Laplace transform

More in Deprecated, Part 1. I postulate that the thalamus performs a Laplace transform (LT). All the connections shown are established anatomical facts, and are based on the summary diagram of lateral geniculate nucleus circuitry of Steriade et al. (Steriade, M., Jones, E. G., and McCormick, D. A. (1997) Thalamus, 2 vols. Amsterdam: Elsevier). What I have added is feedback from cortex as a context-sensitive enabling signal for the analytical process. I originally guessed that the triadic synapses are differentiators, but now I think that they are function multipliers.

Thalamic electrophysiology

The thalamic low-threshold spike (LTS) is a slow calcium spike that triggers further spiking that appears in extracellular recordings as a distinctive cluster of four or five sodium spikes. The thalamus also has an alternative response mode consisting of fast single spikes, which is observed at relatively depolarized membrane potentials.

The thalamic low-threshold spike as triggered by a hyperpolarization due to an electric current pulse injected into the neuron through the recording electrode. ACSF, normal conditions; TTX, sodium spikes deleted pharmacologically. From my thesis, page 167.

Network relationships of the thalamus

Depolarizing input to thalamus from cortex is conjectured to be a further requirement for the LTS-burst complex. This depolarization is conjectured to take the form of a pattern of spots, each representing a mask to detect a specific pole of the stimulus that the attentional system is looking for in that context.

The complex frequency plane is where LTs are graphed, usually as a collection of points. Some of these are "poles," where gain goes to infinity, and others are "zeroes," where gain goes to zero. I assume that the cerebral cortex-thalamus system takes care of the poles, while the superior and inferior colliculi take care of the zeroes. 
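For concreteness, here is what poles and zeroes look like numerically, using an arbitrary second-order transfer function of my own choosing (nothing thalamic about it): H(s) = (s + 1)/((s + 1)^2 + 4), which has a zero at s = -1 and poles at s = -1 ± 2j.

```python
def H(s):
    # Arbitrary example: zero at s = -1, poles at s = -1 ± 2j.
    return (s + 1) / ((s + 1) ** 2 + 4)

print(abs(H(-1 + 0j)))        # at the zero, the gain is exactly 0.0
print(abs(H(-1 + 1.99j)))     # near a pole, the gain blows up
```

A plot of |H(s)| over the complex plane looks like a rubber sheet pinned to zero at the zeroes and pulled to infinity at the poles; the conjecture in the text amounts to assigning those two kinds of landmarks to different brain structures.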

If this stimulus is found, the pattern of poles must still be recognized. This may be accomplished through a cortical AND-element wired up on Hebbian principles among cortical neurons. These neurons synapse on each other by extensive recurrent collaterals, which might be the anatomical substrate of the conjectured AND-elements. Explosive activation of the AND network would then be the signal that the expected stimulus has been recognized, as Hebb proposed long ago, and the signal would then be sent forward in the brain via white matter tracts to the motor cortex, which would output a collection of excitation spots representing the LT of the desired response.

Presumably, a reverse LT is then applied, possibly by the spinal grey, which I have long considered theoretically underemployed in light of its considerable volume. If we assume that the cerebral cortex is highly specialized for representing LTs, then motor outputs from cerebellum and internal globus pallidus would also have to be transformed to enable the cortex to represent them. In agreement with this, the motor cortex is innervated by prominent motor thalami, the ventrolateral (for cerebellar inputs) and the ventroanterior (for pallidal inputs).

Brain representation of Laplace transforms

The difficulty is to see how a two-dimensional complex plane can be represented on a two-dimensional cerebral cortex without contradicting the results of receptive field studies, which clearly show that the two long dimensions of the cortex represent space in egocentric coordinates. This just leaves the depth dimension for representing the two dimensions of complex frequency.

03-01-2020:
A simple solution is that the complex frequency plane is tiled by the catchment basins of discrete, canonical poles, and all poles in a catchment basin are represented approximately by the nearest canonical pole. It then becomes possible to distinguish the canonical poles in the cerebral cortex by the labelled-line mechanism (i.e., by employing different cell-surface adhesion molecules to control synapse formation).
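The catchment-basin idea reduces to nearest-neighbor quantization. In the sketch below, the lattice of canonical poles, its spacing, and its extent are all arbitrary choices for illustration:

```python
import itertools

# Hypothetical "canonical poles": a coarse integer lattice over a patch of
# the s-plane (left half-plane plus a margin). Spacing is an arbitrary choice.
canonical = [complex(re, im)
             for re, im in itertools.product(range(-3, 2), range(-3, 4))]

def nearest_canonical(pole):
    """Report any measured pole as the canonical pole of its catchment basin."""
    return min(canonical, key=lambda c: abs(c - pole))

# Every pole in the basin around -1+3j maps to the same labelled line:
assert nearest_canonical(-1.2 + 2.7j) == -1 + 3j
assert nearest_canonical(-0.9 + 3.4j) == -1 + 3j
```

Quantizing to the nearest lattice point is exactly what lets a finite set of labelled lines stand in for a continuous plane, at the cost of the basin-sized resolution limit.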

Recalling that layer 1 of cortex is mostly processes, this leaves us with five cortical cell layers not yet assigned to functions. Four of them might correspond to the four quadrants of the complex frequency plane, which differ qualitatively in the motions they represent. The two granule-cell layers 2 and 4 are interleaved with the two pyramidal-cell layers 3 and 5. The two granule layers might be the top and bottom halves of the left half-plane, which represents decaying, stabilized motions. The two pyramidal layers might represent the top and bottom halves of the right half-plane, which represents dangerously growing, unstable motions. Since the latter represent emergency conditions, the signal must be processed especially fast, requiring fast, large-diameter axons. Producing and maintaining such axons requires correspondingly large cell bodies. This is why I assign the relatively large pyramidal cells to the right half-plane.

Intra-thalamic operations

It is beginning to look like the thalamus computes the Laplace transform just the way it is defined: the integral of the product of the input time-domain function and an exponentially decaying or growing sinusoid (eigenfunction). A pole would be recognized after a finite integration time as the integrand rising above a threshold. This thresholding is plausibly done in cortical layer 4, against a background of elevated inhibition controlled by the recurrent layer-6 collaterals that blocks intermediate calculation results from propagating further into the cortex. The direct projections from layer 6 down to thalamus would serve to trigger the analysis and rescale eigenfunction tempo to compensate for changes in behavioral tempo. Reverberation of LTS-bursting activity between thalamic reticular neurons and thalamic principal neurons would be the basis of the oscillatory activity involved in implementing the eigenfunctions. This is precedented by the spindling mechanism and the phenomenon of Parkinsonian tremor cells.
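The "integrate against an eigenfunction, then threshold" scheme can be checked numerically. In the sketch below, the stimulus, its pole location, the integration window, and the threshold are all invented; the point is only that a finite-window Laplace integral grows without bound at a pole of the input and stays bounded elsewhere, so a fixed threshold suffices to detect the pole.

```python
import numpy as np

a, w = -0.1, 2.0                        # the signal's true pole pair: s = a ± jw
t = np.linspace(0.0, 200.0, 200001)
dt = t[1] - t[0]
f = np.exp(a * t) * np.sin(w * t)       # the incoming "stimulus": damped sinusoid

def finite_lt(s):
    """Finite-window Laplace transform: integral of f(t) * exp(-s*t) dt."""
    return np.sum(f * np.exp(-s * t)) * dt

on_pole = abs(finite_lt(a + 1j * w))         # eigenfunction matched to the signal
off_pole = abs(finite_lt(a + 1j * (w + 1)))  # mismatched eigenfunction
# The matched integral grows roughly as T/2 with the window length, while the
# mismatched one stays bounded, so thresholding separates pole from non-pole.
```

Note that detection time trades off against frequency resolution here, which fits the text's suggestion that cortex holds intermediate results below threshold until the integral has accumulated.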

Mutual inhibition of reticular thalamic neurons would be the basis of the integrator, and multiplication of functions would be done by silent inhibition in the triadic synapses (here no longer considered to be differentiators) via the known disinhibitory pathway from the reticular thalamus.

A negative feedback system will be necessary to dynamically rejig the thalamus so that the same pole maps to the same spot despite changes in tempo. Some of the corticothalamic cells (layer 6) could be part of this system (layer 6 cells are of two quite different types), as could the prominent cholinergic projections to the reticular thalamus.

Consequences for object recognition

The foregoing system could be used to extract objects from the incoming data by in effect assuming that the elements or features of an object always share the same motion and therefore will be represented by the same set of poles. An automatic process of object extraction may therefore be implemented as a tendency for Hebbian plasticity to involve the same canonical pole at two different cortical locations that are connected by recurrent axon collaterals.