Friday, November 25, 2016

#20. The Two-clock Universe [physics]

Red, theory; black, fact.

The arrow of time is thought to be thermodynamic in origin, namely the direction in which entropy (disorder of an isolated system) increases. Entropy is one of the two main extensive variables of thermodynamics, the other being volume. I would like to propose that since we live in an expanding universe, the direction of cosmological volume increase makes sense as a second arrow of time; it's just not our arrow of time.

One of the outstanding problems of cosmology is the nature of dark energy, thought to be responsible for the recently discovered acceleration of the Hubble expansion. Another problem is the nature of the inflationary era that occurred just after the Big Bang (BB), introduced to explain why the distribution of matter in the universe is smoother than predicted by the original version of the BB.

Suppose that the entropy of the universe slowly oscillates between a maximal value and a minimal value, like a mass oscillating up and down on the end of a spring, whereas the volume of the universe always smoothly increases. Thus, entropy would trace out a sinusoidal wave when plotted against volume.

If the speed of light is only constant against the entropic clock, then the cosmological acceleration is explainable as an illusion due to the slowing of the entropic increase that occurs when nearing the top of the entropy oscillation, just before it reverses and starts down again. The cosmological volume increase will look faster when measured by a slower clock.

The immensely rapid cosmological expansion imputed to the inflationary era would originate analogously, as an illusion caused by the slowness of the entropy oscillation when it is near the bottom of its cycle, just after having started upward again.
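
To make the two-clock picture concrete, here is a minimal numerical sketch of the claim in the last two paragraphs. Everything in it is an illustrative assumption (a linear volume law, a sinusoidal entropy law, arbitrary constants), not a fit to data; the point is only that the volume change per unit of entropy change blows up near both turning points of the entropy cycle, mimicking inflation just after the minimum and acceleration just before the maximum.

    import numpy as np

    # Toy two-clock model (all functional forms and constants are assumptions).
    u = np.linspace(0.01, 0.99, 9)     # underlying evolution parameter, one entropy half-cycle
    V = 1.0 + 10.0 * u                 # cosmological volume: smooth, monotonic growth
    S = -np.cos(np.pi * u)             # entropy: rises from its minimum (u ~ 0) to its maximum (u ~ 1)

    dV_du = np.gradient(V, u)          # rate of the volume clock
    dS_du = np.gradient(S, u)          # rate of the entropic clock
    apparent_rate = dV_du / dS_du      # expansion rate as read off the entropic clock

    for ui, r in zip(u, apparent_rate):
        print(f"u = {ui:.2f}   dV/dS = {r:9.2f}")
    # dV/dS is enormous near u ~ 0 (inflation-like) and near u ~ 1 (acceleration-like),
    # even though dV/du itself never changes.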

These ideas imply that entropy at the cosmological scale has properties analogous to those of a mass-and-spring system, namely inertia (ability to store energy in movement) and stiffness (ability to store energy in fields). The only plausible source of these properties appears to be the subatomic particles of the universe and their fields. Thus, there has to be a hidden network of relationships among all the particles in the universe to create and maintain this correspondence. Is this the meaning of quantum-mechanical entanglement and quantum-mechanical conservation of information? Alternatively, if the universe is closed, global properties of the whole universe, such as a long circumnavigation time at the speed of light, could produce the bounce, that is, the reversal of the entropy oscillation.

These ideas also imply the apocalyptic conclusion that all structures in the present universe will be disassembled in the next half-period of the entropy oscillation. The detailed mechanism of this may be an endothermic, resonant absorption of infrared and microwave photons that have circumnavigated a closed universe and returned to their starting point. Enormous amounts of phase information would have to be preserved in intergalactic space for billions of years to make this happen, and here is where I depend heavily on quantum mechanical results. I have not figured out how to factor in the redshift due to volume expansion.

Sunday, October 30, 2016

#19. Explaining Science-religion Antipathy also Explains Religion [evolutionary psychology]

Red, theory; black, fact.

I will be arguing here that the Darwinian selective advantage to humans of having a propensity for religion is that it regulates the pace of introduction of new technology, which is necessitated by the disruptive side effects of new technology.

If this sounds like a weak argument, perhaps people have been chronically underestimating the costs to society of the harmful side effects of new technology, ever since there have been people. Take the downside of the taming of fire, for instance. You can bet that the first use would have been military, just as in the case of nuclear energy. Remember that everything was covered in forests in those days; there must have been an appalling time of fire until kin selection slowly put a stop to it. The lake-bottom charcoal deposits will still be there, if anyone cares to look for them. (Shades of Asimov's story "Nightfall.")

The sedimentary record does not seem to support the idea that the smoke from such a time of fire caused a planetary cooling event sufficient to trigger the last ice age. However, the mere possibility helps to drive home the point, namely that prehistoric, evolutionary-milieu technology was not necessarily too feckless to produce enough disruption to constitute a source of selection pressure.

Natural selection could have built a rate-of-innovation controller by exaggerating people's pleasure at discovering a new, unexplored phenomenon, until they bog down in rapture at that moment and never progress to the next step of actually experimenting or exploring. The latter activities would be just upstream of the nominally controlled process, the introduction of new technology. People's tendency for "rapture capture" would be causally linked via genetically specified neural pathways to the kinds of hardships caused by technological side effects, thereby completing a negative feedback loop that would work like a steam engine governor.

I conjecture that all present-day religions are built on this phenomenon of "rapture capture." This may explain why the most innovative country, the USA, is also the most religiose, according to Dawkins, writing in "The God Delusion." The Einsteinian sense of wonder at the cosmos that, according to Dawkins, most scientists feel could be a mild, non-capturing version of the same thing. The unlikely traits attributed to God (omnipotence, omni this and that) could have instrumental value in intensifying the rapture.

Another possible name for what I have been calling rapture could be "arcanaphilia." A basic insight for me here was that religion is fundamentally hedonistic. I do not seem to be straying too far from Marx's statement that "Religion is the opiate of the people."

These ideas help to explain why some sciences such as astronomy and chemistry began as inefficient protosciences (e.g., astrology, alchemy): they were inhibited from the start by an excessive sense of wonder, until harder heads eventually prevailed (Galileo, Lavoisier). Seen as a protoscience, the Abrahamic religions could originally have been sparked by evidence that "someone is looking out for us" found in records of historical events such as those the ancient Israelites compiled (of which the Dead Sea Scrolls are a surviving example). That "someone" would in reality be various forms of generation-time compensation, one of which I have been calling the "intermind" in these pages. Perhaps when the subject of study is in reality a powerful aspect of ourselves as populations, the stimulus for rapture capture will be especially effective, explaining why religion has yet to become an experimental science.

By the way, there is usually no insurmountable difficulty in experimenting on humans so long as the provisions of the Declaration of Helsinki are observed: volunteer basis only; controlled, randomized, double-blind study; experiment thoroughly explained to volunteers before enrollment; written consent obtained from all volunteers before enrollment; approval of the experimental design obtained in advance from the appropriate institutional ethics committee; and the experiment registered online with the appropriate registry.

Religions seem to be characterized by an unmistakable style made up of little touches that exaggerate the practitioner's sense of wonder and mystery, thus, their arcanaphilic "high." I refer to unnecessarily high ceilings in places of worship, use of enigmatic symbols, putting gold leaf on things, songs with Italian phrases in the score, such as "maestoso," wearing colorful costumes, etc. I shall refer to all the elements of this style collectively as "bractea," Latin for tinsel or gold leaf. I propose the presence of bractea as a field mark for recognizing religions in the wild. By this criterion, psychiatry is not a religion, but science fiction is.

It seems to me that bractea use can easily graduate into the creation of formal works of art, such as canticles, stained glass windows, statues of the Buddha, and the covers of science fiction magazines. Exposure to concentrations of excessive creativity in places of worship can be expected to drive down the creativity of the worshipers by a negative feedback process evolved to regulate the diversity of the species memeplex, already alluded to in my post titled, "The Intermind: Engine of History?"

This effect should indirectly reduce the rate of introduction of new technology, thereby feeding into the biological mission of religion. Religion could be the epi-evolutionary solution, and the artistic feedback could be the evolutionary solution, to the disorders caused by creativity. Bractea would represent a synergy between the two.

Sunday, September 25, 2016

#17. Hell's Kitchen [evolutionary psychology]

Red, theory; black, fact.

Ever since the assassination of JFK in '63, people of my generation have been wondering why the Americans kill off their best and brightest. It's not just the Americans, of course. The same thing happened to Gandhi and Our Savior no less.

I think a homey kitchen metaphor nails it. Once you have emptied the milk carton of all its milk, you can use it to dispose of the grease. That is, by the logic of "The Insurance of the Heart," once tremendous acclaim has been conferred on someone's name, the physical person no longer matters for the purposes of enhancing the name their descendants will inherit, and so can safely be used to draw the fire of the genetic undesirables; the resulting tremendous indignation will confer bad odor on the name of said undesirables for quite long enough to eradicate their meh genes in all copies.

Thus, Booth's genes were eradicated to make way for Lincoln's, and Oswald's genes were eradicated to make way for Kennedy's, without overall change in population density.

If the intermind could be said to have thoughts, this is what they would be like. Clearly, it's not God.

Wednesday, September 21, 2016

#16. The Intermind, Engine of History? [evolutionary psychology]

Red, theory; black, fact.

9-21-2016
This post is a further development of the ideas in the post, "What is intelligence? DNA as knowledge base." It was originally published 9-21-2016 and extensively edited 10-09-2016 with references added 10-11-2016 and 10-30-2016. Last modified: 10-30-2016.

In "AviApics 101" and "The Insurance of the Heart," I seem to be venturing into human sociobiology, which one early critic called "An outbreak of neatness." With the momentum left over from "Insurance," I felt up for a complete human sociobiological theory, to be created from the two posts mentioned.

However, what I wrote about the "genetic intelligence" suggests that this intelligence constructs our sociobiology in an ad hoc fashion, by rearranging a knowledge base, or construction kit, of "rules of conduct" into algorithm-like assemblages. This rearrangement is (See Deprecated, Part 7) blindingly fast by the standards of classical Darwinian evolution, which only provides the construction kit itself, and presumably some further, special rules equivalent to a definition of an objective function to be optimized. The ordinary rules translate experiences into the priming of certain emotions, not the emotions themselves.

Thus, my two sociobiological posts are best read as case studies of the products of the genetic intelligence. I have named this part the intermind, because it is intermediate in speed between classical evolution and learning by operant conditioning. (All three depend on trial and error.) The name is also appropriate in that the intermind is a distributed intelligence, acting over continental, or at least national, areas. If we want neatness, we must focus on its objective function, which is simply whatever produces survival. It will be explicitly encoded into the genes specifying the intermind. (For more on multi-tier, biological control systems with division of labor according to time scale, see "Sociobiology: the New Synthesis," E. O. Wilson, 1975 & 2000, chapter 7.)

Let us assume that the intermind accounts for evil, and that this is because it is only concerned with survival of the entire species and not with the welfare of individuals. Therefore, it will have been created by group selection of species. (Higher taxonomic units such as genus or family will scarcely evolve, because the units that would have to die out to permit this are unlikely to do so, comprising as they do relatively great genetic and geographical diversity.* However, we can expect adaptations that facilitate speciation. Imprinted genes may be one such adaptation, which might enforce species barriers by a lock-and-key mechanism that kills the embryo if any imprinted gene is present in either two or zero active copies.) Species group selection need act only on the objective function used by epigenetic trial-and-error processes.

In these Oncelerian times, we know very well that species survival is imperiled by loss of range and by loss of genetic diversity. Thus, the objective function will tend to produce range expansion and optimization of genetic diversity. My post "The Insurance of the Heart" concluded with a discussion of "preventative evolution," which was all about increasing genetic diversity. My post "AviApics 101" was all about placing population density under a rigid, negative feedback control, which would force excess population to migrate to less-populated areas, thereby expanding range. Here we see how my case studies support the existence of an intermind with an objective  function as described above.

However, all this is insufficient to explain the tremendous cultural creativity of humans, starting at the end of the last ice age with cave paintings, followed shortly thereafter by the momentous invention of agriculture. The hardships of the ice age must have selected genes for a third, novel component, or pillar, of the species objective function, namely optimization of memetic diversity. Controlled diversification of the species memeplex may have been the starting point for cultural creativity and the invention of all kinds of aids to survival. Art forms may represent the sensor of a feedback servomechanism by which a society measures its own memeplex diversity, measurement being necessary to control.

A plausible reason for evolving an intermind is that it permits larger body size, which leads to more internal degrees of freedom and therefore access to previously impossible adaptations. For example, eukaryotes can phagocytose their food; prokaryotes cannot. However, larger body size comes at the expense of longer generation time, which reduces evolvability. A band of high frequencies in the spectrum of environmental fluctuations therefore develops where the large organism has relinquished evolvability, opening it to being outcompeted by its smaller rivals.

The intermind is a proxy for classical evolution that fills the gap, but it needs an objective function to provide it with its ultimate gold standard of goodness of adaptations. Species-replacement group selection makes sure the objective function is close to optimal. This group selection process takes place at enormously lower frequencies than those the intermind is adapting to, because if the timescales were  too similar, chaos would result. For example, in model predictive control, the model is updated on a much longer cycle than are the predictions derived from it.
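
To illustrate only the timescale separation being appealed to here (not the intermind itself), the sketch below runs a fast correction loop on every step while re-estimating the "model" (here just a plant gain) only every fiftieth step. It is closer to slow gain re-identification plus a proportional controller than to full model predictive control, and every name and constant in it is invented.

    import random

    # Toy two-timescale controller (all names and constants are invented).
    # Fast loop: corrects the output toward a setpoint on every step.
    # Slow loop: re-estimates the plant gain only every MODEL_PERIOD steps,
    # standing in for the much slower process that tunes the objective function.

    MODEL_PERIOD = 50
    true_gain = 2.0
    estimated_gain = 1.0
    setpoint, output = 10.0, 0.0

    for step in range(200):
        if step % MODEL_PERIOD == 0:
            # Slow timescale: crude re-identification of the plant gain.
            estimated_gain += 0.5 * (true_gain - estimated_gain)
        # Fast timescale: proportional correction using the current model.
        error = setpoint - output
        control = error / estimated_gain
        output += 0.5 * true_gain * control + random.gauss(0.0, 0.1)

    print(f"output after 200 steps: {output:.2f} (setpoint {setpoint})")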

12-25-2016
Today, when I was checking to see if I was using the word "cathexis" correctly (I wasn't), I discovered the Jungian term "collective unconscious," which sounds close to my "intermind" concept.

* 3-12-2018
I now question this argument. Why can't there be as many kinds of group selection as taxonomic levels? Admittedly, the higher-level processes would be mind-boggling in their slowness, but in evolution, there are no deadlines.

Monday, August 29, 2016

#15. The Insurance of the Heart [evolutionary psychology]

Red, theory; black, fact.

8-29-2016
We live in an uncertain world, the best reason to buy insurance while you can. Insurance is too good a trick for evolution to have missed. When food is plentiful, as it now is in my country, people get obese, as they are now doing here, so that they can live on their fat during possible future hard times. They don't do this consciously; it's in their genes.

However, eating has only an additive effect on your footprint, that is, on your demand for society's resources; how many kids you have affects your footprint multiplicatively. Thus, biological insurance taken out in children foregone during times of plenty would be more effective than insurance taken out in food consumed. Such a recourse exists (See Deprecated, Part 8); how well, and for how long, the family name you bequeath to your children is remembered affects your footprint exponentially. (I assume that a good or bad "name" affects the reproductive success of all your descendants having that name until you are finally forgotten.) Compared to exponential returns, everything else is chump change. ("Who steals my purse steals trash." - Shakespeare)
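
A back-of-envelope comparison, with invented numbers, of the three scalings claimed above: an additive bump from extra food, a one-time multiplicative factor from an extra child, and a per-generation factor from a remembered "name" that compounds across the whole lineage.

    # Back-of-envelope comparison of the three "footprint" effects (all numbers invented).
    GENERATIONS = 10
    base_children = 2          # children per descendant without any name effect
    r = 1.1                    # per-generation boost in reproductive success from a good "name"

    lineage_plain = base_children ** GENERATIONS                # descendants after GENERATIONS
    extra_food = 0.2                                            # additive: one-time bump to your own demand
    extra_child_factor = (base_children + 1) / base_children    # multiplicative: one-time factor of 1.5
    name_factor = r ** GENERATIONS                              # exponential: compounds every generation

    print(f"plain lineage size:            {lineage_plain}")
    print(f"additive effect (extra food):  +{extra_food}")
    print(f"multiplicative effect:         x{extra_child_factor:.2f} (fixed, however many generations)")
    print(f"'name' effect after {GENERATIONS} generations: x{name_factor:.2f} and still compounding")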

There remains the problem of food going to waste during times of plenty because social forces prevent a quick population increase. I conjecture that the extra energy available is invested by society in contests of various sorts (think of the Circus Maximus during the heyday of ancient Rome) that act as a proxy for evolutionary selection pressure, whereby the society accelerates its own evolution. Although natural selection pressure is maximal during the hard times, relying on these to do all your evolving for you can make you extinct; better to do some "preventative evolution" ahead of time.

Postscript 3
Since future environmental demands are partly unforeseeable, a good strategy would be to accelerate one's evolution in multiple directions, keeping many irons in the fire. Indeed, in the Olympics just concluded, thirty-nine sports were represented.

The power of these contests is maximized by using the outcomes as unconditioned stimuli that are associated with the family names of the winners and losers: the conditioned stimuli. In this way, one acquires a good or bad "name" that will affect the reproductive success of all who inherit it, an exponential effect. To ground this discussion biologically, it must be assumed that the contests are effective in isolating carriers of good or bad genes (technically, alleles), and that the resulting "name" is an effective proxy for natural selection in altering the frequency of said genes. To keep the population density stable during all this, winners must be balanced by losers. The winners are determined and branded in places like the ball diamond, and the losers are determined and branded in the courts.

Tuesday, August 16, 2016

#14. Three Stages of Abiogenesis [evolution, chemistry]

The iconic Miller experiment on the origin of life

Abiogenesis chemistry outside the box

Red, theory; black, fact.

Repair, growth, reproduction

"Abiogenesis" is the term for life originating from non-life.
Self-repair processes will be important in abiogenesis because life is made of metastable molecules that spontaneously break down and have to be continually repaired, which results in continuous energy dissipation. I will assume that self-repair in non-reproducing molecules is what eventually evolved into self-replication and life.

I also assume that the self-repair process was fallible, so that it occasionally introduced a mutation. Favorable mutations would have increased the longevity of the self-repairing molecules. Nevertheless, a given cohort of these molecules would relentlessly decrease in numbers, but they would have been continuously replenished in the juvenile form by undirected chemistry on the early Earth. Eventually, at least one of them was able to morph self-repair into self-replication, and life began. I call this process of refinement of non-reproducing molecules "longitudinal evolution" by analogy to a longitudinal cohort study in medical science. The process bears an interesting resemblance to carcinogenesis, where an accumulation of mutations in long-lived cells also leads to an ability to self-replicate autonomously. Carcinogenesis is difficult to prevent, and so must be considered a facile process, suggesting that longitudinal evolution to the threshold of life was also facile.

A simple self-repairing molecule

The "enzyme ring" shown above is an example of a possible self-repairing molecule that I dreamt up. It is a ring of covalently-bonded monomers that are individually large enough to have good potential for catalyzing reactions, like globular proteins, but are small enough to be present in multiple copies like the standardized building blocks that one wants for templated synthesis.

If the covalent bond between a given pair of monomers breaks, the ring is held together by multiple, parallel secondary valence forces and hydrophobic interactions, until the break can be repaired by the ring's catalytic members. With continuing lack of repair, the ring eventually opens completely, and effectively "dies." To bring the necessary catalysts to the break site reliably, no matter where it is, I assume that multiple copies of the repair enzyme are present in the ring, and are randomly distributed. I also assume a temperature cycle like that of polymerase chain reaction (PCR) technology, which repeatedly makes the ring single-stranded during the warm phase and allows it to collapse into a self-adhering, linear, double-stranded form during the cool phase. This could simply be driven by the day-night cycle. In the linear form, the catalytic sites are brought close to the covalent bond sites, and can repair any that are broken using small-molecule condensing agents such as cyanogen, which are arguably present on the early Earth under Miller-Urey assumptions. When the ring collapses, it does so at randomly selected fold diameters, so that only a few catalytic monomers are needed, since each will eventually land next to all covalent bonds in the ring except those nearby, which it cannot reach because of steric hindrance and/or bond angle restrictions. The other catalytic monomers in the ring will take care of these.
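
A small Monte Carlo sketch of the coverage claim in the last two sentences: with a randomly chosen fold diameter on each cool-phase collapse, a handful of randomly placed catalytic monomers eventually sits next to essentially every bond position, with each catalyst's own sterically excluded neighborhood covered by the others. Ring size, catalyst count, exclusion radius, and the fold geometry are all invented parameters.

    import random

    # Toy model of repair coverage in the "enzyme ring" (all parameters are assumptions).
    RING = 60                                   # number of monomers / inter-monomer bonds
    CATALYSTS = random.sample(range(RING), 4)   # positions of the catalytic monomers
    EXCLUDE = 2          # bonds this close to a catalyst are sterically out of its reach
    CYCLES = 500         # simulated warm/cool (day-night) cycles

    reachable = set()
    for _ in range(CYCLES):
        fold = random.randrange(RING)           # random fold diameter on this collapse
        for c in CATALYSTS:
            bond = (fold - c) % RING            # bond brought face-to-face with catalyst c
            dist = min((bond - c) % RING, (c - bond) % RING)
            if dist > EXCLUDE:                  # nearby bonds are left to the other catalysts
                reachable.add(bond)

    print(f"bonds repairable by at least one catalyst: {len(reachable)}/{RING}")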

How it would grow

The mutation process of the enzyme ring could result from random ring-expansion and ring-contraction events, the net effect being to replace one kind of monomer with another. Expansion would most likely begin with intercalation of a free monomer between the bound ones at the high-curvature regions at the ends of the linear conformation. The new monomer would be held in place by the multiple, weak parallel bonds alluded to above. It could become incorporated into the ring if it intercalates at a site where the covalent bond is broken. Two bond-repair events would then suffice to sew it into the ring. The ring-contraction process would be the time-reversed version of this.

In addition, an ability to undergo ring expansion allows the enzyme ring to start small and grow larger. This is important because, on entropy grounds, a long polymer is very unlikely to spontaneously cyclize. The energy-requiring repair process will bias the system to favor net ring expansion. Thus, we see how easily self-repair can become growth.

How it would reproduce

If large rings can split in two while in the linear conformation, the result is reproduction, without even a requirement for templated synthesis. Thus, we see how easily growth can become reproduction.

Onward to the bacterium

To get from reproduction-competent enzyme rings to something like a bacterium, the sequence of steps might have been multiplication, coacervate droplet formation, cooperation within the confines of the droplet, and specialization. The first specialist subtypes may have been archivists, forerunners of the circular genome of bacteria; and gatekeepers, forerunners of the plasma membrane with its sensory and transporter sites. Under these assumptions, DNA would not have evolved from RNA; both would represent independently originated lines of evolution, but forced to develop many chemical similarities by the demands of templated information transfer.

Back to chemistry

During the classic experiment in abiogenesis, the Miller-Urey experiment, amino acids were formed in solution, but no one has been able to show how these could subsequently have polymerized into functional protein catalysts. The origin of the monomers in my enzyme ring thus needs to be explained. However, the formation of relatively large amounts of insoluble, dark-colored "tars" is apparently facile under the Miller-Urey reaction conditions. The carbon in this tar is not necessarily lost to the system forever, like a coal deposit. In present-day anoxic environments relevant to the early Earth, at least three-quarters of modern biomass returns to the atmosphere as marsh gas. The driving force for these reactions seems to be not enthalpy reduction, but entropy increase.
Seen in the library of the University of Ottawa

Retrofractive synthesis

I therefore propose that if you wait long enough, and a diversity of trace-metal ions is present, then the abiogenesis tar will largely break down again, releasing large, prefab molecular chunks into solution. Reasoning from what is known of coal chemistry, these chunks may look something like asphaltenes, illustrated above, but relatively enriched in hydrophilic functional groups to make them water soluble. Hydrolysis reactions, for example, can simultaneously depolymerize a big network and introduce such groups (e.g., carboxylic acid groups). I propose that these asphaltene analogs are the optimally-sized monomers needed to form the enzyme ring.

Monday, August 15, 2016

#13. The Neural Code, Part II: the Thalamus [neuroscience, engineering]

A hypothetical scheme of the thalamus, a central part of your brain.

Red, theory; black, fact.

Thalamic processing as Laplace transform

More in Deprecated, Part 1. I postulate that the thalamus performs a Laplace transform (LT). All the connections shown are established anatomical facts, and are based on the summary diagram of lateral geniculate nucleus circuitry of Steriade et al. (Steriade, M., Jones E. G. and McCormick, D. A. (1997) Thalamus, 2 Vols. Amsterdam: Elsevier).  What I have added is feedback from cortex as a context-sensitive enabling signal for the analytical process. I originally guessed that the triadic synapses are differentiators, but now I think that they are function multipliers.
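
For reference, the transform being postulated here is, in its standard one-sided form,

    \[ F(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt, \qquad s = \sigma + i\omega . \]

In the scheme proposed below, f(t) would be the time-domain sensory input relayed by the thalamus, and the poles of F(s) the stimulus features that the cortex is currently primed to look for.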

Thalamic electrophysiology

The thalamic low-threshold spike (LTS) is a slow calcium spike that triggers further spiking that appears in extracellular recordings as a distinctive cluster of four or five sodium spikes. The thalamus also has an alternative response mode consisting of fast single spikes, which is observed at relatively depolarized membrane potentials.

The thalamic low-threshold spike as triggered by a hyperpolarization due to an electric current pulse injected into the neuron through the recording electrode. ACSF, normal conditions; TTX, sodium spikes deleted pharmacologically. From my thesis, page 167.

Network relationships of the thalamus

Depolarizing input to thalamus from cortex is conjectured to be a further requirement for the LTS-burst complex. This depolarization is conjectured to take the form of a pattern of spots, each representing a mask to detect a specific pole of the stimulus that the attentional system is looking for in that context.

The complex frequency plane is where LTs are graphed, usually as a collection of points. Some of these are "poles," where gain goes to infinity, and others are "zeroes," where gain goes to zero. I assume that the cerebral cortex-thalamus system takes care of the poles, while the superior and inferior colliculi take care of the zeroes. 

If this stimulus is found, the pattern of poles must still be recognized. This may be accomplished through a cortical AND-element wired up on Hebbian principles among cortical neurons. These neurons synapse on each other by extensive recurrent collaterals, which might be the anatomical substrate of the conjectured AND-elements. Explosive activation of the AND network would then be the signal that the expected stimulus has been recognized, as Hebb proposed long ago, and the signal would then be sent forward in the brain via white matter tracts to the motor cortex, which would output a collection of excitation spots representing the LT of the desired response.

Presumably, a reverse LT is then applied, possibly by the spinal grey, which I have long considered theoretically underemployed in light of its considerable volume. If we assume that the cerebral cortex is highly specialized for representing LTs, then motor outputs from cerebellum and internal globus pallidus would also have to be transformed to enable the cortex to represent them. In agreement with this, the motor cortex is innervated by prominent motor thalami, the ventrolateral (for cerebellar inputs) and the ventroanterior (for pallidal inputs).

Brain representation of Laplace transforms

The difficulty is to see how a two-dimensional complex plane can be represented on a two-dimensional cerebral cortex without contradicting the results of receptive field studies, which clearly show that the two long dimensions of the cortex represent space in egocentric coordinates. This just leaves the depth dimension for representing the two dimensions of complex frequency.

03-01-2020:
A simple solution is that the complex frequency plane is tiled by the catchment basins of discrete, canonical poles, and all poles in a catchment basin are represented approximately by the nearest canonical pole. It then becomes possible to distinguish the canonical poles in the cerebral cortex by the labelled-line mechanism (i.e., by employing different cell-surface adhesion molecules to control synapse formation.)

Recalling that layer 1 of cortex is mostly processes, this leaves us with five cortical cell layers not yet assigned to functions. Four of them might correspond to the four quadrants of  the complex frequency plane, which differ qualitatively in the motions they represent. The two granule-cell layers 2 and 4 are interleaved with the two pyramidal-cell layers 3 and 5. The two granule layers might be the top and bottom halves of the left half-plane, which represents decaying, stabilized motions. The two pyramidal layers might represent the top and bottom halves of the right half-plane, which represents dangerously growing, unstable motions. Since the latter represent emergency conditions, the signal must be processed especially fast, requiring fast, large-diameter axons. Producing and maintaining such axons requires correspondingly large cell bodies. This is why I assign the relatively large pyramidal cells to the right half-plane.

Intra-thalamic operations

It is beginning to look like the thalamus computes the Laplace transform just the way it is defined: the integral of the product of the input time-domain function and an exponentially decaying or growing sinusoid (eigenfunction). A pole would be recognized after a finite integration time as the integrand rising above a threshold. This thresholding is plausibly done in cortical layer 4, against a background of elevated inhibition controlled by the recurrent layer-6 collaterals that blocks intermediate calculation results from propagating further into the cortex. The direct projections from layer 6 down to thalamus would serve to trigger the analysis and rescale eigenfunction tempo to compensate for changes in behavioral tempo. Reverberation of LTS-bursting activity between thalamic reticular neurons and thalamic principal neurons would be the basis of the oscillatory activity involved in implementing the eigenfunctions. This is precedented by the spindling mechanism and the phenomenon of Parkinsonian tremor cells.
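
A minimal numerical sketch of that computation, under assumed parameters: multiply the input by an exponentially decaying or growing sinusoid (the eigenfunction), accumulate the running integral over a finite window, and call it a "pole" when the magnitude of the accumulated product crosses a threshold. No thalamic dynamics are modeled; the sketch only shows that finite-time correlation against exp(-st) does single out an input whose own complex frequency lies near s.

    import numpy as np

    # Finite-time "pole detection" sketch (all parameters are illustrative assumptions).
    dt = 1e-3
    t = np.arange(0.0, 0.5, dt)                    # 500 ms integration window

    # Input: a damped oscillation with complex frequency s0 = -4 + 2*pi*10j
    s0 = -4.0 + 2j * np.pi * 10.0
    x = np.real(np.exp(s0 * t))

    def running_integral(signal, s_test):
        """Integrate signal * exp(-s_test * t); a large magnitude signals a pole near s_test."""
        return np.abs(np.cumsum(signal * np.exp(-s_test * t)) * dt)

    threshold = 0.05
    for s_test in (-4.0 + 2j * np.pi * 10.0,        # matched eigenfunction
                   -4.0 + 2j * np.pi * 30.0):       # mismatched eigenfunction
        detected = np.any(running_integral(x, s_test) > threshold)
        print(f"s_test = {s_test}: pole detected = {detected}")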

Mutual inhibition of reticular thalamic neurons would be the basis of the integrator, and multiplication of functions would be done by silent inhibition in the triadic synapses (here no longer considered to be differentiators) via the known disinhibitory pathway from the reticular thalamus.

A negative feedback system will be necessary to dynamically rejig the thalamus so that the same pole maps to the same spot despite changes in tempo. Some of the corticothalamic cells (layer 6) could be part of this system (layer 6 cells are of two quite different types), as well as the prominent cholinergic projections to the RT.

Consequences for object recognition

The foregoing system could be used to extract objects from the incoming data by in effect assuming that the elements or features of an object always share the same motion and therefore will be represented by the same set of poles. An automatic process of object extraction may therefore be implemented as a tendency for Hebbian plasticity to involve the same canonical pole at two different cortical locations that are connected by recurrent axon collaterals.

Sunday, August 7, 2016

#12. The Neural Code, Part I: the Hippocampus [neuroscience, engineering]

Red, theory; black, fact.

"Context information" is often invoked in neuroscience theory as an address for storing more specific data in memory, such as whatever climbing fibers carry into the cerebellar cortex (Marr theory), but what exactly is context, as a practical matter?

First, it must change on a much longer timescale than whatever it addresses. Second, it must also be accessible to a moving organism that follows habitual, repetitive pathways in patrolling its territory. Consideration of the mainstream theory that the hippocampus prepares a cognitive map of the organism's spatial environment suggests that context is a set of landmarks. It seems that a landmark will be any stimulus that appears repetitively. Since only rhythmically repeating functions have a classical discrete-frequency Fourier transform, the attempt to calculate such a transform could be considered a filter for extracting rhythmic signals from the sensory input. 

However, this is not enough for a landmark extractor because landmark signals are only repetitive, not rhythmic. Let us suppose, however, that variations in the intervals between arrivals at a given landmark are due entirely to programmed, adaptive variations in the overall tempo of the organism's behavior. A tempo increase will upscale all incoming frequencies by the same factor, and a tempo decrease will downscale them all by the same factor. Since these variations originate within the organism, the organism could send a "tempo efference copy" to the neuronal device that calculates the discrete Fourier transform, to slide the frequency axis left or right to compensate for tempo variations. 

Thus, the same landmark will always transform to the same set of activated spots in the frequency-amplitude-phase volume. I conjecture that the hippocampus calculates a discrete-frequency Fourier transform of all incoming sensory data, with the lowest frequency represented ventrally and the highest dorsally, and with a linear temporal spectrum represented in between.
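
Here is a sketch of the tempo-compensation idea with invented numbers: the same periodically encountered landmark, run at 1.5 times the behavioral tempo, peaks at a different frequency, but dividing the frequency axis by the efference-copy tempo factor puts the peak back on the original spot.

    import numpy as np

    # Tempo-compensated landmark spectrum (toy example; all values are assumptions).
    fs = 100.0                                   # samples per second
    t = np.arange(0.0, 20.0, 1.0 / fs)

    def landmark_signal(tempo):
        """Encounter the same landmark periodically; 'tempo' rescales all intervals."""
        return np.cos(2 * np.pi * 0.5 * tempo * t)   # baseline landmark rate: 0.5 Hz

    def peak_frequency(x, tempo_copy=1.0):
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs) / tempo_copy   # slide the frequency axis
        return freqs[np.argmax(spectrum[1:]) + 1]                # ignore the DC bin

    for tempo in (1.0, 1.5):
        raw = peak_frequency(landmark_signal(tempo))
        compensated = peak_frequency(landmark_signal(tempo), tempo_copy=tempo)
        print(f"tempo {tempo}: raw peak {raw:.2f} Hz, compensated peak {compensated:.2f} Hz")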

The negative feedback device that compensates tempo variations would be the loop through medial septum. The septum is the central hub of the network in which the EEG theta rhythm can be detected. This rhythm may be a central clock of unvarying frequency that serves as a reference for measuring tempo variations, possibly by a beat-frequency principle. 

The hippocampus could calculate the Fourier transform by exploiting the mathematical fact that a sinusoidal function differentiated four times in succession gives exactly the starting function, if its amplitude and frequency are both numerically equal to one. This could be done by the five-synapse loop from dentate gyrus to hippocampal CA3 to hippocampal CA1 to subiculum to entorhinal cortex, and back to dentate gyrus. The dentate gyrus looks anatomically unlike the others and may be the input site where amplitude standardization operations are performed, while the other four stages would be the actual differentiators. 
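
The identity being invoked is, for f(t) = A sin(ωt),

    \[ \frac{d^{4}}{dt^{4}} \bigl[ A \sin(\omega t) \bigr] = A\, \omega^{4} \sin(\omega t), \]

so four ideal, unit-gain differentiations return exactly the starting function when ω = 1, for any amplitude; if each stage instead has gain g, the self-mapping condition becomes (gω)^4 = 1, which is presumably what the gain gradients described in the next paragraph exploit so that different spots in the array select different frequency and output-amplitude combinations.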

Differentiation would occur by the mechanism of a parallel shunt pathway through slowly-responding inhibitory interneurons, known to be present throughout the hippocampus. The other two spatial dimensions of the hippocampus would represent frequency and amplitude by setting up gradients in the gain of the differentiators. A given spot in the array maps the input function to itself only for one particular combination of frequency and transformed (i.e., output) amplitude. 

The self-mapping sets up a reverberation around the loop that makes the spot stand out functionally. All the concurrently active spots would constitute the context. This context could in principle reach the entire cerebral cortex via the fimbria-fornix, mammillary bodies, and tuberomammillary nucleus of the hypothalamus, the latter being histaminergic.

The cortex may contain a novelty-detection function, source of the well-documented mismatch negativity found in oddball evoked-potential experiments. A stimulus found to be novel would go into a short term memory store in cortex. If a crisis develops while it is there, it is changed into a flash memory and wired up to the amygdala, which mediates visceral fear responses. In this way, a conditioned fear stimulus could be created. If a reward registers while the stimulus is in short term memory, it could be converted to a conditioned appetitive stimulus by a similar mechanism.

 I conjecture that all a person's declarative and episodic memories together are nothing more nor less than those that confer conditioned status on particular stimuli.

To become such a memory, a stimulus must first be found to be novel, and this is much less likely in the absence of a context signal; to put it another way, it is the combination of the context signal and the sensory stimulus that is found to be novel. Absent the context, almost no simple stimulus will be novel. This may be the reason why at least one hippocampus must be functioning if declarative or episodic memories are to be formed.

Wednesday, August 3, 2016

#11. Revised Motor Scheme [neuroscience]


How skilled behavior may be generated, on the assumption that it is acquired by an experimentation-like process.

Red, theory; black, fact.

Please find above a revised version of the motor control theory presented in the previous post. The revision was necessitated by the fact that there is no logical reason why a motor command cannot go to both sides of the body at once to produce a midline-symmetrical movement. The prediction is that midline-symmetrical movements are acquired one side at a time whenever the controlling corticofugal pathway allows the two sides to move independently.

Saturday, July 30, 2016

#10. The Two–test-tube Experiment: Part II [neuroscience]

Red, theory; black, fact.

At this point we have a problem. The experimenting-brain theory predicts zero hard-wired asymmetries between the hemispheres. However, the accepted theory of hemispheric dominance postulates that this arrangement allows us to do two things at once, one task with the left hemisphere and the other task with the right. The accepted theory is basically a parsimony argument. However, this argument predicts huge differences between the hemispheres, not the subtle ones actually found.

My solution is that hard-wired hemispheric dominance must be seen as an imperfection of symmetry in the framework of the experimenting brain caused by the human brain being still in the process of evolving, combined with the hypothesis that brain-expanding mutations individually produce small and asymmetric expansions. (See Post 45.) Our left-hemispheric speech apparatus is the most asymmetric part of our brain and these ideas predict that we are due for another mutation that will expand the right side, thereby matching up the two sides, resulting in an improvement in the efficiency of operant conditioning of speech behavior.

These ideas also explain why speech defects such as lisping and stuttering are so common and slow to resolve, even in children, who are supposed to be geniuses at speech acquisition.
This is how the brain would have to work if fragments of skilled behaviors are randomly stored in memory on the left or right side, reflecting the possibility that the two hemispheres play experiment versus control, respectively, during learning.
The illustration shows the theory of motor control I was driven to by the implications of the theory of the dichotomously experimenting brain already outlined. It shows how hemispheric dominance can be reversed independently of the side of the body that should perform the movement specified by the applicable rule of conduct in the controlling hemisphere. The triangular device is a summer that converges the motor outputs of both hemispheres into a common output stream that is subsequently gated into the appropriate side of the body. This arrangement cannot create contention because at any given time, only one hemisphere is active. Anatomically, and from stroke studies, it certainly appears that the outputs of the hemispheres must be crossed, with the left hemisphere only controlling the right body and vice-versa.

However, my theory predicts that in healthy individuals, either hemisphere can control either side of the body, and the laterality of control can switch freely and rapidly during skilled performance so as to always use the best rule of conduct at any given time, regardless of the hemisphere in which it was originally created during REM sleep.

The first bit (which hemisphere is dominant) is calculated and stored in the basal ganglia. It would be output from the reticular substantia nigra (SNr) and gate sensory input to the thalamus to favor one hemisphere or the other, by means of actions at the reticular thalamus and the intermediate grey of the superior colliculus. The second bit (which side of the body performs the movement) would be stored in the cerebellar hemispheres and gate motor output to one side of the body or the other, at the red nucleus. Conceivably, the two parts of the red nucleus, the parvocellular and the magnocellular, correspond to the adder and switch, respectively, that are shown in the illustration.

Under these assumptions, the corpus callosum is needed only to distribute priming signals from the motor/premotor cortices to activate the rule that will be next to fire, without regard for which side that rule happens to be on. The callosum would never be required to carry signals forward from sensory to motor areas. I see that as the time-critical step, and it would never depend on getting signals through the corpus callosum, which is considered to be a signaling bottleneck.

How would the basal ganglia identify the "best" rule of conduct in a given context? I see the dopaminergic compact substantia nigra (SNc) as the most likely place for a hemisphere-specific "goodness" value to be calculated after each rule firing, using hypothalamic servo-error signals processed through the habenula as the main input for this. The half of the SNc located in the inactive hemisphere would be shut down by inhibitory GABAergic inputs from the adjacent SNr. The dopaminergic nigrostriatal projection would permanently potentiate simultaneously-active corticostriatal inputs (carrying context information) to medium spiny neurons (MSNs) of enkephalin type via a crossed projection, and to MSNs of substance-P type via uncrossed projections. The former MSN type innervates the external globus pallidus (GPe), and the latter type innervates the SNr. These latter two nuclei are inhibitory and innervate each other.

I conjecture that this arrangement sets up a winner-take-all kind of competition between GPe and SNr, with choice of the winner being exquisitely sensitive to small historical differences in dopaminergic tone between hemispheres. The "winner" is the side of the SNr that shuts down sensory input to the hemisphere on that side. The mutually inhibitory arrangement could also plausibly implement hysteresis, which means that once one hemisphere is shut down, it stays shut down without the need for an ongoing signal from the striatum to keep it shut down.
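
As a minimal sketch of winner-take-all with hysteresis (a toy model; the gains, the update rule, and the clamping are all assumptions, not GPe/SNr physiology): two mutually inhibitory units are nudged by a small transient bias, one of them wins, and the decision then persists on its own after the bias is withdrawn.

    # Two mutually inhibitory units with self-sustaining hysteresis (toy model).
    # All constants are illustrative assumptions.

    def settle(a, b, bias_a=0.0, bias_b=0.0, steps=200):
        """Relax two mutually inhibitory units; returns their final activities."""
        for _ in range(steps):
            a += 0.1 * (1.0 + bias_a - a - 2.0 * b)   # each unit has tonic drive, decays,
            b += 0.1 * (1.0 + bias_b - b - 2.0 * a)   # and is inhibited by the other (weight 2.0)
            a, b = max(a, 0.0), max(b, 0.0)
        return a, b

    # A small, transient "dopaminergic" bias picks the winner...
    a, b = settle(0.5, 0.5, bias_a=0.2)
    print(f"after biased settling: a={a:.2f}, b={b:.2f}")

    # ...and the decision persists after the bias is gone (hysteresis),
    # until a strong reset (e.g., the STN 'refresh' signal) re-opens the competition.
    a, b = settle(a, b)
    print(f"after bias removed:    a={a:.2f}, b={b:.2f}")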

Each time the cerebral cortex outputs a motor command, a copy goes to the subthalamic nucleus (STN) and could plausibly serve as the timing signal for a "refresh" of the hemispheric dominance decision based on the latest context information from cortex. The STN signal presumably removes the hysteresis mentioned above, very temporarily, then lets the system settle down again into possibly a new state.

We now need a system that decides that something is wrong, and that the time to experiment has arrived. This could plausibly be the role of the large, cholinergic interneurons of the striatum. They have a diverse array of inputs that could potentially signal trouble with the status quo, and could implement a decision to experiment simply by reversing the hemispheric dominance prevailing at the time. Presumably, they would do this by a cholinergic action on the surrounding MSNs of both types.

Finally, there is the second main output of the basal ganglia to consider, the inner pallidal segment (GPi). This structure is well developed in primates such as humans but is rudimentary in rodents and even in the cat, a carnivore. It sends its output forward, to motor thalamus. I conjecture that its role is to organize the brain's knowledge base to resemble block-structured programs. All the instructions in a block would be simultaneously primed by this projection. The block identifier may be some hash of the corticostriatal context information. A small group of cells just outside the striatum called the claustrum seems to have the connections necessary for preparing this hash. Jump rules, that is, rules of conduct for jumping between blocks, would not output motor commands, but block identifiers, which would be maintained online by hysteresis effects in the basal ganglia.
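
To make the block/jump distinction concrete, here is a toy sketch in which every name and rule is invented: ordinary if-then rules chain within a block, while a "jump rule" emits a block identifier (here literally a hash of the context, standing in for the conjectured claustral hash) rather than a motor command, and the active identifier is simply held as persistent state in place of the hysteresis described above.

    # Toy block-structured rule base (all rule content and names are invented).
    from hashlib import sha1

    def block_id(context: str) -> str:
        """Stand-in for the conjectured claustrum 'hash' of corticostriatal context."""
        return sha1(context.encode()).hexdigest()[:8]

    RULES = {
        block_id("kitchen"): [("see kettle", "grasp kettle"),
                              ("kettle grasped", "fill kettle")],
        block_id("doorbell"): [("hear bell", "walk to door"),
                               ("at door", "open door")],
    }

    active_block = block_id("kitchen")        # maintained online (hysteresis in the basal ganglia)

    def step(sensed: str):
        global active_block
        new_block = block_id(sensed) if block_id(sensed) in RULES else None
        if new_block and new_block != active_block:
            active_block = new_block          # a "jump rule" fired: switch blocks, no motor output
            return f"jump -> block {active_block}"
        for condition, action in RULES[active_block]:
            if condition == sensed:           # within-block if-then rule fires
                return action
        return None

    for event in ("see kettle", "doorbell", "hear bell"):
        print(event, "->", step(event))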

The cortical representation of jump rules would likely be located in medial areas, such as Brodmann 23, 24, 31, and 32. BA23-24 is classed as limbic system, and BA31-32 is situated between this and neocortex. This arrangement suggests that, seen as a computer, the brain is capable of executing programs with three levels of indentation, not counting whatever levels may be encoded as chromatin marks in the serotonergic neurons. Dynamic changes in hemispheric dominance might have to occur independently in neocortex, medial cortex, and limbic system.

Sunday, July 24, 2016

#9. The Two–test-tube Experiment: Part I [neuroscience]

Your Brain is Like This.

Red, theory; black, fact.

The motivating challenge of this post is to explain the hemispheric organization of the human brain. That is, to explain why we seem to have two very similar brains in our heads, the left side and the right side.

Systems that rely on the principle of trial-and-error must experiment. The genetic intelligence mentioned in previous posts would have to experiment by mutation/natural selection. The synaptic intelligence would have to experiment by operant conditioning. I propose that both these experimentation processes long ago evolved into something slick and simplified that can be compared to the two–test-tube experiment beloved of lab devils everywhere.

Remember that an experiment must have a control, because "everything is relative." Therefore, the simplest and fastest experiment in chemistry that has any generality is the two-test-tube experiment; one tube for the "intervention," and one tube for the control. Put mixture c in both tubes, and add chemical x to only the intervention tube. Run the reaction, then hold the two test tubes up to the light and compare the contents visually. (Remember that, ultimately, the visual system only detects contrasts.) Draw your conclusions.

My theory is basically this: the two hemispheres of the brain are like the two test tubes. Moreover, the two copies of a given chromosome in a diploid cell are also like the two test tubes. In both systems, which is which varies randomly from experiment to experiment, to prevent phenomena analogous to screen burn. The hemisphere that is dominant for a particular action is the last one that produced an improved result when control passed to it from the other. The allele that is dominant is the last one that produced an improved result when it got control from the other. Chromosomes and hemispheres will mutually inhibit to produce winner-take-all dynamics in which at any given time only one is exposed to selection, but it is fully exposed. 

These flip-flops do not necessarily involve the whole system, but may be happening independently in each sub-region of a hemisphere or chromosome (e.g., Brodmann areas, alleles). Some objective function, expressing the goodness of the organism's overall adaptation, must be recalculated after each flip-flop, and additional flip-flopping suppressed if an improvement is found when the new value is compared to a copy of the old value held in memory. In case of a worsening of the objective function, you quickly flip back to the allele or hemisphere that formerly had control, then suppress further flip-flopping for a while, as before.
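
As a sketch of the proposed control law (an assumption-laden caricature, not a claim about actual neural or genetic machinery): flip which copy has control, re-evaluate the objective function, keep the flip if things improved, and flip back and lock out further flipping for a while if they got worse.

    import random

    # Toy "two-test-tube" hill climber over which of two variants has control.
    # The objective function, noise level, and refractory period are illustrative assumptions.

    quality = {"A": 0.4, "B": 0.7}           # hidden "goodness" of the two variants (alleles/hemispheres)

    def objective(variant: str) -> float:
        return quality[variant] + random.gauss(0.0, 0.05)   # noisy evaluation of overall adaptation

    in_control, best_seen = "A", objective("A")
    refractory = 0

    for trial in range(30):
        if refractory > 0:
            refractory -= 1
            continue
        candidate = "B" if in_control == "A" else "A"        # flip-flop: the other copy takes control
        score = objective(candidate)
        if score > best_seen:                                # improvement: keep the new controller
            in_control, best_seen = candidate, score
        else:                                                # worsening: flip back, then lock out
            refractory = 5

    print("variant left in control:", in_control)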

The foregoing implies multiple sub-functions, and these ideas will not be compelling unless I specify a brain structure that could plausibly carry out each sub-function. For example, the process of comparing values of the objective function achieved by left and right hemispheres in the same context would be mediated by the nigrostriatal projection, which has a crossed component as well as an uncrossed component. More on this next post.

Sunday, July 17, 2016

#8. What is Intelligence? Part II. Human Brain as Knowledge Base [neuroscience, engineering]

Red, theory; black, fact.

7-17-2016
A previous post's discussion of the genetic intelligence will provide the framework for this post, in which functions will be assigned to brain structures by analogy.

The CCE/structural-gene implementation of the if-then rule of AI appears to have three realizations in neuroscience. These are as follows: the single neuron (dendrites, sensors; axon, effectors); the basal ganglia (striatum, sensors; globus pallidus, effectors); and the corticothalamic system (post-Rolandic cortex, sensors; pre-Rolandic cortex, effectors).

Taking the last system first, the specific association of a pattern recognizer to an output to form a rule of conduct would be implemented by white-matter axons running forward in the brain. Remember that the brain's sensory parts lie at the back, and its motor, or output, parts lie at the front.

The association of one rule to the next within a block of instructions would be by the white-matter axons running front-to-back in the brain. Since the brain has no clock, unlike a computer, the instructions must all be of the if-then type even in a highly automatic block, so each rule fires when it sees evidence that the purpose of the previous rule was accomplished. This leads to a pleasing uniformity, where all instructions have the same form. 

This also illustrates how a slow knowledge base can be morphed into something surprisingly close to a fast algorithm, by keeping track of the conditional firing probabilities of the rules, and rearranging the rules in their physical storage medium so that high conditional probabilities correspond to small distances, and vice-versa.

However, in a synaptic intelligence, the "small distances" would be measured on the voltage axis relative to firing threshold, and would indicate a high readiness to fire on the part of some neuron playing the role of decision element for an if-then rule, if the usual previous rule has fired recently. An external priming signal having the form of a steady excitation, disinhibition, or deinactivation could produce this readiness. Inhibitory inputs to thalamus or excitatory inputs to cortical layer one could be such priming inputs.
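
A sketch of the "distance on the voltage axis" idea, with invented rules and numbers: each rule's decision element rests some distance below firing threshold, and a priming input from the rule that usually precedes it shrinks that distance, so the habitual successor is the one that needs the least additional evidence to fire.

    # Toy model of rule priming by subthreshold depolarization (all numbers are assumptions).

    THRESHOLD = 1.0

    rules = {
        "reach": {"resting": 0.55, "primed_by": "see_cup"},
        "grasp": {"resting": 0.50, "primed_by": "reach"},
        "lift":  {"resting": 0.50, "primed_by": "grasp"},
    }

    def readiness(rule: str, last_fired: str, priming: float = 0.4) -> float:
        """Membrane-potential stand-in: resting level plus priming if the usual predecessor just fired."""
        v = rules[rule]["resting"]
        if rules[rule]["primed_by"] == last_fired:
            v += priming
        return v

    last_fired = "reach"
    gaps = {r: THRESHOLD - readiness(r, last_fired) for r in rules}
    print("distance to threshold after 'reach' fired:", gaps)
    # 'grasp' now has the smallest gap, so a modest sensory confirmation
    # ("purpose of the previous rule accomplished") is enough to fire it next.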

The low-to-medium conditional firing probabilities would characterize if-then rules that act as jump instructions between blocks, and the basal ganglia appear to have the connections necessary to implement these. 

In Parkinson disease, the basal ganglia are disabled, and the symptoms are as follows: difficulty in getting a voluntary movement started, slowness of movements, undershoot of movements, few movements, and tremor. Except for the tremor, all these symptoms could be due to an inability to jump to the next instruction block. Patients are capable of some routine movements once they get started, and these may represent the output of a single-block program fragment.

Unproven, trial-basis rules of the "jump" type that are found to be useful could be consolidated by a burst of dopamine secretion into the striatum. Unproven, trial-basis rules of the "block" type found useful could be consolidated by dopamine secretion into prefrontal cortex. [The last two sentences express a new idea conceived during the process of keying in this post.] Both of these dopamine inputs are well established by anatomical studies.

(See Deprecated, Part 9)...influence behavior with great indirection, by changing the firing thresholds of other rules, that themselves only operate on thresholds of still other rules, and so on in a chain eventually reaching the rules that actually produce overt behavior. The chain of indirection probably passes medial to lateral over the cortex, beginning with the limbic system. (Each level of indirection may correspond to a level of indentation seen in a modern computer language such as C.) The mid-line areas are known to be part of the default-mode network (DMN), which is active in persons who are awake but lying passively and producing no overt behavior. The DMN is thus thought to be closely associated with consciousness itself, one of the greatest mysteries in neuroscience.

7-19-2016
It appears from this post and a previous post that you can take a professionally written, high-level computer program and parse it into a number of distinctive features, to borrow a term from linguistics. Such features already dealt with are the common use of if-then rules, block structuring, and levels of indentation. Each such distinctive feature will correspond to a structural feature in each of the natural intelligences, the genetic and the synaptic. Extending this concept, we would predict that features of object-oriented programming will be useful in assigning function to form in the two natural intelligences. 

For example, the concept of class may correspond to the Brodmann areas of the human brain, and the grouping of local variables with functions that operate on them may be the explanation of the cerebellum, a brain structure not yet dealt with. In the cerebellum, the local variables would be physically separate from their corresponding functions, but informationally bound to them by an addressing procedure that uses the massive mossy-fiber/parallel-fiber array.

Monday, July 4, 2016

#7. What is Intelligence? Part I. DNA as Knowledge Base [genetics, engineering]

Red, theory; black, fact.

I have concluded that the world contains three intelligences: the genetic, the synaptic, and the artificial. The first includes (See Deprecated, Part 10) genetic phenomena and is the scientifically-accessible reality behind the concept of God. The synaptic is the intelligence in your head, and seems to be the hardest to study and the one most in need of elucidation. The artificial is the computer, and because we built it ourselves, we presumably understand it. Thus, it can provide a wealth of insights into the nature of the other two intelligences and a vocabulary for discussing them.

Artificial intelligence systems are classically large knowledge bases (KBs), each animated by a relatively small, general-purpose program, the "inference engine." The knowledge bases are lists of if-then rules. The “if” keyword introduces a logical expression (the condition) that must be true to prevent control from immediately passing to the next rule, and the “then” keyword introduces a block of actions the computer is to take if the condition is true. Classical AI suffers from the problem that as the number of if-then rules increases, operation speed decreases dramatically due to an effect called the combinatorial explosion.

A genome can be compared to a KB in that it contains structural genes and cis-acting control elements (CCEs). The CCEs trigger the transcription of the structural genes into messenger RNAs in response to environmental factors, and these are then translated into proteins that have some effect on cell behavior. The analogy to a list of if-then rules is obvious. A CCE evaluates the “if” condition, and the conditionally translated protein enables the “action” taken by the cell if the condition is true.

Note that the structural gene of one rule precedes the CCE of the next rule along the DNA strand. Surely, would this circumstance not also represent information? However, what could it be used for? It could be used to order the rules along the DNA strand in the same sequence as the temporal sequence in which the rules are normally applied, given the current state of the organism’s world. This seems to be a possible solution to the combinatorial explosion problem, leading to much shorter delays on average for the transcriptase complex to arrive where it is needed. I suspect that someday, it will be to this specific arrangement that the word “intelligence” will refer.
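
A sketch of the payoff being claimed, under made-up transition statistics: if rules usually fire in one temporal order, laying them out along the strand in that order sharply reduces the average distance the transcription machinery must travel between consecutive firings, compared with a scrambled layout.

    import random

    # Toy estimate of transcriptase "travel cost" for two rule orderings along a strand.
    # The rule chain, the 90% figure, and the layouts are invented for illustration.

    CHAIN = ["wake", "forage", "eat", "groom", "sleep"]      # typical temporal firing sequence

    def travel_cost(order, trials=10_000):
        """Average distance along the strand between consecutively fired rules."""
        pos = {rule: i for i, rule in enumerate(order)}
        total = 0
        for _ in range(trials):
            # 90% of episodes follow the typical sequence; 10% deviate at random.
            seq = CHAIN if random.random() < 0.9 else random.sample(CHAIN, len(CHAIN))
            total += sum(abs(pos[a] - pos[b]) for a, b in zip(seq, seq[1:]))
        return total / trials

    scrambled = ["eat", "sleep", "wake", "groom", "forage"]
    print("scrambled layout cost:      ", round(travel_cost(scrambled), 2))
    print("temporally ordered cost:    ", round(travel_cost(CHAIN), 2))
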
The process of putting the rules into such a sequence may involve trial-and-error, with transposon jumping providing the random variation on which selection operates. A variant on this process would involve stabilization by methylation of recombination sites that have recently produced successful results. These results would initially be encoded in the organism's emotions, as a proxy for reproductive success. In this form, the signal can be rapidly amplified by interindividual positive feedback effects. It would then be converted into DNA methylation signals in the germ line. (See my post on mental illness for possible mechanisms.) DNA methylation is known to be able to cool recombination hot spots.

A longer-timescale process involving meiotic crossing-over may create novel rules of conduct by breaking DNA between promoter and structural gene of the same rule, a process analogous to the random-move generation discussed in my post on dreaming. Presumably, the longest-timescale process would be creating individual promoters and structural genes with new capabilities of recognition and effects produced, respectively. This would happen by point mutation and classical selection.
How would the genetic intelligence handle conditional firing probabilities in the medium to low range? This could be done by cross-linking nucleosomes via the histone side chains in such a way as to cluster the CCEs of likely-to-fire-next rules near the end of the relevant structural gene, by drawing together points on different loops of DNA. The analogy here would be to a science-fictional “wormhole” from one part of space to another via a higher-dimensional embedding space. In this case, “space” is the one-dimensional DNA sequence with distances measured in kilobases, and the higher-dimensional embedding space is the three-dimensional physical space of the cell nucleus.

The cross-linking is presumably created and/or stabilized by the diverse epigenetic marks known to be deposited in chromatin. Most of these marks will certainly change the electric charge and/or the hydrophobicity of amino acid residues on the histone side chains. Charge and hydrophobicity are crucial factors in ionic bonding between proteins. The variety of such changes that are possible may itself account for the diversity of the marks.

Mechanistically, there seems to be a great divide between the handling of high and of medium-to-low conditional probabilities. This may correspond with the usual block structure of algorithms, with transfer of control linear and sequential within a block, and by jump instruction between blocks.

Another way of accounting for the diversity of epigenetic marks, mostly due to the diversity of histone marks, is to suppose that they can be paired up into negative-positive, lock-key partnerships, each serving to stabilize by ionic bonding all the wormholes in a subset of the chromatin that deals with a particular function of life. The number of such pairs would equal the number of functions.

Their lock-key specificity would prevent wormholes, or jumps, from forming between different functions, which would cause chaos. If the eukaryotic cell is descended from a glob-like array of prokaryotes, with internal division of labor and specialization, then by one simple scheme, the specialist subtypes would be defined and organized by something like mathematical array indexes. For parsimony, assume that these array indexes are the different kinds of histone marks, and that they simultaneously are used to stabilize specialist-specific wormholes. A given lock-key pair would wormhole specifically across regions of the shared genome not needed by that particular specialist.

A secondary function of the array indexes would be to implement wormholes that execute between-block jumps within the specialist's own program-like KB. With consolidation of most genetic material in a nucleus, the histone marks would serve only to produce this secondary kind of jump while keeping functions separate and maintaining an informational link to the ancestral cytoplasmic compartment. The latter could be the basis of sorting processes within the modern eukaryotic cell.