Saturday, July 30, 2016

#10. The Two–test-tube Experiment: Part II [neuroscience]

Red, theory; black, fact.

At this point we have a problem. The experimenting-brain theory predicts zero hard-wired asymmetries between the hemispheres. However, the accepted theory of hemispheric dominance postulates that this arrangement allows us to do two things at once, one task with the left hemisphere and the other task with the right. The accepted theory is basically a parsimony argument. However, this argument predicts huge differences between the hemispheres, not the subtle ones actually found.

My solution is to view hard-wired hemispheric dominance as an imperfection of symmetry within the framework of the experimenting brain, caused by the fact that the human brain is still evolving, combined with the hypothesis that brain-expanding mutations individually produce small, asymmetric expansions. (See Post 45.) Our left-hemispheric speech apparatus is the most asymmetric part of our brain, so these ideas predict that we are due for another mutation that will expand the right side, matching up the two sides and thereby improving the efficiency of operant conditioning of speech behavior.

These ideas also explain why speech defects such as lisping and stuttering are so common and slow to resolve, even in children, who are supposed to be geniuses at speech acquisition.
This is how the brain would have to work if fragments of skilled behaviors are randomly stored in memory on the left or right side, reflecting the possibility that the two hemispheres play experiment versus control, respectively, during learning.
The illustration shows the theory of motor control I was driven to by the implications of the dichotomously experimenting brain already outlined. It shows how hemispheric dominance can be reversed independently of the side of the body that should perform the movement specified by the applicable rule of conduct in the controlling hemisphere. The triangular device is a summer that converges the motor outputs of both hemispheres into a common output stream, which is subsequently gated into the appropriate side of the body. This arrangement cannot create contention, because at any given time only one hemisphere is active. Anatomically, and from stroke studies, it certainly appears that the outputs of the hemispheres must be crossed, with the left hemisphere controlling only the right body and vice versa.

However, my theory predicts that in healthy individuals, either hemisphere can control either side of the body, and the laterality of control can switch freely and rapidly during skilled performance so as to always use the best rule of conduct at any given time, regardless of the hemisphere in which it was originally created during REM sleep.
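The summer-and-switch arrangement can be sketched as a toy computation. All names and values here are illustrative inventions of mine, not anatomy; the point is only that summing the two hemispheres' outputs creates no contention when one hemisphere is silent, and that the gating of the output to a body side is independent of which hemisphere was active:

```python
def motor_output(left_hemi_cmd, right_hemi_cmd, active_hemi, target_side):
    """Toy sketch: both hemispheres feed a single summed output stream
    (the 'summer'); a separate switch then gates that stream to one
    side of the body. Only one hemisphere is active at a time."""
    commands = {"L": left_hemi_cmd, "R": right_hemi_cmd}
    # The inactive hemisphere contributes zero, so summing cannot
    # produce contention between conflicting commands.
    summed = sum(cmd if hemi == active_hemi else 0.0
                 for hemi, cmd in commands.items())
    # The switch routes the common stream to either body side,
    # independently of which hemisphere supplied the command.
    return {"left_body": summed if target_side == "L" else 0.0,
            "right_body": summed if target_side == "R" else 0.0}
```

For example, a rule stored on the left can drive the right body, or the left body, with no change to the rule itself; only the switch setting differs.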

The first bit is calculated and stored in the basal ganglia. It would be output from the reticular substantia nigra (SNr) and gate sensory input to thalamus to favor one hemisphere or the other, by means of actions at the reticular thalamus and intermediate grey of the superior colliculus. The second bit would be stored in the cerebellar hemispheres and gate motor output to one side of the body or the other, at the red nucleus. Conceivably, the two parts of the red nucleus, the parvocellular and the magnocellular, correspond to the adder and switch, respectively, that are shown in the illustration.

Under these assumptions, the corpus callosum is needed only to distribute priming signals from the motor/premotor cortices to activate the rule that will be next to fire, without regard for which side that rule happens to be on. The callosum would never be required to carry signals forward from sensory to motor areas. I see that forward step as the time-critical one, and it would never depend on getting signals through the corpus callosum, which is considered to be a signaling bottleneck.

How would the basal ganglia identify the "best" rule of conduct in a given context? I see the dopaminergic compact substantia nigra (SNc) as the most likely place for a hemisphere-specific "goodness" value to be calculated after each rule firing, using hypothalamic servo-error signals, processed through the habenula, as the main input. The half of the SNc located in the inactive hemisphere would be shut down by inhibitory GABAergic inputs from the adjacent SNr. The dopaminergic nigrostriatal projection would permanently potentiate simultaneously active corticostriatal inputs (carrying context information) to medium spiny neurons (MSNs) of enkephalin type via a crossed projection, and to MSNs of substance-P type via uncrossed projections. The former MSN type innervates the external globus pallidus (GPe), and the latter type innervates the SNr. These latter two nuclei are inhibitory and innervate each other.

I conjecture that this arrangement sets up a winner-take-all kind of competition between GPe and SNr, with choice of the winner being exquisitely sensitive to small historical differences in dopaminergic tone between hemispheres. The "winner" is the side of the SNr that shuts down sensory input to the hemisphere on that side. The mutually inhibitory arrangement could also plausibly implement hysteresis, which means that once one hemisphere is shut down, it stays shut down without the need for an ongoing signal from the striatum to keep it shut down.

Each time the cerebral cortex outputs a motor command, a copy goes to the subthalamic nucleus (STN) and could plausibly serve as the timing signal for a "refresh" of the hemispheric dominance decision based on the latest context information from cortex. The STN signal presumably removes the hysteresis mentioned above, very temporarily, then lets the system settle down again into possibly a new state.

We now need a system that decides that something is wrong and that the time to experiment has arrived. This could plausibly be the role of the large cholinergic interneurons of the striatum. They have a diverse array of inputs that could potentially signal trouble with the status quo, and they could implement a decision to experiment simply by reversing the hemispheric dominance prevailing at the time. Presumably, they would do this by a cholinergic action on the surrounding MSNs of both types.

Finally, there is the second main output of the basal ganglia to consider, the internal pallidal segment (GPi). This structure is well developed in primates such as humans but is rudimentary in rodents and even in the cat, a carnivore. It sends its output forward, to the motor thalamus. I conjecture that its role is to organize the brain's knowledge base to resemble block-structured programs. All the instructions in a block would be simultaneously primed by this projection. The block identifier may be some hash of the corticostriatal context information. A small group of cells just outside the striatum, called the claustrum, seems to have the connections necessary for preparing this hash. Jump rules, that is, rules of conduct for jumping between blocks, would output not motor commands but block identifiers, which would be maintained online by hysteresis effects in the basal ganglia.
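The hashing idea can be illustrated with a toy sketch. The choice of SHA-256 and a set of named context features are my assumptions, standing in for whatever computation the claustrum actually performs; the essential properties are only that the identifier is compact and depends on the context as a whole, not on the order in which features arrive:

```python
import hashlib

def block_id(context_features):
    """Hypothetical block identifier: a short hash of the
    corticostriatal context information. Sorting first makes the
    identifier insensitive to the order of the features."""
    canon = "|".join(sorted(context_features))
    return hashlib.sha256(canon.encode()).hexdigest()[:8]

def fire_jump_rule(context_features):
    """A jump rule outputs a block identifier, not a motor command."""
    return {"kind": "block_id", "value": block_id(context_features)}
```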

The cortical representation of jump rules would likely be located in medial areas, such as Brodmann 23, 24, 31, and 32. BA23-24 is classed as limbic system, and BA31-32 is situated between this and neocortex. This arrangement suggests that, seen as a computer, the brain is capable of executing programs with three levels of indentation, not counting whatever levels may be encoded as chromatin marks in the serotonergic neurons. Dynamic changes in hemispheric dominance might have to occur independently in neocortex, medial cortex, and limbic system.

Sunday, July 24, 2016

#9. The Two–test-tube Experiment: Part I [neuroscience]

Your Brain is Like This.

Red, theory; black, fact.

The motivating challenge of this post is to explain the hemispheric organization of the human brain; that is, to explain why we seem to have two very similar brains in our heads, the left side and the right side.

Systems that rely on the principle of trial-and-error must experiment. The genetic intelligence mentioned in previous posts would have to experiment by mutation/natural selection. The synaptic intelligence would have to experiment by operant conditioning. I propose that both these experimentation processes long ago evolved into something slick and simplified that can be compared to the two–test-tube experiment beloved of lab devils everywhere.

Remember that an experiment must have a control, because "everything is relative." Therefore, the simplest and fastest experiment in chemistry that has any generality is the two-test-tube experiment: one tube for the "intervention," and one tube for the control. Put mixture c in both tubes, and add chemical x to only the intervention tube. Run the reaction, then hold the two test tubes up to the light and compare the contents visually. (Remember that ultimately, the visual system only detects contrasts.) Draw your conclusions.
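As a minimal sketch, the logic of the two-test-tube experiment reduces to a subtraction of readouts; the function names and numeric readout are illustrative assumptions:

```python
def two_tube_experiment(mixture, chemical_x, run_reaction):
    """Toy sketch: the same mixture goes into both tubes, chemical x is
    added to only the intervention tube, and the conclusion is drawn
    from the CONTRAST between the two readouts."""
    intervention = run_reaction(mixture + [chemical_x])
    control = run_reaction(mixture)
    return intervention - control  # positive => x improved the result
```

The subtraction is the whole point: neither tube's absolute readout means anything by itself, just as the visual system ultimately detects only contrasts.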

My theory is basically this: the two hemispheres of the brain are like the two test tubes. Moreover, the two copies of a given chromosome in a diploid cell are also like the two test tubes. In both systems, which is which varies randomly from experiment to experiment, to prevent phenomena analogous to screen burn. The hemisphere that is dominant for a particular action is the last one that produced an improved result when control passed to it from the other. The allele that is dominant is the last one that produced an improved result when it got control from the other. Chromosomes and hemispheres will mutually inhibit to produce winner-take-all dynamics in which at any given time only one is exposed to selection, but it is fully exposed. 

These flip-flops do not necessarily involve the whole system, but may be happening independently in each sub-region of a hemisphere or chromosome (e.g., Brodmann areas, alleles). Some objective function, expressing the goodness of the organism's overall adaptation, must be recalculated after each flip-flop, and additional flip-flopping suppressed if an improvement is found when the new value is compared to a copy of the old value held in memory. In case of a worsening of the objective function, you quickly flip back to the allele or hemisphere that formerly had control, then suppress further flip-flopping for a while, as before.
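A toy sketch of this flip-flop search, assuming a scalar objective function and only two competitors (the "suppress for a while" timing is collapsed into simple iteration):

```python
import random

def flip_flop_search(evaluate, steps=100):
    """Hedged sketch of the flip-flop scheme: two competing copies,
    'L' and 'R'. After each flip, the new objective value is compared
    with the remembered old one; an improvement keeps the new side,
    while a worsening means an immediate flip back."""
    active = random.choice("LR")
    best = evaluate(active)           # the remembered copy of the old value
    for _ in range(steps):
        trial = "R" if active == "L" else "L"
        value = evaluate(trial)
        if value > best:              # improvement: the trial side takes over
            active, best = trial, value
        # otherwise: flip straight back, i.e. keep `active` unchanged
    return active, best
```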

The foregoing implies multiple sub-functions, and these ideas will not be compelling unless I specify a brain structure that could plausibly carry out each sub-function. For example, the process of comparing values of the objective function achieved by left and right hemispheres in the same context would be mediated by the nigrostriatal projection, which has a crossed component as well as an uncrossed component. More on this next post.

Sunday, July 17, 2016

#8. What is Intelligence? Part II. Human Brain as Knowledge Base [neuroscience, engineering]

Red, theory; black, fact.

7-17-2016
A previous post's discussion of the genetic intelligence will provide the framework for this post, in which functions will be assigned to brain structures by analogy.

The CCE-structural-gene realization of the if-then rule of AI appears to have three counterparts in neuroscience: a single neuron (dendrites, sensors; axon, effectors), the basal ganglia (striatum, sensors; globus pallidus, effectors), and the corticothalamic system (post-Rolandic cortex, sensors; pre-Rolandic cortex, effectors).

Taking the last system first, the specific association of a pattern recognizer to an output to form a rule of conduct would be implemented by white-matter axons running forward in the brain. Remember that the brain's sensory parts lie at the back, and its motor, or output, parts lie at the front.

The association of one rule to the next within a block of instructions would be by the white-matter axons running front-to-back in the brain. Since the brain has no clock, unlike a computer, the instructions must all be of the if-then type even in a highly automatic block, so each rule fires when it sees evidence that the purpose of the previous rule was accomplished. This leads to a pleasing uniformity, where all instructions have the same form. 

This also illustrates how a slow knowledge base can be morphed into something surprisingly close to a fast algorithm, by keeping track of the conditional firing probabilities of the rules, and rearranging the rules in their physical storage medium so that high conditional probabilities correspond to small distances, and vice-versa.
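A greedy toy version of this rearrangement, assuming the conditional firing probabilities have already been tracked; the data layout is my invention, but the principle is the one in the text, high conditional probability mapped to small storage distance:

```python
def order_rules(rules, cond_prob):
    """Toy sketch: starting from the first rule, repeatedly place next
    the not-yet-placed rule with the highest conditional firing
    probability given the last rule placed, so likely successors end
    up physically adjacent in the storage medium."""
    ordered = [rules[0]]
    remaining = list(rules[1:])
    while remaining:
        nxt = max(remaining,
                  key=lambda r: cond_prob.get((ordered[-1], r), 0.0))
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered
```

This greedy pass is only a sketch; any scheme that correlates conditional probability with physical proximity would serve the argument.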

However, in a synaptic intelligence, the "small distances" would be measured on the voltage axis relative to firing threshold, and would indicate a high readiness to fire on the part of some neuron playing the role of decision element for an if-then rule, if the usual previous rule has fired recently. An external priming signal having the form of a steady excitation, disinhibition, or deinactivation could produce this readiness. Inhibitory inputs to thalamus or excitatory inputs to cortical layer one could be such priming inputs.
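In this picture, "readiness to fire" is simply distance to threshold on the voltage axis. A toy sketch (the millivolt values are illustrative, not measured):

```python
def rule_fires(resting_mv, threshold_mv, priming_mv, input_mv):
    """Toy sketch: a steady priming signal moves the decision neuron's
    potential toward threshold, so a primed rule needs only a small
    additional input to fire, while an unprimed rule does not."""
    return resting_mv + priming_mv + input_mv >= threshold_mv
```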

The low-to-medium conditional firing probabilities would characterize if-then rules that act as jump instructions between blocks, and the basal ganglia appear to have the connections necessary to implement these. 

In Parkinson disease, the basal ganglia are disabled, and the symptoms are as follows: difficulty in getting a voluntary movement started, slowness of movements, undershoot of movements, few movements, and tremor. Except for the tremor, all these symptoms could be due to an inability to jump to the next instruction block. Patients are capable of some routine movements once they get started, and these may represent the output of a single-block program fragment.

Unproven, trial-basis rules of the "jump" type that are found to be useful could be consolidated by a burst of dopamine secretion into the striatum. Unproven, trial-basis rules of the "block" type found useful could be consolidated by dopamine secretion into prefrontal cortex. [The last two sentences express a new idea conceived during the process of keying in this post.] Both of these dopamine inputs are well established by anatomical studies.

(See Deprecated, Part 9)...influence behavior with great indirection, by changing the firing thresholds of other rules, that themselves only operate on thresholds of still other rules, and so on in a chain eventually reaching the rules that actually produce overt behavior. The chain of indirection probably passes medial to lateral over the cortex, beginning with the limbic system. (Each level of indirection may correspond to a level of indentation seen in a modern computer language such as C.) The mid-line areas are known to be part of the default-mode network (DMN), which is active in persons who are awake but lying passively and producing no overt behavior. The DMN is thus thought to be closely associated with consciousness itself, one of the greatest mysteries in neuroscience.

7-19-2016
It appears from this post and a previous post that you can take a professionally written, high-level computer program and parse it into a number of distinctive features, to borrow a term from linguistics. Such features already dealt with are the common use of if-then rules, block structuring, and levels of indentation. Each such distinctive feature will correspond to a structural feature in each of the natural intelligences, the genetic and the synaptic. Extending this concept, we would predict that features of object-oriented programming will be useful in assigning function to form in the two natural intelligences. 

For example, the concept of class may correspond to the Brodmann areas of the human brain, and the grouping of local variables with functions that operate on them may be the explanation of the cerebellum, a brain structure not yet dealt with. In the cerebellum, the local variables would be physically separate from their corresponding functions, but informationally bound to them by an addressing procedure that uses the massive mossy-fiber/parallel-fiber array.

Monday, July 4, 2016

#7. What is Intelligence? Part I. DNA as Knowledge Base [genetics, engineering]

Red, theory; black, fact.

I have concluded that the world contains three intelligences: the genetic, the synaptic, and the artificial. The first includes (See Deprecated, Part 10) genetic phenomena and is the scientifically-accessible reality behind the concept of God. The synaptic is the intelligence in your head, and seems to be the hardest to study and the one most in need of elucidation. The artificial is the computer, and because we built it ourselves, we presumably understand it. Thus, it can provide a wealth of insights into the nature of the other two intelligences and a vocabulary for discussing them.

Artificial intelligence systems are classically large knowledge bases (KBs), each animated by a relatively small, general-purpose program, the "inference engine." The knowledge bases are lists of if-then rules. The “if” keyword introduces a logical expression (the condition) that must be true to prevent control from immediately passing to the next rule, and the “then” keyword introduces a block of actions the computer is to take if the condition is true. Classical AI suffers from the problem that as the number of if-then rules increases, operation speed decreases dramatically due to an effect called the combinatorial explosion.
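A minimal forward-chaining engine of this classical kind can be sketched in a few lines; the representation of rules as (condition, action) pairs is my assumption. Note that every cycle rescans the whole rule list, which is exactly where the slowdown comes from as the rule count grows:

```python
def run_kb(rules, initial_facts, max_cycles=100):
    """Toy inference engine: each rule is a (condition, action) pair,
    where `condition` tests the current fact set and `action` returns
    the facts to add if the condition is true. Cycles repeat until no
    rule produces anything new."""
    facts = set(initial_facts)
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:   # full rescan every cycle
            new = action(facts) - facts if condition(facts) else set()
            if new:
                facts |= new
                fired = True
        if not fired:                     # quiescence: nothing new fired
            break
    return facts
```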

A genome can be compared to a KB in that it contains structural genes and cis-acting control elements (CCEs). The CCEs trigger the transcription of the structural genes into messenger RNAs in response to environmental factors, and these are then translated into proteins that have some effect on cell behavior. The analogy to a list of if-then rules is obvious: a CCE evaluates the "if" condition, and the conditionally translated protein enables the "action" taken by the cell if the condition is true.

Note that the structural gene of one rule precedes the CCE of the next rule along the DNA strand. Would this arrangement not also represent information? But what could it be used for? It could be used to order the rules along the DNA strand in the same sequence as the temporal sequence in which the rules are normally applied, given the current state of the organism's world. This seems to be a possible solution to the combinatorial-explosion problem, leading to much shorter delays, on average, before the transcriptase complex arrives where it is needed. I suspect that someday, it will be to this specific arrangement that the word "intelligence" will refer.
The process of putting the rules into such a sequence may involve trial-and-error, with transposon jumping providing the random variation on which selection operates. A variant on this process would involve stabilization by methylation of recombination sites that have recently produced successful results. These results would initially be encoded in the organism's emotions, as a proxy for reproductive success. In this form, the signal can be rapidly amplified by interindividual positive-feedback effects. It would then be converted into DNA methylation signals in the germ line. (See my post on mental illness for possible mechanisms.) DNA methylation is known to be able to cool recombination hot spots.

A longer-timescale process involving meiotic crossing-over may create novel rules of conduct by breaking DNA between promoter and structural gene of the same rule, a process analogous to the random-move generation discussed in my post on dreaming. Presumably, the longest-timescale process would be creating individual promoters and structural genes with new capabilities of recognition and effects produced, respectively. This would happen by point mutation and classical selection.
How would the genetic intelligence handle conditional firing probabilities in the medium to low range? This could be done by cross-linking nucleosomes via the histone side chains in such a way as to cluster the CCEs of likely-to-fire-next rules near the end of the relevant structural gene, by drawing together points on different loops of DNA. The analogy here would be to a science-fictional "wormhole" from one part of space to another via a higher-dimensional embedding space. In this case, "space" is the one-dimensional DNA sequence with distances measured in kilobases, and the higher-dimensional embedding space is the three-dimensional physical space of the cell nucleus.

The cross-linking is presumably created and/or stabilized by the diverse epigenetic marks known to be deposited in chromatin. Most of these marks will certainly change the electric charge and/or the hydrophobicity of amino-acid residues on the histone side chains. Charge and hydrophobicity are crucial factors in ionic bonding between proteins. The variety of such changes that are possible could account for the diversity of the marks themselves.

Mechanistically, there seems to be a great divide between the handling of high and of medium-to-low conditional probabilities. This may correspond with the usual block structure of algorithms, with transfer of control linear and sequential within a block, and by jump instruction between blocks.

Another way of accounting for the diversity of epigenetic marks, mostly due to the diversity of histone marks, is to suppose that they can be paired up into negative-positive, lock-key partnerships, each serving to stabilize by ionic bonding all the wormholes in a subset of the chromatin that deals with a particular function of life. The number of such pairs would equal the number of functions.

Their lock-key specificity would prevent wormholes, or jumps, from forming between different functions, which would cause chaos. If the eukaryotic cell is descended from a glob-like array of prokaryotes, with internal division of labor and specialization, then by one simple scheme, the specialist subtypes would be defined and organized by something like mathematical array indexes. For parsimony, assume that these array indexes are the different kinds of histone marks, and that they simultaneously are used to stabilize specialist-specific wormholes. A given lock-key pair would wormhole specifically across regions of the shared genome not needed by that particular specialist.

A secondary function of the array indexes would be to implement wormholes that execute between-block jumps within the specialist's own program-like KB. With the consolidation of most genetic material in a nucleus, the histone marks would serve only to produce this secondary kind of jump, while keeping functions separate and maintaining an informational link to the ancestral cytoplasmic compartment. The latter could be the basis of sorting processes within the modern eukaryotic cell.