
Friday, May 19, 2017

#28. The Origin of Consciousness [neuroscience]

Red, theory; black, fact.

After perusing Gideon Rosenblatt's blog at the prompting of Google, I finally saw the need for this post.

I theorize that we begin life conscious only of our own emotions. Then the process of classical conditioning, first studied in animals, brings more and more of our environment into the circle of our consciousness, causing the contents of consciousness to become enriched in spatial and temporal detail. Thus, you are now able to be conscious of these words of mine on the screen. However, each stroke of each letter of each word of mine that now reaches your consciousness does so because, subjectively, it is "made of" pure emotion, and that emotion is yours.

Some analogies come to mind. Emotion as the molten tin that the typesetter pours into the mold, the casting process being classical conditioning and the copy the environmental data reported by our sense organs. Emotion as the bulk on one side of a fractal line and sensory data the bulk on the other side. Emotion as an intricately ramifying tree-like structure by which sensory details can send excitation down to the hypothalamus at the root and thus enter consciousness.

The status of "in consciousness" can in principle affect the cerebral cortex via the projections to cortex from the histaminergic tuberomammillary nucleus of the hypothalamus. Histamine is known to have an alerting effect on cortex, but to call it "alerting" may be to grossly undersell its significance. It may carry a consolidation signal for declarative, episodic, and flashbulb memory. Not for a second do I suppose all of that to be packed into the hippocampus, rather than being located in the only logical place for it: the vast expanse of the human cerebral cortex.

Sunday, February 12, 2017

#24. The Pictures in Your Head [neuroscience]

Red, theory; black, fact.

My post on the thalamus suggests that in thinking about the brain, we should maintain a sharp distinction between temporal information (signals most usefully plotted against time) and spatial information (signals most usefully plotted against space). Remember that the theory of General Relativity, which posits a unified space-time, produces appreciable effects only at energy and distance scales far from the quotidian.

In the thalamus post, I theorized about how the brain could tremendously data-compress temporal information using the Laplace transform, by which a continuous time function, classically containing an infinite number of points, can be re-represented as a mere handful of summarizing points called poles and zeroes, scattered on a two-dimensional plot called the complex frequency plane. Infinity down to a handful. Pretty good data compression, I'd say. The brain will tend to evolve data-compression schemes if these reduce the number of neurons needed for processing (I hereby assume that they always do), because neurons are metabolically expensive to maintain and evolution favors parsimony in the use of metabolic energy.
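To make the pole-zero idea concrete, here is a minimal numpy sketch (all constants arbitrary) of a damped oscillation, sampled at a thousand points, being carried by just three Laplace-domain numbers: one zero at -a and a conjugate pole pair at -a ± jw.

```python
import numpy as np

# A damped oscillation: classically an infinite set of points in time...
a, w = 0.5, 2.0                       # decay rate and angular frequency (arbitrary)
t = np.linspace(0.0, 10.0, 1001)
signal = np.exp(-a * t) * np.cos(w * t)

# ...but its Laplace transform, (s + a) / ((s + a)**2 + w**2), is pinned
# down by three numbers on the complex frequency plane:
zero = -a                                      # one zero
poles = np.array([-a + 1j * w, -a - 1j * w])   # one conjugate pole pair

# The time function can be regenerated from the pole positions alone:
# Re{exp((-a + jw) t)} = exp(-a t) cos(w t)
reconstructed = np.real(np.exp(poles[0] * t))

assert np.allclose(signal, reconstructed)      # 1001 samples from 3 numbers
```

The compression claim is exactly this: the 1001 samples carry no information beyond what the pole pair (and a residue) already encodes.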

Ultimately, the efficiency of the Laplace transform seems to come from the fact that naturally-occurring time functions tend to be pretty stereotyped and repetitious: a branch nodding in the wind, leaves on it oscillating independently and more rapidly, the whole performance decaying exponentially to stillness with each calming of the wind; an iceberg calving discontinuously into the sea; astronomical cycles of perfect regularity; and a bacterial population growing exponentially, then shifting gears to a regime of ever-slowing growth as resources become limiting, the whole sequence following what is called a logistic curve.
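The logistic example from this list can be written out directly; here is a small sketch with invented parameters, using the closed-form solution of dN/dt = rN(1 - N/K).

```python
import numpy as np

# Logistic growth: exponential at first, then ever-slowing as resources
# become limiting. r, K, and N0 are invented illustrative values.
r, K, N0 = 1.0, 1000.0, 10.0          # growth rate, carrying capacity, initial size
t = np.linspace(0.0, 15.0, 500)

# Closed-form solution of dN/dt = r N (1 - N/K):
N = K / (1.0 + ((K - N0) / N0) * np.exp(-r * t))

assert abs(N[0] - N0) < 1e-9          # starts at the initial population
assert abs(N[-1] - K) < 1.0           # saturates near the carrying capacity
assert np.allclose(N[:20], N0 * np.exp(r * t[:20]), rtol=0.05)  # early growth ~ exponential
```

The whole curve, like the other examples above, is specified by a handful of constants.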

Nature is very often described by differential equations, such as Maxwell's equations, those of General Relativity, and Schrödinger's equation, the three greats. Other differential equations describe growth and decay processes, oscillations, diffusion, and passive electrical and mechanical systems that store energy physically rather than chemically. A differential equation is one that contains at least one symbol representing the rate of change of a first variable versus a second variable. Moreover, differential equations seem to be relatively easy to derive from theories. The challenge is to solve the equation, not for a single number, but for a whole function giving the actual value of the first variable versus the second, for purposes of making quantitative, testable predictions, thereby allowing the theory itself to be tested. The Laplace transform greatly facilitates the solution of many of science's temporal differential equations, and these solutions are remarkably few and stereotyped: oscillations, growth/decay curves, and simple sums, magnifications, and/or products of these. Clearly, the complexity of the world comes not from its temporal information, but from its spatial information. However, spatial regularities that might be exploited for spatial data compression are weaker than in the temporal case.

The main regularity in the spatial domain seems to be hierarchical clustering. For an example of this, let's return to the nodding branch. Petioles, veins, and teeth cluster to form a leaf. Leaves and twigs cluster to form a branch. Branches and trunk cluster to form a tree. Trees cluster to form a forest. This spatially clustered aspect of reality is being exploited currently in an approach to machine intelligence called "deep learning," where the successive stages in the hierarchy of the data are learned by successive hidden layers of simulated neurons in a neural net. Data is processed as it passes through the stack of layers, with successive layers learning to recognize successively larger clusters, representing these to the next layer as symbols simplified to aid further cluster recognition. This technology is based on discoveries about how the mammalian visual system operates. (For the seminal paper in the latter field, see Hubel and Wiesel, Journal of Physiology, 1959, 148[3], pp 574-591.)
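The hierarchy of successively larger clusters can be caricatured in a few lines of numpy. Each layer here summarizes non-overlapping groups of k units from the layer below, so higher layers respond to progressively larger chunks of the input; k = 3 and the max-summary are arbitrary toy choices, not a claim about real networks.

```python
import numpy as np

# Each layer pools non-overlapping groups of k units from the layer below,
# so receptive fields grow geometrically with depth.
def pool(layer, k=3):
    return layer.reshape(-1, k).max(axis=1)

x = np.arange(27.0)    # 27 raw "pixels"
h1 = pool(x)           # 9 units, each seeing a 3-pixel cluster (a "leaf")
h2 = pool(h1)          # 3 units, each seeing 9 pixels (a "branch")
h3 = pool(h2)          # 1 unit, seeing all 27 pixels (the "tree")

assert (h1.shape, h2.shape, h3.shape) == ((9,), (3,), (1,))
assert h3[0] == x.max()
```

Real deep-learning layers learn their summaries rather than taking a fixed max, but the geometry of growing receptive fields is the same.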

Visual information passes successively through visual areas Brodmann 17, 18, and 19, with receptive fields becoming progressively larger and more complex, as would be expected from a hierarchical process of cluster recognition. The latter two areas, 18 and 19, are classed as association cortex, of which humans have the greatest amount of any primate. However, cluster recognition requires specialist neuron sub-types, each looking for a very particular stimulus. To cover even most of the cluster-type possibilities, a large number of different specialists must be trained up. This does not seem like very good data compression from the standpoint of metabolic cost savings. Thus, the evolution of better ability with spatial information should require many more new neurons than in the case of temporal information.

My hypothesis here is that what is conferred by the comparatively large human cerebral cortex, especially the association cortices, is not general intelligence but facility with using spatial information. We take it on and disgorge it like water-bombers. Think of a rock-climber sizing up a cliff face. Think of an architect, engineer, tool-and-die maker, or tradesperson reading a blueprint. Now look around you. Do we not have all these nice buildings to live and work in? Can any other animal claim as much? My hypothesis seems obvious when you look at it this way.

Mere possession of a well-developed sense of vision will not necessarily confer such ability with spatial information. The eyes of a predatory bird, for instance, could simply be gathering mainly temporal information modulated onto light, used as a servo error for dynamically homing in on prey. To make a difference, the spatial information has to have someplace to go when it reaches the higher brain. Conversely, our sense of hearing is far from useless in providing spatial information; we possess an elaborate network of brain-stem auditory centers for accomplishing exactly this. Clearly, the spatial/temporal issue is largely dissociable from the issue of sensory modality.

You may argue that the uniquely human power of language suggests that our cortical advantage is used for processing temporal information, because speech is a spaceless phenomenon that unfolds only in time. However, consider Wittgenstein's picture theory of meaning, which postulates that a statement shows its meaning by its logical structure: in effect, a statement is a picture of a state of affairs. Bottom line: language as currently understood is entirely consistent with my hypothesis that humans are specialized for processing spatial information.

Since fossil and comparative evidence suggests that our large brain is our most recently evolved attribute, it may well be evolving still. There may still be a huge existential premium on possession of improved spatial ability. For example, Napoleon's strategy for winning the decisive Battle of Austerlitz while badly outnumbered seems to have involved a lot of visualization. The cultural face of the zeitgeist may reflect this in shows and movies where the hero prevails through superior use of spatial information (e.g., Star Wars, Back to the Future, and many Warner Bros. cartoons). Many if not most of our competitive games take place on fields, courts, or boards, suggesting that they test the spatial abilities of the contestants. By now, the enterprising reader will be thinking, "All I have to do is emphasize the spatial [whatever that means], and I'll be a winner! What a great take-home!"

Let me know how it goes, because all this is just theory.

Monday, February 6, 2017

#23. Proxy Natural Selection: The God-shaped Gap at the Heart of Biology [genetics, evolution]

Red, theory; black, fact.

2-06-2017
As promised, here is my detailed and hypothetical description of the entity responsible for compensating for the fact that our microbial, insect, and rodent competitors evolve much faster than we do because of their shorter generation times. In these pages, I have been variously calling this entity the intermind, the collective unconscious, the mover of the zeitgeist, and the real, investigable system that the word "God" points to. I here recant my former belief that epigenetic marks are likely to be the basis of an information storage system sufficient to support an independent evolution-like process. I will assume that the new system, "proxy natural selection" (PNS), is DNA-based.

11-20-2017
The acronym PNS is liable to be confused with "peripheral nervous system," so a better acronym would be "PGS," meaning "post-zygotic gamete selection."

2-06-2017
First, a refresher on how standard natural selection works. DNA undergoes point mutations (I will deal with the other main type of mutation later) that add diversity to the genome. The developmental process translates the various genotypes into a somewhat diverse set of phenotypes. Existential selection then ensues from the interaction of these phenotypes with the environment, made chronically stringent by population pressure. Differential reproduction of phenotypes then occurs, leading to changes in gene frequencies in the population gene pool. Such changes are the essence of evolution.
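That refresher reduces to a few lines of code: differential reproduction alone is enough to shift allele frequencies. The fitness values, population size, and generation count below are invented purely for illustration.

```python
import random

# Differential reproduction in miniature: each generation, individuals
# reproduce in proportion to fitness. Numbers are illustrative only.
def next_generation(pool, fitness, size=1000, rng=random.Random(1)):
    weights = [fitness[allele] for allele in pool]
    return rng.choices(pool, weights=weights, k=size)  # reproduction ~ fitness

pool = ["A"] * 500 + ["a"] * 500       # start at 50/50
for _ in range(20):                    # twenty generations of selection
    pool = next_generation(pool, {"A": 1.1, "a": 1.0})

freq_A = pool.count("A") / len(pool)
assert freq_A > 0.6                    # the fitter allele has risen in frequency
```

The changing value of `freq_A` is exactly the "change in gene frequencies in the population gene pool" that the paragraph calls the essence of evolution.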

PNS assumes that the genome contains special if-then rules, perhaps implemented as cis-control-element/structural gene partnerships, that collectively simulate the presence of an objective function that dictates the desiderata of survival and replaces or stands in for existential selection. A given objective function is species-specific but has a generic resemblance across the species of a genus. The genus-averaged objective function evolves by species-replacement group selection, and can thus theoretically produce altruism between individuals. The if-then rules instruct the wiring of the hypothalamus during development, which thereby comes to dictate the organism's likes and dislikes in a way leading to species survival as well as (usually) individual survival. Routinely, however, some specific individuals end up sacrificed for the benefit of the species.

Here is how PNS may work. Crossing-over mutations during meiosis to produce sperm increase the diversity of the recombinotypes making up the sperm population. During subsequent fertilization and brain development, each recombinotype instructs a particular behavioral temperament, or idiosyncratotype. Temperament is assumed to be a set of if-then rules connecting certain experiences with the triggering of specific emotions. An emotion is a high-level, but in some ways stereotyped, motor command, the details of which are to be fleshed out during conscious planning before anything emerges as overt behavior. Each idiosyncratotype interacts with the environment and the result is proxy-evaluated by the hypothalamus to produce a proxy-fitness (p-fitness) measurement. The measurement is translated into blood-borne factors that travel from the brain to the gonads where they activate cell-surface receptors on the spermatogonia. Good p-fitness results in the recombination hot spots of the spermatogonia being stabilized, whereas poor p-fitness results in their further destabilization. 
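The claim that temperament is a set of if-then rules connecting experiences to primed emotions can be sketched as a simple lookup. Every entry below is pure invention, meant only to show the shape of such a rule set.

```python
# Temperament as if-then rules: experience in, primed emotion out.
# All rule contents here are invented for illustration.
rules = {
    "predator_scent": "fear",
    "infant_cry": "nurturance",
    "ripe_fruit": "appetite",
}

def temperament(experience):
    # A different idiosyncratotype would differ in the contents of `rules`.
    return rules.get(experience, "indifference")

assert temperament("predator_scent") == "fear"
assert temperament("novel_object") == "indifference"
```

On this picture, what PNS adjusts between generations is the table itself, not the lookup machinery.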

Thus, good p-fitness leads to good penetrance of the paternal recombinotype into viable sperm, whereas poor p-fitness leads to poor penetrance, because of many further crossing-over events. Changes in hotspot activity could possibly be due to changes in cytosine methylation status. The result is within-lifetime changes in idiosyncratotype frequencies in the population, leading to changes in the gross behavior of the population in a way that favors species survival in the face of environmental fluctuations on an oligogenerational timescale. On such a timescale, neither standard natural selection nor synapse-based learning systems are serviceable.

2-07-2017
The female version of crossing over may set up a slow, random process of recombination that works in the background to gradually erase any improbable statistical distribution of recombinotypes that is not being actively maintained by PNS.

7-29-2017
Here is a better theory of female PNS. First, we need a definition. PNS focus: a function that is the target of most PNS. Thus, in trees, the PNS focus is the bioelaboration of natural pesticides. In human males, the PNS focus is brain development and the broad outlines of emotional reactivity, and thus behavior. In human females, the PNS focus is the digestive process. The effectiveness of the latter could be evaluated while the female fetus is still in the womb, when the eggs are developing. The proxy fitness measure would be how well nourished the fetus is, which requires no sensory experience. This explains the developmental timing difference between oogenesis and spermatogenesis. Digestion would be fine-tuned by females for whatever types of food happen to be available in a given time and place.

8-18-2017
Experimental evidence for my proposed recombination mechanism of proxy natural selection has been available since 2011, as follows:

Stress-induced recombination and the mechanism of evolvability
by Weihao Zhong; Nicholas K. Priest
Behavioral Ecology and Sociobiology, 03/2011, Volume 65, Issue 3

permalink:

Abstract:
"The concept of evolvability is controversial. To some, it is simply a measure of the standing genetic variation in a population and can be captured by the narrow-sense heritability (h2). To others, evolvability refers to the capacity to generate heritable phenotypic variation. Many scientists, including Darwin, have argued that environmental variation can generate heritable phenotypic variation. However, their theories have been difficult to test.
 Recent theory on the evolution of sex and recombination provides a much simpler framework for evaluating evolvability. It shows that modifiers of recombination can increase in prevalence whenever low fitness individuals produce proportionately more recombinant offspring. Because recombination can generate heritable variation, stress-induced recombination might be a plausible mechanism of evolvability if populations exhibit a negative relationship between fitness and recombination. Here we use the fruit fly, Drosophila melanogaster, to test for this relationship.
We exposed females to mating stress, heat shock or cold shock and measured the temporary changes that occurred in reproductive output and the rate of chromosomal recombination. We found that each stress treatment increased the rate of recombination and that heat shock, but not mating stress or cold shock, generated a negative relationship between reproductive output and recombination rate. The negative relationship was absent in the low-stress controls, which suggests that fitness and recombination may only be associated under stressful conditions. Taken together, these findings suggest that stress-induced recombination might be a mechanism of evolvability."

However, my theory also has a macro aspect, namely that the definition of what constitutes "stress," in terms of neuron interconnections or chemical signaling pathways, itself evolves, by species-replacement group selection. Support for that idea is the next thing I must search for in the literature.

Wednesday, September 21, 2016

#16. The Intermind, Engine of History? [evolutionary psychology]

Red, theory; black, fact.

9-21-2016
This post is a further development of the ideas in the post, "What is intelligence? DNA as knowledge base." It was originally published 9-21-2016 and extensively edited 10-09-2016 with references added 10-11-2016 and 10-30-2016. Last modified: 10-30-2016.

In "AviApics 101" and "The Insurance of the Heart," I seem to be venturing into human sociobiology, which one early critic called "An outbreak of neatness." With the momentum left over from "Insurance," I felt up for a complete human sociobiological theory, to be created from the two posts mentioned.

However, what I wrote about the "genetic intelligence" suggests that this intelligence constructs our sociobiology in an ad hoc fashion, by rearranging a knowledge base, or construction kit, of "rules of conduct" into algorithm-like assemblages. This rearrangement is blindingly fast (see Deprecated, Part 7) by the standards of classical Darwinian evolution, which only provides the construction kit itself, and presumably some further, special rules equivalent to a definition of an objective function to be optimized. The ordinary rules translate experiences into the priming of certain emotions, not the emotions themselves.

Thus, my two sociobiological posts are best read as case studies of the products of the genetic intelligence. I have named this part the intermind, because it is intermediate in speed between classical evolution and learning by operant conditioning. (All three depend on trial and error.) The name is also appropriate in that the intermind is a distributed intelligence, acting over continental, or at least national, areas. If we want neatness, we must focus on its objective function, which is simply whatever produces survival. It will be explicitly encoded into the genes specifying the intermind. (For more on multi-tier biological control systems with division of labor according to timescale, see "Sociobiology: The New Synthesis," E. O. Wilson, 1975 & 2000, chapter 7.)

Let us assume that the intermind accounts for evil, and that this is because it is only concerned with survival of the entire species and not with the welfare of individuals. Therefore, it will have been created by group selection of species. (Higher taxonomic units such as genus or family will scarcely evolve because the units that must die out to permit this are unlikely to do so, because they comprise relatively great genetic and geographical diversity.* However, we can expect adaptations that facilitate speciation. Imprinted genes may be one such adaptation, which might enforce species barriers by a lock-and-key mechanism that kills the embryo if any imprinted gene is present in either two or zero active copies.) Species group selection need act only on the objective function used by epigenetic trial-and-error processes.
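The lock-and-key imprinting rule proposed above is simple enough to state as code: the embryo is viable only if every imprinted locus contributes exactly one active copy. This is a sketch of the rule as stated, not of any known molecular mechanism.

```python
# Lock-and-key imprinting: exactly one active copy per imprinted locus;
# two or zero active copies at any locus is lethal. A sketch of the
# stated rule only, not of real embryology.
def viable(active_copies_per_locus):
    return all(n == 1 for n in active_copies_per_locus)

assert viable([1, 1, 1])        # one active copy everywhere: survives
assert not viable([2, 1, 1])    # both copies active at one locus: lethal
assert not viable([1, 0, 1])    # both copies silenced at one locus: lethal
```

A hybrid embryo, with the two parental silencing patterns mismatched, would be likely to land in one of the lethal cases, which is what would make the mechanism a species barrier.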

In these Oncelerian times, we know very well that species survival is imperiled by loss of range and by loss of genetic diversity. Thus, the objective function will tend to produce range expansion and optimization of genetic diversity. My post "The Insurance of the Heart" concluded with a discussion of "preventative evolution," which was all about increasing genetic diversity. My post "AviApics 101" was all about placing population density under rigid, negative-feedback control, which would force excess population to migrate to less-populated areas, thereby expanding range. Here we see how my case studies support the existence of an intermind with an objective function as described above.

However, all this is insufficient to explain the tremendous cultural creativity of humans, starting at the end of the last ice age with cave paintings, followed shortly thereafter by the momentous invention of agriculture. The hardships of the ice age must have selected genes for a third, novel component, or pillar, of the species objective function, namely optimization of memetic diversity. Controlled diversification of the species memeplex may have been the starting point for cultural creativity and the invention of all kinds of aids to survival. Art forms may represent the sensor of a feedback servomechanism by which a society measures its own memeplex diversity, measurement being necessary to control.

A plausible reason for evolving an intermind is that it permits larger body size, which leads to more internal degrees of freedom and therefore access to previously impossible adaptations. For example, eukaryotes can phagocytose their food; prokaryotes cannot. However, larger body size comes at the expense of longer generation time, which reduces evolvability. A band of high frequencies in the spectrum of environmental fluctuations therefore develops where the large organism has relinquished evolvability, opening it to being outcompeted by its smaller rivals.

The intermind is a proxy for classical evolution that fills the gap, but it needs an objective function to provide it with its ultimate gold standard of goodness of adaptations. Species-replacement group selection makes sure the objective function is close to optimal. This group selection process takes place at enormously lower frequencies than those the intermind is adapting to, because if the timescales were too similar, chaos would result. For example, in model predictive control, the model is updated on a much longer cycle than are the predictions derived from it.
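The two-timescale point can be illustrated with a toy loop: a fast cycle that uses the model on every step, and a slow cycle that re-fits the model a hundred times less often. Nothing here is a real MPC implementation; the drifting "plant" and single-gain "model" exist only to exhibit the rate separation.

```python
# Two nested loops at very different rates: predictions every step,
# model re-fits a hundred-fold less often. Purely illustrative.
def simulate(steps=1000, model_update_every=100):
    plant_gain, model_gain = 1.0, 1.0
    predictions, updates = 0, 0
    for t in range(steps):
        predictions += 1                       # fast cycle: use the model
        plant_gain += 0.001                    # the world drifts slowly
        if t % model_update_every == model_update_every - 1:
            model_gain = plant_gain            # slow cycle: update the model
            updates += 1
    return predictions, updates

p, u = simulate()
assert (p, u) == (1000, 10)                    # 100-fold timescale separation
```

In the analogy, the intermind's adaptations are the fast predictions and group selection on the objective function is the rare model update.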

12-25-2016
Today, when I was checking to see if I was using the word "cathexis" correctly (I wasn't), I discovered the Jungian term "collective unconscious," which sounds close to my "intermind" concept.

* 3-12-2018
I now question this argument. Why can't there be as many kinds of group selection as taxonomic levels? Admittedly, the higher-level processes would be mind-boggling in their slowness, but in evolution, there are no deadlines.

Saturday, July 30, 2016

#10. The Two–test-tube Experiment: Part II [neuroscience]

Red, theory; black, fact.

At this point we have a problem. The experimenting-brain theory predicts zero hard-wired asymmetries between the hemispheres. However, the accepted theory of hemispheric dominance postulates that this arrangement allows us to do two things at once, one task with the left hemisphere and the other task with the right. The accepted theory is basically a parsimony argument. However, this argument predicts huge differences between the hemispheres, not the subtle ones actually found.

My solution is that hard-wired hemispheric dominance must be seen as an imperfection of symmetry in the framework of the experimenting brain, caused by the human brain still being in the process of evolving, combined with the hypothesis that brain-expanding mutations individually produce small, asymmetric expansions (see Post 45). Our left-hemispheric speech apparatus is the most asymmetric part of our brain, and these ideas predict that we are due for another mutation that will expand the right side, thereby matching up the two sides and improving the efficiency of operant conditioning of speech behavior.

These ideas also explain why speech defects such as lisping and stuttering are so common and slow to resolve, even in children, who are supposed to be geniuses at speech acquisition.
This is how the brain would have to work if fragments of skilled behaviors are randomly stored in memory on the left or right side, reflecting the possibility that the two hemispheres play experiment versus control, respectively, during learning.
The illustration shows the theory of motor control I was driven to by the implications of the theory of the dichotomously experimenting brain already outlined. It shows how hemispheric dominance can be reversed independently of the side of the body that should perform the movement specified by the applicable rule of conduct in the controlling hemisphere. The triangular device is a summer (adder) that converges the motor outputs of both hemispheres into a common output stream that is subsequently gated into the appropriate side of the body. This arrangement cannot create contention because at any given time, only one hemisphere is active. Anatomically, and from stroke studies, it certainly appears that the outputs of the hemispheres must be crossed, with the left hemisphere only controlling the right body and vice versa.

However, my theory predicts that in healthy individuals, either hemisphere can control either side of the body, and the laterality of control can switch freely and rapidly during skilled performance so as to always use the best rule of conduct at any given time, regardless of the hemisphere in which it was originally created during REM sleep.

The first bit is calculated and stored in the basal ganglia. It would be output from the reticular substantia nigra (SNr) and gate sensory input to thalamus to favor one hemisphere or the other, by means of actions at the reticular thalamus and intermediate grey of the superior colliculus. The second bit would be stored in the cerebellar hemispheres and gate motor output to one side of the body or the other, at the red nucleus. Conceivably, the two parts of the red nucleus, the parvocellular and the magnocellular, correspond to the adder and switch, respectively, that are shown in the illustration.

Under these assumptions, the corpus callosum is needed only to distribute priming signals from the motor/premotor cortices to activate the rule that will be next to fire, without regard for which side that rule happens to be on. The callosum would never be required to carry signals forward from sensory to motor areas. I see that forwarding as the time-critical step, and it would never depend on getting signals through the corpus callosum, which is considered to be a signaling bottleneck.

How would the basal ganglia identify the "best" rule of conduct in a given context? I see the dopaminergic compact substantia nigra (SNc) as the most likely place for a hemisphere-specific "goodness" value to be calculated after each rule firing, using hypothalamic servo-error signals processed through the habenula as the main input for this. The half of the SNc located in the inactive hemisphere would be shut down by inhibitory GABAergic inputs from the adjacent SNr. The dopaminergic nigrostriatal projection would permanently potentiate simultaneously-active corticostriatal inputs (carrying context information) to medium spiny neurons (MSNs) of enkephalin type via a crossed projection, and to MSNs of substance-P type via uncrossed projections. The former MSN type innervates the external globus pallidus (GPe), and the latter type innervates the SNr. These latter two nuclei are inhibitory and innervate each other.

I conjecture that this arrangement sets up a winner-take-all kind of competition between GPe and SNr, with choice of the winner being exquisitely sensitive to small historical differences in dopaminergic tone between hemispheres. The "winner" is the side of the SNr that shuts down sensory input to the hemisphere on that side. The mutually inhibitory arrangement could also plausibly implement hysteresis, which means that once one hemisphere is shut down, it stays shut down without the need for an ongoing signal from the striatum to keep it shut down.
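Winner-take-all with hysteresis is easy to demonstrate with two mutually inhibitory rate units. The weights, drives, and time constants below are illustrative numbers, and the units are only a stand-in for the paired nuclei, not a model of them.

```python
# Two mutually inhibitory rate units: all-or-none winner plus hysteresis.
# All parameters are illustrative, not physiological.
def relu(x):
    return max(0.0, x)

def settle(drive_a, drive_b, a=0.0, b=0.0, w=2.0, dt=0.1, steps=500):
    for _ in range(steps):
        # Synchronous update: each unit is driven, leaks, and is
        # inhibited by the other's activity.
        a, b = (a + dt * (-a + relu(drive_a - w * b)),
                b + dt * (-b + relu(drive_b - w * a)))
    return a, b

# A small difference in drive produces an all-or-none winner:
a, b = settle(1.0, 0.9)
assert a > 0.9 and b < 1e-3

# Hysteresis: a side that already holds the circuit keeps it, even
# against a rival with slightly stronger drive.
a2, b2 = settle(1.0, 0.9, a=0.0, b=0.9)
assert a2 < 1e-3 and b2 > 0.8
```

The second call is the point about not needing an ongoing striatal signal: once one side is shut down, mutual inhibition alone holds it down.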

Each time the cerebral cortex outputs a motor command, a copy goes to the subthalamic nucleus (STN) and could plausibly serve as the timing signal for a "refresh" of the hemispheric dominance decision based on the latest context information from cortex. The STN signal presumably removes the hysteresis mentioned above, very temporarily, then lets the system settle down again into possibly a new state.

We now need a system that decides that something is wrong and that the time to experiment has arrived. This could plausibly be the role of the large, cholinergic interneurons of the striatum. They have a diverse array of inputs that could potentially signal trouble with the status quo, and could implement a decision to experiment simply by reversing the hemispheric dominance prevailing at the time. Presumably, they would do this by a cholinergic action on the surrounding MSNs of both types.

Finally, there is the second main output of the basal ganglia to consider, the internal pallidal segment (GPi). This structure is well developed in primates such as humans but is rudimentary in rodents and even in the cat, a carnivore. It sends its output forward, to the motor thalamus. I conjecture that its role is to organize the brain's knowledge base to resemble block-structured programs. All the instructions in a block would be simultaneously primed by this projection. The block identifier may be some hash of the corticostriatal context information. A small group of cells just outside the striatum called the claustrum seems to have the connections necessary for preparing this hash. Jump rules, that is, rules of conduct for jumping between blocks, would not output motor commands but block identifiers, which would be maintained online by hysteresis effects in the basal ganglia.
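The hash-addressed block idea can be sketched in a few lines. `block_id`, the contexts, and the rule strings are hypothetical names invented for illustration; nothing here claims to mirror real claustral processing.

```python
import hashlib

# Rules filed under a block identifier computed as a hash of context.
# All names and contents are invented for illustration.
def block_id(context):
    return hashlib.sha256(context.encode()).hexdigest()[:8]

blocks = {}

def add_rule(context, rule):
    blocks.setdefault(block_id(context), []).append(rule)

add_rule("making_tea", "boil water")
add_rule("making_tea", "steep leaves")

# Priming a block: the context hash retrieves every rule filed under it.
primed = blocks[block_id("making_tea")]
assert primed == ["boil water", "steep leaves"]
```

A "jump rule" in this scheme would emit not a motor command but a new context string, whose hash selects the next block to prime.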

The cortical representation of jump rules would likely be located in medial areas, such as Brodmann 23, 24, 31, and 32. BA23-24 is classed as limbic system, and BA31-32 is situated between this and neocortex. This arrangement suggests that, seen as a computer, the brain is capable of executing programs with three levels of indentation, not counting whatever levels may be encoded as chromatin marks in the serotonergic neurons. Dynamic changes in hemispheric dominance might have to occur independently in neocortex, medial cortex, and limbic system.

Sunday, July 24, 2016

#9. The Two–test-tube Experiment: Part I [neuroscience]

Your Brain is Like This.

Red, theory; black, fact.

The motivating challenge of this post is to explain the hemispheric organization of the human brain. That is, why do we seem to have two very similar brains in our heads, a left side and a right side?

Systems that rely on the principle of trial-and-error must experiment. The genetic intelligence mentioned in previous posts would have to experiment by mutation/natural selection. The synaptic intelligence would have to experiment by operant conditioning. I propose that both these experimentation processes long ago evolved into something slick and simplified that can be compared to the two–test-tube experiment beloved of lab devils everywhere.

Remember that an experiment must have a control, because "everything is relative." Therefore, the simplest and fastest experiment in chemistry that has any generality is the two-test-tube experiment: one tube for the "intervention," and one tube for the control. Put mixture c in both tubes, and add chemical x to only the intervention tube. Run the reaction, then hold the two test tubes up to the light and compare the contents visually. (Remember that ultimately, the visual system only detects contrasts.) Draw your conclusions.

My theory is basically this: the two hemispheres of the brain are like the two test tubes. Moreover, the two copies of a given chromosome in a diploid cell are also like the two test tubes. In both systems, which is which varies randomly from experiment to experiment, to prevent phenomena analogous to screen burn. The hemisphere that is dominant for a particular action is the last one that produced an improved result when control passed to it from the other; likewise, the allele that is dominant is the last one that produced an improved result when it got control from the other. Chromosomes and hemispheres would mutually inhibit each other to produce winner-take-all dynamics in which, at any given time, only one is exposed to selection, but that one is fully exposed.

These flip-flops do not necessarily involve the whole system, but may happen independently in each sub-region of a hemisphere or chromosome (e.g., Brodmann areas, alleles). Some objective function, expressing the goodness of the organism's overall adaptation, must be recalculated after each flip-flop, and further flip-flopping suppressed if the new value shows an improvement over a copy of the old value held in memory. In case of a worsening of the objective function, you quickly flip back to the allele or hemisphere that formerly had control, then suppress further flip-flopping for a while, as before.
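As a toy illustration only, the flip-flop scheme just described can be sketched as a two-candidate hill-climber. All names and parameters here are hypothetical; this is a minimal sketch of the proposed logic, not a claim about the actual neural or genetic machinery.

```python
import random

def mutate(x, rng):
    """A small random tweak: the 'behavioral mutation' of the post."""
    return x + rng.gauss(0.0, 0.1)

def flip_flop_search(objective, candidates, steps=200, refractory=5, seed=0):
    """Hill-climb by toggling control between two candidates ('test tubes').

    Control stays with whichever candidate last improved the objective;
    after a worsening, control flips straight back, and further flipping
    is suppressed for a refractory period, during which the candidate in
    control accumulates mutations.
    """
    rng = random.Random(seed)
    active = 0                            # which candidate currently has control
    best = objective(candidates[active])  # remembered copy of the old value
    cooldown = 0
    for _ in range(steps):
        if cooldown > 0:                  # flipping suppressed for a while
            cooldown -= 1
            candidates[active] = mutate(candidates[active], rng)
            continue
        active = 1 - active               # flip-flop: hand control to the other side
        score = objective(candidates[active])
        if score > best:                  # improvement: new side keeps control
            best = score
        else:                             # worsening: flip straight back
            active = 1 - active
        cooldown = refractory
    return candidates[active], best
```

Because the recorded best value never decreases, the search can only hold or improve the objective, which is the point of keeping one candidate sheltered while the other is exposed to selection.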

The foregoing implies multiple sub-functions, and these ideas will not be compelling unless I specify a brain structure that could plausibly carry out each sub-function. For example, the process of comparing values of the objective function achieved by left and right hemispheres in the same context would be mediated by the nigrostriatal projection, which has a crossed component as well as an uncrossed component. More on this next post.

Sunday, July 17, 2016

#8. What is Intelligence? Part II. Human Brain as Knowledge Base [neuroscience, engineering]

EN    NE    
Red, theory; black, fact.

7-17-2016
A previous post's discussion of the genetic intelligence will provide the framework for this post, in which functions will be assigned to brain structures by analogy.

The CCE-structural-gene realization of the if-then rule of AI appears to have three counterparts in neuroscience: a single neuron (dendrites, sensors; axon, effectors); the basal ganglia (striatum, sensors; globus pallidus, effectors); and the corticothalamic system (post-Rolandic cortex, sensors; pre-Rolandic cortex, effectors).

Taking the last system first, the specific association of a pattern recognizer to an output to form a rule of conduct would be implemented by white-matter axons running forward in the brain. Remember that the brain's sensory parts lie at the back, and its motor, or output, parts lie at the front.

The association of one rule to the next within a block of instructions would be by the white-matter axons running front-to-back in the brain. Since the brain has no clock, unlike a computer, the instructions must all be of the if-then type even in a highly automatic block, so each rule fires when it sees evidence that the purpose of the previous rule was accomplished. This leads to a pleasing uniformity, where all instructions have the same form. 

This also illustrates how a slow knowledge base can be morphed into something surprisingly close to a fast algorithm: keep track of the conditional firing probabilities of the rules, and rearrange the rules in their physical storage medium so that high conditional probabilities correspond to small distances, and vice versa.

However, in a synaptic intelligence, the "small distances" would be measured on the voltage axis relative to firing threshold, and would indicate a high readiness to fire on the part of some neuron playing the role of decision element for an if-then rule, if the usual previous rule has fired recently. An external priming signal having the form of a steady excitation, disinhibition, or deinactivation could produce this readiness. Inhibitory inputs to thalamus or excitatory inputs to cortical layer one could be such priming inputs.

The low-to-medium conditional firing probabilities would characterize if-then rules that act as jump instructions between blocks, and the basal ganglia appear to have the connections necessary to implement these. 

In Parkinson disease, the basal ganglia are disabled, and the symptoms are as follows: difficulty in getting a voluntary movement started, slowness of movement, undershoot of movements, poverty of movement, and tremor. Except for the tremor, all these symptoms could be due to an inability to jump to the next instruction block. Patients are capable of some routine movements once they get started, and these may represent the output of a single-block program fragment.

Unproven, trial-basis rules of the "jump" type that are found to be useful could be consolidated by a burst of dopamine secretion into the striatum. Unproven, trial-basis rules of the "block" type found useful could be consolidated by dopamine secretion into prefrontal cortex. [The last two sentences express a new idea conceived during the process of keying in this post.] Both of these dopamine inputs are well established by anatomical studies.

(See Deprecated, Part 9)...influence behavior with great indirection, by changing the firing thresholds of other rules, that themselves only operate on thresholds of still other rules, and so on in a chain eventually reaching the rules that actually produce overt behavior. The chain of indirection probably passes medial to lateral over the cortex, beginning with the limbic system. (Each level of indirection may correspond to a level of indentation seen in a modern computer language such as C.) The mid-line areas are known to be part of the default-mode network (DMN), which is active in persons who are awake but lying passively and producing no overt behavior. The DMN is thus thought to be closely associated with consciousness itself, one of the greatest mysteries in neuroscience.

7-19-2016
It appears from this post and a previous post that you can take a professionally written, high-level computer program and parse it into a number of distinctive features, to borrow a term from linguistics. Such features already dealt with are the common use of if-then rules, block structuring, and levels of indentation. Each such distinctive feature will correspond to a structural feature in each of the natural intelligences, the genetic and the synaptic. Extending this concept, we would predict that features of object-oriented programming will be useful in assigning function to form in the two natural intelligences. 

For example, the concept of class may correspond to the Brodmann areas of the human brain, and the grouping of local variables with functions that operate on them may be the explanation of the cerebellum, a brain structure not yet dealt with. In the cerebellum, the local variables would be physically separate from their corresponding functions, but informationally bound to them by an addressing procedure that uses the massive mossy-fiber/parallel-fiber array.

Monday, June 27, 2016

#6. Mental Illness as Communication [neuroscience, genetics]

NE     GE     
Red, theory; black, fact.

The effects of most deleterious mutations are compensated by negative feedback processes occurring during development in utero. However, if the population is undergoing intense Darwinian selection, many of these mutations become unmasked and therefore contribute variation for selection. (Jablonka and Lamb, 2005, The MIT Press, "Evolution in Four Dimensions")

However, since most mutations are harmful, a purely random process for producing them, with no pre-screening, is wasteful. Raw selection alone can scrub out a mistake that gets as far as being born, at great cost in suffering, only to have, potentially, the very same random mutation happen all over again the very next day, with nothing learnt. Repeat ad infinitum. This is absurd, and it offends the engineer in me, and I like to say that evolution is an engineer. Nowadays, evolution itself is thought to evolve. A simple example is the evolution of DNA repair enzymes, which were game-changers, allowing much longer genes to be transmitted to the next generation and thus permitting the emergence of more-complex life forms.

An obvious, further improvement would be a screening, or vetting process for genetic variation. Once a bad mutation happens, you mark the offending stretch of DNA epigenetically in all the close relatives of the sufferer, to suppress further mutations there for a few thousand years, until the environment has had time to change significantly.

Obviously, you also want to oppositely mark the sites of beneficial mutations, and even turn them into recombinant hot spots for a few millennia, to keep the party going. Hot spots may even arise randomly and spontaneously, as true, selectable epi-mutations. The downside of all this is that even in a hot spot, most mutations will still be harmful, leading to the possibility of "hitchhiker" genetic diseases that cannot be efficiently selected against because they are sheltered in a hot spot. Cystic fibrosis may be such a disease, and as the hitchhiker mechanism would predict, it is caused by many different mutations, not just one. It would be a syndrome defined by the overlap of a vital structural gene and a hot spot, not by a single DNA mutation. I imagine epigenetic hot spots to be much more extended along the DNA than a classic point mutation.

It is tempting to suppose that the methylation islands found on DNA are these hot spots, but the scanty evidence available so far is that methylation suppresses recombinant hot spots, which are generally defined non-epigenetically, by the base-pair sequence.

The human brain has undergone rapid, recent evolutionary expansion, presumably due to intense selection, presumably unmasking many deleterious mutations affecting brain development that were formerly silent. Since the brain is the organ of behavior, we expect almost all these mutations to indirectly affect behavior for the worse. That explains mental illness, right?

I'm not so sure; mental illnesses are not random, but cluster into definable syndromes. My reading suggests the existence of three such syndromes: schizoid, depressive, and anxious. My theory is that each is defined by a different recombinant hot spot, as in the case of CF, and may even correspond to the three recently evolved association cortices of the brain, namely parietal, prefrontal, and temporal, respectively. The drama of mental illness would derive from its communication role in warning nearby relatives that they may be harbouring a bad hot spot, causing them to find it and cool it by wholly unconscious processes. Mental illness would then be the pushback against the hot spots driving human brain evolution, keeping them in check and deleting them as soon as they are no longer pulling their weight fitness-wise. The variations in the symptoms of mental illness would encode the information necessary to find the particular hot spot afflicting a particular family.

Now all we need is a communication link from brain to gonads. The sperm are produced by two rounds of meiosis and one of mitosis from the stem-like, perpetually self-renewing spermatogonia, which sit just outside the blood-testis barrier and are therefore exposed to blood-borne hormones. These cells are known to have receptors for the hypothalamic hormone orexin A*, as well as many other receptors for signaling molecules that do or could plausibly originate in the brain, as orexin does. Some of these receptors are:
  • retinoic acid receptor α
  • glial cell-derived neurotrophic factor (GDNF) receptor
  • CB2 (cannabinoid type 2) receptor
  • p75 (For nerve growth factor, NGF)
  • kisspeptin receptor.

*Joshi D, Singh SK. Localization and expression of Orexin A and its receptor in mouse testis during different stages of postnatal development. Gen Comp Endocrinol. 2016 May 9. doi: 10.1016/j.ygcen.2016.05.006. [Epub ahead of print]

PS: for brevity, I left out mention of three sub-functions necessary to the pathway: an intracellular gonadal process transducing receptor activation into germ line-heritable epigenetic changes, a process for exaggerating the effects of bad mutations into the development of monsters or behavioral monsters for purposes of communication, and a process of decoding the communication located in the brains of the recipients.

Saturday, June 18, 2016

#5. Why We Dream [neuroscience]

NE
Red, theory; black, fact.

The Melancholy Fields

Something I still remember from Psych 101 is the prof's statement that "operant conditioning" is the basis of all voluntary behavior. The process was discovered in lab animals such as rats and pigeons by B. F. Skinner, beginning in the 1930s, and can briefly be stated as "If the ends are achieved, the means will be repeated." (Gandhi said something similar about revolutionary governments.)

I Dream of the Gruffalo. Pareidolia as dream imagery.

Let's say The Organism is in a supermarket checkout line and can't get the opposite sides of a plastic grocery bag unstuck from each other no matter how it rubs, blows, stretches, picks at, or pinches the bag. At great length, a rubbing behavior by chance happens near the sweet spot next to the handle, and the bag opens at once. Thereafter, when in the same situation, The Organism goes straight to the sweet spot and rubs, for a great savings in time and aggravation. This is operant conditioning, which is just trial and error, like evolution itself, only faster. Notice how it must begin: with trying moves at random, that is, behavioral mutations. However, the process is not really random like a DNA mutation. The Organism never tries kicking out a foot, for example, when it is the hand that is holding the bag. Clearly, common sense plays a role in getting the bag open, but any STEM-educated person will want to know just what this "common sense" is and how you would program it. Ideally, you want the creativity and genius of pure randomness, AND the assurance of not doing anything crazy or even lethal just because some random-move generator suggested it. You vet those suggestions.

That, in a nutshell, is dreaming: vetting random moves against our accumulated better judgment to see if they are safe, thereby stocking the brain with pre-vetted random moves for use the next day when stuck. This is why the emotions associated with dreaming are more often unpleasant than pleasant: there are more ways to go wrong than to go right. (This is why my illustrations for this post are melancholy and monster-haunted.) The vetting is best done in advance (e.g., while we sleep) because there is no time in the heat of the action the next day, and trial and error with certified-safe "random" moves is already time-consuming without having to do the vetting on the spot as well.

Dreams are loosely associated with brain electrical events called "PGO waves," which begin with a burst of action potentials ("nerve impulses") in a few small brainstem neuron clusters, then spread to the visual thalamus, then to the primary visual cortex. I theorize that each PGO wave creates a new random move that is installed by default in memory in cerebral cortex, and is then tested in the inner theater of dreaming to see what the consequences would be. In the event of a disaster foreseen, the move is scrubbed from memory, or better yet, added as a "don't do" to the store of accumulated wisdom. Repeat all night.

If memory is organized like an AI knowledge base, then each random move would actually be a connection from a randomly-selected but known stimulus to a randomly-selected but known response, amounting to adding a novel if-then rule to the knowledge base. Some of the responses in question could be strictly internal to the brain, raising or lowering the firing thresholds of still other rules.
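The vetting loop described above can be sketched in a few lines (function and variable names are my own hypothetical labels, and the "store of accumulated wisdom" is reduced to a simple don't-do set for illustration):

```python
import random

def dream_cycle(stimuli, responses, dont_do, n_waves=100, seed=1):
    """Sketch of nocturnal vetting of random moves (one per 'PGO wave').

    Each wave pairs a known stimulus with a known response to form a
    trial if-then rule. Rules that the accumulated wisdom flags as
    disastrous are scrubbed; the rest are stocked as pre-vetted moves
    for use the next day when stuck.
    """
    rng = random.Random(seed)
    vetted = []
    for _ in range(n_waves):
        rule = (rng.choice(stimuli), rng.choice(responses))
        if rule in dont_do:
            continue              # disaster foreseen: scrub the move
        vetted.append(rule)       # certified safe for tomorrow
    return vetted
```

In a fuller version, a scrubbed rule would also be added back to `dont_do`, turning a foreseen disaster into a remembered "don't do," as the post suggests.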

In "Evolution in Four Dimensions" [1st ed.] Jablonka and Lamb make the point that epigenetic, cultural, and symbolic processes can come up with something much better than purely random mutations: variation that has been subjected to a variety of screening processes.

Nightmares involving feelings of dread superimposed on experiencing routine activities may serve to disrupt routine assumptions that are not serving you well (that is, you may be barking up the wrong tree).

Tuesday, May 31, 2016

#3. AviApics 101 [population, engineering, evolutionary psychology]

PO     EN     EP     
Red, theory; black, fact.

Here, I go into detail about the human population controller introduced in the previous post.

I assume that, like everything in the natural (i.e., evolved) world, it is a masterful piece of engineering, as Leonardo Da Vinci declared.

The way to build an ideal controller is the inverse plant method, where the controller contains the mathematical inverse of a mathematical model of the system to be controlled.  To derive the model, you take the Laplace transform of the system's impulse response function. For populations, a suitable impulse would be the instantaneous introduction of the smallest viable breeding population into an ideal habitat.

What happens then is well known, at least in microbial life forms too simple to already have a controller: unrestrained, exponential population growth as per Malthus, with no end in sight.

This exponential curve is then the impulse response function we need, and its Laplace transform is simple: 1/(S - r), where S is complex frequency and r is the Malthusian constant, that is, the percent population growth rate per year. The mathematical inverse is even simpler: S - r. The controller applies this operator to the error E, calculated as set point X minus controlled variable Y; the resulting control effort is summed with perturbation P and made equal to Y. The result is usually simplified to permit predictions about controller performance, but that is not needed in this discussion.

The control effort is E(S - r), which can be multiplied out as ES - Er. Remember that everything has been Laplace transformed in these expressions, and that ES becomes the time differential of e when transformed back into the real world. Multiplication by a constant such as r stays multiplication, however. Control effort in the real world is then rate of change of e minus r times e. (Lowercase variables are the un-transformed versions.) Since e = x - y, and since x is constant, x becomes zero when differentiated, and drops out of the expression. Control effort is then -dy/dt - er. <Corrected 5 Jun '16.>
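For reference, the derivation above can be written out compactly in the post's own symbols (uppercase, Laplace domain; lowercase, time domain):

```latex
\begin{align*}
  G(S) &= \frac{1}{S - r}
    && \text{(plant: Laplace transform of } e^{rt}\text{)}\\
  C(S) &= G(S)^{-1} = S - r
    && \text{(inverse-plant controller)}\\
  E(S) &= X(S) - Y(S)
    && \text{(error)}\\
  C(S)\,E(S) &= SE - rE
    \;\;\xrightarrow{\ \mathcal{L}^{-1}\ }\;\; \frac{de}{dt} - re\\
  \frac{de}{dt} - re &= -\frac{dy}{dt} - re
    && \text{(since } e = x - y \text{ and } \dot{x} = 0\text{)}
\end{align*}
```

The last line is the control effort stated in the text: rate of change of e, minus r times e, with the derivative term carried entirely by the controlled variable y because the set point x is constant.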

I theorize that women calculate -dy/dt, and men calculate er. When they get together, the complete population control effort is exerted, resulting in stability, which the world rewards. However, on average, the men and the women will be pulling in opposite directions exactly 50% of the time, if we model population variation as a sine wave centered on the set point.

A prediction is that women unconsciously react to evidence of increased birth rate or decreased death rate by wanting fewer children. Men react to excess absolute population relative to set point by violence, and to breathing room under the set point by partying.

That negative sign in front of the male contribution was puzzling at first, until I realized that it must derive from the married state itself, and not from the base male response to population error. This could be the origin of statements such as: "Marriage is the exact opposite of the way you think it will be." 

The level of the noise produced so copiously by small children is probably the signal that women unconsciously integrate to estimate birth rate, and the wailing and long faces following a death probably serve the same purpose for estimating death rate, aided by reading the tabloids. [My (married) older brother once showed me the developmental time course of child noise in the air with his hand, and it looked like an EPSP, the response of a neuron to an incoming action potential. The EPSP is the convolution kernel by which a neuron decodes a rate code.] The men have to calculate absolutes, not rates, however. The male proprietary instinct causes them to divvy up the limiting resource for breeding (jobs in our present society) into quanta that can be paired off with people like pairs of beads on adjacent wires of an abacus. Excess people left over at the end of this operation spells trouble. Politicians are right to worry about jobless rates.

Saturday, May 28, 2016

#2. The Iatrogenic Conflicts of the Twentieth Century [population, engineering]



The Edwardian era (1901-1911) in small-town Ontario, and La Belle Epoque will soon be over. (From a photo owned by Constance M. Mooney of Ottawa, Canada)


PO     EN     
Red, theory; black, fact.

Medical advances during a turbulent century

In 1911, the anti-syphilis drug Salvarsan, invented by Paul Ehrlich, became widely available to the public, at a time when this disease was cutting a wide swath of morbidity and mortality. Three years later, World War I broke out.

In 1937, sulfa drugs, the first broadly effective antibacterial agents, became available to the public. Two years later, World War II broke out.

In 1945, both penicillin and streptomycin became available to the public, followed in short order by the first mass vaccinations, notably against smallpox. In the following decade (1945-1955), the Cold War between the United States and the Soviet Union began. It nearly finished us in 1962, the year of the Cuban Missile Crisis, when a nuclear WW III was narrowly averted.

My conclusions

In the human brain, there is a wholly unconscious controller for population density with a feedback delay of some two to four years, that answers every sudden downtick in the death rate with a brutal, reflexive uptick. Recently, these downticks in the death rate have been due to advances in medicine, hence my title for this post. "Iatrogenic" means roughly "caused by doctors."

Moreover, last year I noted that the headlines were all about ISIS, an unusually disruptive phenomenon of the Muslim world. I then checked to see what the main preoccupation of the headline-writers had been exactly four years previously. This seemed to be the Arab Spring, when many old governments in the Arab world were being thrown off. I concluded that these regimes had somehow been suppressing population growth.

An engineering model

I began to reason thus: if this controller is real, it should be just as analyzable as Watt's steam-engine governor, using standard engineering approaches. If it has a significant feedback delay, then a perturbation sufficiently rich in high-frequency harmonics (i.e., sufficiently sudden) should drive it briefly into a damped oscillation.
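The qualitative claim, that delayed negative feedback answers a sudden perturbation with a damped oscillation, can be checked with a toy discrete-time model. All parameter values below are illustrative only, chosen to sit in the damped-oscillatory regime, and are not fitted to census data.

```python
def delayed_feedback_response(kick=10.0, r=0.02, gain=0.3, delay=4, years=80):
    """Deviation d of population from its set point under delayed feedback.

    d[t+1] = (1 + r) * d[t] - gain * d[t - delay]

    Growth at Malthusian rate r pushes the deviation up; the corrective
    term acts on information that is `delay` years stale. A one-time
    'kick' models a sudden downtick in the death rate, e.g. a medical
    advance reaching the public.
    """
    d = [0.0] * delay + [kick]                 # history, then the perturbation
    for t in range(delay, delay + years):
        d.append((1 + r) * d[t] - gain * d[t - delay])
    return d
```

With these illustrative values the deviation first overshoots the kick (the controller has not yet seen it), then swings below the set point, and rings down over a few decades, the same qualitative shape as the post-WW-II wiggle described below.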

Evidence for the engineering model

In support of these conclusions, I present the US Census Bureau statistics on the percent growth rate of the human population for the 20th century: international yearly figures, aggregated to "World," and extended back to 1900 with decade-wise World data from the historical estimates table. At roughly the end of WW II, we see a huge jump in the growth rate, followed by a sharp drop bottoming out in 1960, followed by another sharp peak in 1962, followed by a leveling off superimposed on a gradual decline, the latter possibly due to increasing absolute numbers. This time series could be construed as showing a damped oscillation. See below.


The historical global population growth rate scaled to population.


11-07-2018
My surmise that the post-1964 decline in the plot would disappear if corrected for changing absolute numbers is confirmed by calculation based on US Census Bureau data; see below. Furthermore, the corrected plot appears to level off at 78 million new people per year, which is probably the upper trigger level for the controller. There is probably no formal lower trigger level, making this controller asymmetric. Oscillation begins well before the trigger level is reached, however, reflecting the presence of a differential control term, as discussed in the next post. The sharp upstroke in growth rate at 1980 may be due to the eradication of smallpox over the decade 1967-1977; the downturn after 1988 was probably due to the AIDS pandemic. The data are coarse-grained before 1950 and do not show the upstrokes in 1911-1914 and 1937-1939 that I would have predicted from the two world wars.

World population growth rates in persons per year with no scaling. Note the reaction in 1960.


Center: a centrifugal speed governor familiar in 1914. The Steam Museum, Kingston, Canada, 2012.