Sunday, November 18, 2018

#45. The Denervation-supersensitivity Theory of Mental Illness [neuroscience, evolution, genetics]

NE     EV     GE     
Red, theory; black, fact.

People get mental illness but animals seemingly do not, or at least not outside of artificial laboratory models such as the unpredictable mild stress rodent model of depression. A simple theory to account for this cites the paleontological fact that the human brain has been expanding at breakneck speed over recent evolutionary time, and postulates that this expansion is still ongoing and that mental illness is the price we pay for all this brain progress.

In other words, the mentally ill carry the unfavorable mutations that have to be selected out during this progress, and the rate of certain categories of mutation affecting human brain development is elevated in modern humans by some sort of "adaptive" hot-spot system. "Adaptive" is in scare quotes here to indicate that the adaptation inheres in changes in the standard deviation of traits, not in their average, and is therefore not Lamarckian.

In brain evolution, the growth changes in the various parts very probably have to be coordinated somehow. I conjecture that there is no master program doing this coordination. Rather, I conceive of the human brain as comprising scores of tissue "parcels," each with its own gene to control the final size that parcel reaches in development. (This idea is consistent with the finding of about 400 genes in humans that participate in establishing body size.) All harmonious symmetry, even left-right symmetry, would have to be painstakingly created by brute-force selection, involving the early deaths of millions of asymmetrical individuals. This idea was outlined in post 10.
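
As an illustration of the parcel idea, here is a toy simulation, a sketch only, in which each parcel's size is set by its own left and right gene and symmetry is maintained purely by culling asymmetrical individuals. The parcel count, population size, and fitness penalty are invented for the purpose and carry no biological authority.

    import random

    N_PARCELS = 50            # "scores" of tissue parcels
    POP, GENERATIONS = 200, 100
    MUT_SD = 0.05             # size-gene mutation step

    def new_individual():
        # one left and one right size gene per parcel, starting symmetric
        return [[1.0, 1.0] for _ in range(N_PARCELS)]

    def mutate(parent):
        child = [pair[:] for pair in parent]
        p, side = random.randrange(N_PARCELS), random.randrange(2)
        child[p][side] += random.gauss(0.0, MUT_SD)   # left or right mutates alone
        return child

    def fitness(ind):
        size = sum(l + r for l, r in ind)             # bigger is better...
        asymmetry = sum(abs(l - r) for l, r in ind)   # ...but asymmetry is punished
        return size - 20.0 * asymmetry

    pop = [new_individual() for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]                    # brute-force culling
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP - len(survivors))]

    best = max(pop, key=fitness)
    print(sum(abs(l - r) for l, r in best) / N_PARCELS)   # residual asymmetry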

Assuming that left and right sides must functionally cooperate to produce a fitness improvement, mutations affecting parcel growth must occur in linked, left-right pairs to avoid irreducible-complexity paradoxes. I have previously conjectured in these pages that the crossing-over phenomenon of egg and sperm maturation serves to create these linked pairs of mutations, where the two mutations are identified with the two ends of the DNA segment that translocates. (See "Can Irreducible Complexity Evolve?")

Most of the evolutionary expansion of the human brain appears to be focused on association cortex, which I conjecture implements if-then rules, like those making up the knowledge bases familiar from the field of artificial intelligence. The "if" part of a rule would be evaluated in post-Rolandic cortex, i.e., in the temporal and parietal association cortices, and the "then" part would be composed by pre-Rolandic association cortex, i.e., the prefrontal cortex. The forward-running white-matter tracts would connect the "if" part with the "then" part, and the backward-running white-matter tracts would carry priming signals that ready other rules to "fire" if they are commonly used after the rule in question.
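
To make the if-then picture concrete, here is a minimal production-rule sketch. The rule contents, priming scheme, and state format are all hypothetical, chosen only to show the division of labor being conjectured: the "if" tests stand in for post-Rolandic evaluation, the "then" strings for prefrontal output, and firing a rule primes the rules that typically follow it.

    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        name: str
        condition: object                 # "if" part: a predicate over the sensory state
        action: str                       # "then" part: the motor/plan output
        primes: list = field(default_factory=list)   # rules readied after this one fires
        priming: float = 0.0              # primed rules are checked first

    def run(rules, state):
        fired = []
        for r in sorted(rules, key=lambda r: -r.priming):
            if r.condition(state):                        # post-Rolandic: evaluate "if"
                fired.append(r.action)                    # prefrontal: emit "then"
                for name in r.primes:                     # backward tract: send priming
                    next(x for x in rules if x.name == name).priming += 1.0
        return fired

    rules = [
        Rule("see_food", lambda s: s.get("food_visible", False), "reach", primes=["grasp"]),
        Rule("grasp", lambda s: s.get("hand_near_food", False), "close_hand"),
    ]
    print(run(rules, {"food_visible": True}))   # -> ['reach'], and 'grasp' is now primed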

Given such tight coordination, I would expect the ideal brain to have a fixed ratio of prefrontal cortex to post-Rolandic association cortex. However, the random nature of the growth-gene bi-mutations (perhaps arising at mutational hot-spots) that permit human brain evolution will routinely violate this ideal ratio, producing individuals with either too much prefrontal cortex or too much temporal/parietal cortex. In the former case, the prefrontal cortex is starved of sensory input; in the latter, the sensory association cortex is starved of priming signals feeding back from motoric areas.
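
A throwaway calculation shows why uncoordinated growth mutations should routinely violate any fixed ideal ratio; the step sizes and the choice of a 1.0 ideal are assumptions made purely for illustration.

    import random

    random.seed(1)
    prefrontal, post_rolandic = 100.0, 100.0    # arbitrary starting sizes
    for _ in range(500):
        # each growth bi-mutation happens to enlarge one region or the other
        if random.random() < 0.5:
            prefrontal += random.uniform(0.0, 1.0)
        else:
            post_rolandic += random.uniform(0.0, 1.0)

    ratio = prefrontal / post_rolandic
    print(f"ratio after 500 mutations: {ratio:.2f} (ideal assumed to be 1.00)")
    # ratio > 1: prefrontal cortex starved of sensory input
    # ratio < 1: sensory association cortex starved of priming signals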

Denervation supersensitivity occurs when the normal nerve supply to a muscle is interrupted, resulting in a rapid overexpression of acetylcholine receptors on the muscle. This can be seen as an attempt to compensate for weak nerve transmission with a tremendous re-amplification of the signal by the muscle. Analogous effects have been found in areas of the cerebral cortex deprived of their normal supply of sensory signals, so the effect seems to be quite general.
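
The compensation can be caricatured as a homeostatic gain controller that raises receptor count until some target activity is restored; when the input is cut, only spontaneous noise remains to be amplified. The time constants and targets below are invented for illustration, not physiological values.

    def steady_state_gain(input_level, steps=1000, target=1.0, rate=0.05, noise=0.05):
        receptors = 1.0
        for _ in range(steps):
            activity = receptors * (input_level + noise)    # noise = spontaneous input
            receptors += rate * (target - activity)         # up-regulate when starved
            receptors = max(receptors, 0.0)
        return receptors, receptors * noise                 # final gain, noise-driven output

    print("innervated :", steady_state_gain(1.0))   # modest gain, little noise response
    print("denervated :", steady_state_gain(0.0))   # huge gain; noise alone drives the cell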

In cases of genetically-determined frontal-parietal/temporal imbalance, I conjecture that the input-starved side develops something like denervation supersensitivity, making it prone to autonomous, noise-driven nervous activity.

If the growth excess is in sensory association cortex, this autonomous activity will manifest as hallucinations, giving the picture of schizophrenia. If the growth excess is in prefrontal cortex, the autonomous activity will instead manifest as mania or phobia. Depression may originally have been an adaptation to the presence of a man-eating predator in the neighborhood, but in civilized contexts it can be activated by the unpredictable (to the sufferer) punishments that manic activity brings. If the mania is mild enough to coexist with depression, as in bipolar II disorder, the overall effect of the depressive component may be like a band-aid on the mania.

The non-overgrown association cortex might even secondarily develop the opposite of denervation supersensitivity as a result of continual bombardment by autonomous activity from the other side of the Rolandic fissure. This could account for the hypofrontality commonly observed in schizophrenia.

#44. Sunshine in Covey's Gap [evolutionary psychology, neuroscience]

EP     NE     
Red, theory; black, fact.

Who has not felt frustration at the difficulty, even the seeming impossibility, of making a disagreeable emotion go away before we weaken and act it out, to our detriment? Techniques of true emotional control, i.e., making the bad feelings disappear rather than mounting white-knuckle, open-ended resistance to acting them out, are not impossible, just non-obvious. You have to persuade yourself that the bad thing is actually good, and believe it.

For the modern person, that second part, the believing, is difficult to achieve robustly with religious solutions to the problem (the domain of soteriology, of being "saved"), easier with psychoanalytic solutions, and, I am here to say, easiest of all with scientific solutions. "Believing," for me, means being prepared to bet your life on the truth of a proposition.

Stephen Covey writes in "The 7 Habits of Highly Effective People" that between stimulus and [emotional] response, humans have, somewhat metaphorically, a "gap" in the causal chain, and animals do not. In the gap you find such things as imagination, self-awareness, conscience, and independent will. He correctly lays tremendous emphasis on this point. George Santayana seems to have grasped this truth when he wrote: "Our dignity is not in what we do but in what we understand. The whole world is doing things." [source: Wikiquote, accessed 11-06-2018]

Neuroscientist Joseph LeDoux has even elucidated what could be the neural pathways that make Covey's gap possible. A direct pathway from the thalamus to the amygdala mediates the basic fear response, while an indirect pathway from thalamus to cerebral cortex to amygdala provides a more nuanced, intelligent amendment to the first response. Full cancellation of the direct pathway by the indirect one would account for Covey's gap, and in principle this could be done by a cortical relay through the inhibitory intercalated neurons of the amygdala, which terminate on the amygdalar projection cells.
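
A toy two-pathway calculation shows how full cancellation would work in principle; the delays, weights, and the assumption of simple subtraction are mine, not LeDoux's.

    def amygdala_output(t, stimulus_onset=0, cortical_delay=5, cortical_gain=1.0):
        direct = 1.0 if t >= stimulus_onset else 0.0                     # fast thalamo-amygdalar drive
        cortical = 1.0 if t >= stimulus_onset + cortical_delay else 0.0  # slower cortical appraisal
        inhibition = cortical_gain * cortical                            # relayed via intercalated cells
        return max(direct - inhibition, 0.0)

    for t in range(8):
        print(t, amygdala_output(t))
    # the fear response starts at t = 0 and is cancelled at t = 5, once the
    # cortical appraisal arrives; with cortical_gain < 1 the cancellation
    # (Covey's gap) would be only partial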

The doctrines of classical religion probably achieve such cancellation of emotions like hate and fear by activating the same circuits a parent uses to reassure a needlessly fearful infant.

Apparently, classical religion is all about getting people to do the right things for the wrong reasons. When the discipline of evolutionary psychology is sufficiently developed, we can look forward to the age when people do the right things for the right reasons.