Monday, June 27, 2016

#6. Mental Illness as Communication [neuroscience, genetics]

NE     GE     
Red, theory; black, fact.

The effects of most deleterious mutations are compensated by negative feedback processes occurring during development in utero. However, if the population is undergoing intense Darwinian selection, many of these mutations become unmasked and therefore contribute variation for selection. (Jablonka and Lamb, Evolution in Four Dimensions, MIT Press, 2005)

However, since most mutations are harmful, a purely random process for producing them, with no pre-screening, is wasteful. Raw selection alone can scrub out a mistake that gets as far as being born, at great cost in suffering, only to have the very same random mutation happen all over again the very next day, with nothing learnt. Repeat ad infinitum. This is absurd, and it offends the engineer in me; I like to say that evolution is an engineer. Nowadays, evolution itself is thought to evolve. A simple example of this would be the evolution of DNA repair enzymes, which were game-changers, allowing much longer genomes to be transmitted faithfully to the next generation and resulting in the emergence of more-complex lifeforms.

An obvious, further improvement would be a screening, or vetting process for genetic variation. Once a bad mutation happens, you mark the offending stretch of DNA epigenetically in all the close relatives of the sufferer, to suppress further mutations there for a few thousand years, until the environment has had time to change significantly.

Obviously, you also want to oppositely mark the sites of beneficial mutations, and even turn them into recombinant hot spots for a few millennia, to keep the party going. Hot spots may even arise randomly and spontaneously, as true, selectable epi-mutations. The downside of all this is that even in a hot spot, most mutations will still be harmful, leading to the possibility of "hitchhiker" genetic diseases that cannot be efficiently selected against because they are sheltered in a hot spot. Cystic fibrosis may be such a disease, and as the hitchhiker mechanism would predict, it is caused by many different mutations, not just one. It would be a syndrome defined by the overlap of a vital structural gene and a hot spot, not by a single DNA mutation. I imagine epigenetic hot spots to be much more extended along the DNA than a classic point mutation.
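The sheltering effect can be illustrated with the standard mutation-selection balance of population genetics: a recessive deleterious allele equilibrates at frequency q ≈ √(μ/s), so a hundredfold boost in the local mutation rate μ (a hypothetical hot-spot value; all numbers below are illustrative, not measured) raises the disease-allele frequency tenfold regardless of how strongly selection s pushes back. A minimal sketch:

```python
import math

def eq_freq_recessive(mu, s):
    """Equilibrium frequency of a recessive deleterious allele under
    mutation-selection balance: q ~ sqrt(mu / s)."""
    return math.sqrt(mu / s)

# Ordinary locus: mutation rate ~1e-6 per generation, strong selection (s = 0.5).
q_normal = eq_freq_recessive(1e-6, 0.5)

# Hypothetical hot spot: the same locus with a 100-fold elevated mutation rate.
q_hotspot = eq_freq_recessive(1e-4, 0.5)

# Selection is equally strong in both cases, yet the hot-spot allele
# settles at 10 times the frequency -- it is "sheltered."
print(q_normal, q_hotspot)
```

Selection cannot drive the hot-spot allele any lower because the hot spot keeps regenerating it, which is the sense in which a hitchhiker disease "cannot be efficiently selected against."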

It is tempting to suppose that the methylation islands found on DNA are these hot spots, but the scanty evidence available so far is that methylation suppresses recombinant hot spots, which are generally defined non-epigenetically, by the base-pair sequence.

The human brain has undergone rapid, recent evolutionary expansion, presumably due to intense selection, which would have unmasked many formerly silent deleterious mutations affecting brain development. Since the brain is the organ of behavior, we expect almost all these mutations to indirectly affect behavior for the worse. That explains mental illness, right?

I'm not so sure; mental illnesses are not random, but cluster into definable syndromes. My reading suggests the existence of three such syndromes: schizoid, depressive, and anxious. My theory is that each is defined by a different recombinant hot spot, as in the case of CF, and may even correspond to one of the three recently evolved association cortices of the brain, namely parietal, prefrontal, and temporal, respectively. The drama of mental illness would derive from its role in communication: warning nearby relatives that they may be harbouring a bad hot spot, causing them to find it and cool it by wholly unconscious processes. Mental illness would then be the pushback against the hot spots driving human brain evolution, keeping them in check and deleting them as soon as they are no longer pulling their weight fitness-wise. The variations in the symptoms of mental illness would encode the information necessary to find the particular hot spot afflicting a particular family.

Now all we need is a communication link from brain to gonads. Sperm are produced from the stem-like, perpetually self-renewing spermatogonia by mitotic divisions followed by the two divisions of meiosis. The spermatogonia sit just outside the blood-testis barrier and are therefore exposed to blood-borne hormones. These cells are known to have receptors for the hypothalamic hormone orexin A*, as well as many other receptors for signaling molecules that do, or plausibly could, originate in the brain as orexin does. Some of these receptors are:
  • retinoic acid receptor α
  • glial cell-derived neurotrophic factor (GDNF) receptor
  • CB2 (cannabinoid type 2) receptor
  • p75 (for nerve growth factor, NGF)
  • kisspeptin receptor.

*Joshi D, Singh SK. Localization and expression of Orexin A and its receptor in mouse testis during different stages of postnatal development. Gen Comp Endocrinol. 2016 May 9. doi: 10.1016/j.ygcen.2016.05.006. [Epub ahead of print]

PS: for brevity, I left out mention of three sub-functions necessary to the pathway: an intracellular gonadal process transducing receptor activation into germ line-heritable epigenetic changes, a process for exaggerating the effects of bad mutations into the development of monsters or behavioral monsters for purposes of communication, and a process of decoding the communication located in the brains of the recipients.

Saturday, June 18, 2016

#5. Why We Dream [neuroscience]

NE
Red, theory; black, fact.

The Melancholy Fields

Something I still remember from Psych 101 is the prof's statement that "operant conditioning" is the basis of all voluntary behavior. The process was discovered in lab animals such as pigeons by B.F. Skinner in the 1950s and can briefly be stated as "If the ends are achieved, the means will be repeated." (Gandhi said something similar about revolutionary governments.)

I Dream of the Gruffalo. Pareidolia as dream imagery.

Let's say The Organism is in a supermarket checkout line and can't get the opposite sides of a plastic grocery bag unstuck from each other no matter how it rubs, blows, stretches, picks at, or pinches the bag. At great length, a rubbing behavior by chance happens near the sweet spot next to the handle, and the bag opens at once. Thereafter, when in the same situation, The Organism goes straight to the sweet spot and rubs, for a great savings in time and aggravation. This is operant conditioning, which is just trial-and-error, like evolution itself, only faster. Notice how it must begin: with trying moves randomly--behavioral mutations. However, the process is not truly random like a DNA mutation. The Organism never tries kicking out a foot, for example, when it is the hand that is holding the bag. Clearly, common sense plays a role in getting the bag open, but any STEM-educated person will want to know just what this "common sense" is and how you would program it. Ideally, you want the creativity and genius of pure randomness, AND the assurance of not doing anything crazy or even lethal just because some random-move generator suggested it. You vet those suggestions.

That, in a nutshell, is dreaming: vetting random moves against our accumulated better judgment to see if they are safe--stocking the brain with pre-vetted random moves for use the next day when stuck. This is why the emotions associated with dreaming are more often unpleasant than pleasant: there are more ways to go wrong than to go right. (It is also why my illustrations for this post are melancholy and monster-haunted.) The vetting is best done in advance (e.g., while we sleep) because there is no time in the heat of the action the next day, and trial-and-error with certified-safe "random" moves is already time-consuming without having to do the vetting on the spot as well.

Dreams are loosely associated with brain electrical events called "PGO waves," which begin with a burst of action potentials ("nerve impulses") in a few small brainstem neuron clusters, then spread to the visual thalamus, and then to the primary visual cortex. I theorize that each PGO wave creates a new random move that is installed by default in memory in the cerebral cortex, and is then tested in the inner theater of dreaming to see what the consequences would be. In the event of a disaster foreseen, the move is scrubbed from memory, or better yet, added as a "don't do" to the store of accumulated wisdom. Repeat all night.

If memory is organized like an AI knowledge base, then each random move would actually be a connection from a randomly-selected but known stimulus to a randomly-selected but known response, amounting to adding a novel if-then rule to the knowledge base. Some of the responses in question could be strictly internal to the brain, raising or lowering the firing thresholds of still other rules.
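A toy version of this nightly loop, with made-up stimuli, responses, and a hand-written "don't do" set standing in for accumulated better judgment, might look like the following sketch:

```python
import random

random.seed(0)

stimuli = ["bag stuck shut", "door jammed", "jar lid tight"]
responses = ["rub near handle", "kick out", "twist", "pry with key", "shake hard"]

# Accumulated better judgment: responses already known to end badly.
dont_do = {"kick out"}

knowledge_base = []  # vetted if-then rules, ready for use the next day

def dream_cycle(n_waves):
    """Each simulated PGO wave proposes a random stimulus->response rule;
    the dream 'plays it out' against stored wisdom before keeping it."""
    for _ in range(n_waves):
        stimulus = random.choice(stimuli)
        response = random.choice(responses)
        if response in dont_do:
            # Disaster foreseen: scrub the rule instead of installing it.
            continue
        knowledge_base.append((stimulus, response))

dream_cycle(20)
print(len(knowledge_base), "rules kept for tomorrow")
```

The point of the sketch is only the shape of the loop: generate randomly, simulate, then keep or scrub; the real vetting criterion would of course be far richer than a lookup set.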

In "Evolution in Four Dimensions" [1st ed.] Jablonka and Lamb make the point that epigenetic, cultural, and symbolic processes can come up with something much better than purely random mutations: variation that has been subjected to a variety of screening processes.

Nightmares involving feelings of dread superimposed on experiencing routine activities may serve to disrupt routine assumptions that are not serving you well (that is, you may be barking up the wrong tree).

Thursday, June 9, 2016

#4. My First Theory of Everything (TOE) [physics]

PH
Red, theory; black, fact.

The nucleus around which a TOE will hopefully crystallize.

Alocia and Anaevia

In my first post, I made a case for the existence of absolute space and even suggested that space is some kind of condensate (e.g., a crystal). The divide-and-conquer strategy that has served us so well in science suggests that the next step is to conceptually take this condensate apart into particles. The first question that arises is whether these particles are themselves situated in an older, larger embedding space, or come directly out of spacelessness, i.e., a strange, hypothetical early universe that I call "Alocia," my best Latin for "domain of no space." Going even further back, there would have been "Anaevia," the "domain of no time." Reasoning without time seems even trickier than reasoning without space.

What came before space?

The expansion of our universe suggests that the original, catastrophic condensation event, the Big Bang, was followed by further, slower accretion that continues to this day. However, the resulting expansion of space is uniform throughout its volume, which would be impossible if the incoming particles had to obey the rules of some pre-existing space. If there were a pre-existing space, incoming particles could only add to the exterior surface of the huge condensate in which we all presumably live, and could never access the interior unless our universe were not only embedded in a 4-space, but hyper-pizza-shaped as well. The latter is unlikely because self-attraction of the constituent particles would crumple any hyper-pizza-shaped universe into a hypersphere in short order. (Unless it spins?) Conclusion: the particles making up space probably have no spatial properties themselves, and bind together in a purely informational sense, governed by Hebb's rule. 

Hebb's rule was originally a neuroscience idea about how learning happens in the brain. My use of it here does NOT imply that a giant brain somehow underlies space. Rather, the evolutionary process that led to the human brain re-invented Hebb's rule as the most efficient way of acquiring spatial information. 
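For readers unfamiliar with it, Hebb's rule strengthens a connection in proportion to the correlated activity of the two things it connects. A minimal numerical sketch (the signals are illustrative Gaussian noise, nothing more):

```python
import random

random.seed(1)

def hebbian_weight(signal_a, signal_b, rate=0.1):
    """Hebb's rule: the bond between two sources strengthens in
    proportion to the correlation of their activity."""
    w = 0.0
    for a, b in zip(signal_a, signal_b):
        w += rate * a * b
    return w

# Two perfectly correlated sources (the same noise twice) bind strongly;
# two independent sources leave the bond near zero.
noise = [random.gauss(0, 1) for _ in range(1000)]
other = [random.gauss(0, 1) for _ in range(1000)]

w_corr = hebbian_weight(noise, noise)
w_indep = hebbian_weight(noise, other)
print(w_corr, w_indep)
```

This is the sense in which binding "in a purely informational sense" is possible: correlation alone decides which particles are bound, with no spatial notion anywhere in the rule.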

Hebb's rule pertains to signal sources: how could hypothetical space-forming particles come up with the endless supply of energy required to pump out white noise, waves, etc., 24/7? Answer: these "particles" are the growing tips of time lines, which themselves grow by an energy-releasing accretion process. The chunks that accrete are variable in size or interrupted by voids, so timeline extension has an entropy associated with it that supplies the signals needed by Hebb's rule.

I am well aware of all the space-bound terms in the previous paragraph, supposedly about goings-on in Alocia, the domain of no space; however, I am using models here as an aid to thought, a time-honored scientific technique.

Is cosmological expansion some kind of accretion?

I imagine that Alocia is home to large numbers of space-like condensates, with a size distribution favoring the microscopic, but with a long tail extending toward larger sizes. Our space grows because these mostly tiny pre-fab spaces are continually inserting themselves into it, as soon as their background signal pattern matches ours somewhere. This insertion process is probably more exothermic than any other process in existence. If the merging space happens to be one of the rarer, larger ones, the result would be a gamma ray burst bright enough to be observed at cosmological distances and generating enough pure energy to materialize all the cosmic rays we observe.

The boundary problem

I suspect that matter is annihilated when it reaches the edge of a space. This suggests that our space must be mostly closed to have accumulated significant amounts of matter. This agrees with Hawking's no-boundary hypothesis. The closure need not be perfect, however; indeed, that would be asking a lot of chance. Imperfections in the closure of our universe may take the form of pseudo-black holes: cavities in space that lack fields. If they subsequently acquire fields from the matter that happens to hit them, they could evolve to closely resemble super-massive black holes, and be responsible for nucleating galaxies.

Conclusions

  • Spatial proximity follows from correlations among processes, and does not cause them.
  • Any independence of processes is primordial and decays progressively.
  • The universe evolves through a succession of binding events, each creating a new property of matter, which can be interpreted as leftover entropy.
  • Analysis in the present theoretical framework proceeds by declaring familiar concepts to be conflations of these properties, e.g., time = change + contrast + extent + unidirectional sequence; space = time + bidirectional sequence.

Tuesday, May 31, 2016

#3. AviApics 101 [population, engineering, evolutionary psychology]

PO     EN     EP     
Red, theory; black, fact.

Here, I go into detail about the human population controller introduced in the previous post.

I assume that, like everything in the natural (i.e., evolved) world, it is a masterful piece of engineering, as Leonardo Da Vinci declared.

The way to build an ideal controller is the inverse plant method, in which the controller contains the mathematical inverse of a mathematical model of the system to be controlled. To derive the model, you take the Laplace transform of the system's impulse response function. For populations, a suitable impulse would be the instantaneous introduction of the smallest viable breeding population into an ideal habitat.

What happens then is well known, at least in microbial life forms too simple to already have a controller: unrestrained, exponential population growth as per Malthus, with no end in sight.

This exponential curve is then the impulse response function we need, and its Laplace transform is simple: 1/(S - r), where S is complex frequency and r is the Malthusian constant, that is, the fractional population growth rate per year. The controller is the mathematical inverse, which is even simpler: S - r, applied to the error E, calculated as set point X minus controlled variable Y. The controller output is summed with perturbation P, and the sum becomes Y. The resulting expressions are usually simplified to permit predictions about controller performance, but that is not needed in this discussion.
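A quick numerical check of the inverse-plant idea (with an illustrative r of 2% per year): the controller transfer function S - r cancels the plant transfer function 1/(S - r) at every complex frequency, which is what makes the method "ideal":

```python
# Plant: Malthusian growth, impulse response h(t) = exp(r*t),
# whose Laplace transform is H(S) = 1 / (S - r).
# Inverse-plant controller: C(S) = S - r, so that C(S) * H(S) = 1.

r = 0.02  # illustrative Malthusian constant: 2% growth per year

def plant_H(S):
    return 1.0 / (S - r)

def controller_C(S):
    return S - r

# The controller inverts the plant exactly at every complex frequency tried:
for S in (0.5, 1.0 + 2.0j, -0.3 + 0.1j):
    assert abs(controller_C(S) * plant_H(S) - 1.0) < 1e-12
print("controller exactly inverts plant")
```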

The control effort is E(S - r), which multiplies out as ES - Er. Remember that everything in these expressions has been Laplace transformed, and that ES becomes the time derivative of e when transformed back into the real world. Multiplication by a constant such as r stays multiplication, however. Control effort in the real world is then the rate of change of e minus r times e. (Lowercase variables are the un-transformed versions.) Since e = x - y, and since x is constant, x becomes zero when differentiated and drops out of the derivative term. Control effort is then -dy/dt - er. <Corrected 5 Jun '16.>
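Written out compactly, the derivation in the preceding paragraph is (a transcription of the steps above, nothing new):

```latex
\begin{align*}
\text{effort} &= \mathcal{L}^{-1}\{E(S)\,(S - r)\} = \frac{de}{dt} - r\,e \\
e &= x - y,\quad x \text{ constant} \;\Rightarrow\; \frac{de}{dt} = -\frac{dy}{dt} \\
\text{effort} &= -\frac{dy}{dt} - r\,e
\end{align*}
```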

I theorize that women calculate -dy/dt, and men calculate er. When they get together, the complete population control effort is exerted, resulting in stability, which the world rewards. However, on average, the men and the women will be pulling in opposite directions exactly 50% of the time, if we model population variation as a sine wave centered on the set point.
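The 50% figure can be checked numerically. Modeling the population as y(t) = x + A sin(wt), the female term -dy/dt = -Aw cos(wt) and the male term -re = rA sin(wt) have opposite signs for exactly half of each cycle (the constants below are illustrative):

```python
import math

# Model the population as a sine around the set point: y(t) = x + A*sin(w*t).
# Female term: -dy/dt = -A*w*cos(w*t); male term: -r*e = r*A*sin(w*t).
A, w, r = 1.0, 2 * math.pi, 0.02  # illustrative constants

N = 100_000
opposite = 0
for i in range(N):
    t = i / N  # one full cycle of the population sine wave
    female = -A * w * math.cos(w * t)
    male = r * A * math.sin(w * t)
    if female * male < 0:
        opposite += 1

print(opposite / N)  # ~0.5: the two contributions oppose half the time
```

The product of the two terms is proportional to -sin(2wt), which is negative over exactly half of each cycle, hence the 50%.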

A prediction is that women unconsciously react to evidence of increased birth rate or decreased death rate by wanting fewer children. Men react to excess absolute population relative to set point by violence, and to breathing room under the set point by partying.

That negative sign in front of the male contribution was puzzling at first, until I realized that it must derive from the married state itself, and not from the base male response to population error. This could be the origin of statements such as: "Marriage is the exact opposite of the way you think it will be." 

The level of the noise produced so copiously by small children is probably the signal that women unconsciously integrate to estimate birth rate, and the wailing and long faces following a death probably serve the same purpose for estimating death rate, aided by reading the tabloids. [My (married) older brother once showed me the developmental time course of child noise in the air with his hand, and it looked like an EPSP, the response of a neuron to an incoming action potential. The EPSP is the convolution kernel by which a neuron decodes a rate code.] The men have to calculate absolutes, not rates, however. The male proprietary instinct causes them to divvy up the limiting resource for breeding (jobs in our present society) into quanta that can be paired off with people like pairs of beads on adjacent wires of an abacus. Excess people left over at the end of this operation spells trouble. Politicians are right to worry about jobless rates.
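The bracketed EPSP remark can be made concrete: convolving a spike train with an EPSP-shaped kernel (an alpha function here; all time constants are illustrative) yields a running potential that tracks firing rate, which is the sense in which the EPSP "decodes a rate code":

```python
import math

def epsp_kernel(n_steps, dt=0.001, tau=0.01):
    """Alpha-function EPSP: fast rise, exponential decay, peak of 1 at t = tau."""
    kernel = []
    for i in range(n_steps):
        t = i * dt
        kernel.append((t / tau) * math.exp(1.0 - t / tau))
    return kernel

def summed_potential(spikes, kernel):
    """Convolve a 0/1 spike train with the EPSP kernel: each spike adds
    one EPSP, and the overlapping sum tracks the underlying firing rate."""
    out = [0.0] * len(spikes)
    for i, s in enumerate(spikes):
        if s:
            for j, k in enumerate(kernel):
                if i + j < len(out):
                    out[i + j] += k
    return out

kernel = epsp_kernel(50)
high = summed_potential([1 if i % 5 == 0 else 0 for i in range(500)], kernel)
low = summed_potential([1 if i % 50 == 0 else 0 for i in range(500)], kernel)
print(max(high), max(low))  # the denser spike train drives the potential higher
```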

Saturday, May 28, 2016

#2. The Iatrogenic Conflicts of the Twentieth Century [population, engineering]



The Edwardian era (1901-1911) in small-town Ontario, and La Belle Epoque will soon be over. (From a photo owned by Constance M. Mooney of Ottawa, Canada)


PO     EN     
Red, theory; black, fact.

Medical advances during a turbulent century

In 1911, the anti-syphilis drug salvarsan, invented by Paul Ehrlich, became widely available to the public, at a time when this disease was cutting a wide swath of morbidity and mortality. Three years later, World War I broke out.

In 1937, sulfa drugs, the first broadly effective antibacterials, became available to the public. Two years later, World War II broke out.

In 1945, both penicillin and streptomycin became available to the public, followed in short order by the first mass vaccinations, notably against smallpox. In that decade (1945-1955), the Cold War between the United States and the Soviet Union began. It nearly finished us in 1962, the year of the Cuban Missile Crisis, when a nuclear WW III was narrowly averted.

My conclusions

In the human brain, there is a wholly unconscious controller for population density with a feedback delay of some two to four years, which answers every sudden downtick in the death rate with a brutal, reflexive uptick in mortality. Recently, these downticks in the death rate have been due to advances in medicine, hence my title for this post. "Iatrogenic" means roughly "caused by doctors."

Moreover, last year I noted that the headlines were all about ISIS, an unusually disruptive phenomenon of the Muslim world. I then checked to see what the main preoccupation of the headline-writers had been exactly four years previously. This seemed to be the Arab Spring, when many old governments in the Arab world were being thrown off. I concluded that these regimes had somehow been suppressing population growth.

An engineering model

I began to reason thus: if this controller is real, it should be just as analyzable as Watt's steam-engine governor, using standard engineering approaches. If it has a significant feedback delay, then a perturbation sufficiently rich in high-frequency harmonics (i.e., sufficiently sudden) should drive it briefly into a damped oscillation.
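This claim is easy to demonstrate with a toy delayed-feedback loop (the gain and delay below are arbitrary illustrative values, not fitted to the census data): a step perturbation produces overshoot and ringing that gradually dies away, i.e., a damped oscillation.

```python
# Toy model of a controller whose feedback arrives d years late:
# error[t+1] = error[t] - gain * error[t - d].
# A step perturbation then rings: overshoot followed by decaying swings.

def simulate(gain=0.25, delay=3, years=80):
    e = [1.0] * (delay + 1)  # history: perturbed by +1 for `delay` years
    for _ in range(years):
        e.append(e[-1] - gain * e[-1 - delay])
    return e

trace = simulate()
print(min(trace), max(trace))  # overshoot below zero, then decay toward zero
```

With zero delay the same loop would relax monotonically; it is the delay that converts a sudden perturbation into oscillation, which is the signature looked for in the census data below.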

Evidence for the engineering model

In support of these conclusions, I present the US Census Bureau statistics on the percent growth rate of the human population for the 20th century: international yearly figures aggregated to "World," extended back to 1900 with decade-wise World data from the historical estimates table. At roughly the end of WW II, we see a huge jump in the growth rate, followed by a sharp drop bottoming out in 1960, followed by another sharp peak in 1962, followed by a leveling off superimposed on a gradual decline, the latter possibly due to increasing absolute numbers. This time series could be construed as showing a damped oscillation. See below.


The historical global population growth rate scaled to population.


11-07-2018
My surmise that the post-1964 decline in the plot would disappear if corrected for changing absolute numbers is confirmed by calculation based on US Census Bureau data. See below. Furthermore, the plot shown below appears to level off at 78 million new people per year, which is probably the upper trigger level for the controller. There is probably no formal lower trigger level, making this controller asymmetric. Oscillation begins well before this level is reached, however, reflecting the presence of a differential control term, as discussed in the next post. The sharp upstroke in growth rate that occurs at 1980 may be due to the eradication of smallpox over the decade 1967-1977. The downturn after 1988 was probably due to the AIDS pandemic. The data are coarse-grained before 1950 and do not show the upstrokes in 1911-1914 and 1937-1939 that I would have predicted from the two world wars.

World population growth rates in persons per year with no scaling. Note the reaction in 1960.


Center: a centrifugal speed governor familiar in 1914. The Steam Museum, Kingston, Canada, 2012. 

Wednesday, May 25, 2016

#1. Intro [evolutionary psychology, evolution]

This is the sort of thing I write:

EP       EV      
Red, theory; black, fact.


EP
Religion is the last surviving proto-science (earlier examples: alchemy, astrology).
(Parts cut to Deprecated page, Part 2.)

***
EV
The eukaryotic cell arose from a clonal array of prokaryotes that selectively lost some of its internal partition walls while following the colony path to complexity. The remaining partitions gave rise to the internal membrane systems of present-day eukaryotes. Those prokaryote colonists specializing in chemiosmotic processes such as oxidative phosphorylation and photosynthesis could not lose any of their delimiting walls because of the need to maintain concentration gradients, so they remain bacterium-like in morphology to this day. This is an alternative to the phagocytotic theory of the origin of mitochondria and chloroplasts. Modern blue-green algae (cyanobacteria) genetically resemble the DNA in chloroplasts, and modern aerobic bacteria genetically resemble the DNA in mitochondria, but this is not necessarily differential support for the phagocytosis theory. The resemblances can be accounted for by convergent evolution, or by an ancestor common to both the modern organisms and the ancient colony formers I suppose here.

11-15-2017
These prokaryote colonies would have originally reproduced by sporulation, not mitosis, which would have come later. The "spores" would be actively metabolizing prokaryotes and, before growing into further colonies, would be subject to natural selection. In the spore phase, the rapid evolvability of typical prokaryotes would have been recovered, allowing the formation of large, slow-growing colonies without sacrificing the high evolvability of the original solitary prokaryotes. Modern-day eukaryotes often secrete tiny bodies called exosomes containing all the macromolecules of life. Exosomes may be the evolutionary vestige of the sporulation phase of the original eukaryotes.