Tuesday, October 19, 2021

#73. How Enhancers May Work [Biochemistry]

 CH

Red, theory; black, fact.


Background about enhancers

Enhancers are stretches of DNA that, when activated by second messengers such as cyclic AMP, increase the activity of specific promoters in driving the transcription of certain genes, whose messenger RNAs are then translated into protein. Enhancers are known for causing post-translational modification of the histones associated with them. Typically, lysine side chains on histones are methylated, doubly methylated, triply methylated, or acetylated, and serines are phosphorylated. In general, phosphorylation condenses chromatin, and acetylation expands it and activates it for transcription. Methylation increases positive electric charge on the histones, acetylation decreases positive charge, and phosphorylation increases negative charge. The enhancers of a promoter are usually located far from it as measured along the DNA strand, and can even be on different chromosomes ("in trans").

The mystery of enhancer–promoter interaction

How the distant enhancer communicates with its promoter is a big mystery. The leading theory is that the enhancer goes and sticks to the promoter, and the intervening length of DNA sticks out of the resulting complex as a loop. This is the "transcription hub" theory. 

My electrostatic theory of enhancer–promoter interaction

I propose a far different mechanism: when activated, the multiple enhancers cause modifications of their associated histones that place the same electric charge on all of them, a charge that is also of the same sign as the charge on the promoter region. Mutual electrostatic repulsion of all these regions then expands the chromatin around the promoter. This effect reduces the fraction of the time during which RNA polymerase II cannot move down the DNA strand because unrelated chromatin loops are in the way, like trees fallen across railway tracks. (Each "tree" eventually moves away because of Brownian motion.) This could also be the mechanism of chromatin decondensation generally, which is known to be a precondition for the expression of protein-coding genes.
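
A toy numerical sketch of this expansion-by-repulsion step (the charge number, the two radii, and the assumption of uniformly placed point charges are all arbitrary illustrations, not measurements):

```python
# Toy sketch: total electrostatic energy of N like charges confined to a
# sphere of radius R, for a "condensed" vs an "expanded" chromatin region.
# All numbers are illustrative assumptions.
import numpy as np

K = 8.988e9          # Coulomb constant, N*m^2/C^2
E = 1.602e-19        # elementary charge, C

def coulomb_energy(n_charges, radius_m, rng):
    """Total pairwise Coulomb energy of n like charges placed at random
    positions inside a sphere of the given radius (metres)."""
    pts = []
    while len(pts) < n_charges:                     # rejection sampling
        p = rng.uniform(-radius_m, radius_m, 3)
        if np.dot(p, p) <= radius_m**2:
            pts.append(p)
    pts = np.array(pts)
    energy = 0.0
    for i in range(n_charges):
        for j in range(i + 1, n_charges):
            r = np.linalg.norm(pts[i] - pts[j])
            energy += K * E * E / r
    return energy

rng = np.random.default_rng(0)
condensed = coulomb_energy(200, 50e-9, rng)     # 200 charges in a 50-nm blob
expanded  = coulomb_energy(200, 200e-9, rng)    # same charges, 200-nm blob
print(f"condensed: {condensed:.3e} J, expanded: {expanded:.3e} J")
# The expanded arrangement has roughly one quarter of the repulsive energy,
# so mutual repulsion does favour decondensation in this toy picture.
```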

05-28-2022: The mutual electrostatic repulsion of enhancers does not necessarily accomplish decondensation directly, but may do so indirectly, by triggering a cascade of alternating chromatin expansions and histone modifications. Furthermore, this cascade is not necessarily deterministic. These ideas predict that raising the ionic strength in the nuclear compartment, which would tend to shield charges from each other, should inhibit gene activation. Achieving this manipulation will require genetic knockout of osmolarity-regulating genes.
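
The screening part of this prediction can be made semi-quantitative with the standard Debye-length formula for an electrolyte; the ionic strengths below are merely illustrative:

```python
# Sketch: Debye screening length vs ionic strength, from the standard
# electrolyte formula, evaluated for room-temperature water.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R = 80.0          # relative permittivity of water (assumed)
KB = 1.381e-23        # Boltzmann constant, J/K
T = 298.0             # temperature, K
NA = 6.022e23         # Avogadro's number, 1/mol
E = 1.602e-19         # elementary charge, C

def debye_length_nm(ionic_strength_mM):
    """Debye length (nm) for a 1:1 electrolyte of the given ionic strength."""
    I = ionic_strength_mM           # mmol/L happens to equal mol/m^3
    lam = math.sqrt(EPS_R * EPS0 * KB * T / (2.0 * NA * E**2 * I))
    return lam * 1e9

for mM in (50, 150, 300, 600):
    print(f"{mM:4d} mM  ->  Debye length ~ {debye_length_nm(mM):.2f} nm")
# Roughly 1.4 nm at 50 mM down to ~0.4 nm at 600 mM: raising ionic strength
# shortens the range of electrostatic repulsion between modified histones,
# which is the sense in which the knockout experiment would test the theory.
```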




Monday, September 13, 2021

#72. Why There is Sex [evolution]

EV

Red, theory; black, fact.

The flower Coronilla varia L.

Sex is an evolvability adaptation

There are always two games in town: reproduction and evolution. Since we live on an unstable planet where the environment can change capriciously, species here have been selected for rapid evolvability per se, to enable them to adapt to occasional rapid environmental changes and not go extinct. Apparently, mutations, the starting point for evolutionary adaptation, become more common when the organism is stressed, and stress may partly be a forecast of loss of fertility due to a developing genome-environment mismatch. Bacteria undergo the large-scale genetic change of transformation under stress conditions, and three types of stress all increased the meiotic recombination rate of fruit flies (Zhong W, Priest NK. Stress-induced recombination and the mechanism of evolvability. Behav Ecol Sociobiol. 2011;65:493-502). Recombination can involve unequal crossing-over, in which changes in gene dose occur through gene duplication or deletion. However, since most mutations are deleterious (there are more ways to do something wrong than to do it better), many mutations will also reduce fertility, and at precisely the wrong moment: when a reduction in fertility is already impending due to environmental change. The answer was to split the population into two halves, the reproduction specialists and the selection specialists, and to remix their respective genomes at each generation.

The roles of the two sexes

Females obviously do the heavy lifting of reproduction, and males seem to be the gene testers. So if a guy gets a bad gene, so long, and the luckier guy next to him then gets two wives. The phenomenon of greater male variability (Wierenga LM, Doucet GE, Dima D, Agartz I, Aghajani M, Akudjedu TN, Albajes‐Eizagirre A, Alnæs D, Alpert KI, Andreassen OA, Anticevic A, et al.; Karolinska Schizophrenia Project (KaSP) Consortium. Greater male than female variability in regional brain structure across the lifespan. Hum Brain Mapp. 2020. I have never seen so many authors on a paper: 160.) suggests that mutations have greater penetrance in males, as befits the male role of cannon fodder/selectees. What the male brings to the marriage bed, then, is field-tested genetic information. Male promiscuity can therefore be seen as a necessary part of this system, because it allows many mutations to be field-tested with minimal loss of whole-population fertility, the females being the limiting factor in population fertility.

Chromosomal mechanisms of greater male variability

Chromosomal diploidy may be a system for sheltering females from mutations, assuming that the default process is for the phenotype that develops to be the average of the phenotypes individually specified by the paternal and maternal chromosome sets. Averaging tends to mute the extremes. The males, however, may set up a winner-take-all competition between homologous chromosomes early in development, with inactivation of one of them chosen at random. The molecular machinery for this may be similar to that of random X-inactivation in females. The result will be greater penetrance of mutations through to the phenotype and thus greater male variability.

Quantitative prediction

This reasoning predicts that, on a given trait, male variability (as standard deviation) will be 41% greater than female variability, a testable prediction: if each parental chromosome set contributes an independent effect with standard deviation σ, averaging the two sets gives σ/√2 while winner-take-all expression gives σ, and (√2 − 1) × 100% = 41%. Already in my reading I have found a figure of 30%, which is suggestive.
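
A quick Monte Carlo check of the √2 factor, assuming a simple additive model in which each parental chromosome set contributes an independent, normally distributed effect:

```python
# Sketch: females average the two parental chromosome-set effects, while
# males express one set chosen at random (winner-take-all).  The additive,
# normally distributed model of chromosome-set effects is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
maternal = rng.normal(0.0, 1.0, n)    # effect specified by the maternal set
paternal = rng.normal(0.0, 1.0, n)    # effect specified by the paternal set

female_trait = 0.5 * (maternal + paternal)             # averaging
pick = rng.integers(0, 2, n)                            # random inactivation
male_trait = np.where(pick == 0, maternal, paternal)    # winner-take-all

ratio = male_trait.std() / female_trait.std()
print(f"male SD / female SD = {ratio:.3f}")    # ~1.414, i.e. ~41% greater
```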

Now all I have to do is reconcile all this with the laws of Mendelian inheritance. 

Mechanistic reconciliation with Mendel's laws

09-16-2021: This reconciliation seems to require an exemption mechanism built into the postulated chromosome inactivation process that operates on genes present in only one copy per parent. The effect of this mechanism will be to double the penetrance of dominant alleles at that gene. Therefore, in males, at single-copy genes, evolution of the machinery of sex is driven by the favorable mutations.

A lovers' heart drawn in dust






Sunday, July 25, 2021

#71. The Checkered Universe [Physics]

 PH

Red, theory; black, fact.


Natty dog/epiphany



The basic theoretical vision

☯This is a theory of everything based on a foam model. The foam is made up of two kinds of "bubbles," or "domains": "plus" and "minus." Each plus domain must be completely surrounded by minus domains, and vice versa. Any incipient violation of this rule, call it the "checkerboard rule," causes domains of like type to fuse instantly until the status quo is restored, with release of energy. The energy, typically electromagnetic waves, radiates as undulations in the plus-minus interfaces. The result of many such events is progressive enlargement and diversification of domain sizes. This process, run backward in time, appears to result in a featureless, grey nothingness (imagining contrasting domain types as black and white, inspired by the Yin-and-Yang symbol), thereby giving a halfway-satisfying explanation of how nothing became something. <03-12-2022: Halfway, because it’s an infinite regress: explaining the phenomenon in terms of the phenomenon, and a hint that I am not out of the box yet. Invoking progressively deepening shades of gray in forward time, to gray out and thus censor the regress encountered in backward time, looks like a direction to explore.> I assume a Law of Conservation of Nothingness, whereby plus domains and minus domains must be present in equal amounts, although this can be violated locally. This law is supposed to be at the origin of all the other known conservation laws. The cosmological trend favors domain fusion, but domain budding is nevertheless possible given enough energy.

The givens assumed for this theory of everything.
Since there are givens, it is probably not the real theory of everything
but rather a simplified physics. (But maybe a stepping stone?)


The dimensionality question

The foam is infinite-dimensional. Within the foam, interfaces of all dimensionalities are abundant. Following Hawking, I suggest that we live on a three dimensional brane within the foam because that is the only dimensionality conducive to life. The foamy structure of the large-scale galaxy distribution that we observe thus receives a natural explanation: these are the lower-dimensional foam structures visible from within our brane. The interiors of the domains are largely inaccessible to matter and energy. <03-12-2022: We have an infinite regress again, this time toward increasing dimensionality: we never get to the bulk. Is it time to censor again and postulate progressively lessened contrast with greater dimensionality, and asymptotic to zero contrast? No; the foam model implies a bulk and therefore a maximum dimensionality, but not necessarily three. But what is so special about this maximum dimensionality? Let us treat yin-yang separation as an ordinary chemical process and apply the second law of thermodynamics to see if there is some theoretical special dimensionality. Assuming zero free energy change, we set enthalpy increase (“work”) equal to entropy increase (“disorder”) times absolute temperature. Postulating that separation work decreases with dimensionality and the entropy of the resulting space foam increases with dimensionality, we can solve for the special dimensionality we seek. The separation process has no intrinsic entropy penalty because there are no molecules at this level of description. The real, maximum dimensionality would be greater than theoretical to provide some driving force, which real transformations require. However, is the solution stable? Moreover, the argument implies that temperature is non-zero.><06-26-2022: Temperature is here the relative motion of all the minute, primordial domains. This could be leftover separation motion. How could all this motion happen without innumerable checkerboard-rule violations and thus many fusion events? Fusion events can be construed as interactions, and extra dimensions, which we have here, suppress interactions. More on this below.><02-07-2023: That said, the idea of primordial infinite dimensionality remains beguiling in its simplicity and possibilities.>
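
The shape of that calculation can be sketched numerically, although the functional forms assumed below for separation work and foam entropy are pure placeholders, chosen only so that the two curves cross once:

```python
# Sketch only: solve "separation work = T x foam entropy" for dimensionality d.
# The functional forms are illustrative placeholders (work assumed to fall as
# 1/d, entropy assumed to rise as log(1 + d)); the point is the shape of the
# calculation, not the particular number it prints.
import math

W0, S0, T = 10.0, 1.0, 1.0              # arbitrary units

def gap(d):
    work = W0 / d                       # separation work, decreasing with d
    entropy = S0 * math.log(1.0 + d)    # foam entropy, increasing with d
    return work - T * entropy

lo, hi = 0.1, 1000.0                    # gap > 0 at lo, gap < 0 at hi
for _ in range(200):                    # simple bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)

print(f"crossing at d ~ {0.5 * (lo + hi):.2f} under these assumptions")
```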

12-23-2021: Since infinite dimensionality is a bit hard to get your mind around, let us inquire what is diminished upon increasing the dimensionality, and just set it to zero to represent infinite dimensionality. Some suggestions: order, interaction, and correlation. To illustrate, imagine two 2-dimensional Ardeans* walking toward each other. When they meet, one must lie down so the other can walk over them before either can continue on their way. That's a tad too much correlation and interaction for my taste.

03-03-2022: I suppose that as dimensions are added, order, correlation, and interaction decrease toward zero asymptotically. This would mean that 4D is not so different from 3D as 3D is from 2D. The latter comparison is the usual test case that people employ to try to understand extra dimensions, but it may be misleading. However, in 4D, room-temperature superconductivity may be the rule rather than the exception, due to extradimensional suppression of the interactions responsible for electrical resistance. The persistent, circulating 4D supercurrents, understood as travelling electron waves, may look like electrostatic fields from within our 3-brane, which, if true, would help to eliminate action-at-a-distance from physics. Two legs of the electron-wave circulation would travel in a Direction We Cannot Point. These ideas also lead to the conclusion that electrostatic fields can be diffracted. Bizarre, perhaps, but has anyone ever tried it? <03-21-2022: Yes, they have, and it is the classical electron diffraction experiment. The electrons are particles and are therefore not diffracted; they are accelerated in an electrostatic field that is diffracted, thereby building up a fringe pattern on the photographic plate. The particles, then, are just acting here as field tracers. Slight difficulty: neutrons can also be diffracted. A diffraction experiment requires that they move, however, so read on.>

Still to be explained: Newton's first law (i.e., inertia and motion).


How to accommodate the fact of motion

08-06-2021: Motion can be modelled as the array of ripples that spreads across the surface of a pond after a stone is thrown in. A segment of the wave packet coincides with the many minute domains that define the atoms of the moving object, and moves them along. The foam model implies surface tension, whereby increases in interface area increase the total energy of the domain. If the brane is locally thrown into undulations, this will increase the surface area of the interface and thus the energy. This accounts for the kinetic energy of moving masses. Momentum is the rate of change of kinetic energy with velocity, and inertia is the rate of change of momentum with velocity.
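
For reference, the last sentence is just the usual Newtonian bookkeeping:

```latex
% Standard Newtonian relations (added for reference): kinetic energy,
% momentum, and inertia as successive derivatives with respect to velocity.
\[
E_k = \tfrac{1}{2} m v^{2}, \qquad
p = \frac{dE_k}{dv} = m v, \qquad
m = \frac{dp}{dv}.
\]
```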

03-08-2022: Brane surface tension would be a consequence of the basic yin-yang de-mixing phenomenon, because increases in interfacial area without volume increase (interface crumpling) can be construed as incipient re-mixing, which would go against the cosmological trend. Thus, the interface always tends to minimum area, as if it had surface tension. <03-04-2023: this tension provides the restoring force that one needs for an oscillation, which is important because waves figure prominently in this theory. However, a wave also needs the medium to have inertia, or resistance to change, and where is that in the present theory? It can be introduced in the form of a finite speed of light. For example, the permeability of free space, related to inductance, a kind of electrical inertia, can be expressed in terms of the speed of light and the permittivity of free space, related to capacitance, which inversely expresses a kind of electrical restoring force.>


Dark matter and how to introduce gravity into this theory

08-14-2021: Gravity is being difficult here. I don't want to replace it with a bunch of spiral branewaves, and I don't know why it has an inverse-square power law. Let's drill down on this: when two interstellar dust grains collide, some kinetic energy is converted to heat energy, which radiates away. Without this process, called inelastic collision, the gravitational accretion of mass, and thus gravity itself, will not be observed. (By the way, one illusion that we may need to shed to make progress is that the convex spaces of the universe, such as the spaces occupied by stars, planets, and galaxies, are fundamental and the negative, concave, between-spaces are just meaningless pseudo-structures, which are called "spandrels" by evolutionary theoreticians. But could it be the other way around? In this, I am using the word "space" in an architectural sense.)

08-27-2021: Eureka! Let us suppose that each visible astrophysical object is surrounded by an invisible atmosphere-like structure consisting of mid-sized domains (larger than atomic scale but smaller than intergalactic voids). This could be dark matter. Let us further assume that minimizing the total interfacial area of this structure leads to sorting according to domain size, resulting in a gradient of domain sizes that places the smallest in the center. Therefore, lifting an object off the visible surface necessarily disturbs this minimum-energy structure, requiring an input of energy. This requirement would be the gravitational potential energy of an elevated object. The exact power law remains unexplained, but I think these assumptions bring us much closer to an explanation.

12-29-2021: Ordinary matter would be distinguished from dark matter by the ordinary domains being full of standing waves that store the energy of many historical merging events. The antinodes in the standing wave pattern would be the regular array of atoms thought to make up a crystal (most solid matter is crystalline). The facets of the crystals would correspond to the domain walls. 

<02-10-2023: The waves could be confined inside the ordinary domains by travelling across our 3-brane at right angles, strictly along directions we cannot point. However, something has to leak into the 3-brane to account for electrostatic effects.><02-12-2023: Crossing the 3-brane perpendicularly is possible by geometry if each particle is a hypersphere exactly bisected by the 3-brane, and mass-associated waves travel around the hypersphere surface.><02-14-2023: Presumably, the neutron produces no leakage waves, which could be assured by the presence of a nodal plane coinciding with the spherical intersection of the particle hypersphere with the 3-brane. Electrons and protons could emanate leakage waves, a possibility that suggests the origins of their respective electric fields. However, the fact that these particles have stable masses means that waves must be absorbed from the 3-brane as fast as they exit, meaning that an equilibrium with space must exist. For an equilibrium to be possible, space must be bounded somehow, which is already implied by the foam model. Since we know of only two charged, stable particles, two equilibrium points must exist. This scenario also explains why all electrons are identical and why all protons are identical. If their respective leakage waves are of different frequencies, the two particle types could equilibrate largely independently by a basic theorem of the Fourier transform.><02-19-2023: Particles of like charge would resonate with each other's leakage wave, resulting in a tendency to be entrained by it. This would account for electrostatic repulsion. Particles of opposite charge would radiate at different frequencies and therefore not mutually resonate, leading to no entrainment. However, since each particle is in the other's scattering shadow, it will experience an imbalanced force due to the shadow, tending to push it toward the other particle. This effect could explain electrostatic attraction.><03-14-2023: Gravity may also be due to mutual scatter shadowing, but involving a continuum spectrum of background waves, not the resonant line spectra of charged particles. Note that background waves are not coupled to any domains, and so do not consist of light quanta, which, according to the present theory, are waves coupled to massless domain pairs.><04-21-2023: The bisected hypersphere particle model predicts that subatomic particles will appear to be spheres of a definite fixed radius, with an effective volume 5.71× greater than expected from the same radius of a sphere in flat space (5.71 = 1 + 1.5π). Background waves that enter the spherical surface will therefore be slow to leave, a property likely to be important for physics.>

03-08-2022: There may be a very close, even mutualistic, relationship between domain geometry and interface waves, all organized by the principle of seeking minimum energy. Atomic nuclei within the crystal could be much tinier domains, also wave-filled but at much shorter wavelengths. The nuclear domains would sit at exactly the peaks of the electron-wave antinodes because these are the only places where the electron waves have no net component in the plane of the interface. Most particles will have the following structure as modelled with reduced dimensionality: a pair of prisms joined at the end-faces, the joint plane coinciding with the 3-brane of our world. Mass is standing waves in the joint plane. One prism is a plus domain and the other a minus domain. Both project out of our 3-brane into cosmologically-sized domains of opposite type. 09-04-2022:  Neutrinos may violate this pattern if they are unpaired domains completely surrounded by the opposite type of space and having no obligatory presence in our 3-brane. This would explain the weakness of their interaction with other types of matter and the existence of more than one type of neutrino. This picture predicts two types but we know three exist, a difficulty for this theory.


Motion, friction, and the cosmological redshift

09-11-2021: How could anything besides waves move from place to place without violating the checkerboard rule? I postulate that domains in front of the moving object are being triggered to merge, with release of wave energy (which radiates away), and domains in the rear are being re-split to restore the status quo. The energy needed for re-splitting will come from the brane-wave packet attached to the object. This accounts for the slowing of moving objects due to friction. However, the moon orbits the Earth apparently without friction and yet is inside the Earth's gravitational field and thus dark-matter structure, and thus must obey the checkerboard rule. My solution is to point out that the moon's motion is not really friction-free because interplanetary space is not really a vacuum, but contains 5 particles of plasma per cubic centimeter. I postulate that each such particle sits at a 0-brane within a space foam made of mid-sized domains.

09-21-2021: This feature ensures the prima facie equivalence of the present theory with conventional accounts of how friction happens, thereby helping the theory pass the test of explanation. However, friction in the absence of a detectable medium made of conventional matter appears to remain a theoretical possibility.

09-22-2021: Dark-matter friction could progressively slow down electromagnetic oscillations, resulting in the cosmological red shift. The waste heat from this process may account for the microwave background radiation.

12-13-2022: Alternatively, space may really be expanding. As in the previously published brane-collision theory, the Big Bang may have been due to contact between two cosmologically sized domains of four spatial dimensions and opposite type, and our 3-brane is the resulting interface. The matter in our universe would originate from the small domains caught between the merging cosmological hyper-domains. This could account for the inflationary era thought to have occurred immediately after the Big Bang. The subsequent linear expansion of space may be due to the light emitted by the stars; if light is an undulation in the 3-brane along a large extra dimension, then light emission creates more 3-brane all the time, because an undulating brane has more surface area than a flat one.

* "Overview of Planiverse" page.


Wednesday, June 30, 2021

#70. How Noncoding RNA May Work [chemistry]

 CH

Red, theory; black, fact.

Black, DNA; red, long noncoding RNA; green, transcription complex. I am assuming that transcription initiation involves a DNA-RNA triplex, but this is not essential to the theory. Imagine that a loop closes through an RNA running from bottom to top.

No junk DNA?

The junk-DNA concept is quite dead, killed by the finding that the noncoding sections (sections that do not specify functional proteins) have base-pair sequences that are highly conserved in evolution and are therefore doing something useful.

What does long non-coding RNA do?

This DNA is useful because the RNA transcripts made from it are useful, serving as controllers of the transcription process itself and thus, indirectly, of protein expression. (Changes in protein expression may be considered the immediate precursor of a cell's response to its environment, analogous to muscle contractions in an intact human.) Small noncoding RNAs seem to be repressors of transcription and long noncoding RNAs (lncRNA) may either repress or promote. (lncRNA molecules also control chromatin remodeling, but this may be an aspect of stem-cell differentiation during development.) Despite the accumulation of much biochemical information, summaries of what lncRNA seems to do for the cell have been vague, unfocussed, and unsatisfactory (to me).

Control of gene expression: background

The classical scheme of protein expression, due to Jacob and Monod, was discovered in bacteria, in which a signal molecule from the environment (lactose in the original discovery) acts by binding to a protein to change its conformation (folding pattern). The changed protein loses the ability to bind to DNA upstream from the sequence that specifies the lactase enzyme, where it normally acts to block transcription. The changed protein then desorbs from DNA, which triggers transcription of lactase messenger RNA, which is then translated into lactase enzyme, which confers on the bacterium the ability to digest lactose. Thus, the bacterium adapts to the availability of this food source.

Since I have a neuroscience degree, I naturally wonder if all this can be modelled in neurobiological terms. Clearly, it's a reflex, comparable to the spinal reflexes in vertebrates. An elementary sensorium goes in, and an elementary response comes out. But vertebrates also have something higher than spinal reflexes: operations by the brain. (Don't worry, I am not about to go off the deep end on you.)

My neuron-inspired theory of long non-coding RNA

I propose a coordinating role for the noncoding RNAs: rather than relying on a bunch of independently acting reflexes, eukaryotic cells can sense many promoter signals at once, as a gestalt, and respond with the expression of many proteins at once, as another gestalt. You do not need an entire brain to model this process, just one neuron. The synaptic inputs to the dendrites of the neuron can model the multiple promoter activations, and the eventual output of a nerve impulse (action potential) can represent the signal to co-express a certain set of proteins, which is hard-wired to that metaphorical neuron by axon collaterals. In real neurons, action potentials are generated by a positive feedback between membrane depolarization and activation of the voltage-gated sodium channel, which causes further depolarization, so around we go. This positive-feedback loop can be translated into molecular-biology terms by supposing a cyclic, autocatalytic pattern of lncRNA transcription, in which each lncRNA transcript in the cycle activates the enhancer (which is like a promoter) of the DNA of the next lncRNA in the cycle. The neuron model suggests that the entire cycle has a low level of baseline activity (is "constitutively active" to some extent), but the inhibitory effect of the small noncoding RNAs (analogous to what is called the rheobase current in neurons) suppresses explosive activation. However, when substantially all the promoters in the cycle are activated simultaneously, such explosive transcription does occur. The messenger RNA of the proteins to be co-expressed as the coordinated response is generated as a co-product of lncRNA hyper-transcription, and the various DNA coding regions involved do not have to be on the same chromosome.
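
A minimal dynamical sketch of this cycle (four lncRNAs in a ring, sigmoidal activation of each enhancer by the previous transcript, first-order decay; all parameter values are arbitrary) shows the intended threshold behavior:

```python
# Sketch: a ring of four lncRNA species, each activating transcription of the
# next through a sigmoidal (Hill) response, with first-order decay and a small
# basal rate.  Parameters are arbitrary; the point is the all-or-none switch.
import numpy as np

B, G, D, K = 0.05, 1.0, 1.0, 0.5      # basal rate, loop gain, decay, threshold

def hill(x):
    """Sigmoidal activation of the next enhancer in the cycle."""
    return x**2 / (K**2 + x**2)

def run(initial_level, steps=4000, dt=0.01):
    x = np.full(4, initial_level)            # four lncRNAs, same starting level
    for _ in range(steps):
        drive = hill(np.roll(x, 1))          # each species driven by the previous one
        x = x + dt * (B + G * drive - D * x)
    return x.mean()

print(f"weak, scattered activation     -> settles at {run(0.15):.2f}")
print(f"simultaneous strong activation -> settles at {run(0.40):.2f}")
# Below the threshold the cycle relaxes back to baseline (~0.07); above it the
# positive feedback drives the whole cycle to a high-transcription state
# (~0.73), the analogue of the neuron firing.
```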


Sunday, May 23, 2021

#69. Storming South [Evolution]

EV

Red, theory; black, fact.

This is a theory of the final stages of human evolution, when the large brain expansion occurred.
At least, they did. Sorry, I don't belong to this species. The Linnaean binomial literally means "wise man." What would be the Latin for "wise guy"?

Homo sapiens: created by ice

H. sapiens appears to have arisen from Homo erectus over the last 0.8 million years due to climate instability in the apparent origin area, namely East Africa. During this time, Europe was getting glaciated every 0.1 million years because of the astrophysical Milankovitch cycle, a rhythm in the amount of eccentricity in the Earth's orbit due to the influence of the planet Jupiter.
However, I am thinking of the hominins who had settled in Europe (or Asia; it doesn't matter for this argument) during the interglacial periods (remember that H. erectus was a great disperser) and who, when the ice began advancing again, faced much worse cooling and drying than in Africa, and thus much greater selection pressures. At least during the last continental glaciation, the ice cap extended only to the Baltic Sea at its maximum, but long before the ice arrives, the land turns to tundra, which can support only a very thin human population. In any given glaciation, the number of souls per hectare the land could support was relentlessly declining in northern Europe/Asia, and eventually the residents had to get out of Dodge City and settle on land further south, almost certainly over the dead bodies of the former owners. This would have selected Europeans or Asians for warlike tendencies and war-fighting skills, which explains a lot of human history.

Our large brains

However, our large brains seem to be great at something else besides concocting Games of Thrones: that is, environment modification. It's a no-brainer that the first thing someone living in the path of a 2-km wall of ice needs is to keep from freezing to death, and this would have been the first really good reason to modify environments. Unlike chipping a stone axe, environment modification involves fabricating something bigger than the fabricator. Even a parka has to be bigger than you or you can't get into it. This plausibly would have required a larger brain to enable a qualitatively new ability: making something you can't see all at once when it is at working distance.

Our rhythmic evolution

After parkas, early northerners might have evolved enough association cortex (maybe on the next glaciation cycle) to build something a little bigger, like a tent or a lean-to. On the next cycle, they might have been able to pull off a decent longhouse made of wattle. On the next, a jolly good castle surrounded by cultivated lands and drainage ditches. These structures would have delayed the moment of decision when you have to go and take on the Pleistocene-era Low-brows to the south. This will buy you time to build up your numbers, and I understand that winning battles is very much a numbers game. Therefore, environment modification skill would have been selected for in tandem with making like army ants.

Where is the fossil evidence for this theory?

Why do we not find fossil evidence of all this in Europe or Asia? <05-19-2022: Actually, we do: the Neanderthals and Denisovans, who have been difficult to account for in terms of previous theories of human origins.> My scenario can be defended against the inconvenient fossil evidence for a human origin in East Africa in general terms, by citing the well known incompleteness of the fossil record and its many biases, but, of course, I want details. Note, however, what else is in East Africa: the Suez, a land bridge to both Europe and Asia via the Arabian tectonic block, which was created by plate tectonics near the end of the Miocene, thus antedating both H. sapiens and H. erectus. Not only can hominins disperse through it to other continents during interglacials, but they can come back in, fiercer and brainier than before, when the ice is advancing again, to then deposit their fossil evidence in the Rift Valley region of East Africa. The Eurasian backflow event of 3000 years ago may be a relatively recent example of this. The Isthmus of Suez is low-lying and thus easily drowned by the sea, but the probability of this was minimal at times of continental glaciation, when sea levels are minimal. I assume that early hominins expanded like a gas into whatever continent they could access. Increasing glaciation/tundrafication of that continent would have recompressed the "gas" southward, causing it to retrace its path, partly back into Africa. 

Pleistocene selection pressures

To reiterate, this process would have been accompanied by great mortality and therefore, potentially, much selection. Moreover, during the period we are considering, temperatures were declining most of the time; the plot of temperature versus time has a saw-tooth pattern, with long declines alternating with short warming events, and it is the declines that would have been the times of natural selection of hominins living at high latitudes.

Plebius sapiens.

A limestone block in Canada showing scratches left by stones
embedded in the underside of a continental glacier.
The rock has also been ground nearly flat by the same process. Scary.

Glaciated boulder by night. Have a nice interglacial.


Sunday, December 6, 2020

#68. Consciousness is Google Searches Within Your Brain [neuroscience]

NE

Red, theory; black, fact.


The Google search is one of those tricks that are too good for Nature to miss (TGTNM), and she didn't miss it: the result is called consciousness.


Brain mechanism of consciousness

I conjecture that the human brain launches something like a Google search each time an attentional focus develops. This is not necessarily a literal focus of activity on the cortex; it is almost certainly a sub-network activation. The sub-net activity relays through the prefrontal cortex and then back to sensory cortex, where it activates several more sub-nets; each of these, in turn, activates further sub-nets via the prefrontal relay, and so on, exponentially. At each stage, however, the degree of activation declines, thereby keeping the total cortical activation limited.
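
A back-of-the-envelope sketch of why the total stays bounded despite the exponential fan-out (the branching factor and per-generation decay below are arbitrary): as long as the product of branching and decay is less than one, the total activation is a convergent geometric series.

```python
# Sketch: spreading activation with per-generation decay.  Each active
# sub-network recruits BRANCH further sub-networks through the prefrontal
# relay, each at DECAY times its own activation.  The numbers are arbitrary;
# the point is that total activation stays bounded when BRANCH * DECAY < 1,
# even though the number of recruited sub-nets explodes.
BRANCH = 3        # sub-nets recruited per active sub-net (assumption)
DECAY = 0.25      # fraction of activation handed to each recruit (assumption)

activation, count, total = 1.0, 1, 0.0
for generation in range(12):
    total += count * activation
    print(f"gen {generation:2d}: {count:>6} sub-nets at {activation:.4f} each")
    count *= BRANCH
    activation *= DECAY

print(f"total cortical activation after 12 generations: {total:.3f}")
print(f"limit of the geometric series: {1 / (1 - BRANCH * DECAY):.3f}")
# The distant, faint generations are the "penumbra"; the rare association out
# there that matches the current goal is what gets promoted to the next focus.
```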


Accounting for subjective experience

The first-generation associations are likely to be high in the search rankings, and thus subjectively "close" to the triggering attentional focus and relatively strongly in consciousness, although still in the penumbra that is subjectively "around" the attentional focus. Lower-ranking search results would form a vast crowd of associations only dimly in consciousness, but would give conscious experience its richness. Occasionally, an association far out in the penumbra will be just what you are looking for, and will therefore be promoted to the next attentional focus: you get an idea.


The role of emotions

The evaluation process responsible for this may involve the mediolateral connections of the cortex, which lead back to the limbic system, where emotions are thought to be mediated, at the cingulate gyrus. Some kind of pattern recognition seems unavoidable, whereby a representation of what you desire <06-25-2021: itself a sub-network activation> elaborated by the mediolateral system is matched to retrieved associations. Your target may be only a part of the retrieved association, but will suffice to pull the association into the attentional focus.

This is a great system, because it allows a mammal to converge everything it knows on every task, rather than having to perform as a blinkered if-then machine.


Brain mechanisms and our evolutionary history

01-02-2021: Why should we have this back-and-forthing between the prefrontal cortex and the sensory association cortex? Two possibilities: 1) the backward projections serve a priming function, getting certain if-then rules closer to firing threshold in a context-sensitive manner; 2) This is a uniquely human adaptation for our ecological niche as environment modifiers. In ordinary tool use and manufacturing, dating back to Homo habilis, the built thing is smaller than the builder's body, but in environment modification, the built thing is larger than the builder's body—an important distinction. Thus, the builder can only see one part of it at a time. Viewings must therefore be interleaved with reorientations involving the eyes, neck, trunk, and feet. These reorientations, being motoric in nature, will be represented frontally, and I place these representations in the prefrontal cortex. The mental representation of the macro-built-thing therefore ends up being an interleaved collection of views and reorientations. <07-11-2021: In other words, a simulation.> The reorientations would have to be calibrated by the vestibular system to allow the various views to be assembled into a coherent whole. By this theory, consciousness is associated with environment modification.

05-24-2021: Consistent with this theory, the cortical representation of vestibular sense data is atypical. There is no "primary vestibular area." Rather, islands of vestibular-responsive neurons are scattered over the sensory cortex, distributing across the other senses. This seems analogous to a little annotation for xyz coordinates, etc., automatically inserted in a picture, as seen in computer-generated medical diagnostic images.

Saturday, October 31, 2020

#67. The Trembling-Network Theory of Everything [physics]

PH

Red, theory; black, fact. 



I continue to mine the idea that the world of appearances is simulation-like, in that how we perceive it is strongly affected by the fact that our point of view is inside it, and illusions are rampant.


The slate-of-givens approach is intended to exploit consilience to arrive at a simplified physics that attributes as many phenomena as possible to historical factors and the observer's point of view. Simplified physics is viewed as a stepping-stone to the one, true TOE. The existence of widespread consilience implies that such a theory exists.


The basic theory

The underlying reality is proposed to be a small-world network, whose nodes are our elementary particles and whose links ("edges" in graph theory) are seen collectively as the fields around those particles.

This network is a crude approximation to a scale-free network, but structurally it is only a recursion of three generations (with a fourth in the process of forming), each composed of two sub-generations, not an infinite regress. The first generation to form after the Big Bang was a bunch of triangular networks that we call baryons. In the next generation, these linked up to form the networks underlying light atomic nuclei. These, and individual protons, were big enough to stably bond to single nodes (electrons) to form the network version of atoms. Above the atomic/molecular/electromagnetic level, further super-clustering took on the characteristics of gravitation, whose hallmark seems to be rotation. At the grandest cosmological scales, we may be getting into a fourth "force" that produces the foamy structure of galaxy distribution. The observations attributed to the presence of dark matter may be a sign that, at the intra-galactic scale, the nature of the "fields" is beginning to shift again.

I conjecture that throughout this clustering process, a continuous thermal-like agitation was running through all the links, and especially violent spikes in the agitation pattern could rupture links not sufficiently braced by other, parallel links. This would have been the basis of a trial-and-error process of creation of small-world characteristics. The nature of the different "forces" we seem to see at different scales would be entirely conditioned by the type of clusters the links join at that scale, because cluster type would condition the opportunities for network stabilization by cooperative bracing. 
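
A small simulation sketch of this bracing idea (the graph size, number of spikes, and survival probabilities are arbitrary): links are tested by random spikes and survive in proportion to how many triangles brace them, while new links form at random; the clustering coefficient is expected to drift upward relative to the starting random graph.

```python
# Sketch: selective rupture of "unbraced" links.  A link hit by a spike is
# removed with a probability that falls with the number of triangles bracing
# it, while a new link forms at random elsewhere.  All rates are arbitrary.
import random
import networkx as nx

random.seed(0)
G = nx.gnm_random_graph(200, 800, seed=0)            # initial "foam" of links
print(f"initial average clustering: {nx.average_clustering(G):.3f}")

for step in range(20000):
    # A violent spike hits a random existing link.
    u, v = random.choice(list(G.edges()))
    braces = len(list(nx.common_neighbors(G, u, v)))  # triangles through it
    if random.random() < 1.0 / (1.0 + 2.0 * braces):   # unbraced links break
        G.remove_edge(u, v)
        # Conservation-flavoured bookkeeping: a new link forms elsewhere.
        a, b = random.sample(list(G.nodes()), 2)
        if not G.has_edge(a, b):
            G.add_edge(a, b)

print(f"final average clustering:   {nx.average_clustering(G):.3f}")
# Links that sit in triangles are preferentially retained, so clustering
# should rise above the random-graph baseline, the sense in which random
# spikes could sculpt small-world structure.
```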


Reconciliation with known science

Formation and rupture of links would correspond to the quantum-mechanical phenomenon of wave-function collapse, and the endless converging, mixing, and re-diverging of the heat signals carried by the network would correspond to the smooth, reversible time-evolution of the wave-function between collapses. The experience of periodic motions would arise from resonances in closed paths embedded in the network. When you see the moon move across the sun in an eclipse, <11-27-2020: no net links are being made or broken; the whole spectacle somehow arises by an energetically balanced creation and rupture of links.>

The photoelectric effect that Einstein made famous can be given a network interpretation: the work function is the energy needed to break simultaneously all the links holding the electron to the cluster that is the electrode, and the electron that then seems to fly away from the electrode is computed in the remaining network once that network has been energized by the heat-signal energy in excess of what was needed to break the links, energy that reflects back into the network from the broken ends.

How distance would arise

The ineffably large number of nodes in the universe would all be equidistant from each other, which is possible if they exist in a topological space; such spaces have no distance measure. I think it likely that what you experience as distance is the number of nodes that you contain divided by the number of links connecting the cluster that is you to the cluster that you are observing. It remains to figure out how some of the concomitants of distance arise, such as delay in signal transmission and the cosmological redshift.

Reconciliation with the finite speed of light

11-01-2020: The time-delay effect of distance can be described by a hose-and-bucket model if we assume that all measurements require link breaking in the observer network. The energy received by the measuring system from the measured system is like water from a hose progressively filling a bucket. The delayed overflow of the bucket would correspond to the received energy reaching threshold for breaking a link in the observer network. The fewer the links connecting observer to observed relative to the observer size (i.e., the greater the distance), the slower the bucket fills and the longer signal transmission is observed to take.

11-02-2020: The above mechanism cannot transmit a pulsatile event such as a supernova explosion. It takes not one, but two integrations to convert an impulse into a ramp function suitable for implementing a precise delay. Signal theory tells us that if you can transmit an impulse, you can transmit anything. The second integration has already been located in the observer cluster, so the obvious place in which to locate the first integration is in the observed cluster. Then when the link in the observer cluster breaks, which is an endothermic event, energy is sucked out of both integrators at once, resetting them to zero. That would describe an observer located in the near field of the observed cluster. In the far field, the endothermic rupture would cool only the observer cluster; most of the radiative cooling of the observed cluster would come from the rupture of inter-cluster links, not intra-cluster links. Thus, hot clusters such as stars are becoming increasingly disconnected from the rest of the universe. This can account for the apparent recessional velocity of the galaxies, since I have conjectured that distance is inversely proportional to numbers of inter-cluster links.
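
A sketch of the two-integration picture (the threshold and the link-count scaling are arbitrary): a unit impulse integrated twice becomes a ramp, and the time for that ramp to reach the link-breaking threshold is inversely proportional to the number of connecting links, i.e., proportional to the distance as defined above.

```python
# Sketch: impulse -> two integrations -> ramp -> threshold crossing.
# "links" scales how much of the emitted impulse reaches the observer; the
# threshold is the energy needed to break one link in the observer network.
# Both numbers are arbitrary; the point is delay ~ 1/links, i.e. ~ distance.
def delay_to_threshold(links, threshold=1.0, dt=0.001, t_max=50.0):
    """Time for the doubly integrated impulse to cross the threshold."""
    first, second = 0.0, 0.0
    for k in range(int(t_max / dt)):
        impulse = links / dt if k == 0 else 0.0   # unit impulse scaled by links
        first += impulse * dt     # first integration (in the observed cluster)
        second += first * dt      # second integration (in the observer cluster)
        if second >= threshold:
            return k * dt
    return None

for links in (1.0, 0.5, 0.25, 0.125):
    print(f"links = {links:5.3f} -> observed delay = {delay_to_threshold(links):.2f}")
# Halving the number of connecting links doubles the observed delay, matching
# the idea that fewer links = greater distance = longer signal-travel time.
```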

Predictions of the fate of the universe

We often hear it said that the reason the night sky is black is that the expansion of the universe is continuously creating more space in which to warehouse all the photons emitted by all the stars. However, the network orientation offers a simpler explanation: inter-cluster links at the grandest scale are being endothermically destroyed to produce the necessary cooling, and the fewer these become, the longer the cosmological distances appear to be. I suppose that when these links are all gone, we all cook. The microwave background radiation may be a harbinger of this. Clearly, my theory favours the Big Rip scenario of the fate of the universe, but a hot Big Rip.

Accounting for the ubiquity of oscillations

05-01-2021: At this point, an improved theory of oscillations can be offered: Oscillating systems feature 4 clusters and thus 4 integrators connected in a loop to form a phase-shift oscillator. These integrators could be modeled as a pair of masses connected by a spring ( = 2 integrators) in each of the observer and observed systems ( = 2 x 2 = 4 integrators).
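
A minimal sketch of the mass-and-spring version (masses, spring constant, and initial stretch arbitrary): two masses joined by one spring give four state variables, i.e., four integrators in a loop, and the relative coordinate oscillates at the expected frequency.

```python
# Sketch: two masses joined by one spring = four state variables (two
# positions, two velocities) = four integrators in a loop.  Parameters are
# arbitrary; the point is that the loop oscillates.
import numpy as np

M, K, DT, REST = 1.0, 1.0, 0.001, 1.0
x1, v1, x2, v2 = 0.0, 0.0, 1.5, 0.0           # spring starts stretched by 0.5

stretch = []
for step in range(20000):                      # 20 time units
    force = K * ((x2 - x1) - REST)             # spring pulls the masses together
    v1 += (+force / M) * DT                    # integrator 1
    v2 += (-force / M) * DT                    # integrator 2
    x1 += v1 * DT                              # integrator 3
    x2 += v2 * DT                              # integrator 4
    stretch.append((x2 - x1) - REST)

stretch = np.array(stretch)
crossing_steps = np.where(np.diff(np.sign(stretch)) != 0)[0]
half_periods = np.diff(crossing_steps) * DT
print(f"simulated period ~ {2 * half_periods.mean():.3f}")
print(f"analytic  period ~ {2 * np.pi * np.sqrt(M / (2 * K)):.3f}")   # ~4.443
```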

Motion and gravity

11-30-2020: Motion would be an energetically balanced breaking of links on one side of a cluster and making of links on the other. This could happen on a hypothetical background of spontaneous, random link making and breaking. Acceleration in a gravitational "field" would happen if more links are coming in from one side than the opposite side. More links will correspond to a stronger mutual bracing effect, preferentially inhibiting link breaking on that side. This will shift the making/breaking equilibrium toward making on that side, resulting in an acceleration. <12-11-2020: The universal gravitational constant G could be interpreted as expressing the probability of a link spontaneously forming between any two nodes per unit of time.>

Dimension and direction

01-13-2021: It is not clear how the direction and dimension concepts would emerge from a network representation of reality. If distance emerges from 2-way interactions of clusters, perhaps direction emerges from 3-way interactions and dimension arises from a power law of physical importance versus the number of interacting clusters in a cluster of clusters. This idea was inspired by the fact that four points are needed to define a volume, three are needed to define a plane, and two are needed to define a line.

02-13-2021: Alternatively, angle may be a matter of energetics. Assume that new links form spontaneously at an unalterable rate and only link rupture rate varies. The heat injected by link creation must be disposed of by a balanced rate of link rupture, but this will depend in detail on mutual bracing effects. If your rate of rupture of links to a given cluster is minimal, you will be approaching that cluster. The cluster with which your rupture rate is highest is the one you are receding from. Clusters with which you score average rupture rates will be 90 degrees off your line of travel. The distribution of clusters against angle is predicted from geometry and the appearance of the night sky to be proportional to sin(θ), but a random distribution of rupture rates would predict a bell curve (Gaussian) centered on the average rupture rate. Close, but no cigar. The tails of the Gaussian would produce a sparse zone both fore and aft. Moreover, since there must always be a maximum and minimum, you will always be heading exactly toward some cluster and exactly away from some other: not what we observe.
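
The geometric half of that comparison is easy to check numerically: for clusters scattered isotropically, the density at angle θ from the line of travel follows sin(θ), as the sketch below (arbitrary sample size) confirms.

```python
# Sketch: angular distribution of isotropically scattered clusters.  The
# sin(theta) shape follows from geometry alone; a Gaussian in rupture rate
# mapped to angle would instead give a bell curve centred on 90 degrees.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)        # isotropic unit vectors
theta = np.degrees(np.arccos(v[:, 0]))               # angle to the travel axis

hist, edges = np.histogram(theta, bins=18, range=(0, 180))
for lo, count in zip(edges[:-1], hist):
    mid = np.radians(lo + 5.0)
    expected = n * np.sin(mid) * np.radians(10.0) / 2   # sin(theta)/2 density
    print(f"{lo:5.0f}-{lo + 10:3.0f} deg: {count:7d}  (sin-law prediction {expected:7.0f})")
# sin(theta) vanishes only at exactly 0 and 180 degrees and is broad in
# between, whereas a Gaussian in rupture rate would thin out symmetrically
# fore and aft well before the poles, which is the mismatch noted above.
```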

03-06,07-2021: That the universe is spatially at least three-dimensional can be reduced to a rule that links do not cross. Why the minimum dimensionality permitted by this rule is the one we observe remains to be explained. 

Momentum

Momentum can be explained by attributing it to the network surrounding a cluster, not to the cluster itself. Heat must flow without loss (how?) from in front of a travelling cluster around to the rear (I hope eventually to be able to purge this description of all its directional assumptions), suggesting closed flow-lines through the larger network reminiscent of magnetic field lines. (This is similar in outline to Mach's explanation of momentum, as being due to the interaction of the test mass with the distant galaxies.) It seems necessary to postulate that once this flow pattern is established, it persists by default. An especially large cluster in the vicinity will represent a high-conductivity path for the heat flow, possibly creating a tendency for links to form perpendicular to the line of travel and offset toward the large cluster, which might explain gravitational capture of objects into stable orbits. Finally, the overall universal law would be: heat of link formation = heat of link rupture + increases in network heat content due to increases in network melting point due to increases in mutual bracing efficiency. A simple concept of melting point is the density of triangles in the network. Still to be explained: repulsive forces.

Repulsive forces

04-04-2021: Repulsive forces are only seen with electromagnetism and then only after a local network has been energized somehow. When particles said to be oppositely charged recombine, neutral atoms are re-formed, which creates new triangles and thus increases melting point. The recombination of particles said to be of like charge creates relatively few triangles and is therefore disfavored, creating the impression of mutual repulsion.
 

More on the origin of momentum

Inter-cluster links are not individually bidirectional in their heat conductivity, but a (usually) 50:50 mixture of unidirectional links going each way. Momentum and spontaneous First Law motion become prevalent in classically-sized networks due to small imbalances in numbers of cluster A to cluster B links versus cluster B to cluster A links. This produces a random pattern of spontaneous heat flows across the universe. Converging flows burn out links (and are thus self-limiting) and diverging flows preserve links, causing them to increase in number locally. This process nucleates the gravitational clumping of matter. A directional imbalance in the interior of a cluster causes First Law motion by spontaneously transporting heat from front to back. Front and back are defined by differences in numbers of inter-cluster links (to an arbitrary external cluster) among subsets of cluster nodes.

Case study of a rocket motor

For a rocket motor to work, we have to assume that one of these asymmetrical links can only be burned out by heat applied to its inlet end. During liftoff, the intense heat down in the thruster chambers burns out (unidirectional) links extending up into the remainder of the craft. This leaves an imbalanced excess of links within the rocket body going the other way, leading to a persistent flow of heat downward from the nose cone. This cooling stabilizes links from overhead gravitationally sized clusters ending in the nose cone, causing them to accumulate, thereby shortening the "distance" from the nose cone to those clusters. Meanwhile, the heat deposited at the bottom of the rocket progressively burns out links from the rocket to the Earth, thereby increasing the "distance" between the rocket and the Earth. The exhaust gasses have an imbalanced excess of upward-directed asymmetric links due to the temperature gradient along the exhaust plume that serves to break their connection to the rocket and create the kind of highly asymmetrical cluster required for space travel. <04-11-2021: The details of this scenario all hang together if we assume that link stabilization is symmetrical with link burnout: that is, it is only responsive to what happens at the inlet (in this case, cooling).> Since kinetic energy is associated with motion, the directional link imbalance must be considered a form of energy in its own right, one not sensible as heat as usually understood.

Future directions

05-28-2021: To make further progress, I might have to assume that the links in the universal network are the real things and that the nodes are just their meeting places, which only appear to be real things because this is where the flow of energy changes direction. I then assume that all links are directional and that pairing of oppositely-directed links was actually the first step in the evolution of the universe. Finally, I decompose these directional links into an inlet part joined to an outlet part. With this decomposition, a link pair looks like this:
⚪⚫
⚫⚪
Notice the interesting ambiguity in how to draw the arrows. A purely directional link recalls the one-way nature of time and may represent undifferentiated space and time. A final decision was to treat a repulsive force as a link whose disappearance is exothermic, not endothermic, because this indirectly allows the formation of more of the default kind of link.