Monday, December 12, 2022

#70. How the Cerebellum May Adjust the Gains of Reflexes [neuroscience]

NE


Red, theory; black, fact




The cerebellum is a part of the brain involved in ensuring accuracy in the rate, range, and force of movements and is well known for its regular matrix-like structure and the many theories it has spawned. I myself spent years working on one such theory in a basement, without much to show for it. The present theory occurred to me decades later on the way home from a conference on brain-mind relationships at which many stimulating posters were presented.

Background on the cerebellum

The sensory inputs to the cerebellum are the mossy fibers, which drive the granule cells of the cerebellar cortex, whose axons are the parallel fibers. The spatial arrangement of the parallel fibers suggests a bundle of raw spaghetti or the bristles of a paint brush. They synapse on Purkinje cells at contacts that are probably plastic and thus capable of storing information. The Purkinje (pur-KIN-jee) cells are the output cells of the cerebellar cortex. Thus, the synaptic inputs to these cells are a kind of watershed at which stimulus data becomes response data. The granule-cell axons are T-shaped: one arm of the T goes medial (toward the midplane of the body) and the other arm goes lateral (the opposite). Both arms are called parallel fibers. Parallel fibers are noteworthy for not being myelinated; conduction along them is therefore slow and continuous rather than by jumps. The parallel fibers thus resemble a tapped delay line, a point proposed by Desmond and Moore in 1988.

The space-time graph of one granule-cell impulse entering the parallel-fiber array is thus V-shaped, and the omnibus graph is a lattice, or trellis, of intersecting Vs.
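To make the delay-line picture concrete, here is a minimal sketch (illustrative only; the conduction velocity and distances are placeholders, not measured values) of how one granule-cell impulse entering the array reaches successive Purkinje cells at successively later times, tracing the two arms of the V:

```python
# Minimal sketch of the parallel-fiber delay line (illustrative only).
# Assumes a 1-D strip of cortex, a constant conduction velocity, and a
# granule-cell impulse entering at a chosen position and time. Each impulse
# produces a V in the space-time plane: two wavefronts travelling medially
# and laterally from the entry point.

def arrival_time(x_entry, t_entry, x_purkinje, velocity=0.5):
    """Time at which an impulse entering at (x_entry, t_entry) reaches a
    Purkinje cell at x_purkinje (mm and ms; 0.5 mm/ms is a placeholder)."""
    return t_entry + abs(x_purkinje - x_entry) / velocity

# One impulse entering at 1.0 mm, several Purkinje cells along the folium:
purkinje_positions = [0.0, 0.5, 1.0, 1.5, 2.0]
for x_pc in purkinje_positions:
    t = arrival_time(x_entry=1.0, t_entry=0.0, x_purkinje=x_pc)
    print(f"Purkinje cell at {x_pc:.1f} mm is reached at t = {t:.1f} ms")
```

The symmetric arrival times on either side of the entry point are the two arms of the V; superimposing many such impulses gives the trellis.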

The cerebellar cortex is also innervated by climbing fibers, which are the axons of neurons in the inferior olive of the brainstem. These carry motion-error signals and play a teacher role, teaching the Purkinje cells to avoid the error in future. Many error signals over time install specifications for physical performances in the cerebellar cortex. The inferior olivary neurons are all electrically connected by gap junctions, which allows rhythmic waves of excitation to roll through the entire structure. The climbing fibers fire only on the crests of these waves. Thus, the space-time view of the cortical activity features climbing-fiber impulses that cluster into diagonal bands. I am not sure what all this adds up to, but let us ask what would be cute.


A space-time theory

Cute would be for the climbing-fiber diagonals to run parallel to half of the parallel-fiber diagonals, partly coinciding with those of the same slope. Two distinct motor programs could therefore be stored in the same cortex, depending on the direction of travel of the olivary waves. This makes sense, because each action you make has to be undone later, but not necessarily at the same speed or force. The same region of cortex might therefore store an action and its recovery.


The delay-line theory revisited

As the parallel-fiber impulses roll along, they pass various Purkinje cells in order. If the response of a given Purkinje cell to a parallel-fiber action potential is either to fire or not to fire one action potential, then the timing of delivery of inhibition to the deep cerebellar neurons could be controlled very precisely by the delay-line effect. (The Purkinje cells are inhibitory.) The output of the cerebellum comes from relatively small structures called the deep cerebellar nuclei; Purkinje-cell axons converge on them massively, each axon making powerful, multiple synapses on its target neuron. If the inhibition serves to curtail a burst of action potentials that a mossy-fiber collateral has triggered in the deep-nucleus neuron, then the number of action potentials in the burst could be accurately controlled, and therefore so could the gain of a single-impulse reflex loop passing through the deep cerebellar nucleus. Accuracy in gains would plausibly be observed as accuracy in the rate, range, and force of movements, thus explaining how the cerebellum contributes to the control of movement. (Accuracy in the ranges of ballistic motions may depend on the accuracy of a ratio of gains in the reflexes ending in agonist vs. antagonist muscles.)
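A toy calculation, under the assumptions of this paragraph (a deep-nucleus burst at a fixed intraburst interval, cut short the moment Purkinje inhibition arrives; all numbers arbitrary), shows how the arrival time of the inhibition would translate directly into a spike count, that is, into a gain:

```python
# Illustrative sketch of gain control by timed inhibition (not a biophysical model).
# Assumption: a mossy-fiber collateral triggers a burst in a deep-nucleus neuron
# at a fixed intraburst interval, and the burst stops when Purkinje inhibition arrives.

def burst_spike_count(inhibition_arrival_ms, intraburst_interval_ms=2.0, max_burst_ms=50.0):
    """Number of spikes fired before inhibition arrives (the effective 'gain')."""
    burst_duration = min(inhibition_arrival_ms, max_burst_ms)
    return int(burst_duration // intraburst_interval_ms) + 1  # spike at t = 0 plus later spikes

for t_inhib in (4.0, 8.0, 16.0, 32.0):
    print(f"inhibition at {t_inhib:>4.1f} ms -> {burst_spike_count(t_inhib)} spikes in burst")
```

Later inhibition means more spikes in the burst, so the delay-line position of the firing Purkinje cell sets the loop gain.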


Control of the learning process

If a Purkinje cell fires too soon, the burst in the deep-nucleus neuron will be curtailed too soon, and the gain of the reflex loop will therefore be too low. The firing of the Purkinje cell will also disinhibit a spot in the inferior olive, owing to the inhibitory feedback from the deep nucleus to the olive. I conjecture that if a movement error is subsequently detected somewhere in the brain, the result is a burst of synaptic release of some monoamine neuromodulator into the inferior olive, which potentiates the firing of any recently disinhibited olivary cell. On the next repetition of the faulty reflex, that olivary cell reliably fires, causing long-term depression of concurrently active parallel-fiber synapses. Thus, the erroneous Purkinje-cell firing is not repeated. However, if the firing of some other Purkinje cell hits the sweet spot, this success is detected somewhere in the brain and relayed via monoamine inputs to the cerebellar cortex, where the signal potentiates the recently active parallel-fiber synapse responsible, making the postsynaptic Purkinje cell more likely to fire in the same context in future. Purkinje-cell firings that are too late are of lesser concern, because their effect on the deep-nucleus neuron is censored by the prior inhibition. Such post-optimum firings occurring early in learning will be mistaken for the optimum and thus consolidated, but these consolidations can be allowed to accumulate randomly until the optimum is hit.
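The proposed learning loop can be caricatured as follows (a toy sketch under the stated assumptions, with hypothetical weights and delays; for simplicity, firings later than the target are simply ignored here rather than provisionally consolidated):

```python
import random

# Toy sketch of the proposed error-driven learning loop (all quantities hypothetical).
# Each parallel-fiber synapse is a candidate trigger for Purkinje firing at a
# characteristic delay-line position; its weight sets how likely it is to fire the cell.

random.seed(0)
target_delay_ms = 20.0                      # the "sweet spot" for this movement
delays = [5.0, 10.0, 15.0, 20.0, 25.0]      # delay-line positions of candidate synapses
weights = {d: 1.0 for d in delays}          # initial synaptic weights

for trial in range(200):
    # The synapse with the largest (noisy) weight wins and fires the Purkinje cell.
    fired = max(delays, key=lambda d: weights[d] + random.uniform(0.0, 0.2))
    if fired < target_delay_ms:
        weights[fired] = max(0.0, weights[fired] - 0.1)   # error -> climbing fiber -> LTD
    elif fired == target_delay_ms:
        weights[fired] += 0.05                            # success -> monoamine signal -> LTP
    # firings later than the target are ignored here ("censored by prior inhibition")

print({d: round(w, 2) for d, w in weights.items()})
# Early-delay synapses are depressed and the 20 ms synapse comes to dominate.
```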


Role of other motor structures

The Laplace transform was previously considered in this blog to be a neural code, and its output is a complex number giving both gain and phase information. To convert a Laplace transform stored as poles (points where gain goes to infinity) in the cerebral cortex into actionable time-domain motor instructions, the eigenfunctions corresponding to the poles, which may be implemented by damped spinal rhythm generators, must be combined with gains and phases. If the gains are stored in the cerebellum as postulated above, where do the phases come from? The most likely source appears to be the basal ganglia. These structures are here postulated to comprise a vast array of delay elements along the lines of 555 timer chips. However, a delay is not a phase unless it is scaled to the period of an oscillation. This implies that each oscillation frequency maps in the basal ganglia to an array of time delays, of which none are longer than the period. These time delays would be applied individually to each cycle of an oscillation. Such an operation would be simplified if each cycle of the oscillation were represented schematically by one action potential.
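As a sketch of the proposed division of labor (all numbers arbitrary): the cerebral cortex supplies a pole, that is, an eigenfunction such as a damped oscillation; the cerebellum supplies a gain; and the basal ganglia supply a delay that becomes a phase only once it is scaled to the oscillation period, phase = 2π × delay / period:

```python
import math

# Sketch of the postulated read-out: a pole supplies an eigenfunction (a damped
# oscillation), the cerebellum supplies a gain, and the basal ganglia supply a
# delay that only becomes a phase once it is scaled to the oscillation period.
# All numbers below are arbitrary illustrations.

def delay_to_phase(delay_s, period_s):
    """A delay is a phase only relative to a period: phase = 2*pi*delay/period."""
    return 2.0 * math.pi * (delay_s / period_s)

def motor_component(t, gain, delay_s, freq_hz, decay_per_s):
    """One damped-oscillation eigenfunction scaled by a gain and shifted by a phase."""
    phase = delay_to_phase(delay_s, period_s=1.0 / freq_hz)
    return gain * math.exp(-decay_per_s * t) * math.cos(2.0 * math.pi * freq_hz * t - phase)

for t in (0.0, 0.05, 0.10, 0.15, 0.20):
    out = motor_component(t, gain=2.0, delay_s=0.02, freq_hz=5.0, decay_per_s=3.0)
    print(f"t = {t:.2f} s: output = {out:+.3f}")
```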


Photo by Robina Weermeijer on Unsplash


Saturday, October 15, 2022

#69. Role of Personalities in the Human Swarm Intelligence [population]

PO


Red, theory; black, fact




Each of the Big Five personality traits may be a dimension along which people differ in some socially important behavioral threshold. The personality trait is written on the left of the ">" and the putatively impacted social threshold on the right:

  • openness > uptake of innovations
  • conscientiousness > uptake of taboos
  • extraversion > committing to collectivism*
  • agreeableness (-) > becoming militant
  • neuroticism > engaging in/submitting to persecution

These threshold spectra may enable social shifts that are noise-resistant, sensitive to triggers, and rapid. Noise resistance and sensitivity together amount to good "receiver operating characteristics," to borrow a term from signal detection.

A metaphor that suggests itself is lighting a camp fire. The spark is first applied to the tinder. Ignition of the tinder ignites the kindling. Ignition of the kindling ignites the small sticks. Ignition of the small sticks ignites the big sticks, and everything is consumed.

Orderly fire-starting appears to require a spectrum of thresholds for ignition in the fuel, as may orderly social shifts. To further extend the metaphor, note that the fuel must be dry (i.e., situational factors must be permissive).

Social novelties may spread upward to higher-threshold social strata by meme propagation reinforced by emotional contagion. The emotional energy necessary for emotional contagion would come from the individual’s interaction with the novelty, which would feature a positive feedback.
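A threshold spectrum can be sketched with a toy cascade model in the spirit of classical threshold models of collective behavior (my analogy, not part of the original argument): each agent adopts the novelty once the adopting fraction exceeds its personal threshold, and a graded spectrum of thresholds lets a single spark consume the whole population, while a uniformly high-threshold population ignores the same spark (noise resistance):

```python
# Illustrative threshold-cascade sketch (an analogy to classic threshold models,
# not a claim from the post). Each agent adopts a novelty once the fraction of
# adopters exceeds its personal threshold; a spectrum of thresholds plays the
# role of tinder, kindling, small sticks, and big sticks.

def run_cascade(thresholds, initial_adopters):
    adopted = [t <= 0.0 for t in thresholds]       # zero-threshold agents ignite spontaneously
    for i in initial_adopters:
        adopted[i] = True
    changed = True
    while changed:
        changed = False
        fraction = sum(adopted) / len(adopted)
        for i, t in enumerate(thresholds):
            if not adopted[i] and fraction >= t:
                adopted[i] = True
                changed = True
    return sum(adopted) / len(adopted)             # final adopting fraction

# A graded spectrum of thresholds (0.00 .. 0.99): one spark consumes everything.
spectrum = [i / 100 for i in range(100)]
print("graded spectrum :", run_cascade(spectrum, initial_adopters=[0]))

# A uniformly high-threshold population: the same spark fizzles (noise resistance).
uniform = [0.5] * 100
print("uniform 0.5     :", run_cascade(uniform, initial_adopters=[0]))
```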

The governing neuromodulators of personality may be as follows:

  • Acetylcholine-Openness
  • Noradrenalin-Neuroticism 
  • Serotonin-Agreeableness 
  • Histamine-Conscientiousness
  • Dopamine-Extraversion 

Our capacities for all of the enumerated social shifts were selected in evolution and most can be assumed to be still adaptive when correctly triggered. In today’s world, shifts to persecution are probably the least likely to still be adaptive, and could be a holdover from our Homo erectus stage. Persecution leads to refugee production, and refugee production could have been the reason that guy was such a great disperser.

As the geologists say, “The present is the key to the past.”

Persecution may thus be an anti-invasion adaptation, and one whose occurrence is predictable from geography. The sociological term for the corresponding failure mode may be siege mentality.


Thursday, September 1, 2022

#68. A Tripartite Genetic Code [genetics]

GE


Red, theory; black, fact


The filamentous alga Cladophora.

There are three genetic codes, not one. Conventional thinking holds that there is just one code, which encodes the amino acid sequence of proteins into DNA. Here are the two new ones:

A morphology code for the multicellular level

In the context of a growing embryo, control of the orientation of mitosis is arguably at the origin of organ and body morphology. For example, keeping all cell-division planes parallel will produce a filamentous organism like Cladophora. Planes free to vary in only one angle (azimuth or elevation) will produce a sheet of cells, a common element in vertebrate embryology. Programmed variation in both angles can produce a complex 3D morphology like the vertebrate skeleton. Thus we begin to see a genetic code for morphology, distinct from the classical genetic code that specifies amino acid sequences.
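A toy read-out of this code (illustrative only; a daughter cell is simply placed one cell diameter from its mother along the division axis) reproduces the three cases: constraining the axis completely gives a filament, freeing one angle gives a sheet, and freeing both gives a three-dimensional mass:

```python
import math, random

# Toy read-out of the proposed morphology code (illustrative only): each division
# places a daughter one cell diameter away along the division axis. Constraining
# how the axis may vary between divisions yields a filament, a sheet, or a 3-D mass.

random.seed(1)

def grow(n_cells, mode):
    cells = [(0.0, 0.0, 0.0)]
    for _ in range(n_cells - 1):
        x, y, z = random.choice(cells)          # pick any existing cell to divide
        if mode == "filament":                  # all division planes parallel
            az, el = 0.0, 0.0
        elif mode == "sheet":                   # one angle free, the other fixed
            az, el = random.uniform(0, 2 * math.pi), 0.0
        else:                                   # "solid": both angles free/programmed
            az, el = random.uniform(0, 2 * math.pi), random.uniform(-math.pi / 2, math.pi / 2)
        dx = math.cos(el) * math.cos(az)
        dy = math.cos(el) * math.sin(az)
        dz = math.sin(el)
        cells.append((x + dx, y + dy, z + dz))
    return cells

for mode in ("filament", "sheet", "solid"):
    grown = grow(200, mode)
    spans = [max(c[i] for c in grown) - min(c[i] for c in grown) for i in range(3)]
    print(mode, "extent (x, y, z):", [round(s, 1) for s in spans])
```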

The nucleus is tethered non-rotatably to focal adhesions on the cell membrane by cytoskeletal elements such as lamin, nesprin, actin, and tubulin, so that all angle information can be referred to the previous mitotic orientation.

Observational Support 

The nucleus is usually spherical or ovoid and is about ten times more rigid than the surrounding cytoplasm, features which may be related to the demands of the morphology read-out process. Consistent with this, blood is a tissue without a morphology, and the nucleated cells of the blood have nuclei that are mostly irregular and lobate. Lymphocytes in the blood do have round nuclei, but these cells commonly form aggregates that can be considered to possess a simple morphology.

A morphology code for the single-cell level in cells with nuclei

A third genetic code would be a code for single-cell morphology, which can be very elaborate, especially in neurons. This code would probably involve storing information about cytoskeletal morphology in DNA. Neurons express an especially large number of long noncoding RNAs (lncRNAs), so I suggest that these transcripts carry morphological information about cytoskeletal elements. This information could be read out by using the lncRNA as a template on which to assemble the cytoskeletal element and then removing the template, either by enzymatic hydrolysis or by some enzyme analogous to a helicase. Greater efficiencies could be achieved by introducing some analog of transfer RNAs. LncRNAs are already implicated in transcriptional regulation, and this might be done indirectly by an action on the protein scaffolding of the chromatin. Moreover, as predicted, lncRNAs are abundant in the cytoplasm as well as in the nucleus, and the cytoplasm contains the most conspicuous cytoskeletal structures. The template idea is similar to, but goes beyond, the already-established idea that lncRNAs act as scaffolds for ribonucleoprotein complexes. Since cytoskeletal elements are made from monomers of few kinds, we would expect the template to be highly repetitious, and lncRNAs are decidedly repetitious. Indeed, transposons and tandem repeats are thought to drive lncRNA evolution. See https://doi.org/10.1038/s41598-018-23334-1, Results, subsection "Repetitive sequences in lncRNAs," p. 4 in the PDF.

Why Three Codes?

The issue driving the evolution of the two additional genetic codes may be parsimony in coding (advantageously fewer and shorter protein-coding genes).

Disclaimer 

The next section was written for researchers, not for patients or those at risk for cancer who may be seeking a cure outside the medical mainstream.

Cancer Research May Be Held Back by the One-Code View

Mutations in the proposed cytoskeletal genome could be at the origin of cancer. Cancer cells will proliferate in a culture dish past the point of confluence, unlike healthy cells. If the cytoskeleton is required to sense confluence, as seems likely, a defective cytoskeleton incapable of performing this function could lead directly to uncontrolled growth and thus cancer. It is not clear how the immune system could detect a mutation like this, since no amino acid sequence is affected. Possibly, a special evolved system or reflex exists that telegraphs such mutations to the cell surface where the immune system has a chance of detecting them. The clustering of antigens on the cell surface is already known to enhance immunogenicity, so this hypothetical system may output a clustering signal on the cell surface that talks to the cytoskeletons of circulating immune-system cells. Alternatively, the immune-system cells may directly interrogate the body cells’ ability to detect confluence. For these ideas to apply to blood-borne cells such as leukocytes, the failure event would have to happen during maturation in the bone marrow while the cell is still part of a solid tissue.
YAP1 protein, which promotes cell proliferation when localized to the nucleus, may be gated through the nuclear pores by some kind of operculum attached to the lamin component of the nuclear envelope. The operculum would move down from the pore, thus unblocking it, when a region of nuclear membrane flattens in response to a localized loss of tensile forces in the cytoskeleton. The flattening causes a local excess of lamin area, which leads to buckling and delamination, which is coupled to operculum movement. A mutation that makes the operculum leaky to YAP1 when closed could lead to cancer. This mutation could be in an lncRNA that scaffolds key components of the nuclear membrane’s supporting proteins. A more subtle mechanism would be for the buckling and delamination to happen on a molecular scale and lead to a uniform regional increase in the porosity of the lamin layer, which would gate YAP1 permeation.
Loss of tissue adjacent to the cell would cause a loss of cytoskeletal tension on the nucleus not only on that side of the nucleus, but also on the side opposite. If these two slack regions directly dictate centriole placement on the next round of mitosis, then the new cell will automatically be placed to fill in the tissue hole. (This may constitute an important mechanism of wound healing and suggests a link between morphology and carcinogenesis.)

Evolutionary Considerations 

The multicellular morphology code was postulated to arise from precise control of the orientation of the plane of mitotic division. It now seems likely that this control will be implemented via bespoke cytoskeletal elements, since complex single-cell morphology and its genetic code probably preceded complex multicellular morphology in evolution. 

Mechanism of Multicellular Morphology Readout

These bespoke elements might be inserted into a cytoskeletal apparatus surrounding the nucleus that has commonalities with devices such as gimbals and armillary spheres. The centrioles are likely to be key components of this apparatus. Each centriole may create a hoop of microtubules encircling the nucleus, and the two hoops would be at right angles, like the centrioles themselves when parked outside the nucleus between cell divisions. During mitosis, in-plane revolution of one of the hoops through 180 degrees would be responsible for separating the centrioles. After this, both centrioles must be on this same hoop. Alternatively, the centrioles may move by synthesis at the new locations followed by disassembly of the old centrioles. Each hoop then forms a circular track for adjusting azimuth and elevation, respectively, relative to anchor points left over from the previous round of mitosis. The bespoke elements would lie along these tracks and function as variable-length shims. The remainder of the apparatus would translate these lengths into angles. The inner hoop would pass through two protein eyelets connected to the outer hoop and the outer hoop would pass through an eyelet connected to the anchor. The shims would fix the along-track distances between an inner eyelet and the outer eyelet and between an inner eyelet and a centriole (Fig. 1).


Figure 1. A hypothetical cytoskeletal apparatus for orienting mitosis; C, centrioles; zigzag, shims; dotted, a nuclear diameter; double line, anchor to cell membrane; EL, elevation; AZ, azimuth 
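For concreteness, the shim read-out sketched in Figure 1 can be reduced to one line of geometry (entirely hypothetical numbers): a shim of a given length lying along a hoop of radius r subtends an angle of length/r radians, so the two shims would set the azimuth and elevation of the next mitotic axis relative to the anchor points:

```python
import math

# Geometric sketch of the proposed shim read-out (entirely hypothetical):
# a shim of length L lying along a circular hoop of radius r subtends an
# angle of L / r radians, so the two shims would set the azimuth and
# elevation of the next mitotic axis relative to the previous one.

def shim_to_angle_deg(shim_length_um, hoop_radius_um):
    return math.degrees(shim_length_um / hoop_radius_um)

hoop_radius_um = 5.0    # placeholder hoop radius, roughly a nuclear radius
azimuth = shim_to_angle_deg(shim_length_um=2.6, hoop_radius_um=hoop_radius_um)
elevation = shim_to_angle_deg(shim_length_um=1.3, hoop_radius_um=hoop_radius_um)
print(f"azimuth = {azimuth:.0f} deg, elevation = {elevation:.0f} deg")
```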




Top picture credit: Cladophora flavescens, Phycologia Britannica, William Henry Harvey, 1851.

Thursday, June 9, 2022

#67. Extended Theory of Mind [evolution]

EV


Red, theory; black, fact




Where is human evolution going at the moment? That is a good question. Let’s look around, then. I am writing this in a submarine sandwich joint where one sandwich maker is serving two customers. The radio brings in a ballad by a lady vocalist at a tempo suggestive of sex. Now a DJ (Mauler or Rush) is amusing the listeners with some patter. The window shows that rush hour is over and only a few home-bound stragglers are in the street. If I crane my neck, I can see the green beacon on the new electric charging station. 

That will do for starters. Sandwich maker, pro singer, DJ, bureaucrat, electrician—I couldn’t do any of that. We are a society of specialists, and such societies feature differentiation with integration. So, how far back does this go? At most, nine millennia; about 450 generations. Time enough for evolution? It doesn’t matter; we want direction here, not distance.

Contemporary natural selection of humans will therefore reward differentiability and integratability.

Differentiability: vocational choices often begin in childhood with hobbies, and there is a certain frame of mind associated with hobbies called “flow.” I therefore suggest that we are being selected for a susceptibility to "flow state." 

Integratability: society is held together by our ability to coordinate with others, and the key ability here is thought to be theory of mind, or the ability to infer the mental states of those with whom we interact. Likewise, we are being selected for theory-of-mind ability.

There may be something higher than theory of mind, which not everyone possesses at this time, that could be called "extended theory of mind": inferring the mental states of those not present, and whose very existence is itself inferred. The members of a society strong in this trait will appear to communicate with one another through solid walls, as if by ESP.

Who are these Chosen? Probably military generals, politicians, and the executive class. Go figure.

However, the human cranium is probably as voluminous as it can get and still allow childbirth, so the gray matter subserving the new ability will have to be included at the expense of some other, preferably obsolete ability, like accuracy in spearing game animals.

So challenge your mayor to a game of darts and see how he does. This theory is falsifiable.

Tuesday, October 19, 2021

#66. How Enhancers May Work [biochemistry]

 CH


Red, theory; black, fact



Background on Enhancers

Enhancers are stretches of DNA that, when activated by second messengers like cyclic AMP, enhance the activity of specific promoters in causing the transcription of certain genes, leading to the translation of these genes into protein. Enhancers are known for causing the post-translational modification of the histones associated with them. Typically, lysine side chains on histones are methylated once, twice, or three times, or else acetylated; serines are phosphorylated. In general, phosphorylation condenses chromatin, and acetylation expands it and activates it for transcription. Methylation increases positive electric charge on the histones, acetylation decreases positive charge, and phosphorylation increases negative charge. The enhancers of a promoter are usually located far from it as measured along the DNA strand and can even lie on a different chromosome.

The Mystery of Enhancer–promoter Interaction

How the distant enhancer communicates with its promoter is a big mystery. The leading theory is that the enhancer goes and sticks to the promoter, and the intervening length of DNA sticks out of the resulting complex as a loop. This is the "transcription hub" theory. 

An Alternative Theory of Interaction

When activated, the multiple enhancers may cause modifications of their associated histones that place the same electric charge on all of them, a charge that is also of the same sign as that on the promoter region. Mutual electrostatic repulsion of all these regions then expands the chromatin around the promoter. This effect reduces the fraction of the time during which RNA polymerase II at the promoter cannot move down the DNA strand because unrelated chromatin loops are in the way, like trees fallen across railway tracks. The result is gene activation.
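As a toy illustration of the expansion step (not a chromatin model; unit charges on a line, arbitrary units), the pairwise Coulomb energy of a like-charged promoter-plus-enhancer cluster falls steeply as the regions spread apart, so mutual repulsion indeed favors an expanded layout:

```python
# Toy electrostatics for the expansion idea (not a chromatin model): the pairwise
# Coulomb energy of N like-charged regions falls as their spacing grows, so a
# mutually repulsive promoter/enhancer cluster is pushed toward an expanded layout.

def total_pair_energy(positions, charge=+1.0):
    """Sum of q*q/r over all pairs (arbitrary units)."""
    energy = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            energy += charge * charge / abs(positions[i] - positions[j])
    return energy

for spacing in (1.0, 2.0, 4.0, 8.0):
    positions = [k * spacing for k in range(5)]   # promoter plus four enhancer regions on a line
    print(f"spacing {spacing:>4.1f}: energy = {total_pair_energy(positions):.2f}")
```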

It Gets Bigger

This could also be the mechanism of chromatin decondensation generally, which is known to be a precondition for the expression of protein-coding genes.

Possible Sophistications

The mutual electrostatic repulsion of enhancers does not necessarily accomplish decondensation directly, but may do so indirectly, by triggering a cascade of alternating chromatin expansions and histone modifications. Furthermore, this cascade is not necessarily deterministic. 

Future Directions

These ideas predict that raising the ionic strength in the nuclear compartment, which would tend to shield charges from each other, should inhibit gene activation. This manipulation will require genetic knockout of osmolarity-regulating genes.
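The physics behind the prediction is ordinary charge screening. Using the standard Debye-length formula for a 1:1 aqueous electrolyte at 25 °C (about 0.304 nm divided by the square root of the ionic strength in mol/L), the range of any electrostatic repulsion shrinks quickly as the ionic strength rises:

```python
import math

# Back-of-envelope Debye screening lengths (standard formula for a 1:1 aqueous
# electrolyte at 25 C: lambda_D ~ 0.304 nm / sqrt(I in mol/L)), to illustrate
# why raising ionic strength should shield the postulated enhancer-promoter repulsion.

def debye_length_nm(ionic_strength_molar):
    return 0.304 / math.sqrt(ionic_strength_molar)

for I in (0.05, 0.15, 0.50, 1.00):
    print(f"I = {I:.2f} M -> Debye length = {debye_length_nm(I):.2f} nm")
```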

Monday, September 13, 2021

#65. Why There is Sex [evolution, genetics]

EV  GE

Red, theory; black, fact

The flower Coronilla varia L.

Sex as an evolvability adaptation

There are always two games in town: reproduction and evolution. Since we live on an unstable planet where the environment can change capriciously, species here have been selected for rapid evolvability per se, to enable them to adapt to the occasional rapid environmental change and not go extinct. Apparently, mutations, the starting point for evolutionary adaptation, become more common when the organism is stressed, and stress may partly be a forecast of loss of fertility due to a developing genome-environment mismatch. Under stress conditions, bacteria exhibit the large-scale mutation of transformation, and three types of stress all increased the meiotic recombination rate of fruit flies (Stress-induced recombination and the mechanism of evolvability. Zhong W, Priest NK. Behavioral Ecology and Sociobiology. 2011;65:493-502). Recombination can involve unequal crossing-over, in which changes in gene dose can occur through gene duplication or deletion. However, since most mutations are deleterious (there are more ways to do something wrong than to do it better), many mutations will also reduce fertility, and at precisely the wrong moment: when a reduction in fertility is already impending due to environmental change. The answer was to split the population into two halves, the reproduction specialists and the selection specialists, and to remix their respective genomes at each generation.

The roles of the two sexes

Females obviously do the heavy lifting of reproduction, and males seem to be the gene testers. So if a guy gets a bad gene, so long, and the luckier guy next to him then gets two wives. The phenomenon of greater male variability (Greater male than female variability in regional brain structure across the lifespan. Wierenga LM, Doucet GE, Dima D, Agartz I, Aghajani M, Akudjedu TN, Albajes‐Eizagirre A, Alnæs D, Alpert KI, Andreassen OA, Anticevic A. Karolinska Schizophrenia Project (KaSP) Consortium. Hum. Brain Mapp., doi:10.1002/hbm.25204, and I have never seen so many authors on a paper: 160.) suggests that mutations have more penetrance in males, as befits the male role of cannon fodder/selectees. What the male brings to the marriage bed, then, is field-tested genetic information. This system allows many mutations to be field tested with minimal loss of whole-population fertility, because it is the females who are the limiting factor in population fertility.

Chromosomal mechanisms of greater male variability

Chromosomal diploidy may be a system for sheltering females from mutations, assuming that the default process is for the developing phenotype to be the average of the phenotypes individually specified by the paternal and maternal chromosome sets. Averaging tends to mute the extremes. The males, however, may set up a winner-take-all competition between homologous chromosomes early in development, with one of them inactivated at random. The molecular machinery for this may be similar to that of random X-inactivation in females. The result will be greater penetrance of mutations through to the phenotype and thus greater male variability.

Quantitative prediction

This reasoning predicts that, on a given trait, male variability (as a standard deviation) will be 41% greater than female variability, a testable prediction: averaging two independent chromosome sets halves the variance of an additive trait, so the female standard deviation is the single-set value divided by the square root of 2, and 41% = [SQRT(2) - 1] × 100. Already in my reading I have found a figure of 30%, which is suggestive.
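The square-root-of-two figure can be checked with a few lines of simulation under exactly the assumptions of the previous section (each chromosome set contributes an independent trait value; the female phenotype averages the two sets, the male expresses one set):

```python
import random, statistics

# Quick check of the sqrt(2) prediction under the stated assumptions:
# each parental chromosome set contributes an independent trait value;
# the female phenotype is the average of the two sets, while the male
# phenotype expresses one set chosen at random (winner-take-all).

random.seed(0)
N = 200_000
female = [(random.gauss(0, 1) + random.gauss(0, 1)) / 2 for _ in range(N)]
male = [random.gauss(0, 1) for _ in range(N)]   # one set -> full single-set variance

ratio = statistics.stdev(male) / statistics.stdev(female)
print(f"male/female SD ratio = {ratio:.3f}  (predicted sqrt(2) = 1.414)")
```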

Mechanistic reconciliation with Mendel's laws

The postulated chromosome inactivation process may feature an exemption mechanism that operates on genes present in only one copy per parent. The effect will be to double the penetrance of dominant alleles at that gene. 

Sunday, July 25, 2021

#64. The Checkered Universe [physics]

 PH


Red, theory; black, fact



The basic theoretical vision

This is a theory of everything (TOE) based on a foam model. The foam is made up of two kinds of "bubbles," or "domains": "plus" and "minus." Each plus domain must be completely surrounded by minus domains, and vice versa. Any incipient violation of this rule, call it the "checkerboard rule," causes domains of like type to fuse instantly until the status quo is restored, with release of energy. The energy, typically electromagnetic waves, radiates as undulations in the plus-minus interfaces. The result of many such events is progressive enlargement and diversification of domain sizes. This process, run backward in time, appears to result in a featureless, grey nothingness (imagining contrasting domain types as black and white, inspired by the Yin-and-Yang symbol), thereby giving a halfway-satisfying explanation of how nothing became something. Halfway, because it’s an infinite regress: explaining the phenomenon in terms of the phenomenon. Invoking progressively deepening shades of gray in forward time, to gray out and thus censor the regress encountered in backward time, looks like a direction to explore. A law of conservation of nothingness would require plus domains and minus domains to be present in equal amounts, although this can be violated locally. This law may be at the origin of all the other known conservation laws. The cosmological trend favors domain fusion, but domain budding is nevertheless possible given enough energy.

Those are the givens assumed for this theory of everything. Since there are givens, it is probably not the real theory of everything but rather a simplified physics, and possibly a stepping stone to the TOE.

The dimensionality question

The foam is infinite-dimensional. Within the foam, interfaces of all dimensionalities are abundant. Following Hawking, I suggest that we live on a three-dimensional brane within the foam because that is the only dimensionality conducive to life. The foamy structure of the large-scale galaxy distribution that we observe thus receives a natural explanation: these are the lower-dimensional foam structures visible from within our brane. The interiors of the domains are largely inaccessible to matter and energy. However, we have an infinite regress again, this time toward increasing dimensionality: we never get to the bulk. Is it time to censor again and postulate progressively lessened contrast with greater dimensionality, asymptotic to zero contrast? No; the foam model implies a bulk and therefore a maximum dimensionality, but not necessarily three. But what is so special about this maximum dimensionality? Let us treat yin-yang separation as an ordinary chemical process and apply the second law of thermodynamics to see whether there is some theoretically special dimensionality. Assuming zero free-energy change, we set the enthalpy increase ("work") equal to the entropy increase ("disorder") times the absolute temperature. Postulating that the separation work decreases with dimensionality and the entropy of the resulting space foam increases with dimensionality, we can solve for the special dimensionality we seek. The separation process has no intrinsic entropy penalty because there are no molecules at this level of description. The real, maximum dimensionality would be somewhat greater than the theoretical value, to provide some driving force, which real transformations require. However, is the solution stable? Moreover, the argument implies that the temperature is non-zero. Temperature is here the relative motion of all the minute, primordial domains. This could be leftover separation motion. How could all this motion happen without innumerable checkerboard-rule violations and thus many fusion events? Fusion events can be construed as interactions, and extra dimensions, which we have here, suppress interactions. More on this below. That said, the idea of primordial infinite dimensionality remains beguiling in its simplicity and possibilities.
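The recipe in this paragraph (solve ΔH(d) = TΔS(d) for the dimensionality d) yields a number only once explicit functional forms are assumed, and none are supplied here. The sketch below uses arbitrary placeholder forms purely to show that a falling work curve and a rising entropy curve cross at a single "special" dimensionality:

```python
import math

# Purely illustrative: the recipe "solve deltaH(d) = T * deltaS(d)" needs explicit
# functional forms, which the text does not supply. The forms below are arbitrary
# placeholders, chosen only to show that a decreasing work curve and an increasing
# entropy curve cross at one "special" dimensionality.

def separation_work(d):      # assumed to decrease with dimensionality
    return 10.0 / d

def foam_entropy(d):         # assumed to increase with dimensionality
    return math.log(d)

T = 2.5                      # arbitrary temperature in matching units

d = 1.0
while separation_work(d) > T * foam_entropy(d):   # step until the curves cross
    d += 0.01
print(f"curves cross near d = {d:.2f}")
```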

Since infinite dimensionality is a bit hard to get your mind around, let us inquire what is diminished upon increasing the dimensionality, and just set it to zero to represent infinite dimensionality. Some suggestions: order, interaction, and correlation. To illustrate, imagine two 2-dimensional Ardeans* walking toward each other. When they meet, one must lie down so the other can walk over them before either can continue on their way. That's a tad too much correlation and interaction for my taste.

My intuition is that as dimensions are added, order, correlation, and interaction decrease toward zero asymptotically. This would mean that 4D is not so different from 3D as 3D is from 2D. The latter comparison is the usual test case that people employ to try to understand extra dimensions, but it may be misleading. However, in 4D, room-temperature superconductivity may be the rule rather than the exception, due to extradimensional suppression of the interactions responsible for electrical resistance. The persistent, circulating 4D supercurrents, which are travelling electron waves, may look like electrostatic fields from within our 3-brane, which would help to eliminate action-at-a-distance from physics. Two legs of the electron-wave circulation would travel in a direction in which we cannot point. These ideas also lead to the conclusion that electrostatic fields can be diffracted, and this, I suggest, is what the classical electron-diffraction experiment actually shows. The electrons are particles and are therefore not diffracted; they are accelerated in an electrostatic field that is diffracted, thereby building up a fringe pattern on the photographic plate. The particles are just acting as field tracers. A difficulty is that the electrically neutral neutron can also be diffracted. A diffraction experiment requires that the neutrons move, however, and this will provide a solution, to be discussed below.


Motion

Motion can be modelled as the array of ripples that spreads across the surface of a pond after a stone is thrown in. A segment of the wave packet coincides with the many minute domains that define the atoms of the moving object, and moves them along. The foam model implies surface tension, whereby increases in interface area increase the total energy of the domain. If the brane is locally thrown into undulations, this will increase the surface area of the interface and thus the energy. This accounts for the kinetic energy of moving masses.  

Brane surface tension would be a consequence of the basic yin-yang de-mixing phenomenon, because increases in interfacial area without volume increase (interface crumpling) can be construed as incipient re-mixing, which would go against the cosmological trend. Thus, the interface always tends to minimum area, as if it had surface tension. This tension provides the restoring force that one needs for an oscillation, which is important because waves figure prominently in this theory. However, a wave also needs the medium to have inertia, or resistance to change, and this is a limitation of the present theory.


Mass and fields

Mass would be due to the domains being full of standing waves that store the energy (equivalent to mass) of many historical merging events. The antinodes in the standing wave pattern would be the regular array of atoms thought to make up a crystal (most solid matter is crystalline). The facets of the crystals would correspond to the domain walls. 

The waves could be confined inside the domains by travelling across our 3-brane at right angles, strictly along directions we cannot point. However, something has to leak into the 3-brane to account for electrostatic effects. Crossing the 3-brane perpendicularly is possible by geometry if each particle is a hypersphere exactly bisected by the 3-brane, and mass-associated waves travel around the hypersphere surface. The neutron produces no leakage waves, which could be assured by the presence of a nodal plane coinciding with the spherical intersection of the particle hypersphere with the 3-brane. Electrons and protons could emanate leakage waves, a possibility that suggests the origins of their respective electric fields. However, the fact that these particles have stable masses means that waves must be absorbed from the 3-brane as fast as they exit, meaning that an equilibrium with space must exist. For an equilibrium to be possible, space must be bounded somehow, which is already implied by the foam model. Since we know of only two charged, stable particles, two equilibrium points must exist. This scenario also explains why all electrons are identical and why all protons are identical. If their respective leakage waves are of different frequencies, the two particle types could equilibrate largely independently by a basic theorem of the Fourier transform. Particles of like charge would interact with each other's leakage wave, resulting in a tendency to be entrained by it. This would account for electrostatic repulsion. Particles of opposite charge would radiate at different frequencies and therefore not interact, leading to no entrainment. However, since each particle is in the other's scattering shadow, it will experience an imbalanced force due to the shadow, tending to push it toward the other particle. This effect could explain electrostatic attraction. Gravity may also be due to mutual scatter shadowing, but involving a continuum spectrum of background waves, not the line spectra of charged particles. Background waves are not coupled to any domains, and so do not consist of light quanta, which, according to the present theory, are waves coupled to massless domain pairs. A bisected-hypersphere particle model predicts that subatomic particles will appear to be spheres of a definite fixed radius and to have an effective volume 5.71× greater than expected for a sphere of the same radius in flat space (5.71 ≈ 1 + 1.5π). Background waves that enter the spherical surface will therefore be slow to leave, a property likely to be important for physics.

A very close, even mutualistic, relationship between domain geometry and interface waves may exist, all organized by the principle of seeking minimum energy. Atomic nuclei within the crystal could be much tinier domains, also wave-filled but at much shorter wavelengths. The nuclear domains would sit at exactly the peaks of the electron-wave antinodes because these are the only places where the electron waves have no net component in the plane of the interface. 


The big picture 

Similar to the brane-collision theory, the big bang may have been due to contact between two cosmologically sized domains of four spatial dimensions and opposite type, and our 3-brane is the resulting interface. The matter in our universe would originate from the small domains caught between the merging cosmological hyper-domains. This could account for the inflationary era thought to have occurred immediately after the big bang. The subsequent linear expansion of space may be due to the light emitted by the stars; if light is an undulation in the 3-brane along a large extra dimension, then light emission creates more 3-brane all the time, because an undulating brane has more surface area than a flat one. 

* "Overview of Planiverse" page.