Saturday, July 30, 2016

#10. The Two–test-tube Experiment: Part II [neuroscience]

Red, theory; black, fact.

At this point we have a problem. The experimenting-brain theory predicts zero hard-wired asymmetries between the hemispheres. However, the accepted theory of hemispheric dominance postulates that this arrangement allows us to do two things at once, one task with the left hemisphere and the other task with the right. The accepted theory is basically a parsimony argument. However, this argument predicts huge differences between the hemispheres, not the subtle ones actually found.

My solution is to treat hard-wired hemispheric dominance as an imperfection of symmetry within the framework of the experimenting brain: the human brain is still evolving, and I hypothesize that brain-expanding mutations individually produce small, asymmetric expansions. (See Post 45.) Our left-hemispheric speech apparatus is the most asymmetric part of our brain, so these ideas predict that we are due for another mutation that will expand the right side, matching up the two sides and improving the efficiency of operant conditioning of speech behavior.

These ideas also explain why speech defects such as lisping and stuttering are so common and slow to resolve, even in children, who are supposed to be geniuses at speech acquisition.

This is how the brain would have to work if fragments of skilled behaviors are randomly stored in memory on the left or right side, reflecting the possibility that the two hemispheres play experiment versus control, respectively, during learning.
The illustration shows the theory of motor control I was driven to by the implications of the theory of the dichotomously experimenting brain already outlined. It shows how hemispheric dominance can be reversed independently of the side of the body that should perform the movement specified by the applicable rule of conduct in the controlling hemisphere. The triangular device is a summer that converges the motor outputs of both hemispheres into a common output stream that is subsequently gated into the appropriate side of the body. This arrangement cannot create contention because at any given time, only one hemisphere is active. Anatomically, and from stroke studies, it certainly appears that the outputs of the hemispheres must be crossed, with the left hemisphere controlling only the right side of the body and vice versa.

However, my theory predicts that in healthy individuals, either hemisphere can control either side of the body, and the laterality of control can switch freely and rapidly during skilled performance so as to always use the best rule of conduct at any given time, regardless of the hemisphere in which it was originally created during REM sleep.

This scheme requires two bits of routing information. The first bit, selecting the dominant hemisphere, would be calculated and stored in the basal ganglia. It would be output from the substantia nigra pars reticulata (SNr) and would gate sensory input to the thalamus to favor one hemisphere or the other, by means of actions at the reticular nucleus of the thalamus and the intermediate gray layer of the superior colliculus. The second bit, selecting the responding side of the body, would be stored in the cerebellar hemispheres and would gate motor output to one side of the body or the other, at the red nucleus. Conceivably, the two parts of the red nucleus, the parvocellular and the magnocellular, correspond to the adder and the switch, respectively, shown in the illustration.
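
A minimal sketch of this two-bit routing scheme, in Python. The function name, the reduction of each gate to a single boolean, and the numeric command values are illustrative assumptions, not claims about the anatomy:

```python
# Toy model of the two routing decisions described above:
# bit 1 ("dominance") selects which hemisphere's rule output reaches the summer;
# bit 2 ("laterality") selects which side of the body receives the summed command.

def route_command(left_hemisphere_output, right_hemisphere_output,
                  dominant_is_left: bool, respond_with_left_body: bool):
    # SNr-like gate: only the dominant hemisphere's output passes.
    gated_left = left_hemisphere_output if dominant_is_left else 0.0
    gated_right = 0.0 if dominant_is_left else right_hemisphere_output

    # "Summer" (parvocellular red nucleus in the conjecture): converge both
    # streams into one; at most one is nonzero, so there is no contention.
    command = gated_left + gated_right

    # "Switch" (magnocellular red nucleus in the conjecture): send the common
    # command to one side of the body, independently of which hemisphere won.
    return {"left_body": command if respond_with_left_body else 0.0,
            "right_body": 0.0 if respond_with_left_body else command}

# Example: the right hemisphere holds the best rule, but the left hand must act.
print(route_command(0.0, 0.8, dominant_is_left=False, respond_with_left_body=True))
```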

Under these assumptions, the corpus callosum is needed only to distribute priming signals from the motor/premotor cortices that activate the rule due to fire next, without regard for which side that rule happens to be on. The callosum would never be required to carry signals forward from sensory to motor areas. I see that forward, sensory-to-motor step as the time-critical one, and under this scheme it would never depend on getting signals through the corpus callosum, which is considered to be a signaling bottleneck.

How would the basal ganglia identify the "best" rule of conduct in a given context? I see the dopaminergic substantia nigra pars compacta (SNc) as the most likely place for a hemisphere-specific "goodness" value to be calculated after each rule firing, with hypothalamic servo-error signals processed through the habenula as the main input. The half of the SNc located in the inactive hemisphere would be shut down by inhibitory GABAergic inputs from the adjacent SNr. The dopaminergic nigrostriatal projection would permanently potentiate simultaneously active corticostriatal inputs (carrying context information) to medium spiny neurons (MSNs) of the enkephalin type via a crossed projection, and to MSNs of the substance-P type via uncrossed projections. The former MSN type innervates the external segment of the globus pallidus (GPe), and the latter type innervates the SNr. These latter two nuclei are inhibitory and innervate each other.

I conjecture that this arrangement sets up a winner-take-all kind of competition between GPe and SNr, with choice of the winner being exquisitely sensitive to small historical differences in dopaminergic tone between hemispheres. The "winner" is the side of the SNr that shuts down sensory input to the hemisphere on that side. The mutually inhibitory arrangement could also plausibly implement hysteresis, which means that once one hemisphere is shut down, it stays shut down without the need for an ongoing signal from the striatum to keep it shut down.

Each time the cerebral cortex outputs a motor command, a copy goes to the subthalamic nucleus (STN) and could plausibly serve as the timing signal for a "refresh" of the hemispheric-dominance decision based on the latest context information from cortex. The STN signal presumably removes the hysteresis mentioned above very briefly, then lets the system settle into a possibly new state.
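
Here is a toy sketch of the conjectured winner-take-all dynamics, including the hysteresis and the STN-triggered refresh. The leaky-integrator update rule, the gain, and the step counts are my own placeholder choices; only the wiring logic (mutual inhibition, a dopaminergic "goodness" bias for each side, and a refresh that briefly erases the settled state) comes from the account above:

```python
# Two mutually inhibitory units stand in for the GPe/SNr pair of each side.
# Whichever unit wins shuts down sensory input to its hemisphere; mutual
# inhibition keeps it shut down (hysteresis) until an STN "refresh" resets the
# state and lets the latest dopaminergic bias pick a winner again.

def settle(bias_left, bias_right, state=(0.5, 0.5), steps=200, gain=3.0, dt=0.1):
    left, right = state
    for _ in range(steps):
        # Leaky integration: each unit is driven by its own bias ("goodness")
        # and suppressed by the other unit's activity.
        dl = -left + max(0.0, bias_left - gain * right)
        dr = -right + max(0.0, bias_right - gain * left)
        left, right = left + dt * dl, right + dt * dr
    return left, right

state = settle(bias_left=1.0, bias_right=0.9)            # left wins narrowly
print("winner:", "left" if state[0] > state[1] else "right")

# Hysteresis: a small reversal of the bias does not flip the settled state...
state = settle(bias_left=0.9, bias_right=1.0, state=state)
print("after small bias change:", "left" if state[0] > state[1] else "right")

# ...but an STN refresh (reset to a symmetric state) lets the new bias win.
state = settle(bias_left=0.9, bias_right=1.0, state=(0.5, 0.5))
print("after STN refresh:", "left" if state[0] > state[1] else "right")
```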

We now need a system that decides that something is wrong and that the time to experiment has arrived. This could plausibly be the role of the large cholinergic interneurons of the striatum. They have a diverse array of inputs that could potentially signal trouble with the status quo, and they could implement a decision to experiment simply by reversing the hemispheric dominance prevailing at the time. Presumably, they would do this by a cholinergic action on the surrounding MSNs of both types.

Finally, there is the second main output of the basal ganglia to consider, the internal pallidal segment (GPi). This structure is well developed in primates such as humans but is rudimentary in rodents and even in the cat, a carnivore. It sends its output forward, to the motor thalamus. I conjecture that its role is to organize the brain's knowledge base to resemble block-structured programs. All the instructions in a block would be simultaneously primed by this projection. The block identifier may be some hash of the corticostriatal context information. A small group of cells just outside the striatum, the claustrum, seems to have the connections necessary for preparing this hash. Jump rules, that is, rules of conduct for jumping between blocks, would not output motor commands but block identifiers, which would be maintained online by hysteresis effects in the basal ganglia.
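
A sketch of the kind of block-structured knowledge base being proposed, with ordinary rules emitting motor commands and jump rules emitting block identifiers computed as a hash of context. The data layout, the use of Python's built-in hash as a stand-in for the claustral hash, and the grocery-bag example are illustrative assumptions:

```python
def block_id(context) -> int:
    # Stand-in for the conjectured claustral "hash of the context".
    return hash(context) % 997

# Each rule: (home block, condition on the sensed state, action).
# Ordinary rules emit motor commands; jump rules emit the identifier of the
# next block, which is then held online ("hysteresis") as the primed block.
RULES = [
    (block_id("open bag"), lambda s: s["bag_closed"] and not s["at_sweet_spot"], ("motor", "move to sweet spot")),
    (block_id("open bag"), lambda s: s["bag_closed"] and s["at_sweet_spot"],     ("motor", "rub")),
    (block_id("open bag"), lambda s: not s["bag_closed"],                        ("jump",  block_id("pack groceries"))),
    (block_id("pack groceries"), lambda s: True,                                 ("motor", "put item in bag")),
]

def step(state, primed_block):
    """Fire the first primed rule that matches; return (motor command, primed block)."""
    for home, condition, (kind, payload) in RULES:
        if home == primed_block and condition(state):
            if kind == "jump":
                return None, payload      # jump rules output a block identifier
            return payload, primed_block  # ordinary rules output a motor command
    return None, primed_block

cmd, block = step({"bag_closed": True, "at_sweet_spot": False}, block_id("open bag"))
print(cmd)   # -> "move to sweet spot"
```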

The cortical representation of jump rules would likely be located in medial areas, such as Brodmann areas 23, 24, 31, and 32. Areas 23 and 24 are classed as limbic cortex, and areas 31 and 32 lie between the limbic system and neocortex. This arrangement suggests that, seen as a computer, the brain is capable of executing programs with three levels of indentation, not counting whatever levels may be encoded as chromatin marks in the serotonergic neurons. Dynamic changes in hemispheric dominance might have to occur independently in neocortex, medial cortex, and the limbic system.

Sunday, July 24, 2016

#9. The Two–test-tube Experiment: Part I [neuroscience]

Your Brain is Like This.

Red, theory; black, fact.

The motivating challenge of this post is to explain the hemispheric organization of the human brain: why do we seem to have two very similar brains in our heads, a left one and a right one?

Systems that rely on the principle of trial-and-error must experiment. The genetic intelligence mentioned in previous posts would have to experiment by mutation/natural selection. The synaptic intelligence would have to experiment by operant conditioning. I propose that both these experimentation processes long ago evolved into something slick and simplified that can be compared to the two–test-tube experiment beloved of lab devils everywhere.

Remember that an experiment must have a control, because "everything is relative." Therefore, the simplest and fastest experiment in chemistry that has any generality is the two-test-tube experiment: one tube for the "intervention," and one tube for the control. Put mixture c in both tubes, and add chemical x to the intervention tube only. Run the reaction, then hold the two test tubes up to the light and compare the contents visually. (Remember that, ultimately, the visual system only detects contrasts.) Draw your conclusions.

My theory is basically this: the two hemispheres of the brain are like the two test tubes. Moreover, the two copies of a given chromosome in a diploid cell are also like the two test tubes. In both systems, which is which varies randomly from experiment to experiment, to prevent phenomena analogous to screen burn. The hemisphere that is dominant for a particular action is the last one that produced an improved result when control passed to it from the other. The allele that is dominant is the last one that produced an improved result when it got control from the other. Chromosomes and hemispheres will mutually inhibit to produce winner-take-all dynamics in which at any given time only one is exposed to selection, but it is fully exposed. 

These flip-flops do not necessarily involve the whole system, but may happen independently in each sub-region of a hemisphere or chromosome (e.g., Brodmann areas, alleles). Some objective function, expressing the goodness of the organism's overall adaptation, must be recalculated after each flip-flop, and further flip-flopping suppressed if an improvement is found when the new value is compared to a copy of the old value held in memory. In case of a worsening of the objective function, you quickly flip back to the allele or hemisphere that formerly had control, then suppress further flip-flopping for a while, as before.
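
A sketch of this flip-flop-and-compare procedure. The objective function here is just noise plus a per-side offset, and the suppression interval is an arbitrary placeholder; the control flow (flip, compare to the remembered value, keep or flip back, then suppress further flipping for a while) is the procedure described above:

```python
import random

def run_trial(active_side):
    """Placeholder objective function: goodness of overall adaptation achieved
    while `active_side` (0 or 1) has control.  Here it is just noise plus a
    per-side offset, standing in for real performance measurements."""
    return random.gauss([0.0, 0.3][active_side], 0.5)

def experiment(n_trials=100, suppress_for=5):
    active = random.randint(0, 1)    # which "test tube" currently has control
    best_so_far = run_trial(active)  # copy of the old value held in memory
    hold = 0                         # suppress flip-flopping while > 0
    for _ in range(n_trials):
        if hold > 0:
            hold -= 1
            best_so_far = run_trial(active)
            continue
        active ^= 1                  # flip-flop: hand control to the other side
        new_value = run_trial(active)
        if new_value > best_so_far:  # improvement: keep the new side, pause flipping
            best_so_far, hold = new_value, suppress_for
        else:                        # worsening: flip straight back, pause flipping
            active ^= 1
            hold = suppress_for
    return active

print("side settled on:", experiment())
```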

The foregoing implies multiple sub-functions, and these ideas will not be compelling unless I specify a brain structure that could plausibly carry out each sub-function. For example, the process of comparing the values of the objective function achieved by the left and right hemispheres in the same context would be mediated by the nigrostriatal projection, which has a crossed component as well as an uncrossed component. More on this in the next post.

Sunday, July 17, 2016

#8. What is Intelligence? Part II. Human Brain as Knowledge Base [neuroscience, engineering]

Red, theory; black, fact.

7-17-2016
A previous post's discussion of the genetic intelligence will provide the framework for this post, in which functions will be assigned to brain structures by analogy.

The CCE/structural-gene realization of the if-then rule of AI appears to have three counterparts in neuroscience: a single neuron (dendrites, sensors; axon, effectors); the basal ganglia (striatum, sensors; globus pallidus, effectors); and the corticothalamic system (post-Rolandic cortex, sensors; pre-Rolandic cortex, effectors).

Taking the last system first, the specific association of a pattern recognizer to an output to form a rule of conduct would be implemented by white-matter axons running forward in the brain. Remember that the brain's sensory parts lie at the back, and its motor, or output, parts lie at the front.

The association of one rule to the next within a block of instructions would be by the white-matter axons running front-to-back in the brain. Since the brain has no clock, unlike a computer, the instructions must all be of the if-then type even in a highly automatic block, so each rule fires when it sees evidence that the purpose of the previous rule was accomplished. This leads to a pleasing uniformity, where all instructions have the same form. 

This also illustrates how a slow knowledge base can be morphed into something surprisingly close to a fast algorithm: keep track of the conditional firing probabilities of the rules, and rearrange the rules in their physical storage medium so that high conditional probabilities correspond to small distances, and vice versa.
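
A sketch of that rearrangement. The firing history, the greedy layout heuristic, and the rule names are placeholders; the point is only that observed conditional firing probabilities can be turned into a storage order in which likely successors sit close together:

```python
from collections import Counter

# Observed firing history of rules A..E (stand-in data).
history = list("ABCABCABDEABC")

# Estimate conditional firing statistics (current rule -> next rule).
transitions = Counter(zip(history, history[1:]))

def layout(rules, transitions):
    """Greedy layout: start anywhere, repeatedly place the most probable
    successor next, so high conditional probabilities become small distances."""
    order, remaining = [rules[0]], set(rules[1:])
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda r: transitions.get((last, r), 0))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(layout(sorted(set(history)), transitions))   # e.g. ['A', 'B', 'C', 'D', 'E']
```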

However, in a synaptic intelligence, the "small distances" would be measured on the voltage axis relative to firing threshold, and would indicate a high readiness to fire on the part of some neuron playing the role of decision element for an if-then rule, if the usual previous rule has fired recently. An external priming signal having the form of a steady excitation, disinhibition, or deinactivation could produce this readiness. Inhibitory inputs to thalamus or excitatory inputs to cortical layer one could be such priming inputs.

The low-to-medium conditional firing probabilities would characterize if-then rules that act as jump instructions between blocks, and the basal ganglia appear to have the connections necessary to implement these. 

In Parkinson disease, the basal ganglia are disabled, and the symptoms are as follows: difficulty in getting a voluntary movement started, slowness of movements, undershoot of movements, few movements, and tremor. Except for the tremor, all these symptoms could be due to an inability to jump to the next instruction block. Patients are capable of some routine movements once they get started, and these may represent the output of a single-block program fragment.

Unproven, trial-basis rules of the "jump" type that are found to be useful could be consolidated by a burst of dopamine secretion into the striatum. Unproven, trial-basis rules of the "block" type found useful could be consolidated by dopamine secretion into prefrontal cortex. [The last two sentences express a new idea conceived during the process of keying in this post.] Both of these dopamine inputs are well established by anatomical studies.

(See Deprecated, Part 9) ...influence behavior with great indirection, by changing the firing thresholds of other rules, which themselves operate only on the thresholds of still other rules, and so on in a chain that eventually reaches the rules that actually produce overt behavior. The chain of indirection probably passes medial to lateral over the cortex, beginning with the limbic system. (Each level of indirection may correspond to a level of indentation seen in a modern computer language such as C.) The midline areas are known to be part of the default-mode network (DMN), which is active in persons who are awake but lying passively and producing no overt behavior. The DMN is thus thought to be closely associated with consciousness itself, one of the greatest mysteries in neuroscience.

7-19-2016
It appears from this post and a previous post that you can take a professionally written, high-level computer program and parse it into a number of distinctive features, to borrow a term from linguistics. Such features already dealt with are the common use of if-then rules, block structuring, and levels of indentation. Each such distinctive feature will correspond to a structural feature in each of the natural intelligences, the genetic and the synaptic. Extending this concept, we would predict that features of object-oriented programming will be useful in assigning function to form in the two natural intelligences. 

For example, the concept of a class may correspond to the Brodmann areas of the human brain, and the grouping of local variables with the functions that operate on them may be the explanation of the cerebellum, a brain structure not yet dealt with. In the cerebellum, the local variables would be physically separate from their corresponding functions, but informationally bound to them by an addressing procedure that uses the massive mossy-fiber/parallel-fiber array.
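
A toy rendering of this analogy, with a "Brodmann area" as a class and a separate "cerebellar" store holding each method's local variables, fetched by an addressing key. The class name, the key scheme, and the correction variable are illustrative assumptions, not claims about cerebellar physiology:

```python
# Each "Brodmann area" is a class whose methods are rules of conduct, while
# their local variables live in a physically separate store (the "cerebellum")
# and are fetched by an addressing key, standing in for the conjectured
# mossy-fiber/parallel-fiber lookup.

cerebellum = {}   # separate store of per-method local state

class BrodmannArea4:            # a cortical "class" of motor rules
    def reach(self, target):
        key = (id(self), "reach")                                   # addressing key
        state = cerebellum.setdefault(key, {"last_target": None})   # its locals
        correction = 0.0 if state["last_target"] is None else 0.1
        state["last_target"] = target
        return f"reach toward {target} with correction {correction}"

area4 = BrodmannArea4()
print(area4.reach("cup"))
print(area4.reach("cup"))   # second call finds its locals in the separate store
```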

Monday, July 4, 2016

#7. What is Intelligence? Part I. DNA as Knowledge Base [genetics, engineering]

Red, theory; black, fact.

I have concluded that the world contains three intelligences: the genetic, the synaptic, and the artificial. The first includes (See Deprecated, Part 10) genetic phenomena and is the scientifically-accessible reality behind the concept of God. The synaptic is the intelligence in your head, and seems to be the hardest to study and the one most in need of elucidation. The artificial is the computer, and because we built it ourselves, we presumably understand it. Thus, it can provide a wealth of insights into the nature of the other two intelligences and a vocabulary for discussing them.

Artificial intelligence systems are classically large knowledge bases (KBs), each animated by a relatively small, general-purpose program, the "inference engine." The knowledge bases are lists of if-then rules. The "if" keyword introduces a logical expression (the condition) that must be true for the rule to fire; otherwise control passes immediately to the next rule. The "then" keyword introduces a block of actions the computer is to take when the condition is true. Classical AI suffers from the problem that as the number of if-then rules increases, operation speed decreases dramatically due to an effect called the combinatorial explosion.
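
A minimal sketch of such a knowledge base and inference engine. The rules and the working-memory representation are placeholders; the point to notice is the scan of the entire rule list on every cycle, which is what becomes expensive as the rule count grows:

```python
# Minimal knowledge base of if-then rules and a naive inference engine.
# Every cycle scans every rule against the working memory; it is this
# scan-everything behavior that scales badly as the rule list grows.

rules = [
    {"if": lambda wm: "hungry" in wm and "has_food" in wm, "then": ["eat"]},
    {"if": lambda wm: "eat" in wm,                          "then": ["sated"]},
    {"if": lambda wm: "sated" in wm,                        "then": ["rest"]},
]

def inference_engine(working_memory, max_cycles=10):
    for _ in range(max_cycles):
        fired = False
        for rule in rules:                       # linear scan of the whole KB
            if rule["if"](working_memory):
                new = set(rule["then"]) - working_memory
                if new:                          # fire only if it adds something
                    working_memory |= new
                    fired = True
        if not fired:
            break
    return working_memory

print(inference_engine({"hungry", "has_food"}))  # -> includes eat, sated, rest
```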

A genome can be compared to a KB in that it contains structural genes and cis-acting control elements (CCEs). The CCEs trigger the transcription of the structural genes into messenger RNAs in response to environmental factors, and these are then translated into proteins that have some effect on cell behavior. The analogy to a list of if-then rules is obvious: a CCE evaluates the "if" condition, and the conditionally translated protein enables the "action" taken by the cell if the condition is true.

Note that the structural gene of one rule precedes the CCE of the next rule along the DNA strand. Surely this circumstance also represents information, but what could it be used for? It could be used to order the rules along the DNA strand in the same sequence as the temporal sequence in which the rules are normally applied, given the current state of the organism's world. This seems to be a possible solution to the combinatorial explosion problem, leading to much shorter delays on average before the transcription machinery arrives where it is needed. I suspect that someday, it will be to this specific arrangement that the word "intelligence" will refer.
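
A sketch of the claimed benefit. The rule names, the "typical day" firing sequence, and the distance measure (rule positions along the strand) are placeholders; the comparison just shows that a layout matching the usual temporal sequence minimizes the distance traveled between consecutive firings:

```python
# If rules are laid out along the "DNA strand" in the order they are usually
# applied, the transcription machinery travels a short distance between
# consecutive firings; a scrambled layout forces long excursions.

usual_sequence = ["wake", "forage", "eat", "digest", "sleep"] * 20   # typical days

def total_travel(layout, firing_sequence):
    position = {rule: i for i, rule in enumerate(layout)}
    return sum(abs(position[b] - position[a])
               for a, b in zip(firing_sequence, firing_sequence[1:]))

ordered   = ["wake", "forage", "eat", "digest", "sleep"]      # matches usual order
scrambled = ["eat", "sleep", "wake", "digest", "forage"]

print("travel, ordered layout:  ", total_travel(ordered, usual_sequence))
print("travel, scrambled layout:", total_travel(scrambled, usual_sequence))
```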
The process of putting the rules into such a sequence may involve trial and error, with transposon jumping providing the random variation on which selection operates. A variant of this process would involve stabilization, by methylation, of recombination sites that have recently produced successful results. These results would initially be encoded in the organism's emotions, as a proxy for reproductive success. In this form, the signal can be rapidly amplified by inter-individual positive-feedback effects. It would then be converted into DNA methylation signals in the germ line. (See my post on mental illness for possible mechanisms.) DNA methylation is known to be able to cool recombination hot spots.

A longer-timescale process involving meiotic crossing-over may create novel rules of conduct by breaking the DNA between the promoter and the structural gene of the same rule, a process analogous to the random-move generation discussed in my post on dreaming. Presumably, the longest-timescale process would be the creation of individual promoters and structural genes with new recognition capabilities and new effects, respectively. This would happen by point mutation and classical selection.
How would the genetic intelligence handle conditional firing probabilities in the medium-to-low range? This could be done by cross-linking nucleosomes via the histone side chains in such a way as to cluster the CCEs of likely-to-fire-next rules near the end of the relevant structural gene, by drawing together points on different loops of DNA. The analogy here would be to a science-fictional "wormhole" from one part of space to another via a higher-dimensional embedding space. In this case, "space" is the one-dimensional DNA sequence with distances measured in kilobases, and the higher-dimensional embedding space is the three-dimensional physical space of the cell nucleus.
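
As a data-structure sketch: the linear order handles each rule's most probable successor by simple adjacency, while explicit "wormhole" links handle medium-probability successors that lie far away in the sequence. The rule names and the link table are illustrative assumptions:

```python
# Data-structure rendering of the two mechanisms: the linear order covers the
# most probable successor of each rule (adjacency), while explicit "wormhole"
# links cover medium-probability successors that are far away in the sequence.

linear_order = ["r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8"]

# Wormholes: rule -> successors reachable without scanning the intervening DNA.
wormholes = {"r2": ["r7"], "r5": ["r1"]}

def next_candidates(rule):
    i = linear_order.index(rule)
    adjacent = linear_order[i + 1:i + 2]          # high-probability successor
    return adjacent + wormholes.get(rule, [])     # plus medium-probability jumps

print(next_candidates("r2"))   # -> ['r3', 'r7']
```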

The cross-linking is presumably created and/or stabilized by the diverse epigenetic marks known to be deposited in chromatin. Most of these marks will certainly change the electric charge and/or the hydrophobicity of amino acid residues on the histone side chains. Charge and hydrophobicity are crucial factors in binding between proteins. The variety of such changes that are possible is one way of accounting for the diversity of the marks.

Mechanistically, there seems to be a great divide between the handling of high and of medium-to-low conditional probabilities. This may correspond with the usual block structure of algorithms, with transfer of control linear and sequential within a block, and by jump instruction between blocks.

Another way of accounting for the diversity of epigenetic marks, mostly due to the diversity of histone marks, is to suppose that they can be paired up into negative-positive, lock-key partnerships, each serving to stabilize by ionic bonding all the wormholes in a subset of the chromatin that deals with a particular function of life. The number of such pairs would equal the number of functions.

Their lock-key specificity would prevent wormholes, or jumps, from forming between different functions, which would cause chaos. If the eukaryotic cell is descended from a glob-like array of prokaryotes, with internal division of labor and specialization, then by one simple scheme, the specialist subtypes would be defined and organized by something like mathematical array indexes. For parsimony, assume that these array indexes are the different kinds of histone marks, and that they simultaneously are used to stabilize specialist-specific wormholes. A given lock-key pair would wormhole specifically across regions of the shared genome not needed by that particular specialist.

A secondary function of the array indexes would be to implement wormholes that execute between-block jumps within the specialist's own program-like KB. With the consolidation of most genetic material in a nucleus, the histone marks would serve only to produce this secondary kind of jump, while keeping functions separate and maintaining an informational link to the ancestral cytoplasmic compartment. The latter could be the basis of sorting processes within the modern eukaryotic cell.

Monday, June 27, 2016

#6. Mental Illness as Communication [neuroscience, genetics]

Red, theory; black, fact.

The effects of most deleterious mutations are compensated by negative feedback processes occurring during development in utero. However, if the population is undergoing intense Darwinian selection, many of these mutations become unmasked and therefore contribute variation for selection. (Jablonka and Lamb, 2005, The MIT Press, "Evolution in Four Dimensions")

However, since most mutations are harmful, a purely random process for producing them, with no pre-screening, is wasteful. Raw selection alone is capable of scrubbing out a mistake that gets as far as being born, at great cost in suffering, only to have, potentially, the very same random mutation happen all over again the very next day, with nothing learnt. Repeat ad infinitum. This is absurd, and it quarrels with the engineer in me, and I like to say that evolution is an engineer. Nowadays, evolution itself is thought to evolve. A simple example of this is the evolution of DNA repair enzymes, which were game-changers, allowing much longer genomes to be transmitted to the next generation and resulting in the emergence of more complex life forms.

An obvious, further improvement would be a screening, or vetting process for genetic variation. Once a bad mutation happens, you mark the offending stretch of DNA epigenetically in all the close relatives of the sufferer, to suppress further mutations there for a few thousand years, until the environment has had time to change significantly.

Obviously, you also want to mark the sites of beneficial mutations in the opposite way, and even turn them into recombination hot spots for a few millennia, to keep the party going. Hot spots may even arise randomly and spontaneously, as true, selectable epi-mutations. The downside of all this is that even in a hot spot, most mutations will still be harmful, leading to the possibility of "hitchhiker" genetic diseases that cannot be efficiently selected against because they are sheltered in a hot spot. Cystic fibrosis may be such a disease, and as the hitchhiker mechanism would predict, it is caused by many different mutations, not just one. It would be a syndrome defined by the overlap of a vital structural gene and a hot spot, not by a single DNA mutation. I imagine epigenetic hot spots to be much more extended along the DNA than a classic point mutation.

It is tempting to suppose that the methylation islands found on DNA are these hot spots, but the scanty evidence available so far suggests that methylation suppresses recombination hot spots, which are generally defined non-epigenetically, by the base-pair sequence.

The human brain has undergone rapid, recent evolutionary expansion, presumably due to intense selection, presumably unmasking many deleterious mutations affecting brain development that were formerly silent. Since the brain is the organ of behavior, we expect almost all these mutations to indirectly affect behavior for the worse. That explains mental illness, right?

I'm not so sure; mental illnesses are not random, but cluster into definable syndromes. My reading suggests the existence of three such syndromes: schizoid, depressive, and anxious. My theory is that each is defined by a different recombinant hot spot, as in the case of CF, and may even correspond to the three recently-evolved association cortices of the brain, namely parietal, prefrontal, and temporal, respectively. The drama of mental illness would derive from its communication role in warning nearby relatives that they may be harbouring a bad hot spot, causing them to find it and cool it by wholly unconscious processes. Mental illness would then be the push back against the hot spots driving human brain evolution, keeping them in check and deleting them as soon as they are no longer pulling their weight fitness-wise. The variations in the symptoms of mental illness would encode the information necessary to find the particular hot spot afflicting a particular family.

Now all we need is a communication link from brain to gonads. Sperm are produced from the stem-like, perpetually self-renewing spermatogonia by mitotic divisions followed by the two meiotic divisions. The spermatogonia sit just outside the blood-testis barrier and are therefore exposed to blood-borne hormones. These cells are known to have receptors for the hypothalamic hormone orexin A*, as well as many other receptors for signaling molecules that do or could plausibly originate in the brain, as orexin does. Some of these receptors are:
  • retinoic acid receptor α
  • glial cell-derived neurotrophic factor (GDNF) receptor
  • CB2 (cannabinoid type 2) receptor
  • p75 (receptor for nerve growth factor, NGF)
  • kisspeptin receptor.

*Joshi D, Singh SK. Localization and expression of Orexin A and its receptor in mouse testis during different stages of postnatal development. Gen Comp Endocrinol. 2016 May 9. doi: 10.1016/j.ygcen.2016.05.006. [Epub ahead of print]

PS: for brevity, I left out mention of three sub-functions necessary to the pathway: an intracellular gonadal process transducing receptor activation into germ line-heritable epigenetic changes, a process for exaggerating the effects of bad mutations into the development of monsters or behavioral monsters for purposes of communication, and a process of decoding the communication located in the brains of the recipients.

Saturday, June 18, 2016

#5. Why We Dream [neuroscience]

Red, theory; black, fact.

The Melancholy Fields

Something I still remember from Psych 101 is the prof's statement that "operant conditioning" is the basis of all voluntary behavior. The process was discovered in lab animals such as pigeons by B.F. Skinner in the 1950s and can briefly be stated as "If the ends are achieved, the means will be repeated." (Gandhi said something similar about revolutionary governments.)

I Dream of the Gruffalo. Pareidolia as dream imagery.

Let's say The Organism is in a supermarket checkout line and can't get the opposite sides of a plastic grocery bag unstuck from each other no matter how it rubs, blows, stretches, picks at, or pinches the bag. At great length, a rubbing behavior by chance happens near the sweet spot next to the handle, and the bag opens at once. Thereafter, when in the same situation, The Organism goes straight to the sweet spot and rubs, for a great savings in time and aggravation. This is operant conditioning, which is just trial-and-error, like evolution itself, only faster. Notice how it must begin: with trying moves randomly--behavioral mutations. However, the process is not really random like a DNA mutation. The Organism never tries kicking out its foot, for example, when it is the hand that is holding the bag. Clearly, common sense plays a role in getting the bag open, but any STEM-educated person will want to know just what this "common sense" is and how you would program it. Ideally, you want the creativity and genius of pure randomness, AND the assurance of not doing anything crazy or even lethal just because some random-move generator suggested it. You vet those suggestions.

That, in a nutshell, is dreaming: vetting random moves against our accumulated better judgment to see if they are safe--stocking the brain with pre-vetted random moves for use the next day when stuck. This is why the emotions associated with dreaming are more often unpleasant than pleasant: there are more ways to go wrong than to go right. (This is why my illustrations for this post are melancholy and monster-haunted.) The vetting is best done in advance (e.g., while we sleep) because there's no time in the heat of the action the next day, and trial-and-error with certified-safe "random" moves is already time-consuming without having to do the vetting on the spot as well.

Dreams are loosely associated with brain electrical events called "PGO waves," which begin with a burst of action potentials ("nerve impulses") in a few small brainstem neuron clusters, then spread to the visual thalamus, then to the primary visual cortex. I theorize that each PGO wave creates a new random move that is installed by default in memory in the cerebral cortex and is then tested in the inner theater of dreaming to see what the consequences would be. In the event of a foreseen disaster, the move is scrubbed from memory, or better yet, added as a "don't do" to the store of accumulated wisdom. Repeat all night.

If memory is organized like an AI knowledge base, then each random move would actually be a connection from a randomly-selected but known stimulus to a randomly-selected but known response, amounting to adding a novel if-then rule to the knowledge base. Some of the responses in question could be strictly internal to the brain, raising or lowering the firing thresholds of still other rules.
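
A sketch of the nightly loop as described: each "PGO wave" proposes a random but well-formed stimulus-response link, which is provisionally installed, tested in the dream, and either kept or filed as a "don't do." The stimulus and response lists, and the disaster test standing in for the dream's inner theater, are placeholders:

```python
import random

KNOWN_STIMULI   = ["bag stuck", "door locked", "kettle boiling"]
KNOWN_RESPONSES = ["rub near handle", "kick", "pour", "wait"]

knowledge_base = []      # pre-vetted if-then rules, usable when awake
dont_do        = []      # vetoed rules, kept as accumulated wisdom

def simulate_consequences(stimulus, response):
    """Placeholder for the dream's inner theater: return True if the imagined
    outcome is a foreseen disaster.  Here, kicking anything counts as one."""
    return response == "kick"

def pgo_wave():
    """One 'PGO wave': propose a random but well-formed stimulus-response link."""
    return random.choice(KNOWN_STIMULI), random.choice(KNOWN_RESPONSES)

def one_night(n_waves=20):
    for _ in range(n_waves):
        rule = pgo_wave()                      # install a random move by default
        if simulate_consequences(*rule):       # vet it in the dream
            dont_do.append(rule)               # disaster foreseen: file as "don't do"
        else:
            knowledge_base.append(rule)        # safe: keep for use the next day

one_night()
print(len(knowledge_base), "vetted moves;", len(dont_do), "don't-do entries")
```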

In "Evolution in Four Dimensions" [1st ed.] Jablonka and Lamb make the point that epigenetic, cultural, and symbolic processes can come up with something much better than purely random mutations: variation that has been subjected to a variety of screening processes.

Nightmares involving feelings of dread superimposed on experiencing routine activities may serve to disrupt routine assumptions that are not serving you well (that is, you may be barking up the wrong tree).

Thursday, June 9, 2016

#4. My First Theory of Everything (TOE) [physics]

Red, theory; black, fact.

The nucleus around which a TOE will hopefully crystallize.

Alocia and Anaevia

In my first post, I made a case for the existence of absolute space and even suggested that space is some kind of condensate (e.g., a crystal). The divide-and-conquer strategy that has served us so well in science suggests that the next step is to conceptually take this condensate apart into particles. The first question that arises is whether these particles are themselves situated in an older, larger embedding space, or come directly out of spacelessness (i.e., a strange, hypothetical early universe that I call "Alocia," my best Latin for "domain of no space." Going even further back, there would have been "Anaevia," "domain of no time." Reasoning without time seems even trickier than reasoning without space.)

What came before space?

The expansion of our universe suggests that the original, catastrophic condensation event, the Big Bang, was followed by further, slower accretion that continues to this day. However, the resulting expansion of space is uniform throughout its volume, which would be impossible if the incoming particles had to obey the rules of some pre-existing space. If there were a pre-existing space, incoming particles could only add to the exterior surface of the huge condensate in which we all presumably live, and could never access the interior unless our universe were not only embedded in a 4-space, but hyper-pizza-shaped as well. The latter is unlikely because self-attraction of the constituent particles would crumple any hyper-pizza-shaped universe into a hypersphere in short order. (Unless it spins?) Conclusion: the particles making up space probably have no spatial properties themselves, and bind together in a purely informational sense, governed by Hebb's rule. 

Hebb's rule was originally a neuroscience idea about how learning happens in the brain. My use of it here does NOT imply that a giant brain somehow underlies space. Rather, the evolutionary process that led to the human brain re-invented Hebb's rule as the most efficient way of acquiring spatial information. 
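
For reference, the conventional form of Hebb's rule that the post borrows: the connection between two signal sources is strengthened in proportion to the product of their activities, so correlated sources end up strongly bound. A minimal sketch, with random signals standing in for whatever the hypothetical space-forming particles emit:

```python
import random

# Conventional Hebbian update: the weight between two signal sources grows in
# proportion to the product of their activities, so correlated sources bind.

def hebbian_weight(signal_a, signal_b, rate=0.1):
    w = 0.0
    for a, b in zip(signal_a, signal_b):
        w += rate * a * b          # Hebb's rule: delta_w = rate * pre * post
    return w

shared = [random.gauss(0, 1) for _ in range(1000)]
noise  = [random.gauss(0, 1) for _ in range(1000)]

correlated   = ([s + random.gauss(0, 0.1) for s in shared], shared)
uncorrelated = (noise, shared)

print("correlated sources bind strongly:", round(hebbian_weight(*correlated), 2))
print("uncorrelated sources barely bind:", round(hebbian_weight(*uncorrelated), 2))
```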

Hebb's rule pertains to signal sources: how could hypothetical space-forming particles come up with the endless supply of energy required to pump out white noise, waves, etc., 24/7? Answer: these "particles" are the growing tips of time lines, which themselves grow by an energy-releasing accretion process. The chunks that accrete are variable in size or interrupted by voids, so timeline extension has entropy associated with it, and this entropy represents the signals needed by Hebb's rule.

I am well aware of all the space-bound terms in the previous paragraph (underlined), supposedly about goings-on in Alocia, the domain of no space; however, I am using models here as an aid to thought, a time-honored scientific technique.

Is cosmological expansion some kind of accretion?

I imagine that Alocia is home to large numbers of space-like condensates, with a size distribution favoring the microscopic, but with a long tail extending toward larger sizes. Our space grows because these mostly tiny pre-fab spaces are continually inserting themselves into it, as soon as their background signal pattern matches ours somewhere. This insertion process is probably more exothermic than any other process in existence. If the merging space happens to be one of the rarer, larger ones, the result would be a gamma ray burst bright enough to be observed at cosmological distances and generating enough pure energy to materialize all the cosmic rays we observe.

The boundary problem

I suspect that matter is annihilated when it reaches the edge of a space. This suggests that our space must be mostly closed to have accumulated significant amounts of matter. This agrees with Hawking's no-boundary hypothesis. The closure need not be perfect, however; indeed, that would be asking a lot of chance. Imperfections in the closure of our universe may take the form of pseudo-black holes: cavities in space that lack fields. If they subsequently acquire fields from the matter that happens to hit them, they could evolve to closely resemble super-massive black holes, and be responsible for nucleating galaxies.

Conclusions

  • Spatial proximity follows from correlations among processes, and does not cause them.
  • Any independence of processes is primordial and decays progressively.
  • The universe evolves through a succession of binding events, each creating a new property of matter, which can be interpreted as leftover entropy.
  • Analysis in the present theoretical framework proceeds by declaring familiar concepts to be conflations of these properties, e.g., time = change + contrast + extent + unidirectional sequence; space = time + bidirectional sequence.