Showing posts with label origin of universe. Show all posts

Friday, March 8, 2024

#71. A Cosmological Setting for a GR-QM Unification [physics]


Red, theory; black, fact


To unify these points most simply, you have to go outside the region of points.


“The Sphere,” campus of the National Research Council, Ottawa 

Figure 1. The expanding 5-ball

Figure 2. A wave packet


The Big Picture

The spacetime of general relativity (GR) is here considered to be an expanding 4D hyperball (4-ball) on the surface of an expanding 5D hyperball (5-ball). The latter is surrounded by subatomic-sized 5-balls ("paramorfs") that can fuse with the big 5-ball nearby; this fusion is the mechanism by which the big 5-ball enlarges. (See Fig. 1.) Technically, a "sphere" is just a surface, with a dimensionality one less than that of the embedding space. I use "ball" here to refer to the embedding-space dimensionality.

The Little Picture

Each fusion event sends out a ripple on the surface of the big 5-ball that travels at the speed of light in vacuum. A sequence of fusions happening in the correct order causes the ripples to add up to a shock wave at some point. At the maximum of the shock wave, the surface of the 5-ball is thrown out especially far into the surrounding emulsion of paramorfs, where it makes contact with yet another paramorf, resulting in yet another fusion event and another ripple, which has the correct phase to add to the shock wave. The result is a self-sustaining cycle that leads to persistence and thus observable particle-like phenomena. (See Fig. 2.) The shock-wave speed, or "group velocity," will be somewhat less than the speed of light, or "phase velocity," so that ripples bleed out the front continuously. This feature of the theory was introduced to prevent the amplitude of the particle from growing without limit. If the particle is travelling exactly along the time dimension, this bleed will be into the future direction. Therefore, "the future" will have a limited physical reality.
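
A small numerical sketch of the group-velocity-below-phase-velocity claim, using an arbitrary toy dispersion relation ω(k) = c₀√k (an assumption, not taken from the post) for which the group velocity is exactly half the phase velocity; the packet envelope is seen to lag behind the carrier crests, so ripples exit the front:

```python
import numpy as np

# Toy dispersion relation (assumption): omega(k) = c0*sqrt(k), so that
# group velocity = d(omega)/dk = (phase velocity)/2 at every wavenumber.
c0, k0 = 1.0, 20.0                              # carrier wavenumber
x = np.linspace(-40, 120, 8000)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
omega = c0 * np.sqrt(np.abs(k))

# Initial packet: Gaussian envelope times carrier exp(i*k0*x)
u0 = np.exp(-x**2 / (2 * 3.0**2)) * np.exp(1j * k0 * x)
U0 = np.fft.fft(u0)

def centroid(t):
    """Centre of the packet's energy |u|^2 after evolving each mode for time t."""
    u = np.fft.ifft(U0 * np.exp(-1j * omega * t))
    w = np.abs(u)**2
    return np.sum(x * w) / np.sum(w)

t = 40.0
v_group_measured = (centroid(t) - centroid(0.0)) / t
print("phase velocity of carrier :", c0 / np.sqrt(k0))        # ~0.224
print("group velocity (analytic) :", c0 / (2 * np.sqrt(k0)))  # ~0.112
print("group velocity (measured) :", v_group_measured)        # close to the analytic value
```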

This mechanism was inspired by the superradiant nitrogen laser, in which nitrogen is excited by a zone of corona discharge travelling at nearly the speed of light. The mechanism also echoes Born's rule of quantum mechanics (QM), which states that the square of the wave function is proportional to the probability of observing a particle. If wave curvature, rather than displacement amplitude, determines paramorf fusion probability, then we get something even closer to Born's rule. The curvature of a sine wave is not its square, but the resemblance is striking. Perhaps an experimental verification of Born's rule with unprecedented accuracy is warranted to distinguish the two theories.
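
A quick numerical comparison of the two candidate fusion-probability laws, the curvature of a low-amplitude sine wave versus its square; a sketch only, with arbitrary amplitude and wavenumber:

```python
import numpy as np

# Compare the curvature profile of a low-amplitude sine wave with its square
# (the Born-rule quantity). Amplitude and wavenumber are arbitrary.
A, k = 0.1, 1.0
x = np.linspace(0, 2 * np.pi, 2001)
psi = A * np.sin(k * x)
d1 = np.gradient(psi, x)
d2 = np.gradient(d1, x)
curvature = np.abs(d2) / (1 + d1**2)**1.5    # unsigned curvature of the waveform

born = psi**2                                 # Born's rule: probability ~ square
print("shape correlation:", np.corrcoef(curvature, born)[0, 1])
# Prints roughly 0.97: both profiles vanish at the nodes and peak at the
# antinodes, but they are not identical, so a sufficiently precise test of
# Born's rule could in principle distinguish the two laws.
```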

Making It Messier, Like Reality

The big 5-ball may be filled with an emulsion of yin paramorfs in a continuous yang phase, as well as being surrounded by an emulsion of yang paramorfs in a continuous yin phase. Droplets of yin space could get injected into the interior as a side effect after each yang paramorf fusion event. This would explain why curvature alone dictates fusion probability: a concavity reaching interior yin paramorfs is as effective as a convexity reaching exterior yang paramorfs, and no depletion zone will develop over time. Yin and yang space are terms coined in a previous post, “The Checkered Universe.”

The Inflationary Era

Particle formation is entropically disfavored (it requires a precise configuration unlikely to arise by chance) and thus only happens when paramorf fusions are frequent due to causes other than the presence of particles. If spontaneous fusions are more frequent where the curvature of the 5-ball is greater, then they will be abundant while the growing 5-ball is still tiny and thus intensely curved. This would be seen in our 4-ball as the inflationary era of the Big Bang.

A problem is that the paramorfs themselves are the most intensely curved elements in this system. Possibly, a binary paramorf fusion event releases so much energy in such a confined space that the fusion product immediately splits apart, resulting in no net effect overall. Analogously, in gas-phase chemistry, some two-molecule reactions will not go without a third “collision partner” to carry off some of the energy released. 

Time

The surface of our 4-ball would be formed by the stable particles radiating out of our local inflationary zone on the 5-ball into newly created, blank 4-surface (see Figure 1). This radiation would define the post-inflationary era. Our time dimension would be one of the radii. These particles propagate along time-as-a-given-dimension, as opposed to time-as-a-clock-reading; the position of a particle along its track is the clock reading.

Mechanistic Variations

The illustrated mechanism of particle creation (see Figure 2) is periodic-deterministic and may account for photons and leptons. The corresponding chaotic mechanism may account for baryons, and the corresponding probabilistic mechanism may account for dark matter. The close relationship we see today between protons and electrons could have been due to their relationship during the inflationary era; the vicinity of one could have served as an incubator for the other.

Consistency with Relativity

The multitude of expanding spacetime ripples predicted to surround any massive object would comprise the spacetime curvature referred to by the Einstein tensor of the relativistic field equations. The asymmetry of the wave packet that leads to the shock wave accounts for momentum. According to special relativity, mass-equivalent energy is just the component of the momentum along the time axis.

A Geometric Underpinning for this Theory

For a fixed radius of 1, the 5-ball has the greatest volume of any integer ball dimensionality. (See the Wikipedia article on "n-sphere.") Thus, this dimensionality could have been forced by some principle of minimizing the radius-to-volume ratio, call it a compaction principle (in a physical, not topological, sense), the existence of which is already implied by the assumed ball shape. We cannot invoke gravity here to produce compaction because gravity emerges at a higher level of description than this. A surface-tension-like effect related to the permittivity of free space may serve, and such an effect is already implied by invoking ripples on the surface. However, mention of ripples implies that the governing differential equation has oscillatory solutions, which seems also to require a medium with inertia, perhaps related to the permeability of free space.
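
This is easy to check from the standard unit-ball volume formula V_n = π^(n/2)/Γ(n/2 + 1):

```python
import math

# Volume of the unit n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1)
for n in range(1, 11):
    v = math.pi**(n / 2) / math.gamma(n / 2 + 1)
    print(n, round(v, 4))
# The output peaks at n = 5 (V_5 ~ 5.2638), consistent with the claim that the
# 5-ball encloses the most volume per unit radius among integer dimensionalities.
```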

Beyond Geometry

If an overarching process of yin-yang separation existed, which would explain why all observations are ultimately observations of contrasts, this process would arguably have a smoothing effect on any resulting interfaces. Such smoothing would suggest surface tension when considered spatially and inertia when considered temporally. I suspect that electromagnetism and matter waves emerge from these simple ingredients. Conservation of paramorf volume would enter the mathematical proof as a constraint.

A limitation of this theory is that it does not explain the assumed presence of discrete, ancient inflationary zones on the surface of the 5-ball.

A Sixth Dimension Is Necessary

Close inspection of the volume-versus-dimensionality curve for n-balls of radius 1 suggests that maximum volume occurs at a fractional dimensionality somewhat above 5, at about five and a quarter. Under the compaction principle, this circumstance would lead to a squashed (oblate) 6-ball about one-quarter as thick as it is wide, with greatest curvature at the equator. (Here I am making an analogy with the Earth's surface, which is an oblate spheroid.) This uneven distribution of curvature would cause the equatorial region to lose its inflationary status later than the polar regions. The suggestion is that the universal equatorial region spawned all the particles we can now see during the late inflationary era, and that our familiar 3-space corresponds to a line of latitude on the oblate 6-ball travelling steadily toward a pole.
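
Treating dimensionality as a continuous variable and maximizing the same unit-ball volume formula numerically gives a peak near 5.26, consistent with the "about five and a quarter" estimate; a short sketch using scipy:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

# Find the real-valued dimensionality n that maximizes V(n) = pi^(n/2) / Gamma(n/2 + 1).
neg_vol = lambda n: -(np.pi**(n / 2) / gamma(n / 2 + 1))
res = minimize_scalar(neg_vol, bounds=(1, 10), method="bounded")
print("maximizing dimensionality:", res.x)      # ~5.2569
print("maximum unit-ball volume :", -res.fun)   # ~5.278
```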

This scenario allows the existence of ancient, dilute matter of non-equatorial origin coexisting with our 3-space. This ancient, dilute matter could account for cosmic rays and some of the diffuse cosmic gamma glow. Some of these ancient particles would by chance approach us in our future light cones and would therefore interact with our 3-space as antimatter. The resulting annihilation events would produce gamma rays and neutrinos. Those particles that escape annihilation could potentially re-emerge from our spacetime in our past light cones and at a different point, becoming matter cosmic rays. Cosmic particles following spacelike trajectories may not interact strongly with us, like two waves crossing at right angles, but Born's rule predicts some interaction.

A Second Limitation of this Theory

Relativity theory denies the existence of an absolute frame of reference, which I have just re-introduced in the form of the surface of a large ball. Perhaps this limitation can be addressed by showing that the concept of no absolute frame of reference can be replaced with the concept of space-tilted matter. Here, the lengths of meter sticks change because the structure of Figure 2 tilts, so that propagation is no longer purely in time but now has a component in space; the length change must be of exactly the degree necessary to guarantee the null result of the Michelson-Morley experiment.

High Dimensionality

The surface of a 6-ball is a 5-dimensional space. Particle propagation on this surface uses up one of these dimensions, turning it into time. However, the resulting spacetime has four dimensions of space and we see only three. What happened to the other one? Most likely it was largely suppressed by black hole formation shortly after the inflationary era. Black hole formation should be very facile in four spatial dimensions because gravitational orbits are unstable and radiative cooling is relatively efficient. This places us on the event horizon of one of these 4D black holes and suggests that the event horizon actually is the membrane it seems to be in some theoretical studies. Considered geometrically, the event horizon is a surface and will therefore have a dimensionality one less than that of the bulk. Life on this surface will therefore be three-dimensional.
 
In addition, this theory clearly provides a multiverse, because there can be many such hyper black holes, thereby answering the fine-tuning-for-life problem that inspired the anthropic principle.

String theory posits that a particle is a one-dimensional vibrating string embedded in a higher-dimensional spacetime. My theory, in contrast, posits that a particle is a three-dimensional system embedded in six dimensions. We are situated in a privileged location in 6-space in which three of these dimensions have an inward and an outward direction. An analogous point in 3-space would be the corner of a cube. The wave component of particles would oscillate along a vector that can rotate in a wholly extradimensional plane, with an axis of rotation perpendicular to all three dimensions of space, possibly coinciding with time. This would be the spin of the particle. In the cube analogy, one of the edges parallel to the time dimension is spiraling. If the vector rotates in a plane contained within 3-space, this would be the circular polarization of light. A baryon might consist of a trio of fermions, one on each of the three edges meeting at the cube corner and each offset a short distance back from the corner. This arrangement might create a tiny, semi-closed chamber where ripples are concentrated and thus intensified. This, in turn, would enhance paramorf capture, which would dynamically stabilize the structure.

See Figure 3. In this figure, the instantaneous structure resembles one edge of a cube merging with a surface. The line between points A may function as a closed chamber for fusion ripples because of the right-angle relationships at each end, leading to intensified shock waves inside and intensified paramorf fusion. This, in turn, dynamically maintains the geometry shown.

Etymology: "warped spacetime," Greek: paramorfoménos chorochrónos, thus: "paramorf."


Figure 3. A hyper-black hole progressing across the surface of the big 6-ball. The three spatial dimensions of relativity theory have been suppressed for clarity and are represented by points A; t is time.

Tilting at a Conceptual Unification

In general, spacetime structures would tend to evolve toward greater efficiency in paramorf capture, and deviations from these structures would appear to be opposed by forces. This can be cited as a general principle in exploring the present theory.

For example, two fermions could capture paramorfs cooperatively: capture by one triggers an expanding ripple that reaches the other and triggers its own capture. This second capture then sends a ripple back to the first fermion, where it triggers a third capture, and so on. This duetting action is formally like light bouncing back and forth between parallel mirrors, as in the light-clock thought experiment of special relativity, and it recalls the Michelson-Morley interferometer. If duetting efficiency maintains the length of meter sticks, we have the beginnings of the long-sought explanation of the null result of the Michelson-Morley experiment in terms that allow a medium for the wave aspect of particles.

Velocity in space relative to the medium upsets the spatial relationships necessary for efficient duetting, triggering a compensatory reorganization of the spacetime structure to re-optimize paramorf capture efficiency, by the general principle enunciated above. This leads to the FitzGerald contraction, one of the two basic effects previously explained in terms of special relativity. The FitzGerald contraction was recently shown to be directly unobservable; rather, a rotation of the front of the object away from the line of travel is observed, as predicted by Terrell and Penrose (https://doi.org/10.1038/s42005-025-02003-6). If this rotation looks the same from all observation angles (elevations), it would have to be a rotation into extradimensional space, which the present theory allows, and it is easy to visualize how such a rotation would maintain the efficiency of duetting at high velocity. Therefore, close study of the relativistic rotation effect may provide a window on extradimensional space.

The other basic relativistic effect is time dilation; if fermions are always literally travelling along a time dimension as postulated here in connection with the space-tilted matter concept, a greater velocity along any spatial direction must come at the expense of a lesser velocity along the time dimension, leading to time dilation.
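
The step from "less velocity along time" to the usual dilation factor can be written out compactly; a minimal sketch, assuming the magnitude of a fermion's total propagation speed through spacetime is fixed at c (an assumption consistent with the space-tilted-matter picture, not a statement from the post):

```latex
% Assumption: total propagation speed through spacetime is fixed at c.
% Split it into a spatial part v and a part v_t along the time dimension:
v_t^2 + v^2 = c^2
\;\;\Longrightarrow\;\;
v_t = c\,\sqrt{1 - v^2/c^2}
\;\;\Longrightarrow\;\;
\frac{d\tau}{dt} = \sqrt{1 - \frac{v^2}{c^2}},
```

which is the standard special-relativistic time-dilation factor.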

Synchronization and anti-synchronization of fusion events between adjacent particles could account for the narrowness of the time slice we seem to be living in.

Duetting could account for attractive forces between fermions and duetting with destructive interference could account for repulsive forces. A difficulty is that the simple ripple model is one-sided whereas destructive interference assumes sinusoidal disturbances, which are two-sided. This could be remedied by assuming that the ripples have profiles like wavelets or the Laplacian of the Gaussian.

At the Limit of this Vision

Paramorf-ripple dynamics looks remarkably biological, featuring elementary processes that recall feeding and natural selection. Their cosmological setting cannot be the end of the story, however, because one naturally wonders where the entire ensemble of yin and yang space came from and why it has a bipartite nature. To answer these questions, it may be necessary to conceive an elemental version of the ultimate power of living things: reproduction. The ineffably great multiplicity of things demands an explanation.


Questions Arising 

  • Do we need a new representation system to tackle the question of ultimate origins? 
  • Do we merely need to shift from visual to verbal? 
  • Is the concept of differentiation valuable here? For example, primordial undifferentiated space and time, primordial undifferentiated time and causation, or primordial undifferentiated somethingness and nothingness. 
  • Is entropy increase the ultimate source of all differentiation? 
  • Is the concept of primordial fluctuations valuable here? For example, should I proceed as I did in the abiogenesis post, from vacuum fluctuation to persistence by self-repair to growth to reproduction? 
  • What is the effect of a vacuum fluctuation in the background of a previous fluctuation?
  • Is circularity a key concept here? 
  • Is positing an ultra-simplified version of something well known in other disciplines, a kind of consilience, a useful operation? 
  • Is the concept of a primordial less-structured space valuable? For example, a topological space is less structured than a Euclidean space. 
  • Is the strategy of bringing the observer into the system under study valuable here?
  • The further back I go, the fewer the raw materials, but the fewer the constraints. How do I keep from losing my way?

Snail universe beside the Rideau canal. There may be perspectives in which what we consider our own universe looks no grander than this.

Zen weeds in the Rideau Canal. No explanation.


Sunday, July 25, 2021

#64. The Checkered Universe [physics]



Red, theory; black, fact



The basic theoretical vision

This is a theory of everything (TOE) based on a foam model. The foam is made up of two kinds of "bubbles," or "domains": "plus" and "minus." Each plus domain must be completely surrounded by minus domains, and vice versa. Any incipient violation of this rule, call it the "checkerboard rule," causes domains of like type to fuse instantly until the status quo is restored, with release of energy. The energy, typically electromagnetic waves, radiates as undulations in the plus-minus interfaces. The result of many such events is progressive enlargement and diversification of domain sizes. This process, run backward in time, appears to result in a featureless, grey nothingness (imagining contrasting domain types as black and white, inspired by the Yin-and-Yang symbol), thereby giving a halfway-satisfying explanation of how nothing became something. Halfway, because it’s an infinite regress: explaining the phenomenon in terms of the phenomenon. Invoking progressively deepening shades of gray in forward time, to gray out and thus censor the regress encountered in backward time, looks like a direction to explore. A law of conservation of nothingness would require plus domains and minus domains to be present in equal amounts, although this can be violated locally. This law may be at the origin of all the other known conservation laws. The cosmological trend favors domain fusion, but domain budding is nevertheless possible given enough energy.

The givens assumed for this theory of everything. Since there are givens, it is probably not the real theory of everything but rather a simplified physics and possibly a stepping stone to the TOE.

The dimensionality question

The foam is infinite-dimensional. Within the foam, interfaces of all dimensionalities are abundant. Following Hawking, I suggest that we live on a three-dimensional brane within the foam because that is the only dimensionality conducive to life. The foamy structure of the large-scale galaxy distribution that we observe thus receives a natural explanation: these are the lower-dimensional foam structures visible from within our brane. The interiors of the domains are largely inaccessible to matter and energy. However, we have an infinite regress again, this time toward increasing dimensionality: we never get to the bulk. Is it time to censor again and postulate progressively lessened contrast with greater dimensionality, asymptotic to zero contrast? No; the foam model implies a bulk and therefore a maximum dimensionality, but not necessarily three.

But what is so special about this maximum dimensionality? Let us treat yin-yang separation as an ordinary chemical process and apply the second law of thermodynamics to see if there is some theoretical special dimensionality. Assuming zero free energy change, we set enthalpy increase ("work") equal to entropy increase ("disorder") times absolute temperature. Postulating that separation work decreases with dimensionality and the entropy of the resulting space foam increases with dimensionality, we can solve for the special dimensionality we seek. The separation process has no intrinsic entropy penalty because there are no molecules at this level of description. The real, maximum dimensionality would be greater than the theoretical value, to provide some driving force, which real transformations require. However, is the solution stable? Moreover, the argument implies that temperature is non-zero. Temperature is here the relative motion of all the minute, primordial domains. This could be leftover separation motion. How could all this motion happen without innumerable checkerboard-rule violations and thus many fusion events? Fusion events can be construed as interactions, and extra dimensions, which we have here, suppress interactions. More on this below. That said, the idea of primordial infinite dimensionality remains beguiling in its simplicity and possibilities.
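
The second-law argument can be put in toy numerical form. The functional forms below (separation work falling as 1/n, foam entropy rising as log n) are assumptions chosen only to satisfy the qualitative postulates stated above; nothing in the post fixes them:

```python
import numpy as np
from scipy.optimize import brentq

# Toy illustration only: arbitrary constants and functional forms.
W0, S0, T = 10.0, 1.0, 1.0
work    = lambda n: W0 / n            # separation work, decreasing with dimensionality n
entropy = lambda n: S0 * np.log(n)    # foam entropy, increasing with dimensionality n

# Zero free-energy-change condition: W(n) = T * S(n)
f = lambda n: work(n) - T * entropy(n)
n_star = brentq(f, 1.001, 100.0)
print("special dimensionality for these toy parameters:", n_star)   # ~5.7 here,
# but the value depends entirely on the arbitrary constants chosen above.
```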

Since infinite dimensionality is a bit hard to get your mind around, let us inquire what is diminished upon increasing the dimensionality, and just set it to zero to represent infinite dimensionality. Some suggestions: order, interaction, and correlation. To illustrate, imagine two 2-dimensional Ardeans* walking toward each other. When they meet, one must lie down so the other can walk over them before either can continue on their way. That's a tad too much correlation and interaction for my taste.

My intuition is that as dimensions are added, order, correlation, and interaction decrease toward zero asymptotically. This would mean that 4D is not so different from 3D as 3D is from 2D. The latter comparison is the usual test case that people employ to try to understand extra dimensions, but it may be misleading. However, in 4D, room-temperature superconductivity may be the rule rather than the exception, due to extradimensional suppression of the interactions responsible for electrical resistance. The persistent, circulating 4D supercurrents, which are travelling electron waves, may look like electrostatic fields from within our 3-brane, which would help to eliminate action-at-a-distance from physics. Two legs of the electron-wave circulation would travel in a direction we cannot point. These ideas also lead to the conclusion that electrostatic fields can be diffracted, which reinterprets the classical electron diffraction experiment. The electrons are particles and are therefore not diffracted; they are accelerated in an electrostatic field that is diffracted, thereby building up a fringe pattern on the photographic plate. The particles are just acting as field tracers. A difficulty is that the electrically neutral neutron can also be diffracted. A diffraction experiment requires that the neutrons move, however, and this motion will provide a solution, to be discussed below.


Motion

Motion can be modelled as the array of ripples that spreads across the surface of a pond after a stone is thrown in. A segment of the wave packet coincides with the many minute domains that define the atoms of the moving object, and moves them along. The foam model implies surface tension, whereby increases in interface area increase the total energy of the domain. If the brane is locally thrown into undulations, this will increase the surface area of the interface and thus the energy. This accounts for the kinetic energy of moving masses.  

Brane surface tension would be a consequence of the basic yin-yang de-mixing phenomenon, because increases in interfacial area without volume increase (interface crumpling) can be construed as incipient re-mixing, which would go against the cosmological trend. Thus, the interface always tends to minimum area, as if it had surface tension. This tension provides the restoring force that one needs for an oscillation, which is important because waves figure prominently in this theory. However, a wave also needs the medium to have inertia, or resistance to change, and this is a limitation of the present theory.


Mass and fields

Mass would be due to the domains being full of standing waves that store the energy (equivalent to mass) of many historical merging events. The antinodes in the standing wave pattern would be the regular array of atoms thought to make up a crystal (most solid matter is crystalline). The facets of the crystals would correspond to the domain walls. 

The waves could be confined inside the domains by travelling across our 3-brane at right angles, strictly along directions we cannot point. However, something has to leak into the 3-brane to account for electrostatic effects. Crossing the 3-brane perpendicularly is possible by geometry if each particle is a hypersphere exactly bisected by the 3-brane, and mass-associated waves travel around the hypersphere surface. The neutron produces no leakage waves, which could be assured by the presence of a nodal plane coinciding with the spherical intersection of the particle hypersphere with the 3-brane. Electrons and protons could emanate leakage waves, a possibility that suggests the origins of their respective electric fields. However, the fact that these particles have stable masses means that waves must be absorbed from the 3-brane as fast as they exit, meaning that an equilibrium with space must exist. For an equilibrium to be possible, space must be bounded somehow, which is already implied by the foam model. Since we know of only two charged, stable particles, two equilibrium points must exist. This scenario also explains why all electrons are identical and why all protons are identical. If their respective leakage waves are of different frequencies, the two particle types could equilibrate largely independently by a basic theorem of the Fourier transform.

Particles of like charge would interact with each other's leakage wave, resulting in a tendency to be entrained by it. This would account for electrostatic repulsion. Particles of opposite charge would radiate at different frequencies and therefore not interact, leading to no entrainment. However, since each particle is in the other's scattering shadow, it will experience an imbalanced force due to the shadow, tending to push it toward the other particle. This effect could explain electrostatic attraction. Gravity may also be due to mutual scatter shadowing, but involving a continuum spectrum of background waves, not the line spectra of charged particles. Background waves are not coupled to any domains, and so do not consist of light quanta, which, according to the present theory, are waves coupled to massless domain pairs.

A bisected-hypersphere particle model predicts that subatomic particles will appear to be spheres of a definite fixed radius, having an effective volume 5.71× greater than that expected for a sphere of the same radius in flat space (5.71 = 1 + 1.5π). Background waves that enter the spherical surface will therefore be slow to leave, a property likely to be important for physics.
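
A quick arithmetic check of the quoted factor, plus one possible reading of where it comes from (the reading is my assumption, not stated in the post): 1.5π happens to equal the ratio of the surface volume of a 3-sphere of radius r (2π²r³) to the volume of a flat 3-ball of the same radius ((4/3)πr³):

```python
import math

# Check the post's arithmetic: 1 + 1.5*pi
print(1 + 1.5 * math.pi)             # 5.712...

# Possible reading (assumption): the 1.5*pi term is the ratio of the surface
# volume of a 3-sphere of radius r (2*pi^2*r^3) to the volume of a flat
# 3-ball of the same radius ((4/3)*pi*r^3).
r = 1.0
ratio = (2 * math.pi**2 * r**3) / ((4 / 3) * math.pi * r**3)
print(ratio, 1 + ratio)              # 4.712..., 5.712...
```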

A very close, even mutualistic, relationship between domain geometry and interface waves may exist, all organized by the principle of seeking minimum energy. Atomic nuclei within the crystal could be much tinier domains, also wave-filled but at much shorter wavelengths. The nuclear domains would sit at exactly the peaks of the electron-wave antinodes because these are the only places where the electron waves have no net component in the plane of the interface. 


The big picture 

Similar to the brane-collision theory, the big bang may have been due to contact between two cosmologically sized domains of four spatial dimensions and opposite type, and our 3-brane is the resulting interface. The matter in our universe would originate from the small domains caught between the merging cosmological hyper-domains. This could account for the inflationary era thought to have occurred immediately after the big bang. The subsequent linear expansion of space may be due to the light emitted by the stars; if light is an undulation in the 3-brane along a large extra dimension, then light emission creates more 3-brane all the time, because an undulating brane has more surface area than a flat one. 

* "Overview of Planiverse" page.


Saturday, October 31, 2020

#60. The Trembling-Network Theory of Everything [physics]


Red, theory; black, fact

The world of appearances is simulation-like, in that how we perceive it is strongly affected by the fact that our point of view is inside it, and illusions are rampant.

The slate-of-givens approach is intended to exploit consilience to arrive at a simplified physics that attributes as many phenomena as possible to historical factors and the observer's point of view. Simplified physics is viewed as a stepping-stone to the true theory of everything. The existence of widespread consilience implies that such a theory exists.


The basic theory

The underlying reality is proposed to be a small-world network, whose nodes are our elementary particles and whose links ("edges" in graph theory) are seen collectively as the fields around those particles.

This network would be a crude approximation to a scale-free (fractal) network, but it is only a recursion of three generations (with a fourth in the process of forming), each comprising two sub-generations, and not an infinite regress. The first generation to form after the big bang consisted of the triangular networks that we call baryons. In the next generation, these linked up to form the networks underlying light atomic nuclei. These, and individual protons, were big enough to stably bond to single nodes (electrons) to form the network version of atoms. Above the atomic/molecular/electromagnetic level, further super-clustering took on the characteristics of gravitation. At the grandest cosmological scales, we may be seeing a fourth "force" that produces the foamy structure of galaxy distribution. The observations attributed to the presence of dark matter may be a sign that, at the intra-galactic scale, the nature of the "fields" is beginning to shift again.

I conjecture that throughout this clustering process, a continuous thermal-like agitation was running through all the links, and especially violent spikes in the agitation pattern could rupture links not sufficiently braced by other, parallel links. This would have been the basis of a trial-and-error process that created the small-world characteristics. The nature of the different "forces" we seem to see at different scales would be entirely conditioned by the type of clusters the links join at that scale, because cluster type would condition the opportunities for network stabilization by cooperative bracing.

Mapping to known science

Formation and rupture of links would correspond to the quantum-mechanical phenomenon of wave-function collapse, and the endless converging, mixing, and re-diverging of the heat signals carried by the network would correspond to the smooth, reversible time-evolution of the wave-function between collapses. The experience of periodic motions would arise from heat recirculation in closed paths embedded in the network. 

The photoelectric effect that Einstein made famous can be given a network interpretation: the work function is the energy needed to break, simultaneously, all the links holding the electron to the cluster that is the electrode. The electron that then seems to fly away from the electrode is an appearance computed within the remaining network, after that network has been energized by whatever energy exceeded the amount needed to break the links and returned into the network through the broken ends.

Distance

All the ineffably large number of nodes in the universe would be equidistant from each other, which is possible if they exist in a space with no distance measure. Distance would be the number of nodes that an “observer” cluster contains divided by the number of links connecting it with the observed cluster.

The finite speed of light

The time-delay effect of distance can be described by a hose-and-bucket model if we assume that all measurements require link breaking in the observer network. The energy received by the measuring system from the measured system is like water from a hose progressively filling a bucket. The delayed overflow of the bucket would correspond to the received energy reaching threshold for breaking a link in the observer network. The fewer the links connecting observer to observed relative to the observer size (i.e., the greater the distance), the slower the bucket fills and the longer signal transmission is observed to take.

The above mechanism cannot transmit a pulsatile event such as a supernova explosion. It takes not one, but two integrations to convert an impulse into a ramp function suitable for implementing a precise delay. Signal theory tells us that if you can transmit an impulse, you can transmit anything. The second integration has already been located in the observer cluster, so the obvious place in which to locate the first integration is in the observed cluster. Then when the link in the observer cluster breaks, which is an endothermic event, energy is sucked out of both integrators at once, resetting them to zero. That would describe an observer located in the near field of the observed cluster. In the far field, the endothermic rupture would cool only the observer cluster; most of the radiative cooling of the observed cluster would come from the rupture of inter-cluster links, not intra-cluster links. Thus, hot clusters such as stars are becoming increasingly disconnected from the rest of the universe. This can account for the apparent recessional velocity of the galaxies, since distance is inversely proportional to numbers of inter-cluster links.
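
A minimal sketch of the two-integration idea: an impulse at the observed cluster is integrated once there (giving a step) and once more in the observer cluster (giving a ramp), and the observer registers the event when the ramp crosses a threshold. Fill rates and the threshold are arbitrary illustrative values:

```python
import numpy as np

def observed_delay(link_fraction, dt=0.01, threshold=1.0, t_max=1000.0):
    """link_fraction ~ links to observer / observer size (inverse of 'distance')."""
    bucket1 = bucket2 = 0.0
    t = 0.0
    impulse = 1.0                   # pulsatile event (e.g., a supernova) at t = 0
    bucket1 += impulse              # first integration, in the observed cluster (step)
    while bucket2 < threshold and t < t_max:
        bucket2 += link_fraction * bucket1 * dt   # second integration, rate set by links (ramp)
        t += dt
    return t

for f in (1.0, 0.5, 0.1, 0.05):
    print(f"link fraction {f:5.2f}  ->  apparent delay {observed_delay(f):7.2f}")
# The delay scales as 1/link_fraction, i.e., proportionally to the defined distance.
```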

Oscillations

Oscillating systems may feature 4 clusters and thus 4 integrators connected in a loop to form a phase-shift oscillator. These integrators could be modeled as a pair of masses connected by a spring (= 2 integrators) in each of the observer and observed systems (2 × 2 = 4 integrators).
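
A minimal sketch of the "pair of masses connected by a spring = 2 integrators" picture; the masses, spring constant, and initial stretch are arbitrary illustrative values:

```python
import numpy as np

# Two equal masses joined by one spring; the two integrated states are the
# relative position and relative velocity. Euler-Cromer integration.
m, k_spring, L0, dt = 1.0, 4.0, 1.0, 0.001
x1, x2, v1, v2 = 0.0, 1.1, 0.0, 0.0          # start slightly stretched
sep = []
for _ in range(5000):
    f = k_spring * ((x2 - x1) - L0)          # restoring force along the spring
    v1 += (+f / m) * dt
    v2 += (-f / m) * dt
    x1 += v1 * dt
    x2 += v2 * dt
    sep.append(x2 - x1)
print("separation oscillates between", round(min(sep), 3), "and", round(max(sep), 3))
print("analytic period 2*pi*sqrt(m/(2*k)) =", round(2 * np.pi * np.sqrt(m / (2 * k_spring)), 3))
```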

Motion and gravity

Motion would be an energetically balanced breaking of links on one side of a cluster and making of links on the other side. This could happen on a hypothetical background of spontaneous, random link making and breaking. Acceleration in a gravitational field would happen if more links are coming in from one side than the opposite side. More links will correspond to a stronger mutual bracing effect, preferentially inhibiting link breaking on that side. This will shift the making/breaking equilibrium toward making on that side, resulting in an acceleration. The universal gravitational constant G could be interpreted as expressing the probability of a link spontaneously forming between any two nodes per unit of time.

Dimension

That the universe is spatially at least three-dimensional can be reduced to a rule that links do not cross. Why the minimum dimensionality permitted by this rule is the one we observe remains unexplained and is a limitation of the present theory. 

A universal law 

Heat of link formation = heat of link rupture + the increase in network heat content that comes from a rise in the network melting point, itself due to improved mutual bracing efficiency. Melting point measures the density of triangles in the network.
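
The "density of triangles" is a standard graph quantity (transitivity), so the melting-point idea can be made concrete; a small sketch using the networkx library, with toy six-node networks standing in for well-braced and poorly braced clusters:

```python
import networkx as nx

# Transitivity = 3 * (number of triangles) / (number of connected node triples).
braced = nx.complete_graph(6)     # every pair linked: maximal mutual bracing
chain  = nx.path_graph(6)         # a bare chain: no triangles at all
print("triangle density, braced network:", nx.transitivity(braced))  # 1.0
print("triangle density, chain         :", nx.transitivity(chain))   # 0.0
```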

Repulsive forces

Repulsive forces are only seen with electromagnetism and then only after ionization. When particles said to be oppositely charged recombine, neutral atoms are re-formed, which creates new triangles and thus increases melting point. The recombination of particles said to be of like charge creates relatively few triangles and is therefore disfavored, creating the impression of mutual repulsion.
 

Momentum

Links are directional in their heat conduction. A directional imbalance in the interior of a cluster causes motion by spontaneously transporting heat from front to back. Front and back are defined by which sub-cluster has the greater number of links to an arbitrary external cluster.

Case study of a rocket motor

A directional link can only be burned out by heat applied to its inlet end. During liftoff, the intense heat down in the thruster chambers burns out links extending up into the remainder of the craft. This leaves an imbalanced excess of links within the rocket body going the other way, leading to a persistent flow of heat downward from the nose cone. This cooling stabilizes links from overhead gravitationally sized clusters ending in the nose cone, causing them to accumulate, thereby shortening the "distance" from the nose cone to those clusters. Meanwhile, the heat deposited at the bottom of the rocket progressively burns out links from the rocket to the Earth, thereby increasing the "distance" between the rocket and the Earth. The exhaust gases have an imbalanced excess of upward-directed asymmetric links, due to the temperature gradient along the exhaust plume, which serves to break their connection to the rocket and create the kind of highly asymmetrical cluster required for space travel. Link stabilization is likewise only responsive to what happens at the inlet end.

Future directions

Links in the universal network may be the real things and the nodes are just their meeting places, which only appear to be real things because this is where the angle of the flow of energy changes. 
All links may be directional, and the pairing of oppositely-directed links may have been an early step in the evolution of the universe.
Directional links may be representable as an inlet part joined to an outlet part. With this decomposition, a link pair looks like this:
⚪⚫
⚫⚪
A purely directional link recalls the one-way nature of time and may represent undifferentiated space and time. 

Monday, April 27, 2020

#58. Is the Theory of Everything Like This? [physics]


Red, theory; black, fact


Recursive.

The picture appears to contain a diagonal line, but if you look closely, you will see that nowhere is there a diagonal line. This picture was created by a simple recursive algorithm, and the disappearing diagonal is formed by successive approximations. 
In theorizing, then, you keep applying the same principle to the result of its previous application. Example: gravity makes a molten ball orbiting the Earth, namely the early moon, and then that same gravity brings meteorites down on the resulting smooth surface of basalt and anorthosite to create the pattern of craters we observe. Example: the evolution of evolution; all the molecular apparatus of evolution lies within the scope of that same evolution. Example: What if a quantum vacuum fluctuation happens in the background of a previous fluctuation? Did such a recursion create the observable universe?

Thursday, June 13, 2019

#52. Reality is Virtual but the Host Computer is Analog. [physics]



Red, theory; black, fact

The nucleus around which a theory of everything will hopefully crystallize.


An Idea About What Exists

Existence may be something like a Turing machine, but with a multiplicity of tapes and a multiplicity of processors. Each tape corresponds to one elementary particle in space-time, and the processors implement only Hebb's rule, a rule of learning first discovered in neuroscience that governs correlations among signal sources. The tapes are polymer-like, and their ongoing elongation by polymerization at the ends causes the passage of time. This elongation is associated with a white-noise signal unique to each particle/tape/strand because the monomers are sampled from a population with a Gaussian size distribution.

A theoretical schema showing the basic strand-processor interaction. The theory borrows from the Turing machine concept, Hebb's rule of learning, and the chemistry of polymerization. Black, one information-bearing polymer strand; blue, processor; red, monomers.


A still more complex, three-strand scheme. At a given point, multiple strands may adhere to the same processor, and vice-versa.

Mechanism

The first illustration is my current best guess as to what a Hebb processor is like, but as we say in research, "Many questions remain." The short, black lines are the catalyzed ligation points between monomers, and these are the points of attraction to the processor. If the rear pull-off point encounters a big gap between ligation points, the processor will advance an unusually great distance in one step, creating an unusually long catalytic pocket at the front, which will have a selectivity for longer monomers, thereby implementing a copying mechanism. Causally, this is backwards, but the alternative explanatory plan seems to involve a messy discussion of chemical equilibria.

The Big Picture

This machine is presumed to be naturally occurring and to belong to a particular stage in the physical evolution of the universe. I make no appeal to arguments of intelligent design, even by space aliens. By the anthropic principle, the machine is not necessarily typical of all structures extant at that stage. In other words, we are living in a natural sub-world that is simulation-like in that the view from within is entirely unlike the view from without, but the two views are related by something that could be called a sub-world law, an example of which would be d = K/rxy. This concept is nothing new because it could also serve in outline as a description of human conscious experience, which is most unlike a mass of neurons signaling to each other in the dark.  Thus, the theory of everything could turn out to be a series of nested sub-world laws.

Mathematization

The length of a strand relative to some interior point is a Gaussian white-noise time-series signal with an upward trend just steep enough to eliminate negative slopes. I will deal here with the detrended version of this signal because, on the laboratory distance scale, both the observed system and the observer will share the same trend, preventing its direct observation. Moreover, the polymerization process is postulated to preserve a permanent record of the detrended signal. Therefore, while the future does not exist in this model of time, the past is perfectly preserved. A set of distinguishable time series is called panel data and is a Euclidean space by the mathematical definition and can therefore map onto and explain physical space, at least on the laboratory scale.

Imagine some panel data consisting of two time series, X and Y, representing two elementary particles. Take a slice of this data ten samples long and plot the slices as two ten-dimensional vectors, X and Y. The dot product of these vectors is then easily computed as x₁y₁ + x₂y₂ + … + x₁₀y₁₀. A Euclidean space is defined as an affine space (i.e., one containing no special points like the origin) on which the vector dot product is defined. Recall that this dot product is equal to the length of X times the length of Y times cos θ, where θ is the angle between the vectors. Moreover, cos θ is here equal to rxy, a.k.a. Pearson's r, a measure of correlation between signal sources. Pearson's r is commonly used in the sciences, including neuroscience, and ranges from -1 for negative correlations to +1 for positive correlations; zero indicates no correlation.
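
A small numerical sketch of this paragraph, using synthetic ten-sample series; note that Pearson's r equals cos θ once each series is mean-centered, which is how the correspondence above is realized:

```python
import numpy as np

# Two ten-sample slices of panel data standing in for two particles
# (synthetic numbers, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=10)
Y = 0.6 * X + 0.8 * rng.normal(size=10)        # partially correlated with X

Xc, Yc = X - X.mean(), Y - Y.mean()            # mean-centered vectors
dot = np.dot(Xc, Yc)                           # x1*y1 + x2*y2 + ... + x10*y10
cos_theta = dot / (np.linalg.norm(Xc) * np.linalg.norm(Yc))
r_xy = np.corrcoef(X, Y)[0, 1]                 # Pearson's r
print("cos(theta) =", cos_theta, "  Pearson r =", r_xy)   # the two agree
```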

rxy may represent distance in space-time, and vector length represents mass-energy in space-time. An rxy of 0 would represent infinite distance, and an rxy of 1 would represent zero distance, a state found only in black-hole singularities or the big bang singularity.

The Illusion of Euclidean Space

Processes would be experienced as close together because they are correlated, not correlated because they are close together. The latter is the usual assumption, usually rationalized as being due to the easy exchange of synchronizing photons or virtual photons at close range. However, we seem to be moving here toward the elimination of the light/photon construct from physics.

A Second-iteration Theory

A simpler possibility is d = K/rxy and rxy = K/d. This change advantageously limits how small space-time distances can become, thereby eliminating infinities from gravity calculations. K is this minimum length. With this revision, the dot product of two vectors in the simulation becomes equal to gravitational binding energy.

How Known Physics Comes from the Model

Orderliness

Each strand would have a Hebb processor on each end that may catalyze the polymerization process. Each processor would also be connected to a random point within the mass of strands. This connection moves along the associated strand like a phonograph needle, reading the recorded signal while keeping pace with the growing end. The processor also has a front read point in the present. The two read points may or may not be on the same strand. If a correlation is detected between the two read points, the correlation is permanently enhanced by a modification of the polymerization process, per Hebb's rule. All the orderliness of the universe is supposed to emerge from this one type of calculation.
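
A minimal sketch of the single calculation invoked here: plain Hebbian strengthening driven by the product of the two read-point signals. The update rule, learning rate, and signals are illustrative assumptions, not quantities fixed by the post:

```python
import numpy as np

rng = np.random.default_rng(1)
eta, w = 0.05, 0.0                       # learning rate and initial coupling weight
for step in range(200):
    past = rng.normal()                           # rear read point (recorded signal)
    present = 0.7 * past + 0.3 * rng.normal()     # front read point, partly correlated
    w += eta * past * present                     # Hebb's rule: strengthen when the two co-vary
print("learned coupling weight:", w)
# With correlated read points the weight grows steadily (a positive feedback);
# with uncorrelated inputs it merely random-walks around zero.
```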

Space

At this point, we have not eliminated space from physics; we have merely replaced Euclidean space with another kind of space that is less structured, in which the spaghetti machine has its being. The new space is a metric space but has neither norm nor dot product axiomatically, although both can be present in simulation. The metric takes only counting-number (natural-number) values and is equal to the number of primordial time elements (PTEs) in a polymer of same. The metric space may itself be a simulation running on a processor situated in a still simpler space, namely a topological space, in which only a few distinctions matter, such as inside-outside and closed-open.

Speed of Light

Degrees of correlation less than maximal are encoded in the processor as the number of strands adhering to the processor above the minimum for inter-strand copying, namely two. One of these middle strands is illustrated in the sketch of the three-strand scheme, and they putatively degrade the fidelity of the copying process and reduce Pearson's r in proportion to their numbers, while also introducing a time delay into the copying process, due to changes in the length of the processor. A reduction in Pearson's r, which increases the encoded distance, simultaneous with an increase in the copying time delay is responsible for the finite speed of light.

Hydrogen Spectrum

If N is the number of middle strands on a processor, then a reasonable guess as to its effect on Pearson’s r would be r = 1/(N + 1). Units analysis requires that the electron binding energy equal the length of the projection of the electron vector on the proton vector. This is not the dot product and it has the correct units, namely mass-energy. Rydberg's formula for predicting the energies of the spectral lines of hydrogen can thus be reproduced as:
ΔE = ||e||*(1/n₁ – 1/n₂), n₂ > n₁
Equation (1)
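
For concreteness, Equation (1) can be evaluated directly for a few level pairs. In the sketch below the scale ‖e‖ is set to 13.6 eV purely for illustration; the post does not fix its numerical value:

```python
# Evaluate the post's Equation (1): Delta E = ||e|| * (1/n1 - 1/n2), n2 > n1.
E0 = 13.6  # eV; assumed scale for ||e||, chosen only for illustration
for n1, n2 in [(1, 2), (1, 3), (2, 3)]:
    dE = E0 * (1 / n1 - 1 / n2)
    print(f"n{n2} -> n{n1}: {dE:.2f} eV")
```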

Gravity

The default interaction implemented by Hebb’s rule would be gravitational attraction. Black hole formation illustrates that gravitation has a built-in positive feedback, and this would derive from the positive feedback built into simple forms of Hebb’s rule.
To provide the hypothetical Hebb processors with such a positive feedback, I postulate the following: Middle strands come and go in an energy-dependent excision/reintegration process and have a small loop of processor adhering to them when free, which explains how the processor length changes occur. A high-fidelity copying process releases more energy than does a low-fidelity copying process, and excision requires energy input. These ingredients, together with the fidelity-degrading property of a middle strand, should be sufficient for a positive feedback.

Mass

If the two read points are on the same strand, the result will be an oscillation. Electromagnetism could be the set of interactions peculiar to oscillating strands. The variance needed to express the oscillation would be deducted from a fixed total, resulting in less variance available to represent distances by its correlations. A smaller distance signal will be more rapidly modified by the Hebb processors, resulting in faster responses to forces and a smaller mass according to Newton’s a = F/m. Thus, we expect neutral particles to be more massive than charged particles, and this tendency is indeed found among the mesons and when comparing the proton and neutron. The relatively great mass of the proton and neutron and the nuclear strong force itself may emerge from a cyclic pattern of three strands (the quarks) connected by three processors. The example of the benzene molecule teaches us to expect novel results from cyclization. The great mass of the baryons may come from the chaos inevitable in the celestial-mechanics version of the three-body problem, but the three bodies would be the three quarks. In the present theory, noise amplitude corresponds to mass-energy, and fast chaos looks like noise. 

An extension of this idea would be that chaos is the source of all mass and the properties of a timeline depend on how many highly-correlated others (HCO) there are and the degree of this correlation. One HCO produces an oscillation but no mass; two or more produce chaos and some amount of mass. High correlation can be understood as relative, leading to a hierarchy of relationships. 

Repulsive Forces

There may be an alternative kind of middle strand or mode of adhesion that enhances copying fidelity upon adhesion rather than degrading it. This amendment to the theory may be required to model interactions that appear repulsive.

The Transparency of the Universe

Hebb processors with their rear read points in the distant past would open up long-distance communication channels in space-time, giving us the by-now familiar experience of looking millions of years into the past through powerful telescopes, to see galaxies as they were when the universe was young. The communication would be instantaneous, but from an old source; not slow from a new source.

The big bang

The universe may have begun in a singularity for a trivial reason: it initially had no way to represent information, let alone correlations, because all the incoming monomers were PTEs, identical in size and having the smallest possible size. A slow direct condensation reaction in the monomer pool then gradually built up larger blocks of PTEs, causing the average size of the items adding to the strands by polymerization to increase progressively. The standard deviation of the size distribution would likewise have increased. 

Space would have expanded rapidly at first, as predicted by the inflationary hypothesis, because the first little bit of polymerization entropy to develop would have had a disproportionate effect on the system's ability to represent information. This predicts that the mass-energy of all particles has also been increasing ever since the big bang. Therefore, by equation (1), we expect that spectral lines from ancient luminous matter will be redder than comparable lines from contemporary matter, as found by Hubble, which explains the cosmological red shift.

Monday, June 5, 2017

#30. The Russian-Dolls Multiverse, Part II [physics]


Red, theory; black, fact



Continuing from the previous post, leptons may arise as electromagnetic wave functions originating in p2 that are transported into our p3 universe/condensate by ordinary diffusion and convection. Wave functions in p2 that are already leptons become our baryons when they are transported in. The only kind of wave functions that are "native" to a given frame of reference are electromagnetic (photonic) in that frame of reference. If they subsequently propagate towards increasing p (inwards) they gain mass as matter; if they propagate towards decreasing p (outwards), they first lose mass as matter until they are photonic (i.e., massless) and then gain mass as antimatter.

To produce stable leptons from in-migrating photons, the first condensates, the p2s, would have had to be rotating simultaneously about three mutually perpendicular axes. If this is impossible for p3 physics, we have to appeal to the possibility of a different physics in p1 for any of these ideas to make sense.

A "universe" is something like an artist's canvas with a painting in progress on it. First, nature makes the blank canvas, and then, in a second stage, puts the information content on it. Consider the moon. It formed out of orbiting molten spray from the collision of two similarly-sized planetesimals. In the molten state, its self-gravity could easily round it up into a perfect sphere which could have solidified with a mostly smooth surface. Call this smooth surface the "canvas." Subsequently, the very same force of gravity would have brought down meteors to cover the surface in an elaborate pattern of craters. Call this the "painting." 

Now consider the neutronium core of a neutron star, viewed as a p4, or small universe. The tremendous energy release of the catastrophic gravitational collapse in which it forms homogenizes all the matter into pure neutrons, thought to be a superfluid. This creates the "canvas." Subsequently, matter and energy from our p3 migrate into the superfluid, producing a "painting" of leptons (our photons), baryons (our leptons), and "uberbaryons" (our baryons). Indeed, the neutron-star core is actually thought to be not pure neutronium, but neutronium containing a sprinkling of free protons and electrons (as seen in p3, of course).