Why the Universe Is Our Home: It’s Not a Coincidence (Part 2)
Posted by Ram Kumar Shrestha on March 27, 2013
By Deepak Chopra, Co-author, ‘Super Brain: Unleashing the Explosive Power of Your Mind to Maximize Health, Happiness, and Spiritual Well-Being’; founder, The Chopra Foundation
At the human level everyone would like to feel that life has meaning, which implies that the setting for life — the universe at large — isn’t a cold void ruled by random chance. There is a huge gap here, and for the past century science hasn’t budged from its grandest assumption, that creation is ruled by random events. There was good reason for this adamant position. The mathematics of modern physics is a marvel of precision and accuracy. No guiding hand, creator, higher intelligence or deity was needed as long as the equations worked.
Now there is a crack in the theory, tiny at first but opening into a fissure, that casts doubt on how science observes the universe. The fault isn’t that the mathematics was wobbly and loose. Quite the opposite. The universe is too finely tuned to fit the random model. God isn’t going to leap into the breach, although religion has reason to feel better about not accepting the so-called “accidental universe.” The real fascination lies in how to match reality “out there” with the potentiality of the human mind. Both are up for grabs.
In the modern era, Sir Arthur Eddington and especially Paul Dirac first noticed certain “coincidences” among dimensionless ratios that link microscopic with macroscopic quantities. For example, the ratio of the electric force to the gravitational force (presumably a constant) is a large number (about 10^40), while the ratio of the observable size of the universe (which is presumably changing) to the size of an elementary particle is also a large number, surprisingly close to the first (also about 10^40). It is hard to imagine that two very large and unrelated numbers would turn out to be so close to each other. Why are they? (For earlier examples of fine tuning, please see our first post, which gives some general background as well.)
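As a rough sanity check, the first of these large numbers can be computed directly from standard physical constants. The sketch below compares the electric and gravitational attraction between a proton and an electron; the constant values are standard approximations, not taken from the original text:

```python
import math

# Dirac's first "large number": the ratio of the electric force to the
# gravitational force between a proton and an electron. Because both
# forces fall off as 1/r^2, the distance cancels out of the ratio.
e    = 1.602e-19   # elementary charge (C)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
G    = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
m_e  = 9.109e-31   # electron mass (kg)
m_p  = 1.673e-27   # proton mass (kg)

ratio = e**2 / (4 * math.pi * eps0 * G * m_e * m_p)
print(f"{ratio:.2e}")  # on the order of 10^39 to 10^40
```

The distance between the two particles cancels, which is why the result is a pure (dimensionless) number of the kind Dirac was comparing.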
Dirac argued that these fundamental numbers must be related. The essential problem is that the size of the universe changes as the cosmos expands, while the first ratio is presumably constant, given that it involves only two supposed “constants.” Why should two very large numbers, one variable and the other not, be so close to each other? (It’s like seeing a person’s vocal cords vibrating in all kinds of ways and yet discovering that each word he speaks is exactly half a second apart. Even this image is a simplification of the actual problem, which spans similar ratios across scales from light years down to trillionths of a second.)
Dirac’s “large number hypothesis” attempts to link these ratios in such a way that they aren’t coincidental. But fine tuning is a pervasive finding in other places too, places where unexpected ratios match, such as the ratio of the number of particles in the universe to the entropy of the whole system. Such “coincidences” extend beyond cosmology to all-encompassing relationships at many levels. Let’s look at some cases at our level of reality, where matter is comfortably composed of atoms and molecules. The “fine structure constant” determines the properties of these atoms and molecules. It is a pure number, approximately 1/137. If the fine structure constant were different by as little as about 1 percent, no atoms or molecules would exist as we know them.
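The value 1/137 is itself a combination of more familiar constants. As an illustration (a sketch; the defining formula and constant values are standard physics, not from the original text), the fine structure constant can be recovered from the elementary charge, Planck’s constant and the speed of light:

```python
import math

# The fine structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c).
# It is dimensionless, so the result is the same in any unit system.
e    = 1.602176634e-19   # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
c    = 2.99792458e8      # speed of light (m/s)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6f}, i.e. about 1/{1/alpha:.1f}")  # ~1/137.0
```

Because alpha is a pure number, the question “why 1/137 rather than something else?” cannot be answered by a choice of units; it is exactly the kind of unexplained ratio the article is describing.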
For example, the fine structure constant determines how solar radiation is transmitted and also how it is absorbed in the Earth’s atmosphere; it also applies to how photosynthesis works. Now, the Sun “happens” to emit the majority of its radiation in a part of the spectrum where the atmosphere of the Earth “happens” to be transparent to it. The Sun’s radiation, in turn, is determined by the value of the gravitational constant. Why would a “macroscopic” quantity, namely the force of gravity, be such that the spectrum of radiation would just happen to be the right one to be transmitted through the Earth’s atmosphere and absorbed by plants (the atmospheric transmission being determined by the “microscopic” fine structure constant)? If these two effects did not work together exactly right, there would be no life as we know it. The initial question we asked — “How do humans fit into the universe?” — isn’t satisfactorily answered when one coincidence must be piled on another.
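The “happens to” here can be made concrete with a back-of-the-envelope check using Wien’s displacement law (a sketch; the solar surface temperature of roughly 5778 K is a standard textbook value, not from the original text):

```python
# Wien's displacement law: a blackbody's emission peaks at
# lambda_peak = b / T, where b is Wien's displacement constant.
b = 2.898e-3      # Wien's displacement constant (m*K)
T_sun = 5778      # approximate solar surface temperature (K)

lam_peak = b / T_sun
print(f"peak wavelength: {lam_peak * 1e9:.0f} nm")
# ~500 nm: visible light, squarely in the band where
# Earth's atmosphere is transparent and plants absorb energy
```

The coincidence the article points to is that the Sun’s temperature (set largely by gravity) places this peak inside the atmosphere’s optical window (set by atomic physics).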
The coincidences don’t end there. For instance, massive supernova explosions are responsible for forming the heavy elements, like the iron that is in your body today, billions of years later. The specifics of a supernova explosion are determined by the weak force, which operates at the infinitesimally small scale of the atomic nucleus. If this weak force were different by as little as 1 percent or so, there would be no supernova explosions, no formation of heavy elements and therefore no life as we know it.
The problem of fine tuning is one of the biggest embarrassments facing modern physical and biological science. These “coincidences” may be indicating the existence of some deep, underlying unity involving the fundamental constants, linking the microcosm to the macrocosm just as the ancients saw without mathematics. The so-called “anthropic principle” has been proposed to account for fine tuning as follows: If the constants were not exactly right, there would be no life on Earth, no humans, etc. Being here, we look around and find that the cosmos led to our existence. This is an attempt to preserve reality “out there” by limiting it to the aspects a human mind and the five senses can understand. However, if you think about it, the anthropic principle just states the obvious: “We are here because we are here.” It has little explanatory power.
As far as we are concerned, the dilemma boils down to two clear choices: On the one hand, fine tuning is indeed just coincidence piled higher and deeper, and humans “happen to be in the right universe.” This is the point of view favored by M-theory proponents, including Stephen Hawking, building on superstring theory. M-theory posits trillions upon trillions of possible universes (the multiverse) that bubble away, churning out every possible combination of constants, zillions of which do not match up to form life. But one did, and we live in it. This is the cosmic equivalent of putting a hundred monkeys to work tapping randomly on typewriters, eventually producing the complete works of Shakespeare after also producing an almost infinite mountain of gibberish. (At the mathematical level, superstring theory fits an essentially unlimited number of models. Unfortunately, no observations support which model is correct, and worse still, it may be that no possible observation can, because superstrings, if they exist at all, lie beyond time and space.)
Although superstring theory may one day be proven, there is no reason to try to invoke it for the extreme fine tuning we observe, simply as a means to avoid a huge embarrassment for today’s science. M-theory waves its hands around, invoking a randomly picked universe that “happens” to be right. As such, nothing in the end needs to be explained: Pure randomness rules if we live in nothing more than an incredibly unlikely universe of our own — lucky us.
How unlikely is it? Estimates from superstring theory yield 1 out of 10^500, a number far greater than the number of particles in the universe. But it gets even more extreme: From chaotic inflation theory, the chances of being in the right universe are much smaller, 1/((10^10)^10)^7! The proponents insist that we just live in one of the many, many universes and that there is nothing to explain. It is hard to swallow this view, or to see how scientists can propose anything so empty. It’s one thing to claim that a hundred monkeys can write Shakespeare, but it’s quite another to declare that there is no other way. The fact is that all these fine-tuned constants fit together far more precisely than any work of literature, even at Shakespeare’s level of genius.
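To get a feel for the first figure, 10^500 can be set against the commonly cited estimate of roughly 10^80 particles in the observable universe (a standard estimate, not from the original text). The numbers far exceed ordinary floating-point range, so a sketch works in base-10 logarithms:

```python
# Comparing the superstring landscape (10^500 candidate universes)
# with the ~10^80 particles in the observable universe. The raw
# numbers overflow floats, so compare their base-10 exponents.
log10_universes = 500
log10_particles = 80

excess = log10_universes - log10_particles
print(f"candidate universes outnumber particles by a factor of 10^{excess}")
```

In other words, even if every particle in the observable universe labeled its own universe, the landscape would still be larger by a factor of 10^420.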
We said that two clear choices exist. The other, which we favor, is that the universe is self-organizing; it is “self-driven” by its own working processes. In a self-organizing system, each new layer of creation must regulate the prior layer. So the generation of every new layer in the universe, from particle to black hole, cannot be considered random, given that it was created from a preexisting layer that in turn was regulating the layer that produced it. The same holds true throughout nature, including the workings of the human brain. This “recursive” system of self-organization, where every layer curves back on itself to monitor another layer, pervades physics and biology. For example, your genes produce proteins that monitor and regulate the genome itself.
In your brain, neural networks create new synapses (the connecting gaps between brain cells) that in turn monitor and regulate the preexisting synapses that gave rise to them. The brain integrates all new knowledge, information and sensory input by associating it with what you already know. Whether we’re speaking of genes and the brain or solar systems and galaxies, self-organization is present, operating through the constant activity of feedback loops. Existence requires balance, which demands feedback so that imbalances can be corrected automatically. Every new bit of the universe, however minuscule, must create a feedback loop with what gave rise to it. Otherwise it wouldn’t be connected to the whole; in human terms it would be homeless.
Viewed this way, even when a new creation appears to be random, purpose is invoked, beginning with the overarching purpose of homeostasis, the dynamic balance of all parts into a whole. In our view the fine tuning of the universe fits into this scheme; indeed, it shows how sensitive nature is, balancing galaxies by making sure that subatomic particles are in balance first. Self-organization is embedded in the fabric of the cosmos, acting like an invisible off-stage choreographer to drive evolution (not the red herring of “intelligent design” by a supernatural God in the sky). The self-design of the universe is underpinned by quantum processes, rapidly picking choices that lead to optimal final results, as required by acts of observation. Certain assumptions are needed for this alternative, as they are for any theory:
- We are able to observe the universe because we are woven into its unfolding existence.
- The universe reinforces the patterns and forms that are successful in evolving from random ingredients.
- The same organizing principles that exist in us were inherited from the universe. These include creativity, intelligence and evolution.
- In some way the universe monitors and governs itself. Call it a “self-aware” or “conscious” universe; the terminology is secondary to a primary fact: Something knows how to self-design on the grandest scale as well as the most minute.
- This something permeates creation, including ourselves. The linkage between self-aware humans and a self-aware cosmos is necessary; it must exist or we wouldn’t be able to observe the world that surrounds us.
- Randomness isn’t the opposite of meaning and purpose but serves them, just as the randomly smeared colors on a painter’s palette serve the highly organized picture that is being made.
The two choices are infused into everyone’s life: either incredibly small odds of finding the right universe from a random set of universes “out there,” or a universe imbued with life and consciousness that drives itself. We hold the latter to be simpler and more logical and also more scientific. It fits the facts without abolishing any cherished, precise observations. Indeed, we are going to all this trouble in order to preserve observation. The precise matchup of so many constants, ruling the biggest and smallest domains in creation, can be taken as natural rather than accidental. Einstein hit upon a deep truth when he said, “I want to know the mind of God; everything else is just details.” Substitute “the mind of the universe,” and you have a goal worth pursuing for the coming century.
Deepak Chopra, M.D., is the author of more than 70 books, 21 of which were New York Times bestsellers. Murali Doraiswamy, M.D., is a professor of psychiatry at Duke University Medical Center in Durham, N.C., and a leading physician in the areas of mental health, cognitive neuroscience and mind-body medicine. Rudolph E. Tanzi, Ph.D. is the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard University and the director of the Genetics and Aging Research Unit at Massachusetts General Hospital (MGH). He is the co-author (with Deepak Chopra) of Super Brain: Unleashing the Explosive Power of Your Mind to Maximize Health, Happiness, and Spiritual Well-being (Harmony). Menas Kafatos, Ph.D., is the Fletcher Jones Endowed Professor in Computational Physics at Chapman University. He is the co-author (with Deepak Chopra) of the forthcoming book Who Made God (And Other Cosmic Riddles) (Harmony).