A Short History of Nearly Everything

Chapter 11 MUSTER MARK'S QUARKS

IN 1911, A British scientist named C. T. R. Wilson was studying cloud formations by tramping regularly to the summit of Ben Nevis, a famously damp Scottish mountain, when it occurred to him that there must be an easier way to study clouds. Back in the Cavendish Lab in Cambridge he built an artificial cloud chamber—a simple device in which he could cool and moisten the air, creating a reasonable model of a cloud in laboratory conditions. The device worked very well, but had an additional, unexpected benefit. When he accelerated an alpha particle through the chamber to seed his make-believe clouds, it left a visible trail—like the contrails of a passing airliner. He had just invented the particle detector.

It provided convincing evidence that subatomic particles did indeed exist. Eventually two other Cavendish scientists invented a more powerful proton-beam device, while in California Ernest Lawrence at Berkeley produced his famous and impressive cyclotron, or atom smasher, as such devices were long excitingly known. All these contraptions worked on more or less the same principle, the idea being to accelerate a proton or other charged particle to an extremely high speed along a track (sometimes circular, sometimes linear), then bang it into another particle and see what flies off—hence the name atom smashers. It wasn't science at its subtlest, but it was generally effective.

As physicists built bigger and more ambitious machines, they began to find or postulate particles or particle families seemingly without number: muons, pions, hyperons, mesons, K-mesons, Higgs bosons, intermediate vector bosons, baryons, tachyons. Even physicists began to find it all a little uncomfortable. “Young man,” Enrico Fermi replied when a student asked him the name of a particular particle, “if I could remember the names of these particles, I would have been a botanist.”

Today accelerators have names that sound like something Flash Gordon would use in battle: the Super Proton Synchrotron, the Large Electron-Positron Collider, the Large Hadron Collider, the Relativistic Heavy Ion Collider. Using huge amounts of energy (some operate only at night so that people in neighboring towns don't have to witness their lights fading when the apparatus is fired up), they can whip particles into such a state of liveliness that a single electron can do forty-seven thousand laps around a four-mile tunnel in a second. Fears have been raised that in their enthusiasm scientists might inadvertently create a black hole or even something called “strange quarks,” which could, theoretically, interact with other subatomic particles and propagate uncontrollably.

Finding particles takes a certain amount of concentration. They are not just tiny and swift but also often tantalizingly evanescent. Particles can come into being and be gone again in as little as 0.000000000000000000000001 second (10^-24). Even the most sluggish of unstable particles hang around for no more than 0.0000001 second (10^-7). Some particles are almost ludicrously slippery. Every second the Earth is visited by 10,000 trillion trillion tiny, all but massless neutrinos (mostly shot out by the nuclear broilings of the Sun), and virtually all of them pass right through the planet and everything in it, including you and me, as if it weren't there. To trap just a few of them, scientists need tanks holding up to 12.5 million gallons of heavy water (that is, water with a relative abundance of deuterium in it) in underground chambers (old mines usually) where they can't be interfered with by other types of radiation.

Very occasionally, a passing neutrino will bang into one of the atomic nuclei in the water and produce a little puff of energy. Scientists count the puffs and by such means take us just slightly closer to understanding the fundamental properties of the universe. In 1998, Japanese observers reported that neutrinos do have mass, but not a great deal—about one ten-millionth that of an electron.

What it really takes to find particles these days is money and lots of it. There is a curious inverse relationship in modern physics between the tininess of the thing being sought and the scale of facilities required to do the searching. CERN, the European Organization for Nuclear Research, is like a little city. Straddling the border of France and Switzerland, it employs three thousand people and occupies a site that is measured in square miles. CERN boasts a string of magnets that weigh more than the Eiffel Tower and an underground tunnel over sixteen miles around.

Breaking up atoms, as James Trefil has noted, is easy; you do it each time you switch on a fluorescent light. Breaking up atomic nuclei, however, requires quite a lot of money and a generous supply of electricity. Getting down to the level of quarks—the particles that make up particles—requires still more: trillions of volts of electricity and the budget of a small Central American nation. CERN's new Large Hadron Collider, scheduled to begin operations in 2005, will achieve fourteen trillion volts of energy and cost $1.5 billion to construct.[1] But these numbers are as nothing compared with what could have been achieved by, and spent upon, the vast and now unfortunately never-to-be Superconducting Supercollider, which began being constructed near Waxahachie, Texas, in the 1980s, before experiencing a supercollision of its own with the United States Congress. The intention of the collider was to let scientists probe “the ultimate nature of matter,” as it is always put, by re-creating as nearly as possible the conditions in the universe during its first ten thousand billionths of a second.

The plan was to fling particles through a tunnel fifty-two miles long, achieving a truly staggering ninety-nine trillion volts of energy. It was a grand scheme, but would also have cost $8 billion to build (a figure that eventually rose to $10 billion) and hundreds of millions of dollars a year to run. In perhaps the finest example in history of pouring money into a hole in the ground, Congress spent $2 billion on the project, then canceled it in 1993 after fourteen miles of tunnel had been dug. So Texas now boasts the most expensive hole in the universe. The site is, I am told by my friend Jeff Guinn of the Fort Worth Star-Telegram, “essentially a vast, cleared field dotted along the circumference by a series of disappointed small towns.”

[1] There are practical side effects to all this costly effort. The World Wide Web is a CERN offshoot. It was invented by a CERN scientist, Tim Berners-Lee, in 1989.

Since the supercollider debacle, particle physicists have set their sights a little lower, but even comparatively modest projects can be quite breathtakingly costly when compared with, well, almost anything. A proposed neutrino observatory at the old Homestake Mine in Lead, South Dakota, would cost hundreds of millions of dollars to build—this in a mine that is already dug—before you even look at the annual running costs. There would also be $281 million of “general conversion costs.” A particle accelerator at Fermilab in Illinois, meanwhile, cost $260 million merely to refit.

Particle physics, in short, is a hugely expensive enterprise—but it is a productive one. Today the particle count is well over 150, with a further 100 or so suspected, but unfortunately, in the words of Richard Feynman, “it is very difficult to understand the relationships of all these particles, and what nature wants them for, or what the connections are from one to another.” Inevitably each time we manage to unlock a box, we find that there is another locked box inside. Some people think there are particles called tachyons, which can travel faster than the speed of light. Others long to find gravitons—the seat of gravity. At what point we reach the irreducible bottom is not easy to say. Carl Sagan in Cosmos raised the possibility that if you traveled downward into an electron, you might find that it contained a universe of its own, recalling all those science fiction stories of the fifties. “Within it, organized into the local equivalent of galaxies and smaller structures, are an enormous number of other, much tinier elementary particles, which are themselves universes at the next level and so on forever—an infinite downward regression, universes within universes, endlessly.

And upward as well.” For most of us it is a world that surpasses understanding. To read even an elementary guide to particle physics nowadays you must find your way through lexical thickets such as this: “The charged pion and antipion decay respectively into a muon plus antineutrino and an antimuon plus neutrino with an average lifetime of 2.603 x 10^-8 seconds, the neutral pion decays into two photons with an average lifetime of about 0.8 x 10^-16 seconds, and the muon and antimuon decay respectively into . . .” And so it runs on—and this from a book for the general reader by one of the (normally) most lucid of interpreters, Steven Weinberg.

In the 1960s, in an attempt to bring just a little simplicity to matters, the Caltech physicist Murray Gell-Mann invented a new class of particles, essentially, in the words of Steven Weinberg, “to restore some economy to the multiplicity of hadrons”—a collective term used by physicists for protons, neutrons, and other particles governed by the strong nuclear force. Gell-Mann's theory was that all hadrons were made up of still smaller, even more fundamental particles. His colleague Richard Feynman wanted to call these new basic particles partons, as in Dolly, but was overruled. Instead they became known as quarks.

Gell-Mann took the name from a line in Finnegans Wake: “Three quarks for Muster Mark!” (Discriminating physicists rhyme the word with storks, not larks, even though the latter is almost certainly the pronunciation Joyce had in mind.) The fundamental simplicity of quarks was not long lived. As they became better understood it was necessary to introduce subdivisions. Although quarks are much too small to have color or taste or any other physical characteristics we would recognize, they became clumped into six categories—up, down, charm, strange, top, and bottom—which physicists oddly refer to as their “flavors,” and these are further divided into the colors red, green, and blue. (One suspects that it was not altogether coincidental that these terms were first applied in California during the age of psychedelia.)

Eventually out of all this emerged what is called the Standard Model, which is essentially a sort of parts kit for the subatomic world. The Standard Model consists of six quarks, six leptons, five known bosons and a postulated sixth, the Higgs boson (named for a Scottish scientist, Peter Higgs), plus three of the four physical forces: the strong and weak nuclear forces and electromagnetism.

The arrangement essentially is that among the basic building blocks of matter are quarks; these are held together by particles called gluons; and together quarks and gluons form protons and neutrons, the stuff of the atom's nucleus. Leptons are the source of electrons and neutrinos. Quarks and leptons together are called fermions. Bosons (named for the Indian physicist S. N. Bose) are particles that produce and carry forces, and include photons and gluons. The Higgs boson may or may not actually exist; it was invented simply as a way of endowing particles with mass.

It is all, as you can see, just a little unwieldy, but it is the simplest model that can explain all that happens in the world of particles. Most particle physicists feel, as Leon Lederman remarked in a 1985 PBS documentary, that the Standard Model lacks elegance and simplicity. “It is too complicated. It has too many arbitrary parameters,” Lederman said. “We don't really see the creator twiddling twenty knobs to set twenty parameters to create the universe as we know it.” Physics is really nothing more than a search for ultimate simplicity, but so far all we have is a kind of elegant messiness—or as Lederman put it: “There is a deep feeling that the picture is not beautiful.”

The Standard Model is not only ungainly but incomplete. For one thing, it has nothing at all to say about gravity. Search through the Standard Model as you will, and you won't find anything to explain why when you place a hat on a table it doesn't float up to the ceiling. Nor, as we've just noted, can it explain mass.
In order to give particles any mass at all we have to introduce the notional Higgs boson; whether it actually exists is a matter for twenty-first-century physics. As Feynman cheerfully observed: “So we are stuck with a theory, and we do not know whether it is right or wrong, but we do know that it is a little wrong, or at least incomplete.”

In an attempt to draw everything together, physicists have come up with something called superstring theory. This postulates that all those little things like quarks and leptons that we had previously thought of as particles are actually “strings”—vibrating strands of energy occupying eleven dimensions, consisting of the three we know already plus time and seven other dimensions that are, well, unknown to us. The strings are very tiny—tiny enough to pass for point particles.

By introducing extra dimensions, superstring theory enables physicists to pull together quantum laws and gravitational ones into one comparatively tidy package, but it also means that anything scientists say about the theory begins to sound worryingly like the sort of thoughts you might hear from a stranger on a park bench. Here, for example, is the physicist Michio Kaku explaining the structure of the universe from a superstring perspective: “The heterotic string consists of a closed string that has two types of vibrations, clockwise and counterclockwise, which are treated differently. The clockwise vibrations live in a ten-dimensional space. The counterclockwise live in a twenty-six-dimensional space, of which sixteen dimensions have been compactified. (We recall that in Kaluza's original five-dimensional theory, the fifth dimension was compactified by being wrapped up into a circle.)” And so it goes, for some 350 pages.

String theory has further spawned something called “M theory,” which incorporates surfaces known as membranes—or simply “branes” to the hipper souls of the world of physics. I'm afraid this is the stop on the knowledge highway where most of us must get off. Here is a sentence from the New York Times, explaining this as simply as possible to a general audience: “The ekpyrotic process begins far in the indefinite past with a pair of flat empty branes sitting parallel to each other in a warped five-dimensional space. . . . The two branes, which form the walls of the fifth dimension, could have popped out of nothingness as a quantum fluctuation in the even more distant past and then drifted apart.” No arguing with that. No understanding it either. Ekpyrotic, incidentally, comes from the Greek word for “conflagration.”

Matters in physics have now reached such a pitch that, as Paul Davies noted in Nature, it is “almost impossible for the non-scientist to discriminate between the legitimately weird and the outright crackpot.” The question came interestingly to a head in the fall of 2002 when two French physicists, twin brothers Igor and Grichka Bogdanov, produced a theory of ambitious density involving such concepts as “imaginary time” and the “Kubo-Schwinger-Martin condition,” and purporting to describe the nothingness that was the universe before the Big Bang—a period that was always assumed to be unknowable (since it predated the birth of physics and its properties).

Almost at once the Bogdanov paper excited debate among physicists as to whether it was twaddle, a work of genius, or a hoax. “Scientifically, it's clearly more or less complete nonsense,” Columbia University physicist Peter Woit told the New York Times, “but these days that doesn't much distinguish it from a lot of the rest of the literature.”
Karl Popper, whom Steven Weinberg has called “the dean of modern philosophers of science,” once suggested that there may not be an ultimate theory for physics—that, rather, every explanation may require a further explanation, producing “an infinite chain of more and more fundamental principles.” A rival possibility is that such knowledge may simply be beyond us. “So far, fortunately,” writes Weinberg in Dreams of a Final Theory, “we do not seem to be coming to the end of our intellectual resources.” Almost certainly this is an area that will see further developments of thought, and almost certainly these thoughts will again be beyond most of us.

While physicists in the middle decades of the twentieth century were looking perplexedly into the world of the very small, astronomers were finding no less arresting an incompleteness of understanding in the universe at large.

When we last met Edwin Hubble, he had determined that nearly all the galaxies in our field of view are flying away from us, and that the speed and distance of this retreat are neatly proportional: the farther away the galaxy, the faster it is moving. Hubble realized that this could be expressed with a simple equation, H0 = v/d (where H0 is the constant, v is the recessional velocity of a flying galaxy, and d its distance away from us). H0 has been known since as the Hubble constant and the whole as Hubble's Law. Using his formula, Hubble calculated that the universe was about two billion years old, which was a little awkward because even by the late 1920s it was fairly obvious that many things within the universe—not least Earth itself—were probably older than that. Refining this figure has been an ongoing preoccupation of cosmology. Almost the only thing constant about the Hubble constant has been the amount of disagreement over what value to give it. In 1956, astronomers discovered that Cepheid variables were more variable than they had thought; the calculations were redone, producing a new age for the universe of from 7 to 20 billion years—not terribly precise, but at least old enough, at last, to embrace the formation of the Earth.

In the years that followed there erupted a long-running dispute between Allan Sandage, heir to Hubble at Mount Wilson, and Gerard de Vaucouleurs, a French-born astronomer based at the University of Texas. Sandage, after years of careful calculations, arrived at a value for the Hubble constant of 50, giving the universe an age of 20 billion years. De Vaucouleurs was equally certain that the Hubble constant was 100.[2] This would mean that the universe was only half the size and age that Sandage believed—ten billion years. Matters took a further lurch into uncertainty when in 1994 a team from the Carnegie Observatories in California, using measures from the Hubble space telescope, suggested that the universe could be as little as eight billion years old—an age even they conceded was younger than some of the stars within the universe. In February 2003, a team from NASA and the Goddard Space Flight Center in Maryland, using a new, far-reaching type of satellite called the Wilkinson Microwave Anisotropy Probe, announced with some confidence that the age of the universe is 13.7 billion years, give or take a hundred million years or so. There matters rest, at least for the moment.

[2] You are of course entitled to wonder what is meant exactly by “a constant of 50” or “a constant of 100.” The answer lies in astronomical units of measure. Except conversationally, astronomers don't use light-years. They use a distance called the parsec (a contraction of parallax and second), based on a universal measure called the stellar parallax and equivalent to 3.26 light-years. Really big measures, like the size of a universe, are measured in megaparsecs: a million parsecs. The constant is expressed in terms of kilometers per second per megaparsec. Thus when astronomers refer to a Hubble constant of 50, what they really mean is “50 kilometers per second per megaparsec.” For most of us that is of course an utterly meaningless measure, but then with astronomical measures most distances are so huge as to be utterly meaningless.
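For anyone inclined to check the arithmetic, the simplest estimate of a universe's age—ignoring the complications that gravity and dark energy introduce—is just the reciprocal of the Hubble constant, once the astronomers' kilometers per second per megaparsec are converted into years. The little Python sketch below is purely illustrative (the names and rounded conversion factors are this edition's, not anything from the astronomers' own toolkits); it reproduces the figures in the dispute above: a constant of 50 gives roughly twenty billion years, a constant of 100 roughly ten.

    # Rough "Hubble time" estimate: age is about 1/H0 for a universe expanding at a constant rate.
    # Conversion factors are rounded; this is a back-of-the-envelope sketch, not a cosmology code.
    KM_PER_MEGAPARSEC = 3.086e19   # kilometers in one megaparsec
    SECONDS_PER_YEAR = 3.156e7     # seconds in one year

    def hubble_time_in_billions_of_years(h0_km_s_mpc: float) -> float:
        """Return 1/H0 in billions of years, for H0 given in km/s per megaparsec."""
        seconds = KM_PER_MEGAPARSEC / h0_km_s_mpc   # 1/H0 expressed in seconds
        return seconds / SECONDS_PER_YEAR / 1e9

    for h0 in (50, 100):
        print(f"H0 = {h0:>3} km/s/Mpc  ->  about {hubble_time_in_billions_of_years(h0):.1f} billion years")
    # H0 =  50 km/s/Mpc  ->  about 19.6 billion years (Sandage's twenty billion)
    # H0 = 100 km/s/Mpc  ->  about  9.8 billion years (de Vaucouleurs's ten billion)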
The difficulty in making final determinations is that there are often acres of room for interpretation. Imagine standing in a field at night and trying to decide how far away two distant electric lights are. Using fairly straightforward tools of astronomy you can easily enough determine that the bulbs are of equal brightness and that one is, say, 50 percent more distant than the other. But what you can't be certain of is whether the nearer light is, let us say, a 58-watt bulb that is 122 feet away or a 61-watt light that is 119 feet, 8 inches away. On top of that you must make allowances for distortions caused by variations in the Earth's atmosphere, by intergalactic dust, contaminating light from foreground stars, and many other factors. The upshot is that your computations are necessarily based on a series of nested assumptions, any of which could be a source of contention.

There is also the problem that access to telescopes is always at a premium and historically measuring redshifts has been notably costly in telescope time. It could take all night to get a single exposure. In consequence, astronomers have sometimes been compelled (or willing) to base conclusions on notably scanty evidence. In cosmology, as the journalist Geoffrey Carr has suggested, we have “a mountain of theory built on a molehill of evidence.” Or as Martin Rees has put it: “Our present satisfaction [with our state of understanding] may reflect the paucity of the data rather than the excellence of the theory.”

This uncertainty applies, incidentally, to relatively nearby things as much as to the distant edges of the universe. As Donald Goldsmith notes, when astronomers say that the galaxy M87 is 60 million light-years away, what they really mean (“but do not often stress to the general public”) is that it is somewhere between 40 million and 90 million light-years away—not quite the same thing. For the universe at large, matters are naturally magnified. Bearing all that in mind, the best bets these days for the age of the universe seem to be fixed on a range of about 12 billion to 13.5 billion years, but we remain a long way from unanimity.

One interesting recently suggested theory is that the universe is not nearly as big as we thought, that when we peer into the distance some of the galaxies we see may simply be reflections, ghost images created by rebounded light.

The fact is, there is a great deal, even at quite a fundamental level, that we don't know—not least what the universe is made of. When scientists calculate the amount of matter needed to hold things together, they always come up desperately short. It appears that at least 90 percent of the universe, and perhaps as much as 99 percent, is composed of Fritz Zwicky's “dark matter”—stuff that is by its nature invisible to us. It is slightly galling to think that we live in a universe that, for the most part, we can't even see, but there you are.
At least the names for the two main possible culprits are entertaining: they are said to be either WIMPs (for Weakly Interacting Massive Particles, which is to say specks of invisible matter left over from the Big Bang) or MACHOs (for MAssive Compact Halo Objects—really just another name for black holes, brown dwarfs, and other very dim stars).

Particle physicists have tended to favor the particle explanation of WIMPs, astrophysicists the stellar explanation of MACHOs. For a time MACHOs had the upper hand, but not nearly enough of them were found, so sentiment swung back toward WIMPs, but with the problem that no WIMP has ever been found. Because they are weakly interacting, they are (assuming they even exist) very hard to detect. Cosmic rays would cause too much interference. So scientists must go deep underground. One kilometer underground, cosmic bombardments would be one millionth what they would be on the surface. But even when all these are added, “two-thirds of the universe is still missing from the balance sheet,” as one commentator has put it. For the moment we might very well call them DUNNOS (for Dark Unknown Nonreflective Nondetectable Objects Somewhere).

Recent evidence suggests that not only are the galaxies of the universe racing away from us, but that they are doing so at a rate that is accelerating. This is counter to all expectations. It appears that the universe may not only be filled with dark matter, but with dark energy. Scientists sometimes also call it vacuum energy or, more exotically, quintessence. Whatever it is, it seems to be driving an expansion that no one can altogether account for. The theory is that empty space isn't so empty at all—that there are particles of matter and antimatter popping into existence and popping out again—and that these are pushing the universe outward at an accelerating rate. Improbably enough, the one thing that resolves all this is Einstein's cosmological constant—the little piece of math he dropped into the general theory of relativity to counter the universe's presumed expansion, and called “the biggest blunder of my life.” It now appears that he may have gotten things right after all.

The upshot of all this is that we live in a universe whose age we can't quite compute, surrounded by stars whose distances we don't altogether know, filled with matter we can't identify, operating in conformance with physical laws whose properties we don't truly understand.

And on that rather unsettling note, let's return to Planet Earth and consider something that we do understand—though by now you perhaps won't be surprised to hear that we don't understand it completely and what we do understand we haven't understood for long.
Press "Left Key ←" to return to the previous chapter; Press "Right Key →" to enter the next chapter; Press "Space Bar" to scroll down.
Chapters
Chapters
Setting
Setting
Add
Return
Book