When did scientists first postulate that Earth's atmosphere might have an upper limit?


In plenty of myths and stories from ancient (and not-so-ancient) history, people are able to fly or climb all the way up to the "dome of the sky," breathing all the way. But even once science had developed enough to make it clear that there was no "dome" and that astronomical objects were an incredible distance away, people were nowhere near the level of technology to reach the upper atmosphere and see what conditions were like at high altitude.

At a certain point, measurements of air pressure could indicate that the higher you go on a mountain, the thinner the air gets. Such a measurement could easily lead to the deduction that at a certain elevation the air would disappear entirely, but was such a connection ever made?

When was the first time that scientists (or philosophers, etc) suggested that the air surrounding the world might only go so high, and that beyond it was an airless emptiness or void? And when it happened, what was the discovery or deduction that led them to believe that was the case?


Contrary to assertions above, the correct computation of the size of the atmosphere predates Kepler by five centuries. It is sometimes claimed that this computation was performed (with a correct answer) by Alhazen in Mizan al-Hikmah (Balance of Wisdom) around the turn of the millennium, but I could not find reputable sources for this claim. In fact, the earliest well-sourced evidence I found for the computation is a manuscript of Ibn Mu'adh (from the mid-eleventh century), a single Hebrew copy of which can be found at the French National Library (Source).

However, an important point to note is that the idea that the atmosphere was finite was classical: Aristotle held it partly for philosophical reasons, but Ptolemy gave hard physical evidence for the claim. Before explaining that evidence, I will point out that until Pascal's work at least, the consensus was that aether, not void, lay beyond the atmosphere, contrary to what the question presupposes (1600 years elapsed between the first physical proof that the atmosphere was finite and the first physical proof that void lay beyond it).

Ptolemy's arguments (which are not in fact the earliest recorded; Hipparchus knew some of them 200 years earlier) are simple: when stars appear near the horizon, they do not appear where they should but higher, and their positions seem to oscillate. No such effect is discernible when the stars are close to their zenith. Likewise, the light of the Sun shines on the sky before the Sun appears and lingers on after it has disappeared. Ptolemy correctly understood this to be the effect of the refraction of light in the atmosphere.

Ibn Mu'adh computed the size of the atmosphere assuming that it was a homogeneous refracting material and that the aether beyond was non-refracting. Coupled with the observation that no lingering light is visible after the Sun has sunk deeper than around 18° below the horizon, this yields a depth of the atmosphere of about 80 km.
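
For readers who want the arithmetic behind that figure, here is a minimal sketch of the usual reconstruction of the twilight calculation (my reconstruction, not Ibn Mu'adh's own text): the last illuminated air visible on the observer's horizon lies where a sun ray grazing the Earth meets the observer's horizon line, at a height h = R(sec(d/2) - 1) for a solar depression d.

    import math

    earth_radius_km = 6370.0   # modern value; the medieval computation was expressed in Earth radii
    depression_deg = 18.0      # solar depression at which the last twilight glow disappears
    height_km = earth_radius_km * (1.0 / math.cos(math.radians(depression_deg / 2.0)) - 1.0)
    print(round(height_km))    # about 79 km, i.e. roughly the 80 km quoted above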

The article here clearly shows that knowledge that the atmosphere was finite was well preserved in Europe through the late Middle Ages and the Renaissance.

In conclusion, the scientifically very well-grounded opinion that the atmosphere is finite is at least as old as Ptolemy, and the first scientifically sound computation at least as old as Ibn Mu'adh.


From THIS

It is a simple, seemingly obvious notion: air has weight; the atmosphere presses down on us with a real force. However, humans don't feel that weight. You aren't aware of it because it has always been part of your world. The same was true for early scientists, who never thought to consider the weight of air and atmosphere.

Evangelista Torricelli's discovery began the serious study of weather and the atmosphere. It launched our understanding of the atmosphere. This discovery helped lay the foundation for Newton and others to develop an understanding of gravity.

This same revelation also led Torricelli to discover the concept of a vacuum and to invent the barometer, the most basic, fundamental instrument of weather study.

On a clear October day in 1640, Galileo conducted a suction-pump experiment at a public well just off the market plaza in Florence, Italy. The famed Italian scientist lowered a long tube into the well's murky water. From the well, Galileo's tube draped up over a wooden cross-beam three meters above the well's wall, and then down to a hand-powered pump held by two assistants: Evangelista Torricelli, the 32-year-old son of a wealthy merchant and an aspiring scientist, and Giovanni Baliani, another Italian physicist.

Torricelli and Baliani worked the pump's wooden handle, slowly sucking air out of Galileo's tube and pulling water higher into it. They pumped until the tube flattened like a run-over drinking straw. But no matter how hard they worked, water would not rise more than 9.7 meters above the well's water level. It was the same in every test.

Galileo proposed that, somehow, the weight of the water column made it collapse back to that height.

In 1643, Torricelli returned to the suction-pump mystery. If Galileo was correct, a heavier liquid should reach the same critical weight and collapse at a lower height. Liquid mercury weighed 13.5 times as much as an equal volume of water. Thus, a column of mercury should never rise any higher than 1/13.5 the height of a water column, or about 30 inches.
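
A quick back-of-the-envelope check of that reasoning, using the figures given in this passage (a sketch of the arithmetic, not Torricelli's own notes):

    water_limit_m = 9.7                        # maximum water column reported above, in metres
    density_ratio = 13.5                       # mercury is about 13.5 times as dense as water
    mercury_limit_m = water_limit_m / density_ratio
    print(round(mercury_limit_m * 39.37, 1))   # about 28 inches; the often-quoted 10.3 m water limit gives about 30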

Torricelli filled a six-foot glass tube with liquid mercury and shoved a cork into the open end. Then he inverted the tube and submerged the corked end in a tub of liquid mercury before he pulled out the stopper. As he expected, mercury flowed out of the tube and into the tub. But not all of the mercury ran out.

Torricelli measured the height of the remaining mercury column, 30 inches, as expected. Still, Torricelli suspected that the mystery's true answer had something to do with the vacuum he had created above his column of mercury.

The next day, with wind and a cold rain lashing at the windows, Torricelli repeated his experiment, planning to study the vacuum above the mercury. However, on this day the mercury column only rose to a height of 29 inches.

Torricelli was perplexed. He had expected the mercury to rise to the same height as yesterday. What was different? Rain beat on the windows as Torricelli pondered this new wrinkle.

What was different was the atmosphere, the weather. Torricelli's mind latched onto a revolutionary new idea. Air, itself, had weight. The real answer to the suction pump mystery lay not in the weight of the liquid, nor in the vacuum above it, but in the weight of the atmosphere pushing down around it.

Torricelli realized that the weight of the air in the atmosphere pushed down on the mercury in the tub. That pressure forced mercury up into the tube. The weight of the mercury in the tube had to be exactly equal to the weight of the atmosphere pushing down on the mercury in the tub.

When the weight of the atmosphere changed, it would push down either a little bit more or a little bit less on the mercury in the tub and drive the column of mercury in the tube either a little higher or a little lower. Changing weather must change the weight of the atmosphere. Torricelli had discovered atmospheric pressure and a way to measure and study it.
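
In modern terms, the height of the column is a direct measure of atmospheric pressure, p = ρgh. A minimal sketch with present-day values (not Torricelli's own numbers):

    rho_mercury = 13595.0     # density of mercury, kg/m^3
    g = 9.81                  # gravitational acceleration, m/s^2
    column_height_m = 0.76    # mercury column of about 30 inches
    pressure_pa = rho_mercury * g * column_height_m
    print(round(pressure_pa)) # about 101,000 Pa, i.e. one standard atmosphere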

Home barometers rarely drop more than 0.5 inch of mercury as the weather changes from fair to stormy. The greatest pressure drop ever recorded was 2.963 inches of mercury, measured inside a South Dakota tornado in June 2003.

So once you know that the atmosphere has a finite, changeable weight at any one point, all the other conclusions become necessary follow-ups. As Torricelli himself wrote:

Noi viviamo sommersi nel fondo d'un pelago d'aria. (We live submerged at the bottom of an ocean of air.)


This goes back a lot earlier than Torricelli or Kepler. Aristotle taught that the tangible world is formed from the four sub-lunar elements: earth, water, air, fire. These occupy the space between the centre of the cosmos (that is: the centre of the earth) and the sphere of the moon. The heavenly bodies are made of the fifth element: aether. Thus, there is no air beyond the sphere of the moon. This remained the standard theory throughout antiquity and the Middle Ages.


It was Kepler who first computed the height of the atmosphere at between 40 and 50 miles, based on the refraction of sunlight at twilight. He also made a correlating study of the magnitude of the earth's shadow on the moon during a lunar eclipse. These computations were later elaborated by Philippe de la Hire and were more or less correct. Later, Dr. Francis Wollaston took up the matter using the newly discovered powers of the barometer and confirmed that Kepler's estimates were essentially correct. It was Wollaston who was responsible for publicizing the finding and eventually making students and the public at large aware that there was a limit to the atmosphere.



PEOPLE the world over speak of the "Space Age" as beginning with the launching of the Russian Sputnik on 4 October 1957. Yet Americans might well set the date back at least to July 1955, when the White House, through President Eisenhower's press secretary, announced that the United States planned to launch a man-made earth satellite as an American contribution to the International Geophysical Year. If the undertaking seemed bizarre to much of the American public at that time, to astrophysicists and some of the military the government's decision was a source of elation: after years of waiting they had won official support for a project that promised to provide an invaluable tool for basic research in the regions beyond the upper atmosphere. Six weeks later, after a statement came from the Pentagon that the Navy was to take charge of the launching program, most Americans apparently forgot about it. It would not again assume great importance until October 1957.

Every major scientific advance has depended upon two basic elements: first, imaginative perception and, second, continually refined tools to observe, measure, and record phenomena that support, alter, or demolish a tentative hypothesis. This process of basic research often seems to have no immediate utility, but, as one scientist pointed out in 1957, it took Samuel Langley's and the Wright brothers' experiments in aerodynamics to make human flight possible, and Hans Bethe's abstruse calculations on the nature of the sun's energy led to the birth of the hydrogen bomb, just as Isaac Newton's laws of gravity, motion, and thermodynamics furnished the principles upon the application of which the exploration of outer space began and is proceeding. In space exploration the data fed back to scientists from instrumented satellites have been of utmost importance. The continuing improvement of such research tools opens up the prospect of greatly enlarging knowledge of the world we live in and making new applications of that knowledge.

In the decade before Sputnik, however, laymen tended to ridicule the idea of putting a man-made object into orbit about the earth. Even if the feat were possible, what purpose would it serve except to show that it could be done? As early as 1903, to be sure, Konstantin Tsiolkovskiy, a Russian scientist, had proved mathematically the feasibility of using the reactive force that lifts a rocket to eject a vehicle into space above the pull of the earth's gravity. Twenty years later Romanian-born Hermann Oberth had independently worked out similar formulas, but before the 1950s, outside a very small circle of rocket buffs, the studies of both men remained virtually unknown in the English-speaking world. Neither had built a usable rocket to demonstrate the validity of his theories, and, preoccupied as each was with plans for human journeys to the moon and planets, neither had so much as mentioned an unmanned artificial satellite. 1 Indeed until communication by means of radio waves had developed far beyond the techniques of the 1930s and early 1940s, the launching of an inanimate body into the heavens could have little appeal for either the scientist or the romantic dreamer. And in mid-century only a handful of men were fully aware of the potentialities of telemetry. 2

Of greater importance to the future of space exploration than the theoretical studies of the two European mathematicians was the work of the American physicist, Robert Goddard. While engaged in post-graduate work at Princeton University before World War I, Goddard had demonstrated in the laboratory that rocket propulsion would function in a vacuum, and in 1917 he received a grant of $5,000 from the Smithsonian Institution to continue his experiments. Under this grant the Smithsonian published his report of his theory and early experiments, Method of Reaching Extreme Altitudes. In 1918 he had successfully developed a solid-fuel ballistic rocket in which, however, even the United States Army lost interest after the Armistice. Convinced that rockets would eventually permit travel into outer space, Goddard after the war had continued his research at Clark University, seeking to develop vehicles that could penetrate into the ionosphere. In contrast to Tsiolkovskiy and Oberth, he set himself to devising practical means of attaining the goal they all three aspired to. In 1926 he successfully launched a rocket propelled by gasoline and liquid oxygen, a "first" that ranks in fame with the Wright brothers' Kitty Hawk flights of 1903. With the help of Charles Lindbergh after his dramatic solo transatlantic flight, Goddard obtained a grant of $5,000 from Daniel Guggenheim and equipped a small laboratory in New Mexico where he built several rockets. In 1937, assisted by grants from the Daniel and Florence Guggenheim Foundation, he launched a rocket that reached an altitude of 9,000 feet. Although not many people in the United States knew much about his work, a few had followed it as closely as his secretiveness allowed them to; among them were members of the American Interplanetary Society, organized in 1930 and later renamed the American Rocket Society. With the coming of World War II Goddard abandoned his field experiments, but the Navy employed him to help in developing liquid propellants for JATO, that is, jet-assisted takeoff for aircraft. When the Nazi "buzz" bombs of 1943 and the supersonic "Vengeance" missile-the "V-2s" that rained on London during 1944 and early 1945-awakened the entire world to the potentialities of rockets as weapons, a good many physicists and military men studied his findings with attention. By a twist of fate, Goddard, who was even more interested in astronautics than in weaponry, died in 1945, fourteen years before most of his countrymen acknowledged manned space exploration as feasible and recognized his basic contribution to it by naming the government's new multi-million-dollar experimental station at Beltsville, Maryland, "The Goddard Space Flight Center." 3


Robert H. Goddard and colleagues examine rocket components after the 19 May 1937 rocket flight.
(Photo courtesy of Mrs. Robert H. Goddard)

During 1943 and early 1944, Commander Harvey Hall, Lloyd Berkner, and several other scientists in Navy service examined the chances of the Nazis' making such advances in rocketry that they could put earth satellites into orbit either for reconnaissance or for relaying what scare pieces in the press called "death rays." While the investigators foresaw well before the first V-2 struck Britain that German experts could build rockets capable of reaching targets a few hundred miles distant, study showed that the state of the art was not yet at a stage to overcome the engineering difficulties of firing a rocket to a sufficient altitude to launch a body into the ionosphere, the region between 50 and 250 miles above the earth's surface. In the process of arriving at that conclusion members of the intelligence team, like Tsiolkovskiy and Oberth before them, worked out the mathematical formulas of the velocities needed. Once technology had progressed further, these men knew, an artificial earth-circling satellite would be entirely feasible. More important, if it were equipped with a transmitter and recording devices, it would provide an invaluable means of obtaining information about outer space. 4
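
The text does not reproduce those formulas, but the core relation is the circular-orbit condition, in which gravity supplies the centripetal force, giving v = sqrt(GM/(R + h)). A minimal sketch with modern constants (illustrative only; the wartime team's own figures are not given here):

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24         # mass of the Earth, kg
    R = 6.371e6          # mean radius of the Earth, m
    h = 250 * 1609.0     # 250 miles, the top of the ionosphere band cited above, in metres
    v = math.sqrt(G * M / (R + h))
    print(round(v))      # about 7,700 m/s, roughly 4.8 miles per second

That speed is consistent with the figures of between 4.4 and 5.4 miles per second quoted later in this account.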

At the end of the war, when most Americans wanted to forget about rockets and everything military, these men were eager to pursue rocket development in order to further scientific research. In 1888 Simon Newcomb, the most eminent American astronomer of his day, had declared: "We are probably nearing the limit of all we can know about astronomy." In 1945, despite powerful new telescopes and notable advances in radio techniques, that pronouncement appeared still true unless observations made above the earth's atmosphere were to become possible. Only a mighty rocket could reach beyond the blanket of the earth's atmosphere, and in the United States only the armed services possessed the means of procuring rockets with sufficient thrust to attain the necessary altitude. At the same time a number of officers wanted to experiment with improving rockets as weapons. Each group followed a somewhat different course during the next few years, but each gave some thought to launching an "earth-circling spaceship," since, irrespective of ultimate purpose, the requirements for launching and flight control were similar. The character of those tentative early plans bears examination, if only because of the consequences of their rejection.

"Operation Paperclip." the first official Army project aimed at acquiring German know-how about rocketry and technology, grew out of the capture of a hundred of the notorious V-2s and out of interrogations of key scientists and engineers who had worked at the Nazi's rocket research and development base at Peenemuende. Hence the decision to bring to the United States about one hundred twenty of the German experts along with the captured missiles and spare parts. Before the arrival of the Germans, General Donald Putt of the Army Air Forces outlined to officers at Wright Field some of the Nazi schemes for putting space platforms into the ionosphere when his listeners laughed at what appeared to be a tall tale, he assured them that these were far from silly vaporings and were likely to materialize before the end of the century. Still the haughtiness of the Germans who landed at Wright Field in the autumn of 1945 was not endearing to the Americans who had to work with them. The Navy wanted none of them, whatever their skills. During a searching interrogation before the group left Germany a former German general had remarked testily that had Hitler not been so pig-headed the Nazi team might now be giving orders to American engineers to which the American scientist conducting the questioning growled in reply that Americans would never have permitted a Hitler to rise to power. 5

At the Army Ordnance Proving Ground at White Sands in the desert country of southern New Mexico, German technicians, however, worked along with American officers and field crews in putting reassembled V-2s to use for research. As replacing the explosive in the warhead with scientific instruments and ballast would permit observing and recording data on the upper atmosphere, the Army invited other government agencies and universities to share in making high-altitude measurements by this means. Assisted by the German rocketeers headed by Wernher von Braun, the General Electric Company under a contract with the Army took charge of the launchings. Scientists from the five participating universities and from laboratories of the armed services designed and built the instruments placed in the rockets' noses. In the course of the next five years teams from each of the three military services and the universities assembled information from successful launchings of forty instrumented V-2s. In June 1946 a V-2, the first probe using instruments devised by members of the newly organized Rocket Sonde Research Section of the Naval Research Laboratory, carried to an altitude of sixty-seven miles a Geiger-counter telescope to detect cosmic rays, pressure and temperature gauges, a spectrograph, and radio transmitters. During January and February 1946 NRL scientists had investigated the possibility of launching an instrumented earth satellite in this fashion, only to conclude reluctantly that engineering techniques were still too unsophisticated to make it practical; for the time being, the Laboratory would gain more by perfecting instruments to be emplaced in and recovered from V-2s. As successive shots set higher altitude records, new spectroscopic equipment developed by the Micron Waves Branch of the Laboratory's Optics Division produced a number of excellent ultraviolet and x-ray spectra, measured night air glow, and determined ozone concentration. 6 In the interim the Army's "Bumper" project produced and successfully flew a two-stage rocket consisting of a "WAC Corporal" missile superimposed on a V-2.

After each launching, an unofficial volunteer panel of scientists and technicians, soon known as the Upper Atmosphere Rocket Research Panel, discussed the findings. Indeed the panel coordinated and guided the research that built up a considerable body of data on the nature of the upper atmosphere. Nevertheless, because the supply of V-2s would not last indefinitely, and because a rocket built expressly for research would have distinct advantages, the NRL staff early decided to draw up specifications for a new sounding rocket. Although the Applied Physics Laboratory of the Johns Hopkins University, under contract with the Navy's Bureau of Ordnance and the Office of Naval Research, was modifying the "WAC Corporal" to develop the fin-stabilized Aerobee research rocket, NRL wanted a model with a sensitive steering mechanism and gyroscopic controls. In August 1946 the Glenn L. Martin Company won the contract to design and construct a vehicle that would meet the NRL requirements. 7

Four months before the Army Ordnance department started work on captured V-2s, the Navy Bureau of Aeronautics had initiated a more ambitious research scheme with the appointment of a Committee for Evaluating the Feasibility of Space Rocketry. Unmistakably inspired by the ideas of members of the Navy intelligence team which had investigated Nazi capabilities in rocketry during the war, and, like that earlier group, directed by the brilliant Harvey Hall, the committee embarked upon an intensive study of the physical requirements and the technical resources available for launching a vessel into orbit about the earth. By 22 October 1945, the committee had drafted recommendations urging the Bureau of Aeronautics to sponsor an experimental program to devise an earth-orbiting "space ship" launched by a single-stage rocket, propelled by liquid hydrogen and liquid oxygen, and carrying electronic equipment that could collect and transmit back to earth scientific information about the upper atmosphere. Here was a revolutionary proposal. If based on the speculative thinking of Navy scientists in 1944, it was now fortified by careful computations. Designed solely for research, the unmanned instrumented satellite weighing about two thousand pounds and put into orbit by a rocket motor burning a new type of fuel should be able to stay aloft for days instead of the seconds possible with vertical probing rockets. Nazi experts at Peenemuende, for all their sophisticated ideas about future space flights, had never thought of building anything comparable. 8

The recommendations to the Bureau of Aeronautics quickly led to exploratory contracts with the Jet Propulsion Laboratory of the California Institute of Technology and the Aerojet General Corporation, a California firm with wartime experience in producing rocket fuels. Cal Tech's report, prepared by Homer J. Stewart and several associates and submitted in December 1945, verified the committee's calculations on the interrelationships of the orbit, the rocket's motor and fuel performance, the vehicle's structural characteristics, and payload. Aerojet's confirmation of the committee computations of the power obtainable from liquid hydrogen and liquid oxygen soon followed. Thus encouraged, BuAer assigned contracts to North American Aviation, Incorporated, and the Glenn L. Martin Company for preliminary structural design of the "ESV," the earth satellite vehicle, and undertook study of solar-powered devices to recharge the satellite's batteries and so lengthen their life. But as estimates put the cost of carrying the program beyond the preliminary stages at well over $5 million, a sum unlikely to be approved by the Navy high brass, ESV proponents sought Army Air Forces collaboration. 9 Curiously enough, with the compartmentation often characteristic of the armed services, BuAer apparently did not attempt to link its plans to those of the Naval Research Laboratory. 10

In March 1946, shortly after NRL scientists had decided that a satellite was too difficult a project to attempt as yet, representatives of BuAer and the Army Air Forces agreed that "the general advantages to be derived from pursuing the satellite development appear to be sufficient to justify a major program, in spite of the fact that the obvious military, or purely naval applications in themselves, may not appear at this time to warrant the expenditure." General Curtis E. LeMay of the Air Staff did not concur. Certainly he was unwilling to endorse a joint Navy-Army program. On the contrary, Commander Hall noted that the general was resentful of Navy invasion into a field "which so obviously, he maintained, was the province of the AAF." Instead, in May 1946, the Army Air Forces presented its own proposition in the form of a feasibility study by Project Rand, a unit of the Douglas Aircraft Company and a forerunner of the RAND Corporation of California. 11 Like the scientists of the Bureau of Aeronautics committee, Project Rand mathematicians and engineers declared technology already equal to the task of launching a spaceship. The ship could be circling the earth, they averred, within five years, namely by mid-1951. They admitted that it could not be used as a carrier for an atomic bomb and would have no direct function as a weapon, but they stressed the advantages that would nevertheless accrue from putting an artificial satellite into orbit: "To visualize the impact on the world, one can imagine the consternation and admiration that would be felt here if the United States were to discover suddenly that some other nation had already put up a successful satellite." 12

Officials at the Pentagon were unimpressed. Theodore von Kármán, chief mentor of the Army Air Forces and principal author of the report that became the research and development bible of the service, advocated research in the upper atmosphere but was silent about the use of an artificial satellite. Nor did Vannevar Bush have faith in such a venture. The most influential scientist in America of his day and in 1946 chairman of the Joint Army and Navy Research and Development Board, Bush was even skeptical about the possibility of developing within the foreseeable future the engineering skills necessary to build intercontinental guided missiles. His doubts, coupled with von Kármán's disregard of satellite schemes, inevitably dashed cold water on the proposals and helped account for the lukewarm reception long accorded them. 13

Still the veto of a combined Navy-Army Air Forces program did not kill the hopes of advocates of a "space ship." While the Navy and its contractors continued the development of a scale model 3,000-pound-thrust motor powered by liquid hydrogen and liquid oxygen, Project Rand completed a second study for the Army Air Forces. But after mid-1947, when the Air Force became a separate service within the newly created Department of Defense, reorganization preoccupied its officers for a year or more, and many of them, academic scientists believed, shared General LeMay's indifference to research not immediately applicable to defense problems. At BuAer, on the other hand, a number of men continued to press for money to translate satellite studies into actual experiments. Unhappily for them, a Technical Evaluation Group of civilian scientists serving on the Guided Missiles Committee of the Defense Department's Research and Development Board declared in March 1948 that "neither the Navy nor the USAF has as yet established either a military or a scientific utility commensurate with the presently expected cost." 14 In vain, Louis Ridenour of Project Rand explained, as Hall had emphasized in 1945 and 1946, that "the development of a satellite will be directly applicable to the development of an intercontinental rocket missile," since the initial velocity required for launching the latter would be "4.4 miles per second, while a satellite requires 5.4." 15

In the hope of salvaging something from the discard, the Navy at this point shifted its approach. Backed up by a detailed engineering design prepared under contract by the Glenn L. Martin Company, BuAer proposed to build a sounding rocket able to rise to a record altitude of more than four hundred miles, since a powerful high-altitude test vehicle, HATV, might serve the dual purpose of providing hitherto unobtainable scientific data from the extreme upper atmosphere and at the same time dramatizing the efficiency of the hydrogen propulsion system. Thus it might rally financial support for the ESV. But when The First Annual Report of the Secretary of Defense appeared in December 1948, a brief paragraph stating that each of the three services was carrying on studies and component designs for "the Earth Satellite Vehicle Program" evoked a public outcry at such a wasteful squandering of taxpayers' money; one outraged letter-writer declared the program an unholy defiance of God's will for mankind. That sort of response did not encourage a loosening of the military purse-strings for space exploration. Paper studies, yes; hardware, no. The Navy felt obligated to drop HATV development at a stage which, according to later testimony, was several years ahead of Soviet designs in its proposed propulsion system and structural engineering. 16

In seeking an engine for an intermediate range ballistic missile, the Army Ordnance Corps, however, was able to profit from North American Aviation's experience with HATV design; an Air Force contract for the Navaho missile ultimately produced the engine that powered the Army's Jupiter C, the launcher for the first successful American satellite. Thus money denied the Navy for scientific research was made available to the Army for a military rocket. 17 Early in 1949 the Air Force requested the RAND Corporation, the recently organized successor to Project Rand, to prepare further utility studies. The paper submitted in 1951 concentrated upon analyzing the value of a satellite as an "instrument of political strategy," and again offered a cogent argument for supporting a project that could have such important psychological effects on world opinion as an American earth satellite. 18 Not until October 1957 would most of the officials who had read the text recognize the validity of that point.

In the meantime, research on the upper atmosphere had continued to nose forward slowly at White Sands and at the Naval Research Laboratory in Washington despite the transfer of some twenty "first line people" from NRL's Rocket Sonde Research Section to a nuclear weapons crash program. While the Navy team at White Sands carried on probes with the Aerobee, by then known as "the workhorse of high altitude research," 19 a Bumper-Wac under Army aegis-a V-2 with a Wac-Corporal rocket attached as a second stage-made a record-breaking flight to an altitude of 250 miles in February 1949. Shortly afterward tests began on the new sounding rocket built for NRL by the Glenn L. Martin Company. Named "Neptune" at first and then renamed "Viking," the first model embodied several important innovations: a gimbaled motor for steering, aluminum as the principal structural material, and intermittent gas jets for stabilizing the vehicle after the main power cut off. Reaction Motors Incorporated supplied the engine, one of the first three large liquid-propelled rocket power plants produced in the United States. Viking No. 1, fired in the spring of 1949, attained a 50-mile altitude; Viking No. 4, launched from shipboard in May 1950, reached 104 miles. Modest compared to the power displayed by the Bumper-Wac, the thrust of the relatively small single-stage Viking nevertheless was noteworthy. 20



The Navy's High-Altitude Test Vehicle (HATV).
It was proposed in 1946 and was to have launched a satellite by 1951.

While modifications to each Viking in turn brought improved performance, the Electron Optics Branch at NRL was working out a method of using ion chambers and photon counters for x-ray and ultraviolet wavelengths, equipment which would later supply answers to questions about the nuclear composition of solar radiation. Equally valuable was the development of an electronic tracking device known as a "Single-Axis Phase-Comparison Angle-Tracking Unit," the antecedent of "Minitrack," which would permit continuous tracking of a small instrumented body in space. When the next to last Viking, No. 11, rose to an altitude of 158 miles in May 1954, the radio telemetering system transmitted data on cosmic ray emissions, just as the Viking 10, fired about two weeks before, had furnished scientists with the first measurement of positive ion composition at an altitude of 136 miles. 21 This remarkable series of successes achieved in five years at a total cost of less than $6 million encouraged NRL in 1955 to believe that, with a more powerful engine and the addition of upper stages, here was a vehicle capable of launching an earth satellite.
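
The phase-comparison tracking mentioned above infers one angle of arrival from the phase difference a satellite's radio signal shows between two antennas a known distance apart. A minimal sketch of the geometry with illustrative numbers (assumed values, not NRL's actual Minitrack parameters):

    import math

    c = 3.0e8                   # speed of light, m/s
    freq_hz = 108.0e6           # assumed beacon frequency; Minitrack-era beacons operated near 108 MHz
    wavelength_m = c / freq_hz  # about 2.8 m
    baseline_m = 20.0           # assumed antenna spacing
    phase_diff_rad = 1.2        # measured phase difference between the two antennas
    # Single-axis interferometer: sin(angle) = wavelength * phase_diff / (2 * pi * baseline)
    angle_deg = math.degrees(math.asin(wavelength_m * phase_diff_rad / (2.0 * math.pi * baseline_m)))
    print(round(angle_deg, 2))  # about 1.5 degrees off the plane perpendicular to the baseline

Because the phase repeats every full cycle, a single antenna pair is ambiguous for large angles; systems of this family resolved the ambiguity with additional, shorter baselines.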



RAND Corporation proposal for a rocket to launch
an "Earth Circling Satellite", 1951.

Essential though this work was to subsequent programs, the Naval Research Laboratory in the late 1940s and the 1950s was hampered by not having what John P. Hagen called "stable funding" for its projects. Hagen, head of the Atmosphere and Astrophysics Division, found the budgetary system singularly unsatisfactory. NRL had been founded in 1923, but a post-World-War-II reorganization within the Navy had brought the Office of Naval Research into being and given it administrative control of the Laboratory's finances. ONR allotted the Laboratory a modest fixed sum annually, but other Navy bureaus and federal agencies frequently engaged the Laboratory's talents and paid for particular jobs. The arrangement resembled that of a man who receives a small retainer from his employer but depends for most of his livelihood on fees paid him by his own clientele for special services. NRL's every contract, whether for design studies or hardware, had to be negotiated and administered either by ONR or by one of the permanent Navy bureaus-in atmospheric research, it was by the Navy Bureau of Aeronautics. The cancellation of a contract could seriously disrupt NRL functioning, as the years 1950 to 1954 illustrated. 22

With the outbreak of the Korean War, the tempo of missile research heightened in the Defense Department. While the Navy was working on a guided missile launchable from shipboard and a group at NRL on radio interferometers for tracking it, rocketeers at Redstone Arsenal in Alabama were engaged in getting the "bugs" out of a North American Aviation engine for a ballistic missile with a 200-mile range, and RAND was carrying on secret studies of a military reconnaissance satellite for the Air Force. In June 1952 NRL got approval for the construction of four additional Vikings similar to Viking No. 10 to use in ballistic missile research, but eleven months later BuAer withdrew its support and canceled the development contract for a high-performance oxygen-ammonia engine that was to have replaced the less powerful Viking engine; this cancellation postponed by over three years the availability of a suitable power plant for the first stage of the future Vanguard rocket. Similarly in 1954 lack of funds curtailed an NRL program to design and develop a new liquid-propelled Aerobee-Hi probing rocket. At the request of the Western Development Division of the Air Force in July 1954, the Laboratory investigated the possible use of an improved Viking as a test vehicle for intercontinental ballistic missiles, ICBMs. The study, involving a solution of the "reentry problem," that is, how to enable a missile's warhead to return into the atmosphere without disintegrating before reaching its target, produced the design of an M-10 and M-15 Viking, the designations referring to the speeds, measured by Mach number, at which each would reenter the atmosphere. But the Air Force later let the development contracts to private industry. 23 In these years the Department of Defense was unwilling to spend more than token sums on research that appeared to have only remote connection with fighting equipment.

The creation of the National Science Foundation in May 1950 tended to justify that position, for one of the new agency's main functions was to encourage and provide support for basic research chiefly by means of grants-in-aid to American universities. The mission of the Army, Navy, and Air Force was national defense, that of the Foundation the fostering of scientific discovery. It was a responsibility of the Foundation to decide what lines of fundamental research most merited public financial aid in their own right, whereas other federal agencies must by law limit their basic research to fields closely related to their practical missions. While the Foundation's charter forbade it to make grants for applied research and development-the very area in which the military would often have welcomed assistance-any government department could ask the National Academy of Sciences for help on scientific problems. The Academy, founded in 1863 as a self-perpetuating body advisory to but independent of the government, included distinguished men in every scientific field. When its executive unit, the National Research Council, agreed to sponsor studies for federal agencies, the studies sometimes involved more applied than pure research. The Academy's Research Council and the Science Foundation, however, frequently worked closely together in choosing the problems to investigate. 24



Aerobee-Hi
Viking 10 on the launch pad at White Sands
Bumper-Wac

Certainly the composition of the ionosphere, the region that begins about fifty miles above the earth's surface, and the nature of outer space were less matters for the Pentagon than for the National Academy, the Science Foundation, and the academic scientific world. Indeed, the panel of volunteers which analyzed the findings from each instrumented V-2 shot and later appraised the results of Aerobee, Viking, and Aerobee-Hi flights contained from the first some future members of the Academy. Among the participants over the years were Homer J. Stewart and William H. Pickering of Cal Tech's Jet Propulsion Laboratory, Milton W. Rosen, Homer E. Newell, Jr., and John W. Townsend, Jr., of NRL, and James A. Van Allen of the Applied Physics Laboratory of the Johns Hopkins University and later a professor at the State University of Iowa. Under Van Allen's chairmanship, the Panel on Upper Atmosphere Rocket Research came to be a strong link between university physicists and the Department of Defense, a more direct link in several respects than that afforded by civilian scientists who served on advisory committees of the DoD's Research and Development Board. 25

While the armed services were perforce confining their research and development programs chiefly to military objectives, no service wanted to discourage discussions of future possibilities. In the autumn of 1951 several doctors in the Air Force and a group of physicists brought together by Joseph Kaplan of the University of California, Los Angeles, met in San Antonio, Texas, for a symposium on the Physics and Medicine of the Upper Atmosphere. The participants summarized existing knowledge of the region named the "aeropause," where manned flight was not yet possible, and examined the problems of man's penetrating into that still unexplored area. The papers published in book form a year later were directly instrumental, Kaplan believed, in arousing enthusiasm for intensive studies of the ionosphere. 26

A few months before the San Antonio sessions, the Hayden Planetarium of New York held a first annual symposium on space exploration, and about the same time the American Rocket Society set up an ad hoc Committee on Space Flight to look for other ways of awakening public interest and winning government support for interplanetary exploration. From a few dozen men who had followed rocket development in the early 1930s the society had grown to about two thousand members, some of them connected with the aircraft industry, some of them in government service, and some who were purely enthusiasts caught up by the imaginative possibilities of reaching out into the unknown. The committee met at intervals during the next two years at the Society's New York headquarters or at the Washington office of Andrew Haley, the Society's legal counsel, but not until Richard W. Porter of the General Electric Company sought out Alan T. Waterman, Director of the National Science Foundation, and obtained from him an assurance that the Foundation would consider a proposal, did a formal detailed statement of the committee's credo appear. Milton Rosen, the committee chairman and one of the principal engineers directing the development and tests of the Viking sounding rocket, then conceived and wrote the report advocating a thorough study of the benefits that might derive from launching an earth satellite. Completed on 27 November 1954, the document went to the Foundation early the next year. 27

Without attempting to describe the type of launching vehicle that would be needed, the paper spelled out the reasons why space exploration would bring rich rewards. Six appendixes, each written by a scientist dealing with his own special field, pointed to existing gaps in knowledge which an instrumented satellite might fill. Ira S. Bowen, director of the Palomar Observatory at Mt. Wilson, explained how the clearer visibility and longer exposure possible in photoelectronic scanning of heavenly phenomena from a body two hundred miles above the earth would assist astronomers. Howard Schaeffer of the Naval School of Aviation Medicine wrote of the benefits of obtaining observations on the effects of the radiation from outer space upon living cells. In communications, John R. Pierce, whose proposal of 1952 gave birth to Telstar a decade later, 28 discussed the utility of a relay for radio and television broadcasts. Data obtainable in the realm of geodesy, according to Major John O'Keefe of the Army Map Service, would throw light on the size and shape of the earth and the intensity of its gravitational fields, information which would be invaluable to navigators and mapmakers. The meteorologist Eugene Bollay of North American Weather Consultants spoke of the predictable gains in accuracy of weather forecasting. Perhaps most illuminating to the nonscientifically trained reader was Homer E. Newell's analysis of the unknowns of the ionosphere which data accumulated over a period of days could clarify.

Confusing and complex happenings in the atmosphere, wrote Newell, were "a manifestation of an influx of energy from outer space." What was the nature and magnitude of that energy? Much of the incoming energy was absorbed in the atmosphere at high altitudes. From data transmitted from a space satellite five hundred miles above the earth, the earth-bound scientist might gauge the nature and intensity of the radiation emanating from the sun, the primary producer of that energy. Cosmic rays, meteors, and micrometeors also brought in energy. Although they probably had little effect on the upper atmosphere, cosmic rays, with their extremely high energies, produced ionization in the lower atmosphere. Low-energy particles from the sun were thought to cause the aurora and to play a significant part in the formation of the ionosphere. Sounding rockets permitted little more than momentary measurements of the various radiations at various heights, but with a satellite circling the earth in a geomagnetic meridian plane it should be possible to study in detail the low-energy end of the cosmic ray spectrum, a region inaccessible to direct observation within the atmosphere and best studied above the geomagnetic poles. Batteries charged by the sun should be able to supply power to relay information for weeks or months.

Contrary to what an indifferent public might have expected from rocket "crackpots," the document noted that "to create a satellite merely for the purpose of saying it has been done would not justify the cost. Rather, the satellite should serve useful purposes-purposes which can command the respect of the officials who sponsor it, the scientists and engineers who produce it, and the community who pays for it." The appeal was primarily to the scientific community, but the intelligent layman could comprehend it, and its publication in an engineering journal in February 1955 gave the report a diversified audience. 29

A number of men in and outside government service meantime had continued to pursue the satellite idea. In February 1952 Aristid V. Grosse of Temple University, a key figure in the Manhattan Project in its early days, had persuaded President Truman to approve a study of the utility of a satellite in the form of an inflatable balloon visible to the naked eye from the surface of the earth. Aware that Wernher von Braun, one of the German-born experts from Peenemuende, was interested, the physicist took counsel with him and his associates at Redstone Arsenal in Huntsville, Alabama. Fifteen months later Grosse submitted to the Secretary of the Air Force a description of the "American Star" that could rise in the West. Presumably because the proposed satellite would be merely a show piece without other utility, nothing more was heard of it. 30

A series of articles in three issues of Collier's, however, commanded wide attention during 1952. Stirred by an account of the San Antonio symposium as Kaplan described it over the lunch table, the editors of the magazine engaged Wernher von Braun to write the principal pieces and obtained shorter contributions from Kaplan, Fred L. Whipple, chairman of the Harvard University Department of Astronomy, Heinz Haber of the Air Force Space Medicine Division, the journalist Willy Ley, and others. The editors' comment ran: "What are we waiting for?", an expression of alarm lest a communist nation preempt outer space before the United States acted and thereby control the earth from manned space platforms equipped with atomic bombs. On the other hand, von Braun's articles chiefly stressed the exciting discoveries possible within twenty-five years if America at once began building "cargo rockets" and a wheel-shaped earth-circling space station from which American rocket ships could depart to other planets and return. Perhaps because of severe editing to adapt material to popular consumption, the text contained little or no technical data on how these wonders were to be accomplished; the term "telemetry" nowhere appeared. But the articles, replete with illustrations in color, and a subsequent Walt Disney film fanned public interest and led to an exchange of letters between von Braun and S. Fred Singer, a brilliant young physicist at the University of Maryland. 31

At the fourth Congress of the International Astronautics Federation in Zurich, Switzerland, in summer 1953, Singer proposed a Minimum Orbital Unmanned Satellite of the Earth, MOUSE, based upon a study prepared two years earlier by members of the British Interplanetary Society who had predicated their scheme on the use of a V-2 rocket. The Upper Atmosphere Rocket Research Panel at White Sands in turn discussed the plan in April 1954, and in May Singer again presented his MOUSE proposal at the Hayden Planetarium's fourth Space Travel Symposium. On that occasion Harry Wexler of the United States Weather Bureau gave a lecture entitled, "Observing the Weather from a Satellite Vehicle." 32 The American public was thus being exposed to the concept of an artificial satellite as something more than science fiction.

By then, Commander George Hoover and Alexander Satin of the Air Branch of the Office of Naval Research had come to the conclusion that recent technological advances in rocketry had so improved the art that the feasibility of launching a satellite was no longer in serious doubt. Hoover therefore put out feelers to specialists of the Army Ballistic Missile Agency at Huntsville. There von Braun, having temporarily discarded his space platform as impractical, was giving thought to using the Redstone rocket to place a small satellite in orbit. Redstone, a direct descendant of the V-2, was, as one man described it, a huge piece of "boiler plate," sixty-nine feet long, seventy inches in diameter, and weighing 61,000 pounds, its power plant using liquid oxygen as oxidizer and an alcohol-water mixture as fuel. A new Redstone engine built by the Rocketdyne Division of North American Aviation, Inc., and tested in 1953 was thirty percent lighter and thirty-four percent more powerful than that of the V-2. 33 If Commander Hoover knew of the futile efforts of BuAer in 1947 to get Army Air Forces collaboration on a not wholly dissimilar space program, that earlier disappointment failed to discourage him. And as he had reason to believe he could now get Navy funds for a satellite project, he had no difficulty in enlisting von Braun's interest. At a meeting in Washington arranged by Frederick C. Durant, III, past president of the American Rocket Society, Hoover, Satin, von Braun, and David Young from Huntsville discussed possibilities with Durant, Singer, and Fred Whipple, the foremost American authority on tracking heavenly bodies. The consensus of the conferees ran that a slightly modified Redstone rocket with clusters of thirty-one Loki solid-propellant rockets for upper stages could put a five-pound satellite into orbit at a minimum altitude of 200 miles. Were that successful, a larger satellite equipped with instruments could follow soon afterward. Whipple's judgment that optical tracking would suffice to trace so small a satellite at a distance of 200 miles led the group to conclude that radio tracking would be needless. 34

Whipple then approached the National Science Foundation begging it to finance a conference on the technical gains to be expected from a satellite and from "the instrumentation that should be designed well in advance of the advent of an active satellite vehicle." The Foundation, he noted some months later, was favorable to the idea but in 1954 took no action upon it. 35 Commander Hoover fared better. He took the proposal to Admiral Frederick R. Furth of the Office of Naval Research and with the admiral's approval then discussed the division of labor with General H. T. Toftoy and von Braun at Redstone Arsenal. The upshot was an agreement that the Army should design and construct the booster system, the Navy take responsibility for the satellite, tracking facilities, and the acquisition and analysis of data. No one at ONR had consulted the Naval Research Laboratory about the plan. In November 1954 a full description of the newly named Project Orbiter was sent for critical examination and comment to Emmanuel R. Piore, chief scientist of ONR, and to the government-owned Jet Propulsion Laboratory in Pasadena which handled much of the Army Ballistic Missile Agency's research. Before the end of the year, the Office of Naval Research had let three contracts totaling $60,000 for feasibility analyses or design of components for subsystems. Called a "no-cost satellite," Orbiter was to be built largely from existing hardware. 36



A Redstone rocket on the static-firing stand at the
Army Ballistic Missile Agency, Huntsville, Alabama.

At this point it is necessary to examine the course scientific thought had been taking among physicists of the National Academy and American universities, for in the long run it was their recommendations that would most immediately affect governmental decisions about a satellite program. This phase of the story opens in spring 1950, at an informal gathering at James Van Allen's home in Silver Spring, Maryland. The group invited by Van Allen to meet with the eminent British geophysicist Sydney Chapman consisted of Lloyd Berkner, head of the new Brookhaven National Laboratory on Long Island, S. Fred Singer, J. Wallace Joyce, a geophysicist with the Navy BuAer and adviser to the Department of State, and Ernest H. Vestine of the Department of Terrestrial Magnetism of the Carnegie Institution. As they talked of how to obtain simultaneous measurements and observations of the earth and the upper atmosphere from a distance above the earth, Berkner suggested that perhaps staging another International Polar Year would be the best way. His companions immediately responded enthusiastically. Berkner and Chapman then developed the idea further and put it into form to present to the International Council of Scientific Unions. The first International Polar Year had established the precedent of international scientific cooperation in 1882, when scientists of a score of nations agreed to pool their efforts for a year in studying polar conditions. A second International Polar Year took place in 1932. Berkner's proposal to shorten the interval to 25 years was timely because 1957-1958, astronomers knew, would be a period of maximum solar activity. 37 European scientists subscribed to the plan. In 1952 the International Council of Scientific Unions appointed a committee to make arrangements, extended the scope of the study to the whole earth, not just the polar regions, fixed the duration at eighteen months, and then renamed the undertaking the International Geophysical Year, shortened in popular speech to IGY. It eventually embraced sixty-seven nations. 38



Meeting on Project Orbiter, 17 March 1955 in Washington, D.C.

In the International Council of Scientific Unions the National Academy of Sciences had always been the adhering body for the United States. The Council itself, generally called ICSU, was and is the headquarters unit of a nongovernmental international association of scientific groups such as the International Union of Geodesy and Geophysics, the International Union of Pure and Applied Physics, the International Scientific Radio Union, and others. When plans were afoot for international scientific programs which needed governmental support, Americans of the National Academy naturally looked to the National Science Foundation for federal funds. Relations between the two organizations had always been cordial, the Foundation often turning for advice to the Academy and its secretariat, the National Research Council, and the Academy frequently seeking financing for projects from the Foundation. At the end of 1952 the Academy appointed a United States National Committee for the IGY headed by Joseph Kaplan to plan for American participation. The choice of Kaplan as chairman strengthened the position of men interested in the upper atmosphere and outer space.

During the spring of 1953 the United States National Committee drafted a statement which the International Council later adopted, listing the fields of inquiry which IGY programs should encompass-oceanographic phenomena, polar geography, and seismology, for example, and, in the celestial area, such matters as solar activity, sources of ionizing radiations, cosmic rays, and their effects upon the atmosphere. 39 In the course of the year the Science Foundation granted $27,000 to the IGY committee for planning, but in December, when Hugh Odishaw left his post as assistant to the director of the Bureau of Standards to become secretary of the National Committee, it was still uncertain how much further support the government would give IGY programs. Foundation resources were limited. Although in August Congress had removed the $15,000,000 ceiling which the original act had placed on the Foundation's annual budget, the appropriation voted for FY 1954 had totaled only $8 million. In view of the Foundation's other commitments, that sum seemed unlikely to allow for extensive participation in the IGY. In January 1954 the National Committee asked for a total of $13 million. Scientists' hopes rose in March when President Eisenhower announced that, in contrast to the $100 million spent in 1940 on federal support of research and development, he was submitting a $2-billion research and development budget to Congress for FY 1955. Hope turned to gratification in June when Congress authorized for the IGY an over-all expenditure of $13 million as requested and in August voted for FY 1955 an appropriation of $2 million to the National Science Foundation for IGY preparations. 40

Thus reassured, the representatives from the National Academy set out in the late summer for Europe and the sessions of the International Scientific Radio Union, known as URSI, and the International Union of Geodesy and Geophysics, IUGG. As yet none of the nations pledged to take part in the IGY had committed itself to definite projects. The U.S.S.R. had not joined at all, although Russian delegates attended the meetings. Before the meetings opened, Lloyd V. Berkner, president of the Radio Union and vice president of the Comité Spécial de l'Année Géophysique Internationale (CSAGI), set up two small informal committees under the chairmanship of Fred Singer and Homer E. Newell, Jr., respectively, to consider the scientific utility of a satellite. The National Academy's earlier listing of IGY objectives had named problems requiring exploration but had not suggested specific means of solving them. For years physicists and geodesists had talked wistfully of observing the earth and its celestial environment from above the atmosphere. Now, Berkner concluded, was the time to examine the possibility of acting upon the idea. Singer was an enthusiast who inclined to brush aside technical obstacles. Having presented MOUSE the preceding year and shared in planning Project Orbiter, he was a persuasive proponent of an IGY satellite program. Newell of NRL was more conservative, but he too stressed to IUGG the benefits to be expected from a successful launching of an instrumented "bird," the theme that he incorporated in his later essay for the American Rocket Society. URSI and IUGG both passed resolutions favoring the scheme. But CSAGI still had to approve. And there were potential difficulties.

Hence on the eve of the CSAGI meeting in Rome, Berkner invited ten of his associates to his room at the Hotel Majestic to review the pros and cons, to make sure, as one man put it, that the proposal to CSAGI was not just a "pious resolution" such as Newton could have submitted to the Royal Society. The group included Joseph Kaplan, U.S. National Committee chairman; Hugh Odishaw, committee secretary; Athelstan Spilhaus, Dean of the University of Minnesota's Institute of Technology; Alan H. Shapley of the National Bureau of Standards; Harry Wexler of the Weather Bureau; Wallace Joyce; Newell; and Singer. The session lasted far into the night. Singer outlined the scientific and technical problems: the determination of orbits, the effects of launching errors, the probable life of the satellite, telemetering and satellite orientation, receiving stations, power supplies, and geophysical and astrophysical applications of data. Newell, better versed than some of the others in the technical difficulties to be overcome, pointed out that satellite batteries might bubble in the weightless environment of space, whereupon Spilhaus banged his fist and shouted: "Then we'll get batteries that won't!" Singer's presentation was exciting, but the question remained whether an artificial body of the limited size and weight a rocket could as yet put into orbit could carry enough reliable instrumentation to prove of sufficient scientific value to warrant the cost. Money and effort poured into that project would not be available for other research, and to attempt to build a big satellite might be to invite defeat.

Both Berkner and Spilhaus spoke of the political and psychological prestige that would accrue to the nation that first launched a man-made satellite. As everyone present knew, A. N. Nesmeyanov of the Soviet Academy of Sciences had said in November 1953 that satellite launchings and moon shots were already feasible, and with Tsiolkovskiy's work now recognized by Western physicists, the Americans had reason to believe in Russian scientific and technological capabilities. In March 1954 Moscow Radio had exhorted Soviet youth to prepare for space exploration, and in April the Moscow Air Club had announced that studies in interplanetary flight were beginning. Very recently the U.S.S.R. had committed itself to IGY participation. While the American scientists in September 1954 did not discount the possible Russian challenge, some of them insisted that a satellite experiment must not assume such emphasis as to cripple or halt upper atmosphere research by means of sounding rockets. The latter was an established, useful technique that could provide, as a satellite in orbit could not, measurements at a succession of altitudes in and above the upper atmosphere, measurements along the vertical instead of the horizontal plane. Nevertheless at the end of the six-hour session, the group unanimously agreed to urge CSAGI to endorse an IGY satellite project. 41

During the CSAGI meeting that followed, the Soviet representatives listened to the discussion but neither objected, volunteered comment, nor asked questions. On 4 October CSAGI adopted the American proposal: "In view," stated that body,

of the great importance of observations during extended periods of time of extra-terrestrial radiations and geophysical phenomena in the upper atmosphere, and in view of the advanced state of present rocket techniques, CSAGI recommends that thought be given to the launching of small satellite vehicles, to their scientific instrumentation, and to the new problems associated with satellite experiments, such as power supply, telemetering, and orientation of the vehicle. 42

What had long seemed to most of the American public as pure Jules Verne and Buck Rogers fantasy now had the formal backing of the world's most eminent scientists.

Thus by the time the United States Committee for the IGY appointed a Feasibility Panel on Upper Atmosphere Research, three separate, albeit interrelated, groups of Americans were concerned with a possible earth satellite project: physicists, geodesists, and astronomers intent on basic research; officers of the three armed services looking for scientific means to military ends; and industrial engineers, including members of the American Rocket Society, who were eager to see an expanding role for their companies. The three were by no means mutually exclusive. The dedicated scientist, for instance, in keeping with Theodore von Kármán's example as a founder and official of the Aerojet General Corporation, might also be a shareholder in a research-oriented electronics or aircraft company, just as the industrialist might have a passionate interest in pure as well as applied science, and the military man might share the intellectual and practical interests of both the others. Certainly all three wanted improvements in equipment for national defense. Still the primary objective of each group differed from those of the other two. These differences were to have subtle effects on Vanguard's development. Although to some people the role of the National Academy appeared to be that of a Johnny-come-lately, the impelling force behind the satellite project nevertheless was the scientist speaking through governmental and quasi-governmental bodies.


The Institute for Creation Research

After reviewing evolutionists' speculations on the origin of life, Clemmey and Badham say, ". . . the dogma has arisen that Earth's early atmosphere was anoxic . . . ." 1 By "anoxic" they mean an atmosphere without free oxygen gas (O2), very different from the oxidizing mixture we breathe. The generally accepted model for the evolution of the atmosphere 2 supposes that before about 1.9 billion years ago the earth's atmosphere was a reducing mixture of nitrogen (N2), methane (CH4), water vapor (H2O), and possibly ammonia (NH3). Solar radiation and lightning discharges into the reducing gas mixture are believed by the consensus of evolutionists to have produced natural organic compounds and eventually life itself. The reason evolutionists postulate an anoxic and reducing atmosphere is mentioned by Miller and Orgel, "We believe that there must have been a period when the earth's atmosphere was reducing, because the synthesis of compounds of biological interest takes place only under reducing conditions." 3

If the dogma of the Precambrian reducing atmosphere is true, we would expect to find geologic evidence in the Archean and lower Proterozoic strata (believed by evolutionists to be older than 1.9 billion years). Although altered by diagenesis and metamorphism, the oldest sedimentary rocks should possess distinctive chemical composition and unusual mineral assemblages.

PLACERS OF UNSTABLE METALLIC MINERALS

Pebble and sand placer deposits of upper Archean and lower Proterozoic age occur in southern Canada, South Africa, southern India, and Brazil. Some of these are known to be cemented by a matrix containing mineral grains of pyrite (FeS2) and uraninite (UO2). Pyrite has the reduced state of iron (without oxygen, but with sulfur), which is unstable as sedimentary grains in the presence of oxygen. Uraninite has the partly oxidized state of uranium, which is oxidized to UO3 in the presence of the modern atmosphere. These unstable mineral grains in gravel and sand concentrates have been claimed by some geologists to indicate a reducing atmosphere at the time of deposition.

Although ancient placers of unstable metallic minerals occur in various places, these are by no means the only types of heavy mineral concentrates known from Archean and lower Proterozoic strata. Davidson 4 studied heavy mineral concentrates of completely modern aspect in strata nearly contemporaneous with the unstable concentrates. If deposition occurred under a reducing atmosphere, all sediments would be expected to contain pyrite. The normally oxidized concentrates could be better used to argue for an oxidizing atmosphere, with the unstable assemblages being accumulated under locally reducing conditions.

Clemmey and Badham 5 are bold enough to propose that the unstable minerals were disaggregated by mechanical weathering, with limited chemical and biological weathering, under an oxidizing atmosphere. Support comes from Zeschke 6 who has shown that uraninite is transported by the oxidizing water of the modern Indus River in Pakistan. Grandstaff 7 has shown that the ancient uraninite placers contain the form of thorium-rich uraninite which is most stable under modern oxidizing conditions. Pyrite has also been reported in modern alluvial sediments, especially in cold climates. 8 It is noteworthy that magnetite, an oxide of iron unstable in modern atmospheric conditions, is the most common mineral constituent of the black sand concentrates on modern beaches. Evidently, brief exposures to special oxidizing conditions are not sufficient to oxidize many unstable minerals. Thus, these metallic mineral placers do not require a reducing atmosphere.

IRON DEPOSITS

Another frequently cited evidence for an early reducing atmosphere comes from ancient iron ore deposits called "banded iron formations." These are common in Archean and Proterozoic strata, the best known being the ores of the Lake Superior region. The iron deposits consist typically of thin laminae of finely crystalline silica alternating with thin laminae of iron minerals. Magnetite (Fe3O4), an incompletely oxidized iron mineral, and hematite (Fe2O3), a completely oxidized iron mineral, are common in the banded iron formations. Magnetite may be considered a mixture of equal parts of FeO (iron in the less oxidized, ferrous state) and Fe2O3 (iron in the oxidized, ferric state). Because magnetite would be more stable in an atmosphere with lower oxygen pressure, some evolutionists have argued that banded iron accumulated during the transition from a reducing to a fully oxidizing atmosphere some 1.9 billion years ago. Soluble ferrous iron abundant in the early reducing sea, they suppose, was precipitated as oxygen produced the insoluble, ferric iron of the modern oxidizing sea.
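To make the "incompletely oxidized" point concrete, here is a worked equation of my own (an illustration, not something the article supplies): magnetite can be written as a mixed ferrous-ferric compound, and converting it entirely to hematite consumes free oxygen.

```latex
% Magnetite as a mixed ferrous/ferric oxide, and its full oxidation to hematite
\mathrm{Fe_3O_4} = \mathrm{FeO \cdot Fe_2O_3},
\qquad
4\,\mathrm{Fe_3O_4} + \mathrm{O_2} \longrightarrow 6\,\mathrm{Fe_2O_3}
```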

Three problems confront the transition hypothesis. First, the banded iron is not direct evidence of a reducing atmosphere; it only suggests that an earlier reducing atmosphere may have existed. Other options are certainly possible. The iron formations contain oxidized iron and would require an oxidizing atmosphere or other abundant source of oxygen!

A second problem is that the iron formations do not record a simultaneous, worldwide precipitation event, but are known to occur in older strata when the atmosphere was supposed to be reducing and in younger strata when the atmosphere was undoubtedly oxidizing. Dimroth and Kimberley 9 compare Archean iron formations (believed to have been deposited at the same time as unstable metallic mineral placers more than 2.3 billion years ago) with Paleozoic iron formations (believed to have been deposited in an oxidizing atmosphere less than 0.6 billion years ago). The similarities can be used to argue that the Archean atmosphere was oxidizing.

A third problem is that red, sandy, sedimentary rocks called "red beds" are found in association with banded iron formations. The red color in the rock is imparted by the fully oxidized iron mineral hematite, and the rocks are characteristically deficient in unoxidized or partly oxidized iron minerals (e.g., pyrite and magnetite). Red beds are known to occur below one of the world's largest Proterozoic iron formations and have been reported in Archean and lower Proterozoic rocks. 10 By their association with iron formations, red beds also indicate oxidizing conditions.

SULFATE DEPOSITS

When sulfur combines with metals under reducing conditions the result is sulfide minerals such as pyrite (FeS2), galena (PbS), and sphalerite (ZnS). When sulfur combines with metals under oxidizing conditions the result is sulfate minerals such as barite (BaSO4), celestite (SrSO4), anhydrite (CaSO4), and gypsum (CaSO4·2H2O). If the earth had a reducing atmosphere, we might expect extensive stratified sulfide precipitates in Archean sedimentary rocks. These would not have formed by volcanic-exhalative processes (as some sulfide minerals do even today), but directly from sea water (impossible in our modern oxidizing ocean). No deposits of this type have been found. Instead, Archean bedded sulfate has been reported from western Australia, South Africa, and southern India. 11 Barite appears to have replaced gypsum, which was the original mineral deposited as a chemical precipitate. This provides evidence of ancient oxidizing surface conditions and oxidizing ground water. The extent of the oxidizing sulfate environment and its relation to ancient atmospheric composition are speculation, but, again, we see evidence of Archean oxygen.

OXIDIZED WEATHERING CRUSTS

When a rock fragment is deposited, its surface is in contact with the external environment and can be altered chemically. Thus, pebbles and lava flows in the modern atmosphere weather to form oxide minerals at their surfaces. Even in the ocean this weathering occurs. In a similar fashion, Dimroth and Kimberley 12 report oxidative weathering of pebbles occurring below a banded iron formation and describe hematite weathering crusts on Archean pillow basalt (believed to represent a submarine lava flow). Again, Archean oxygen is indicated.

Much more could be written concerning the ancient atmosphere. Water-concentrated, unstable metallic minerals are not diagnostic of reducing conditions. The many mineral forms of ferrous and ferric iron in Archean and lower Proterozoic rocks are most suggestive of oxygen-rich conditions. Sulfate in the oldest rocks indicates oxygen in the water. Weathered crusts on ancient rocks appear to require oxygen in both air and water. To the question, "Did the early earth have a reducing atmosphere?" we can say that evidence of reducing conditions has not been documented in the rocks. An evolutionist can maintain that a reducing atmosphere existed before any rocks available for study formed, but such a belief is simply a matter of faith. The statement of Walker is true, "The strongest evidence is provided by conditions for the origin of life. A reducing atmosphere is required." 13 The proof of evolution rests squarely on the assumption of evolution!


The Search For Extraterrestrial Life: A Brief History

If (or, as some would say, when) humans make contact with alien intelligence, the scientists who devote their careers to the search will be our first point of contact. Here, we look at the history of one of humankind's most persistent fascinations.

For as long as humans have looked to the night sky to divine meaning and a place in the universe, we have let our minds wander to thoughts of distant worlds populated by beings unlike ourselves. The ancient Greeks were the first Western thinkers to consider formally the possibility of an infinite universe housing an infinite number of civilizations. Much later, in the 16th century, the Copernican model of a heliocentric solar system opened the door to all sorts of extraterrestrial musings (once the Earth was no longer at the center of creation and was merely one body in a vast cloud of celestial objects, who was to say God hadn’t set other life-sustaining worlds into motion?). While that line of thinking never sat well with the church, speculation about alien life kept pace with scientific inquiry up through the Enlightenment and on into the twentieth century.

But it wasn’t until the close of the 1950s that anyone proposed a credible way to look for these distant, hypothetical neighbors. The space age had dawned, and science was anxious to know what lay in wait beyond the confines of our thin, insulating atmosphere. The Russians had, in 1957 and 1958, launched the first three Sputnik satellites into Earth orbit; the United States was poised to launch, in 1960, the successful Pioneer 5 interplanetary probe out toward Venus. We were readying machines to travel farther than most of us could imagine, but in the context of the vast reaches of outer space, we would come no closer to unknown planetary systems than if we’d never left Earth at all.

Our only strategy was to hope intelligent life had taken root elsewhere and evolved well beyond our technological capabilities—to the point at which they could call us across the empty plains of space. Our challenge was to figure out which phone might be ringing and how exactly to pick it up. And so it was in mid-September of 1959 that two young physicists at Cornell University authored a two-page article in Nature magazine entitled “Searching for Interstellar Communications.” With that, the modern search for extraterrestrial life was born, and life on Earth would never again be the same.


The Birth of SETI

Giuseppe Cocconi and Philip Morrison—two physicists at Cornell—began their 1959 article in Nature magazine quite frankly: we can’t reliably estimate the probability of intelligent life out in the universe, but we can’t dismiss the possibility of it either. We evolved and we’re intelligent, so wouldn’t it stand to reason that alien civilizations could arise on planets around other sun-like stars? In all likelihood, some of those civilizations would be older and more advanced than ours, would recognize our Sun as a star that could host life, and would want to make contact. The central question of the paper was then: how would the beings send out their message? Electromagnetic waves were the most logical choice. They travel at the speed of light and would not disperse over the tremendous distances between stars. But at which frequency? The electromagnetic spectrum is far too wide to scan in its entirety, so they made an assumption that has remained central to SETI research ever since. They would listen in at 1420 MHz, the emission frequency of neutral hydrogen, the most abundant element in the universe. They reasoned it was the one obvious astronomical commonality we would share with an unknown civilization and that they would recognize it too.
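As a rough, back-of-the-envelope check (a sketch of my own, assuming the standard value for the hydrogen hyperfine line), 1420 MHz is simply the familiar 21 cm line of neutral hydrogen expressed as a frequency:

```python
# Convert the neutral-hydrogen emission frequency to its wavelength.
c = 299_792_458              # speed of light, m/s
f_hydrogen = 1.420405751e9   # hyperfine ("21 cm") line of neutral hydrogen, Hz

wavelength_cm = 100 * c / f_hydrogen
print(f"{wavelength_cm:.1f} cm")  # ~21.1 cm
```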

The Drake Equation

Only a few years later, in 1961, the nebulous assumptions Cocconi and Morrison parlayed in their article got a bona fide mathematical equation. Frank Drake, along with a handful of other astronomers and scientists (including Carl Sagan), met in Green Bank, West Virginia, to hash out the formula and variables necessary to make an educated guess at just how many intelligent civilizations might be living in our galaxy. As it turns out, assigning numbers to nebulous assumptions nets you an answer with enough variance to make you wonder if you were really clarifying those assumptions in the first place. The group came up with a range from less than a thousand to nearly a billion. You might think the formula would have been refined over the years, but that is not the case. It has held up surprisingly well (though, for such a nebulous equation, “held up” is a relative phrase). Data collected since the 1960s on measurable quantities, like how often sun-like stars form and how many of those stars have planets, has proven the original estimates to have been relatively accurate. The rest of the variables, such as what fraction of life evolves to become intelligent and what the average lifetime of an intelligent civilization is, may never be quantified. Still, the equation has served as a focal point for SETI investigations over the years and continues to be a valuable framework, however controversial.
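For orientation, the equation multiplies seven factors: the rate of star formation, the fraction of stars with planets, habitable planets per system, the fractions of those on which life, intelligence, and communication arise, and the lifetime of a communicating civilization. The sketch below uses purely illustrative inputs of my own choosing (not the Green Bank group's figures) to show how pessimistic versus optimistic guesses swing the answer across many orders of magnitude.

```python
# Drake equation: N = R* * f_p * n_e * f_l * f_i * f_c * L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Rough estimate of the number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs only; these are hypothetical, not historical, values.
pessimistic = drake(R_star=1, f_p=0.2, n_e=1, f_l=0.1, f_i=0.01, f_c=0.1, L=1_000)
optimistic = drake(R_star=10, f_p=0.5, n_e=2, f_l=1.0, f_i=0.5, f_c=0.5, L=100_000_000)

print(pessimistic, optimistic)  # a small fraction of one civilization versus hundreds of millions
```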

Astrobiology

When we aren’t looking for beacons from intelligent life forms in deep space, our studies in the realm of extraterrestrial life turn inward. How did life on Earth originate? How did intelligent life on Earth originate? These are two of the key questions at the heart of the interdisciplinary field known as astrobiology. While much of the work of astrobiologists can be speculative—extrapolating what may be elsewhere from what we know to be on Earth—that speculation must first come from solid research on what we see around us. From what we know of life, it’s generally assumed that extraterrestrials will be carbon-based, will need the presence of liquid water, and will exist on a planet around a sun-like star. Astrobiologists use those guidelines as the starting point for looking outward. Of course, the discipline includes traditional astronomy and geology as well. These are necessary fields for understanding where we should be looking for life outside of Earth and which properties we should seek when studying stars and their planets. While astrobiologists are looking deep into space for evidence of all these things, the largest single object of study is currently right in our literal backyard: Mars.

Life on Mars

We can safely assume we won’t find any little green men on Mars. Likely, too, that we won’t come upon any grey humanoid beings with almond-shaped, black onyx eyes and elongated skulls. But the chances are good that we could find alien life in the form of bacteria or extremophiles, which are bacteria-like organisms that can live in seemingly inhospitable environments. We have sent a variety of probes, landers, and orbiters to Mars, from Mariner 4 in 1965 to the Phoenix mission, which landed in the planet’s polar region this past May and continues to send back a tremendous amount of data. What we’re looking for first and foremost is water, whether liquid or ice, one of the three keys to extraterrestrial life. “I think it’s probably the best bet for life nearby,” says Dr. Seth Shostak, Senior Astronomer at the SETI Institute. “You could argue that some of the Jovian moons—Europa, Ganymede, Callisto—or Titan and Enceladus, these moons of Saturn, might have life. Even Venus might have life in the upper atmosphere. All those are possible because all those are worlds that might have liquid water. Mars, you can see things on the ground, you can go dig around in the dirt, so we have a lot of people who worry about Mars. They’re looking for life and we hope it’s one of the right places.” Even without visiting the red planet, scientists have been poring over meteorites from Mars, tracing fine lines in the rocks which they have theorized were left by bacteria. The trails contain no DNA, however, so the theory remains unproven.

Project Cyclops

Cocconi and Morrison’s 1959 article about a systematic search for intelligent life took over a decade to filter through the various arteries of the burgeoning exploratory programs at NASA before it took the shape of a formalized research team. Known as Project Cyclops, the team and its resulting report were the first large-scale investigation into practical SETI. It outlined many of the same conclusions Cocconi and Morrison had reached: that SETI was a legitimate scientific undertaking and that it should be done at the low-frequency end of the microwave spectrum. What did not help the endeavor was the report’s scope of cost, scale, and timeline. It called for a budget of 6 to 10 billion dollars to build and maintain a large radio telescope array over 10 to 15 years. It also made note of the fact that the search would likely take decades to be successful, requiring “a long term funding commitment.” Certainly that was the project’s death knell, and indeed, funding for Project Cyclops was terminated shortly after the report was issued. It would be 21 years before NASA finally implemented a working SETI program, called the High Resolution Microwave Survey Targeted Search (HRMS). But, like its predecessor, it would be exceptionally short-lived, losing operational funding nearly a year to the day later, in October 1993.

Pioneer Plaques (Pioneers 10 and 11)

As the search for signals from intelligent life was gaining credibility in the late 60s and early 70s, plans were at the same time underway to send out messages of our own. The mission of the Pioneer 10 and 11 spacecraft, launched in 1972 and 1973, was to explore the Asteroid Belt, Jupiter, and Saturn; after that point, they would continue their trajectories past Pluto and on into the interstellar medium. With that distant course in mind, Carl Sagan was approached to design a message that an alien race might decipher should either craft be one day intercepted. Together with Frank Drake, Sagan designed a plaque which shows the figures of a man and woman to scale with an image of the spacecraft, a diagram of the wavelength and frequency of hydrogen, and a series of maps detailing the location of our Sun, solar system, and the path the Pioneer took on its way out. It was a pictogram designed to cram the most information possible into the smallest space while still being readable, but was criticized for being too difficult to decode. While Pioneer 10 became the first man-made object to leave the solar system in 1983, it will be at least two million years before either craft reaches another star.

Arecibo Message

Since the advent of powerful radio and television broadcasting antennas, the Earth has been a relatively noisy place. News and entertainment signals have for decades been bounced off the upper reaches of our atmosphere, with plenty leaking out every which way into space. Those not pulled in by our TVs could one day reach distant stars, in a kind of scatter-shot bulletin announcing our presence through I Love Lucy and Seinfeld. (An unintended consequence of satellite and cable transmissions is the gradual end of high-powered radio signals, making the Earth a much more difficult place to “hear” for anyone listening in.) In 1974, however, a formalized message was beamed out from the newly renovated Arecibo telescope in Puerto Rico. Again designed by Drake and Sagan, the binary radio signal held within it information about the makeup of our DNA and pictographs of a man, the solar system, and the Arecibo telescope. The broadcast was ultimately more a symbolic demonstration of the power of the new Arecibo equipment than a systematic attempt at making contact with ET. The star cluster to which the signal was sent was chosen largely because it would be in the sky during the remodeling ceremony at which the broadcast was to take place. What’s more, the cluster will have moved out of range of the beam during the 25,000 years it will take the message to get there. It was an indication that we would likely not be in the business of sending messages, as it was much cheaper and easier to use radio telescopes to listen rather than talk. But Sagan and Drake would have one more shot at deep space communications in 1977 with the launch of the Voyager probes.

Voyager Golden Records (Voyagers 1 and 2)

While the Pioneer Plaques were devised during a compressed timeline of three weeks and the Arecibo Message was sent according to the timetable of a cocktail party, the Voyager Golden Records were meant to be a brief compendium of the human experience on Earth and so were given the time and NASA committee resources to make them exceptional. The golden records contain 115 video images, greetings spoken in 55 languages, 90 minutes of music from around the world, as well as a selection of natural sounds like birdsongs, surf, and thunder. Again, hydrogen is the key to unlocking the messages; the same lowest-states diagram that appeared on the Pioneer Plaques is here, describing the map that locates the Sun in the Milky Way. It informs the discoverer how to play the record, at what speed, and what to expect when looking for the video images. It’s even electroplated with a sample of uranium so that it might be half-life dated far in the future. Since the Voyager probes are moving much more slowly than radio waves, it will take them nearly twice as long as the Arecibo Message to reach their target stars. Even then, after 40,000 years, they’ll only come within about a light-year and a half of them. That’s roughly 2,400 times the distance from the Sun to Pluto. It’s an understatement to say that any of these beacons we’ve sent have a very long shot of reaching an intelligent civilization, if one exists and happens to exist in the general direction in which they’re traveling. It’s a reminder of just how inhuman the scales become when we measure the distances in outer space and try to find ways to best them in our search for others like us.

Meteorites

As astrobiologists contemplate the origin of life on our planet, they often look to external sources for the ingredients. Asteroids, comets, and meteorites are the ancient relics of the birth of our solar system. They’re the icy and rocky bits zipping around, crashing into each other and into moons and planets, delivering minerals, water, and, as it turns out, amino acids. It’s amino acids—twenty in particular—that are the basis for forming proteins, which in turn are the basis for life. So far, we have only discovered eight of those twenty in meteorites. Where the others formed may be one of the secrets to life on Earth and possibly life on other planets. In the historic 1953 Miller-Urey experiment, a concoction of water and the elements of a primordial atmosphere was mixed and electrified to simulate the soup of early Earth. At the end of a week, amino acids had formed. Of course, there are myriad other unknown processes which need to occur to take us from amino acids to life. As Dr. Seth Shostak of the SETI Institute put it, “just because you have a brickyard in your backyard doesn’t mean you’re going to see a skyscraper appear one day.”

Extremophiles

Studying extremophiles may be as close as we get to studying aliens before we actually find extraterrestrial life. Extremophiles are organisms which live in environments inhospitable to all other life as we know it. Some may even physically require these extremes of temperature, pressure, and acidity to survive. They have been found miles under the ocean’s surface and at the tops of the Himalayas, from the poles to the equator, in temperatures ranging from nearly absolute zero to over 300 degrees Fahrenheit. Most extremophiles are single-celled microorganisms, like the members of the domain Archaea, which may account for 20 percent of the Earth’s biomass. These are the kind of creatures we would expect to find on Mars. But maybe the most alien-like of all extremophiles known to man are the millimeter-long tardigrades, or water bears, so called because they have the ability to undergo cryptobiosis. It’s an extreme form of hibernation during which all metabolic activity comes to a near complete standstill, allowing the animals to survive everything from radiation doses that would be massively fatal to humans to the vacuum of space. Some argue this suspended state doesn’t technically qualify tardigrades as extremophiles because they aren’t thriving in these environments, they are merely protecting themselves from death. Nevertheless, the more we understand about these organisms’ ability to withstand environments thought to be inhospitable to life, the closer we may come to discovering them outside our planet.

The Wow! Signal

Though NASA killed Project Cyclops before it began, that didn’t mean no one was listening in on the cosmos during the 1970s. Several small-scale SETI projects existed around the country and around the world, many of them operating on university equipment. One of the most prominent—and longest-running—SETI efforts was the Big Ear radio telescope operated by Ohio State University. The Big Ear was the size of three football fields and looked like a giant silver parking lot with scaffolding for enormous drive-in movie screens at either end. On August 15, 1977, the Big Ear received a signal for 72 seconds which went so far off the charts that the astronomer monitoring the signal print-outs circled the alphanumeric sequence and wrote “Wow!” in the margin. The pattern of the signal rose and fell perfectly in sync with the way the telescope was moving through its beam of focus. As it came into view, it became progressively stronger. If the signal had been terrestrial, it would have come in at full strength. It was the best anyone had yet seen. Unfortunately, two other attributes of the Wow! signal worked against it being a legitimate ET beacon. The first had to do with how the Big Ear collected radio waves. It used two collectors, spaced three minutes apart, side by side. Any signal caught by the first would have to be caught by the second three minutes later, but that wasn’t the case with the Wow! signal. Only the first horn caught it. Even more discouraging, it hasn’t been seen since. Many operations have tried, using more sensitive equipment and focusing for much longer on the alleged source, to no avail.

Project Phoenix and the SETI Institute

NASA’s High Resolution Microwave Survey Targeted Search really never stood a chance. Just as soon as it got underway in 1992, members of Congress began to hold it up as a waste of taxpayer money and deride it as frivolous (even though it accounted for less than 0.1 percent of NASA’s annual operating budget). When it was cancelled in the fall of 1993, the SETI Institute moved in to save the core science and engineering team and continue the work under its auspices. Renamed Project Phoenix, it ran for a decade, from 1994 to 2004, entirely on funding from private donations. The project used a variety of large telescopes from around the world to conduct its research, observing nearly 800 stars within about 240 light-years of Earth. After sweeping through a billion frequency channels for each of the 800 stars over the course of 11,000 observation hours, the program ended without having detected a viable ET signal.

SETI@home at UC Berkeley

If you know anything about SETI and are of a certain age, chances are you know about it because of the SETI@home project at the University of California, Berkeley. SETI@home was one of the earliest successful distributed computing projects. The concept behind these projects works like this: researchers who have tremendous amounts of raw data and no possible way to process it all themselves split it into tiny chunks and subcontract it out. When you sign up for a distributed project, your computer gets one of these chunks and works on it when it’s not busy, say when you leave your desk to get a coffee or take lunch. When your computer finishes, it sends that chunk back and asks for another. Taken as a whole, distributed computing projects are able to harness an otherwise impossible amount of processing power. The SETI@home project currently gets all its data from the Arecibo radio telescope. It piggybacks on other astronomical research by collecting signals from wherever the telescope happens to be pointed during the brief moments when it is not being used. While the project has not yet detected an ET signal, it has been tremendously beneficial in proving that distributed computing solutions do work and work well, having logged over two million years of aggregate computing time.
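That split-process-return loop is simple enough to sketch in a few lines. The snippet below is a toy illustration only, with invented function names and chunk sizes; it is not SETI@home's actual client protocol.

```python
# Toy model of a distributed-computing work loop: a server splits raw data into
# small "work units"; each volunteer machine processes one unit during idle time
# and sends back a summary result.

def split_into_work_units(samples, unit_size=4096):
    """Chop a long list of samples into fixed-size chunks."""
    return [samples[i:i + unit_size] for i in range(0, len(samples), unit_size)]

def process_work_unit(unit):
    """Stand-in for the real signal analysis: report the strongest sample."""
    return max(unit, default=0.0)

raw_data = [0.1] * 10_000  # pretend telescope samples
results = [process_work_unit(u) for u in split_into_work_units(raw_data)]
print(len(results), "work units processed")
```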

Vatican Observatory

Galileo wasn’t the only astronomer to have been accused by the Catholic Church of heresy for his beliefs in a heliocentric universe. Giordano Bruno was burned at the stake in 1600 for arguing that every star had its own planetary system. How far the Church has come, then, with the announcement earlier this year from the Vatican Observatory that you can believe in God and in aliens and it isn’t a contradiction in faith. The Reverend José Gabriel Funes, director of the Observatory, says the sheer size of the universe points to the possibility of extraterrestrial life. Because an ET would be part of creation, it would be considered one of God’s creatures.

Extrasolar Planets

If it could be said a single discovery kick-started the search for extrasolar planets, it would be that of 51 Pegasi b, in 1995. It was the first extrasolar planet to be found orbiting a normal star and was discovered using the same Doppler effect we experience every day when a siren passes by us at high speed. It was a popular news story at the time—finally we had confirmation that just maybe our solar system was not unique. Since that day, we’ve learned how common, in fact, our system may be. As of early June 2008, the number of confirmed extrasolar planets is nearly 300; it climbs rapidly every year as our technologies for detection grow more sophisticated. To be sure, the vast majority of these planets are gas giants in close, short orbits around their stars—not the kind of celestial bodies on which we expect to find life. That’s not to say that Earth-like, terrestrial planets aren’t out there as well. It’s just that the gas giants are much easier to “see” when we go looking because they tend to zip around their parent stars in a matter of days. We watch those stars for variations in the way they give off light, but don’t actually spot the planets themselves because they are so many magnitudes dimmer than their parent stars. Gas giants are large enough and move quickly enough to produce a noticeable effect on their stars from here on Earth, but for a planet similar to Earth’s size, that’s not the case. In order to find an Earth-sized planet, we would need to watch a star nonstop for years on end and be able to detect the slightest change in brightness as the planet passed in front of it (known as a transit). Fortunately for SETI enthusiasts, NASA has just that mission on its schedule for launch next year.

The Kepler Mission

Looking for planets is necessarily hard work. In the astronomical scheme of things, most planets are very small, and Earth-like planets are tremendously, even imperceptibly, small. It is difficult enough for astronomers to detect planets on the scale of Jupiter; it is nearly impossible to find an Earth, some 1,000 times smaller. NASA’s Kepler Mission is the solution to that problem. It’s a space telescope designed to point itself at one field of stars in our galaxy for nearly four years, never wavering from that single point of focus, continuously monitoring the brightness of more than 100,000 stars. The idea behind the mission is to use the transit method of discovery to find extrasolar planets like Earth. A transit occurs when a planet passes between its star and the observer (the Kepler telescope), during which time the star appears momentarily to dim, lasting anywhere from 2 to 16 hours. Of course, the orbit of the planet must be lined up with our plane of view, the chances of which are 0.5 percent for any given sun-like star. But by tracking 100,000 stars, NASA hopes at the very least to detect 50 Earth-sized planets by the time the mission is complete, and more if the observable planets prove to be up to twice as large as Earth.
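Two of the numbers in that description can be checked with simple geometry. The sketch below is a rough estimate under the usual simplifying assumptions (a circular one-AU orbit around a Sun-like star), not NASA's mission analysis: the chance that a randomly oriented Earth-like orbit happens to transit is roughly the stellar radius divided by the orbital radius, and the dip a transit produces is the square of the planet-to-star radius ratio.

```python
# Geometric transit probability and transit depth for an Earth analog
# around a Sun-like star (circular-orbit approximation).
R_sun = 6.957e8      # m
R_earth = 6.371e6    # m
AU = 1.496e11        # m

transit_probability = R_sun / AU            # ~0.005, i.e. about 0.5%
transit_depth = (R_earth / R_sun) ** 2      # ~8.4e-5, i.e. ~0.008% dimming

print(f"{transit_probability:.2%}, {transit_depth:.6f}")
```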


Pauli's proposal

The neutrino [a] was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin). In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the observed continuous energy spectra in beta decay, Pauli hypothesized an undetected particle that he called a "neutron", using the same -on ending employed for naming both the proton and the electron. He considered that the new particle was emitted from the nucleus together with the electron or beta particle in the process of beta decay. [16] [b]

James Chadwick discovered a much more massive neutral nuclear particle in 1932 and named it a neutron also, leaving two kinds of particles with the same name. Earlier (in 1930) Pauli had used the term "neutron" for both the neutral particle that conserved energy in beta decay and a presumed neutral particle in the nucleus; initially he did not consider these two neutral particles as distinct from each other. [16] The word "neutrino" entered the scientific vocabulary through Enrico Fermi, who used it during a conference in Paris in July 1932 and at the Solvay Conference in October 1933, where Pauli also employed it. The name (the Italian equivalent of "little neutral one") was jokingly coined by Edoardo Amaldi during a conversation with Fermi at the Institute of Physics on Via Panisperna in Rome, in order to distinguish this light neutral particle from Chadwick's heavy neutron. [17]

In Fermi's theory of beta decay, Chadwick's large neutral particle could decay to a proton, electron, and the smaller neutral particle (now called an electron antineutrino):
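n → p + e− + ν̄e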

Fermi's paper, written in 1934, unified Pauli's neutrino with Paul Dirac's positron and Werner Heisenberg's neutron–proton model and gave a solid theoretical basis for future experimental work. The journal Nature rejected Fermi's paper, saying that the theory was "too remote from reality". He submitted the paper to an Italian journal, which accepted it, but the general lack of interest in his theory at that early date caused him to switch to experimental physics. [18] : 24 [19]

By 1934, there was experimental evidence against Bohr's idea that energy conservation is invalid for beta decay: At the Solvay conference of that year, measurements of the energy spectra of beta particles (electrons) were reported, showing that there is a strict limit on the energy of electrons from each type of beta decay. Such a limit is not expected if the conservation of energy is invalid, in which case any amount of energy would be statistically available in at least a few decays. The natural explanation of the beta decay spectrum as first measured in 1934 was that only a limited (and conserved) amount of energy was available, and a new particle was sometimes taking a varying fraction of this limited energy, leaving the rest for the beta particle. Pauli made use of the occasion to publicly emphasize that the still-undetected "neutrino" must be an actual particle. [18] : 25 The first evidence of the reality of neutrinos came in 1938 via simultaneous cloud-chamber measurements of the electron and the recoil of the nucleus. [20]

Direct detection

In 1942, Wang Ganchang first proposed the use of beta capture to experimentally detect neutrinos. [21] In the 20 July 1956 issue of Science, Clyde Cowan, Frederick Reines, Francis B. "Kiko" Harrison, Herald W. Kruse, and Austin D. McGuire published confirmation that they had detected the neutrino, [22] [23] a result that was rewarded almost forty years later with the 1995 Nobel Prize. [24]

In this experiment, now known as the Cowan–Reines neutrino experiment, antineutrinos created in a nuclear reactor by beta decay reacted with protons to produce neutrons and positrons:
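ν̄e + p → n + e+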

The positron quickly finds an electron, and they annihilate each other. The two resulting gamma rays (γ) are detectable. The neutron can be detected by its capture on an appropriate nucleus, releasing a gamma ray. The coincidence of both events – positron annihilation and neutron capture – gives a unique signature of an antineutrino interaction.
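The delayed-coincidence idea can be sketched in a few lines of code. This is a simplified illustration of the selection logic only; the function name and time window are invented for the example and are not taken from the Cowan–Reines setup.

```python
# Simplified delayed-coincidence filter: pair each prompt positron-annihilation
# signal with a neutron-capture signal that follows within a short time window.
# The 10-microsecond window here is illustrative, not the experiment's setting.

def find_candidates(prompt_times_us, capture_times_us, window_us=10.0):
    """Return (prompt, capture) time pairs consistent with one antineutrino event."""
    candidates = []
    for t_prompt in prompt_times_us:
        for t_capture in capture_times_us:
            if 0.0 < t_capture - t_prompt <= window_us:
                candidates.append((t_prompt, t_capture))
                break
    return candidates

print(find_candidates([100.0, 500.0], [103.5, 900.0]))  # -> [(100.0, 103.5)]
```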

In February 1965, the first neutrino found in nature was identified by a group which included Jacques Pierre Friederich (Friedel) Sellschop. [25] The experiment was performed in a specially prepared chamber at a depth of 3 km in the East Rand ("ERPM") gold mine near Boksburg, South Africa. A plaque in the main building commemorates the discovery. The experiments also implemented a primitive neutrino astronomy and looked at issues of neutrino physics and weak interactions. [26]

Neutrino flavor

The antineutrino discovered by Cowan and Reines is the antiparticle of the electron neutrino.

In 1962, Leon M. Lederman, Melvin Schwartz and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino (already hypothesised with the name neutretto), [27] which earned them the 1988 Nobel Prize in Physics.

When the third type of lepton, the tau, was discovered in 1975 at the Stanford Linear Accelerator Center, it was also expected to have an associated neutrino (the tau neutrino). The first evidence for this third neutrino type came from the observation of missing energy and momentum in tau decays, analogous to the beta decay behavior that led to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab; its existence had already been inferred from both theoretical consistency and experimental data from the Large Electron–Positron Collider. [28]

Solar neutrino problem

In the 1960s, the now-famous Homestake experiment made the first measurement of the flux of electron neutrinos arriving from the core of the Sun and found a value that was between one third and one half the number predicted by the Standard Solar Model. This discrepancy, which became known as the solar neutrino problem, remained unresolved for some thirty years, while possible problems with both the experiment and the solar model were investigated, but none could be found. Eventually, it was realized that both were actually correct and that the discrepancy between them was due to neutrinos being more complex than was previously assumed. It was postulated that the three neutrinos had nonzero and slightly different masses, and could therefore oscillate into undetectable flavors on their flight to the Earth. This hypothesis was investigated by a new series of experiments, thereby opening a new major field of research that still continues. Eventual confirmation of the phenomenon of neutrino oscillation led to two Nobel prizes, to Raymond Davis, Jr., who conceived and led the Homestake experiment, and to Art McDonald, who led the SNO experiment, which could detect all of the neutrino flavors and found no deficit. [29]

Oscillation

A practical method for investigating neutrino oscillations was first suggested by Bruno Pontecorvo in 1957, using an analogy with kaon oscillations; over the subsequent 10 years, he developed the mathematical formalism and the modern formulation of vacuum oscillations. In 1985 Stanislav Mikheyev and Alexei Smirnov (expanding on 1978 work by Lincoln Wolfenstein) noted that flavor oscillations can be modified when neutrinos propagate through matter. This so-called Mikheyev–Smirnov–Wolfenstein effect (MSW effect) is important to understand because many neutrinos emitted by fusion in the Sun pass through the dense matter in the solar core (where essentially all solar fusion takes place) on their way to detectors on Earth.

Starting in 1998, experiments began to show that solar and atmospheric neutrinos change flavors (see Super-Kamiokande and Sudbury Neutrino Observatory). This resolved the solar neutrino problem: the electron neutrinos produced in the Sun had partly changed into other flavors which the experiments could not detect.

Although individual experiments, such as the set of solar neutrino experiments, are consistent with non-oscillatory mechanisms of neutrino flavor conversion, taken altogether, neutrino experiments imply the existence of neutrino oscillations. Especially relevant in this context are the reactor experiment KamLAND and the accelerator experiments such as MINOS. The KamLAND experiment has indeed identified oscillations as the neutrino flavor conversion mechanism involved in the solar electron neutrinos. Similarly MINOS confirms the oscillation of atmospheric neutrinos and gives a better determination of the mass squared splitting. [30] Takaaki Kajita of Japan, and Arthur B. McDonald of Canada, received the 2015 Nobel Prize for Physics for their landmark finding, theoretical and experimental, that neutrinos can change flavors.

Cosmic neutrinos

As well as specific sources, a general background level of neutrinos is expected to pervade the universe, theorized to occur due to two main sources.

Cosmic neutrino background (Big Bang originated)

Around 1 second after the Big Bang, neutrinos decoupled, giving rise to a background level of neutrinos known as the cosmic neutrino background (CNB).

Diffuse supernova neutrino background (Supernova originated)

Raymond Davis, Jr. and Masatoshi Koshiba were jointly awarded the 2002 Nobel Prize in Physics. Both conducted pioneering work on solar neutrino detection, and Koshiba's work also resulted in the first real-time observation of neutrinos from the SN 1987A supernova in the nearby Large Magellanic Cloud. These efforts marked the beginning of neutrino astronomy. [31]

SN 1987A represents the only verified detection of neutrinos from a supernova. However, many stars have gone supernova in the universe, leaving a theorized diffuse supernova neutrino background.

Flavor, mass, and their mixing

Weak interactions create neutrinos in one of three leptonic flavors: electron neutrinos (νe), muon neutrinos (νμ), or tau neutrinos (ντ), associated with the corresponding charged leptons, the electron (e−), muon (μ−), and tau (τ−), respectively. [32]

Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses; each neutrino flavor state is a linear combination of the three discrete mass eigenstates. Although only differences of squares of the three mass values are known as of 2016, [8] experiments have shown that these masses are tiny in magnitude. From cosmological measurements, it has been calculated that the sum of the three neutrino masses must be less than one-millionth that of the electron. [1] [9]

More formally, neutrino flavor eigenstates (creation and annihilation combinations) are not the same as the neutrino mass eigenstates (simply labeled "1", "2", and "3"). As of 2016, it is not known which of these three is the heaviest. In analogy with the mass hierarchy of the charged leptons, the configuration with mass 2 being lighter than mass 3 is conventionally called the "normal hierarchy", while in the "inverted hierarchy", the opposite would hold. Several major experimental efforts are underway to help establish which is correct. [33]

A neutrino created in a specific flavor eigenstate is in an associated specific quantum superposition of all three mass eigenstates. This is possible because the three masses differ so little that they cannot be experimentally distinguished within any practical flight path, due to the uncertainty principle. The proportion of each mass state in the produced pure flavor state has been found to depend profoundly on that flavor. The relationship between flavor and mass eigenstates is encoded in the PMNS matrix. Experiments have established values for the elements of this matrix. [8]
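In the standard notation (a textbook relation stated here for concreteness, under the conventional three-flavor framework), each flavor eigenstate is the PMNS-weighted superposition of the three mass eigenstates:

```latex
% Flavor eigenstates as superpositions of mass eigenstates via the PMNS matrix U
|\nu_\alpha\rangle = \sum_{i=1}^{3} U_{\alpha i}^{*}\,|\nu_i\rangle,
\qquad
\alpha \in \{e, \mu, \tau\}
```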

A non-zero mass allows neutrinos to possibly have a tiny magnetic moment; if so, neutrinos would interact electromagnetically, although no such interaction has ever been observed. [34]

Flavor oscillations

Neutrinos oscillate between different flavors in flight. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino, as defined by the flavor of the charged lepton produced in the detector. This oscillation occurs because the three mass state components of the produced flavor travel at slightly different speeds, so that their quantum mechanical wave packets develop relative phase shifts that change how they combine to produce a varying superposition of three flavors. Each flavor component thereby oscillates as the neutrino travels, with the flavors varying in relative strengths. The relative flavor proportions when the neutrino interacts represent the relative probabilities for that flavor of interaction to produce the corresponding flavor of charged lepton. [6] [7]
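For two flavors in vacuum, the oscillation probability takes a familiar closed form (a standard textbook result, quoted here for orientation; the full three-flavor case adds further mixing angles and a CP-violating phase):

```latex
% Two-flavor vacuum oscillation probability, with L in km, E in GeV, and Δm² in eV²
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,
\sin^2\!\left( 1.27\,\frac{\Delta m^2\,L}{E} \right)
```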

There are other possibilities in which neutrinos could oscillate even if they were massless: if Lorentz symmetry were not an exact symmetry, neutrinos could experience Lorentz-violating oscillations. [35]

Mikheyev–Smirnov–Wolfenstein effect

Neutrinos traveling through matter, in general, undergo a process analogous to light traveling through a transparent material. This process is not directly observable because it does not produce ionizing radiation, but gives rise to the MSW effect. Only a small fraction of the neutrino's energy is transferred to the material. [36]

Antineutrinos

For each neutrino, there also exists a corresponding antiparticle, called an antineutrino, which also has no electric charge and half-integer spin. They are distinguished from the neutrinos by having opposite signs of lepton number and opposite chirality. As of 2016, no evidence has been found for any other difference. In all observations so far of leptonic processes (despite extensive and continuing searches for exceptions), there is never any change in overall lepton number; for example, if the total lepton number is zero in the initial state, electron neutrinos appear in the final state together with only positrons (anti-electrons) or electron antineutrinos, and electron antineutrinos appear with electrons or electron neutrinos. [10] [11]

Antineutrinos are produced in nuclear beta decay together with a beta particle, in which, e.g., a neutron decays into a proton, electron, and antineutrino. All antineutrinos observed thus far possess right-handed helicity (i.e. only one of the two possible spin states has ever been seen), while neutrinos are left-handed. Nevertheless, as neutrinos have mass, their helicity is frame-dependent, so it is the related frame-independent property of chirality that is relevant here.

Antineutrinos were first detected as a result of their interaction with protons in a large tank of water. This was installed next to a nuclear reactor as a controllable source of the antineutrinos (See: Cowan–Reines neutrino experiment). Researchers around the world have begun to investigate the possibility of using antineutrinos for reactor monitoring in the context of preventing the proliferation of nuclear weapons. [37] [38] [39]

Majorana mass

Because antineutrinos and neutrinos are neutral particles, it is possible that they are the same particle. Particles that have this property are known as Majorana particles, named after the Italian physicist Ettore Majorana, who first proposed the concept. For the case of neutrinos this theory has gained popularity because it can be used, in combination with the seesaw mechanism, to explain why neutrino masses are so small compared to those of the other elementary particles, such as electrons or quarks. Majorana neutrinos would have the property that the neutrino and antineutrino could be distinguished only by chirality; what experiments observe as a difference between the neutrino and antineutrino could simply be due to one particle with two possible chiralities.

As of 2019, it is not known whether neutrinos are Majorana or Dirac particles. It is possible to test this property experimentally. For example, if neutrinos are indeed Majorana particles, then lepton-number-violating processes such as neutrinoless double beta decay would be allowed, while they would not be if neutrinos are Dirac particles. Several experiments have been and are being conducted to search for this process, e.g., GERDA, [40] EXO, [41] SNO+, [42] and CUORE. [43] The cosmic neutrino background is also a probe of whether neutrinos are Majorana particles, since there should be a different number of cosmic neutrinos detected in either the Dirac or Majorana case. [44]

Nuclear reactions Edit

Neutrinos can interact with a nucleus, changing it to another nucleus. This process is used in radiochemical neutrino detectors. In this case, the energy levels and spin states within the target nucleus have to be taken into account to estimate the probability for an interaction. In general the interaction probability increases with the number of neutrons and protons within a nucleus. [29] [45]

It is very hard to uniquely identify neutrino interactions among the natural background of radioactivity. For this reason, in early experiments a special reaction channel was chosen to facilitate the identification: the interaction of an antineutrino with one of the hydrogen nuclei in the water molecules. A hydrogen nucleus is a single proton, so simultaneous nuclear interactions, which would occur within a heavier nucleus, don't need to be considered for the detection experiment. Within a cubic metre of water placed right outside a nuclear reactor, only relatively few such interactions can be recorded, but the setup is now used for measuring the reactor's plutonium production rate.

Induced fission Edit

Very much like neutrons do in nuclear reactors, neutrinos can induce fission reactions within heavy nuclei. [46] So far, this reaction has not been measured in a laboratory, but is predicted to happen within stars and supernovae. The process affects the abundance of isotopes seen in the universe. [45] Neutrino fission of deuterium nuclei has been observed in the Sudbury Neutrino Observatory, which uses a heavy water detector.

Types Edit

Neutrinos in the Standard Model of elementary particles
Generation 1: electron neutrino (νe) and electron antineutrino (ν̄e)
Generation 2: muon neutrino (νμ) and muon antineutrino (ν̄μ)
Generation 3: tau neutrino (ντ) and tau antineutrino (ν̄τ)

There are three known types (flavors) of neutrinos: the electron neutrino (νe), the muon neutrino (νμ), and the tau neutrino (ντ), named after their partner leptons in the Standard Model (see table above). The current best measurement of the number of neutrino types comes from observing the decay of the Z boson. This particle can decay into any light neutrino and its antineutrino, and the more available types of light neutrinos, [c] the shorter the lifetime of the Z boson. Measurements of the Z lifetime have shown that three light neutrino flavors couple to the Z . [32] The correspondence between the six quarks in the Standard Model and the six leptons, among them the three neutrinos, suggests to physicists' intuition that there should be exactly three types of neutrino.
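As a rough illustration of how the Z-width argument counts neutrino flavors, the sketch below divides the Z boson's measured "invisible" decay width by the Standard Model width for a single neutrino-antineutrino channel. The two width values are assumed, rounded figures of the kind tabulated by the Particle Data Group, not numbers taken from this article.

```python
# Illustrative sketch (not the collaborations' analysis): the number of light
# neutrino flavors follows from the Z boson's invisible decay width divided by
# the Standard Model width for one nu-nubar channel. Width values are assumed,
# rounded PDG-style figures.
GAMMA_INVISIBLE_MEV = 499.0   # measured invisible width of the Z (assumed value)
GAMMA_PER_NU_MEV    = 167.2   # SM width for a single neutrino pair (assumed value)

n_nu = GAMMA_INVISIBLE_MEV / GAMMA_PER_NU_MEV
print(f"inferred number of light neutrino flavors ≈ {n_nu:.2f}")  # ≈ 2.98, i.e. three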

There are several active research areas involving the neutrino. Some are concerned with testing predictions of neutrino behavior. Other research is focused on measurement of unknown properties of neutrinos; there is special interest in experiments that determine their masses and rates of CP violation, which cannot be predicted from current theory.

Detectors near artificial neutrino sources Edit

International scientific collaborations install large neutrino detectors near nuclear reactors or in neutrino beams from particle accelerators to better constrain the neutrino masses and the values for the magnitude and rates of oscillations between neutrino flavors. These experiments are thereby searching for the existence of CP violation in the neutrino sector; that is, whether or not the laws of physics treat neutrinos and antineutrinos differently. [8]

The KATRIN experiment in Germany began to acquire data in June 2018 [47] to determine the value of the mass of the electron neutrino, with other approaches to this problem in the planning stages. [1]

Gravitational effects Edit

Despite their tiny masses, neutrinos are so numerous that their gravitational force can influence other matter in the universe.

The three known neutrino flavors are the only established elementary particle candidates for dark matter, specifically hot dark matter, although the conventional neutrinos seem to be essentially ruled out as a substantial proportion of dark matter based on observations of the cosmic microwave background. It still seems plausible that heavier, sterile neutrinos might compose warm dark matter, if they exist. [48]

Sterile neutrino searches Edit

Other efforts search for evidence of a sterile neutrino – a fourth neutrino flavor that does not interact with matter like the three known neutrino flavors. [49] [50] [51] [52] The possibility of sterile neutrinos is unaffected by the Z boson decay measurements described above: If their mass is greater than half the Z boson's mass, they could not be a decay product. Therefore, heavy sterile neutrinos would have a mass of at least 45.6 GeV.

The existence of such particles is in fact hinted at by experimental data from the LSND experiment. On the other hand, the currently running MiniBooNE experiment suggested that sterile neutrinos are not required to explain the experimental data, [53] although the latest research into this area is ongoing and anomalies in the MiniBooNE data may allow for exotic neutrino types, including sterile neutrinos. [54] A recent re-analysis of reference electron spectra data from the Institut Laue-Langevin [55] has also hinted at a fourth, sterile neutrino. [56]

According to an analysis published in 2010, data from the Wilkinson Microwave Anisotropy Probe of the cosmic background radiation is compatible with either three or four types of neutrinos. [57]

Neutrinoless double-beta decay searches Edit

Another hypothesis concerns "neutrinoless double-beta decay", which, if it exists, would violate lepton number conservation. Searches for this mechanism are underway but have not yet found evidence for it. If they were to, then what are now called antineutrinos could not be true antiparticles.

Cosmic ray neutrinos Edit

Cosmic ray neutrino experiments detect neutrinos from space to study both the nature of neutrinos and the cosmic sources producing them. [58]

Speed Edit

Before neutrinos were found to oscillate, they were generally assumed to be massless, propagating at the speed of light. According to the theory of special relativity, the question of neutrino velocity is closely related to their mass: If neutrinos are massless, they must travel at the speed of light, and if they have mass they cannot reach the speed of light. Due to their tiny mass, the predicted speed is extremely close to the speed of light in all experiments, and current detectors are not sensitive to the expected difference.

Also some Lorentz-violating variants of quantum gravity might allow faster-than-light neutrinos. A comprehensive framework for Lorentz violations is the Standard-Model Extension (SME).

The first measurements of neutrino speed were made in the early 1980s using pulsed pion beams (produced by pulsed proton beams hitting a target). The pions decayed producing neutrinos, and the neutrino interactions observed within a time window in a detector at a distance were consistent with the speed of light. This measurement was repeated in 2007 using the MINOS detectors, which found the speed of 3 GeV neutrinos to be, at the 99% confidence level, in the range between 0.999 976 c and 1.000 126 c . The central value of 1.000 051 c is higher than the speed of light but, with uncertainty taken into account, is also consistent with a velocity of exactly c or slightly less. This measurement set an upper bound on the mass of the muon neutrino at 50 MeV with 99% confidence. [59] [60] After the detectors for the project were upgraded in 2012, MINOS refined their initial result and found agreement with the speed of light, with the difference in the arrival time of neutrinos and light of −0.0006% (±0.0012%). [61]

A similar observation was made, on a much larger scale, with supernova 1987A (SN 1987A). 10 MeV antineutrinos from the supernova were detected within a time window that was consistent with the speed of light for the neutrinos. So far, all measurements of neutrino speed have been consistent with the speed of light. [62] [63]

Superluminal neutrino glitch Edit

In September 2011, the OPERA collaboration released calculations showing velocities of 17 GeV and 28 GeV neutrinos exceeding the speed of light in their experiments. In November 2011, OPERA repeated its experiment with changes so that the speed could be determined individually for each detected neutrino. The results showed the same faster-than-light speed. In February 2012, reports came out that the results may have been caused by a loose fiber optic cable attached to one of the atomic clocks which measured the departure and arrival times of the neutrinos. An independent recreation of the experiment in the same laboratory by ICARUS found no discernible difference between the speed of a neutrino and the speed of light. [64]

In June 2012, CERN announced that new measurements conducted by all four Gran Sasso experiments (OPERA, ICARUS, Borexino and LVD) found agreement between the speed of light and the speed of neutrinos, finally refuting the initial OPERA claim. [65]

Mass Edit

Can we measure the neutrino masses? Do neutrinos follow Dirac or Majorana statistics?

The Standard Model of particle physics assumed that neutrinos are massless. [ citation needed ] The experimentally established phenomenon of neutrino oscillation, which mixes neutrino flavour states with neutrino mass states (analogously to CKM mixing), requires neutrinos to have nonzero masses. [66] Massive neutrinos were originally conceived by Bruno Pontecorvo in the 1950s. Extending the basic framework to accommodate their mass is straightforward: a right-handed neutrino term is added to the Lagrangian.

Providing for neutrino mass can be done in two ways, and some proposals use both:

  • If, like other fundamental Standard Model particles, mass is generated by the Dirac mechanism, then the framework would require an SU(2) singlet. This particle would have the Yukawa interactions with the neutral component of the Higgs doublet, but otherwise would have no interactions with Standard Model particles, so is called a "sterile" neutrino. [clarification needed]
  • Or, mass can be generated by the Majorana mechanism, which would require the neutrino and antineutrino to be the same particle.

The strongest upper limit on the masses of neutrinos comes from cosmology: the Big Bang model predicts that there is a fixed ratio between the number of neutrinos and the number of photons in the cosmic microwave background. If the total energy of all three types of neutrinos exceeded an average of 50 eV per neutrino, there would be so much mass in the universe that it would collapse. [67] This limit can be circumvented by assuming that the neutrino is unstable, but there are limits within the Standard Model that make this difficult. A much more stringent constraint comes from a careful analysis of cosmological data, such as the cosmic microwave background radiation, galaxy surveys, and the Lyman-alpha forest. These indicate that the summed masses of the three neutrinos must be less than 0.3 eV . [68]

The Nobel prize in Physics 2015 was awarded to Takaaki Kajita and Arthur B. McDonald for their experimental discovery of neutrino oscillations, which demonstrates that neutrinos have mass. [69] [70]

In 1998, research results at the Super-Kamiokande neutrino detector determined that neutrinos can oscillate from one flavor to another, which requires that they must have a nonzero mass. [71] While this shows that neutrinos have mass, the absolute neutrino mass scale is still not known. This is because neutrino oscillations are sensitive only to the difference in the squares of the masses. [72] As of 2020, [73] the best-fit value of the difference of the squares of the masses of mass eigenstates 1 and 2 is |Δm²₂₁| = 0.000074 eV², while for eigenstates 2 and 3 it is |Δm²₃₂| = 0.00251 eV². Since |Δm²₃₂| is the difference of two squared masses, at least one of them must have a value which is at least the square root of this value. Thus, there exists at least one neutrino mass eigenstate with a mass of at least 0.05 eV . [74]
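The 0.05 eV bound follows directly from the best-fit value quoted above; a minimal check:

```python
# Minimal check of the mass-scale argument in the text: because |Δm²_32| is a
# difference of squared masses, at least one mass eigenstate must satisfy
# m ≥ sqrt(|Δm²_32|). The value is the best-fit number quoted above.
from math import sqrt

dm2_32 = 2.51e-3  # eV², |Δm²_32|
print(f"at least one neutrino mass ≥ {sqrt(dm2_32):.3f} eV")  # ≈ 0.050 eV
```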

In 2009, lensing data of a galaxy cluster were analyzed to predict a neutrino mass of about 1.5 eV . [75] This surprisingly high value requires that the three neutrino masses be nearly equal, with neutrino oscillations on the order of milli-electron-volts. In 2016 this was updated to a mass of 1.85 eV . [76] This scenario predicts 3 sterile [ clarification needed ] neutrinos of the same mass, and is consistent with the Planck dark matter fraction and with the non-observation of neutrinoless double beta decay. The masses lie below the Mainz-Troitsk upper bound of 2.2 eV for the electron antineutrino. [77] The latter has been under test since June 2018 in the KATRIN experiment, which searches for a mass between 0.2 eV and 2 eV . [47]

A number of efforts are under way to directly determine the absolute neutrino mass scale in laboratory experiments. The methods applied involve nuclear beta decay (KATRIN and MARE).

On 31 May 2010, OPERA researchers observed the first tau neutrino candidate event in a muon neutrino beam, the first time this transformation in neutrinos had been observed, providing further evidence that they have mass. [78]

In July 2010, the 3-D MegaZ DR7 galaxy survey reported that they had measured a limit of the combined mass of the three neutrino varieties to be less than 0.28 eV . [79] A tighter upper bound yet for this sum of masses, 0.23 eV , was reported in March 2013 by the Planck collaboration, [80] whereas a February 2014 result estimates the sum as 0.320 ± 0.081 eV based on discrepancies between the cosmological consequences implied by Planck's detailed measurements of the cosmic microwave background and predictions arising from observing other phenomena, combined with the assumption that neutrinos are responsible for the observed weaker gravitational lensing than would be expected from massless neutrinos. [81]

If the neutrino is a Majorana particle, the mass may be calculated by finding the half-life of neutrinoless double-beta decay of certain nuclei. The current lowest upper limit on the Majorana mass of the neutrino has been set by KamLAND-Zen: 0.060–0.161 eV. [82]

Size Edit

Standard Model neutrinos are fundamental point-like particles, without any width or volume. Since the neutrino is an elementary particle it does not have a size in the same sense as everyday objects. [83] Properties associated with conventional "size" are absent: There is no minimum distance between them, and neutrinos cannot be condensed into a separate uniform substance that occupies a finite volume.

Chirality Edit

Experimental results show that within the margin of error, all produced and observed neutrinos have left-handed helicities (spins antiparallel to momenta), and all antineutrinos have right-handed helicities. [84] In the massless limit, that means that only one of two possible chiralities is observed for either particle. These are the only chiralities included in the Standard Model of particle interactions.

It is possible that their counterparts (right-handed neutrinos and left-handed antineutrinos) simply do not exist. If they do exist, their properties are substantially different from observable neutrinos and antineutrinos. It is theorized that they are either very heavy (on the order of GUT scale—see Seesaw mechanism), do not participate in weak interaction (so-called sterile neutrinos), or both.

The existence of nonzero neutrino masses somewhat complicates the situation. Neutrinos are produced in weak interactions as chirality eigenstates. Chirality of a massive particle is not a constant of motion; helicity is, but the chirality operator does not share eigenstates with the helicity operator. Free neutrinos propagate as mixtures of left- and right-handed helicity states, with mixing amplitudes on the order of mν/E . This does not significantly affect the experiments, because neutrinos involved are nearly always ultrarelativistic, and thus mixing amplitudes are vanishingly small. Effectively, they travel so quickly and time passes so slowly in their rest-frames that they do not have enough time to change over any observable path. For example, most solar neutrinos have energies on the order of 0.100 MeV – 1 MeV , so the fraction of neutrinos with "wrong" helicity among them cannot exceed 10 −10 . [85] [86]
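The 10⁻¹⁰ figure can be reproduced with a rough order-of-magnitude sketch: the mixing amplitude is of order mν/E, so the wrong-helicity fraction is of order (mν/E)². The neutrino mass used below is an assumed upper bound of about 1 eV, not a measured value.

```python
# Rough order-of-magnitude sketch of the "wrong helicity" estimate in the text.
m_nu_eV = 1.0      # assumed upper bound on the neutrino mass, in eV
E_eV    = 0.1e6    # typical solar-neutrino energy, 0.1 MeV expressed in eV

fraction = (m_nu_eV / E_eV) ** 2
print(f"wrong-helicity fraction ≲ {fraction:.0e}")  # ≈ 1e-10
```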

GSI anomaly Edit

An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany.

The rates of weak decay of two radioactive species with half lives of about 40 seconds and 200 seconds were found to have a significant oscillatory modulation, with a period of about 7 seconds. [87] As the decay process produces an electron neutrino, some of the suggested explanations for the observed oscillation rate propose new or altered neutrino properties. Ideas related to flavour oscillation met with skepticism. [88] A later proposal is based on differences between neutrino mass eigenstates. [89]

Artificial Edit

Reactor neutrinos Edit

Nuclear reactors are the major source of human-generated neutrinos. The majority of energy in a nuclear reactor is generated by fission (the four main fissile isotopes in nuclear reactors are 235U, 238U, 239Pu and 241Pu); the resultant neutron-rich daughter nuclides rapidly undergo additional beta decays, each converting one neutron to a proton and an electron and releasing an electron antineutrino (n → p + e⁻ + ν̄e). Including these subsequent decays, the average nuclear fission releases about 200 MeV of energy, of which roughly 95.5% is retained in the core as heat, and roughly 4.5% (or about 9 MeV ) [90] is radiated away as antineutrinos. For a typical nuclear reactor with a thermal power of 4000 MW , [d] the total power production from fissioning atoms is actually 4185 MW , of which 185 MW is radiated away as antineutrino radiation and never appears in the engineering. This is to say, 185 MW of fission energy is lost from this reactor and does not appear as heat available to run turbines, since antineutrinos penetrate all building materials practically without interaction.
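A short back-of-the-envelope check of this energy bookkeeping, using only the figures quoted in the paragraph above (about 200 MeV per fission, about 4.5% carried away by antineutrinos, and 4000 MW of heat retained in the core):

```python
# Back-of-the-envelope check of the reactor energy balance described above.
E_FISSION_MEV = 200.0   # energy released per fission, including subsequent decays
NU_FRACTION   = 0.045   # fraction carried away by antineutrinos
P_THERMAL_MW  = 4000.0  # heat retained in the core

E_nu_per_fission = E_FISSION_MEV * NU_FRACTION          # ≈ 9 MeV per fission
P_total_MW = P_THERMAL_MW / (1.0 - NU_FRACTION)          # ≈ 4190 MW total
print(f"antineutrino energy per fission ≈ {E_nu_per_fission:.0f} MeV")
print(f"power radiated as antineutrinos ≈ {P_total_MW - P_THERMAL_MW:.0f} MW")
# ≈ 190 MW, the same order as the 185 MW quoted above.
```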

The antineutrino energy spectrum depends on the degree to which the fuel is burned (plutonium-239 fission antineutrinos on average have slightly more energy than those from uranium-235 fission), but in general, the detectable antineutrinos from fission have a peak energy between about 3.5 and 4 MeV , with a maximum energy of about 10 MeV . [91] There is no established experimental method to measure the flux of low-energy antineutrinos. Only antineutrinos with an energy above the threshold of 1.8 MeV can trigger inverse beta decay and thus be unambiguously identified (see § Detection below). An estimated 3% of all antineutrinos from a nuclear reactor carry an energy above this threshold. Thus, an average nuclear power plant may generate over 10 20 antineutrinos per second above this threshold, but also a much larger number ( 97%/3% ≈ 30 times this number) below the energy threshold, which cannot be seen with present detector technology.

Accelerator neutrinos Edit

Some particle accelerators have been used to make neutrino beams. The technique is to collide protons with a fixed target, producing charged pions or kaons. These unstable particles are then magnetically focused into a long tunnel where they decay while in flight. Because of the relativistic boost of the decaying particle, the neutrinos are produced as a beam rather than isotropically. Efforts to design an accelerator facility where neutrinos are produced through muon decays are ongoing. [92] Such a setup is generally known as a "neutrino factory".

Nuclear weapons Edit

Nuclear weapons also produce very large quantities of neutrinos. Fred Reines and Clyde Cowan considered the detection of neutrinos from a bomb prior to their search for reactor neutrinos; a fission reactor was recommended as a better alternative by Los Alamos physics division leader J. M. B. Kellogg. [93] Fission weapons produce antineutrinos (from the fission process), and fusion weapons produce both neutrinos (from the fusion process) and antineutrinos (from the initiating fission explosion).

Geologic Edit

Neutrinos are produced together with the natural background radiation. In particular, the decay chains of the 238U and 232Th isotopes, as well as 40K, include beta decays which emit antineutrinos. These so-called geoneutrinos can provide valuable information on the Earth's interior. A first indication for geoneutrinos was found by the KamLAND experiment in 2005; updated results have been presented by KamLAND [94] and Borexino. [95] The main background in geoneutrino measurements is the antineutrinos coming from reactors.

Atmospheric Edit

Atmospheric neutrinos result from the interaction of cosmic rays with atomic nuclei in the Earth's atmosphere, creating showers of particles, many of which are unstable and produce neutrinos when they decay. A collaboration of particle physicists from Tata Institute of Fundamental Research (India), Osaka City University (Japan) and Durham University (UK) recorded the first cosmic ray neutrino interaction in an underground laboratory in Kolar Gold Fields in India in 1965. [96]

Solar Edit

Solar neutrinos originate from the nuclear fusion powering the Sun and other stars. The details of the operation of the Sun are explained by the Standard Solar Model. In short: when four protons fuse to become one helium nucleus, two of them have to convert into neutrons, and each such conversion releases one electron neutrino.

The Sun sends enormous numbers of neutrinos in all directions. Each second, about 65 billion ( 6.5 × 10 10 ) solar neutrinos pass through every square centimeter on the part of the Earth orthogonal to the direction of the Sun. [13] Since neutrinos are insignificantly absorbed by the mass of the Earth, the surface area on the side of the Earth opposite the Sun receives about the same number of neutrinos as the side facing the Sun.
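The quoted flux can be checked with a rough estimate: each completed proton-proton chain releases about 26.7 MeV and two electron neutrinos, so the flux at Earth is roughly 2 L☉ / (26.7 MeV) spread over a sphere of radius 1 AU. The constants below are standard values; this is an order-of-magnitude sketch, not a measurement.

```python
# Rough consistency check of the quoted solar-neutrino flux at Earth.
from math import pi

L_SUN = 3.846e26                      # W, solar luminosity
AU    = 1.496e11                      # m, Earth-Sun distance
E_PER_CHAIN = 26.7e6 * 1.602e-19      # J released per completed pp chain

nu_per_second = 2 * L_SUN / E_PER_CHAIN                  # two neutrinos per chain
flux_per_cm2  = nu_per_second / (4 * pi * AU**2) / 1e4   # convert per m² to per cm²
print(f"solar neutrino flux ≈ {flux_per_cm2:.1e} per cm² per second")  # ≈ 6e10
```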

Supernovae Edit

In 1966, Stirling A. Colgate and Richard H. White [97] calculated that neutrinos carry away most of the gravitational energy released by the collapse of massive stars, events now categorized as Type Ib and Ic and Type II supernovae. When such stars collapse, matter densities at the core become so high ( 10 17 kg/m 3 ) that the degeneracy of electrons is not enough to prevent protons and electrons from combining to form a neutron and an electron neutrino. A second and more profuse neutrino source is the thermal energy (100 billion kelvins) of the newly formed neutron core, which is dissipated via the formation of neutrino–antineutrino pairs of all flavors. [98]

Colgate and White's theory of supernova neutrino production was confirmed in 1987, when neutrinos from Supernova 1987A were detected. The water-based detectors Kamiokande II and IMB detected 11 and 8 antineutrinos (lepton number = −1) of thermal origin, [98] respectively, while the scintillator-based Baksan detector found 5 neutrinos (lepton number = +1) of either thermal or electron-capture origin, in a burst less than 13 seconds long. The neutrino signal from the supernova arrived at Earth several hours before the arrival of the first electromagnetic radiation, as expected from the evident fact that the latter emerges along with the shock wave. The exceptionally feeble interaction with normal matter allowed the neutrinos to pass through the churning mass of the exploding star, while the electromagnetic photons were slowed.

Because neutrinos interact so little with matter, it is thought that a supernova's neutrino emissions carry information about the innermost regions of the explosion. Much of the visible light comes from the decay of radioactive elements produced by the supernova shock wave, and even light from the explosion itself is scattered by dense and turbulent gases, and thus delayed. The neutrino burst is expected to reach Earth before any electromagnetic waves, including visible light, gamma rays, or radio waves. The exact time delay of the electromagnetic waves' arrivals depends on the velocity of the shock wave and on the thickness of the outer layer of the star. For a Type II supernova, astronomers expect the neutrino flood to be released seconds after the stellar core collapse, while the first electromagnetic signal may emerge hours later, after the explosion shock wave has had time to reach the surface of the star. The Supernova Early Warning System project uses a network of neutrino detectors to monitor the sky for candidate supernova events; the neutrino signal will provide a useful advance warning of a star exploding in the Milky Way.

Although neutrinos pass through the outer gases of a supernova without scattering, they provide information about the deeper supernova core, where there is evidence that even neutrinos scatter to a significant extent. In a supernova core the densities are those of a neutron star (which is expected to be formed in this type of supernova), [99] becoming large enough to influence the duration of the neutrino signal by delaying some neutrinos. The 13-second-long neutrino signal from SN 1987A lasted far longer than it would take for unimpeded neutrinos to cross the neutrino-generating core of a supernova, expected to be only 3200 kilometers in diameter for SN 1987A.

The number of neutrinos counted was also consistent with a total neutrino energy of 2.2 × 10 46 joules , which was estimated to be nearly all of the total energy of the supernova. [31]

For an average supernova, approximately 10 57 (an octodecillion) neutrinos are released, but the actual number detected at a terrestrial detector N will be far smaller, at the level of

where M is the mass of the detector (with e.g. Super Kamiokande having a mass of 50 kton) and d is the distance to the supernova. [100] Hence in practice it will only be possible to detect neutrino bursts from supernovae within or nearby the Milky Way (our own galaxy). In addition to the detection of neutrinos from individual supernovae, it should also be possible to detect the diffuse supernova neutrino background, which originates from all supernovae in the Universe. [101]
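The scaling referred to above (event count proportional to detector mass M and to 1/d²) can be sketched as follows. The normalisation of roughly 10⁴ events for a 25 kton detector at 10 kpc is a commonly quoted benchmark assumed for illustration, not a figure stated in this article.

```python
# Hedged sketch of the N ∝ M / d² scaling for supernova neutrino events.
# The 1e4 / 25 kton / 10 kpc normalisation is an assumed benchmark value.
def expected_events(mass_kton: float, distance_kpc: float) -> float:
    return 1e4 * (mass_kton / 25.0) * (10.0 / distance_kpc) ** 2

# A 50 kton detector (roughly Super-Kamiokande's total mass) and a supernova
# at 10 kpc, near the galactic centre:
print(f"{expected_events(50, 10):.0f} events")   # ≈ 20000
# The same detector for a supernova in Andromeda (~780 kpc): only a handful.
print(f"{expected_events(50, 780):.0f} events")  # ≈ 3
```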

Supernova remnants Edit

The energy of supernova neutrinos ranges from a few to several tens of MeV. The sites where cosmic rays are accelerated are expected to produce neutrinos that are at least one million times more energetic, produced from turbulent gaseous environments left over by supernova explosions: the supernova remnants. The origin of the cosmic rays was attributed to supernovas by Walter Baade and Fritz Zwicky; this hypothesis was refined by Vitaly L. Ginzburg and Sergei I. Syrovatsky, who attributed the origin to supernova remnants and supported their claim by the crucial remark that the cosmic ray losses of the Milky Way are compensated if the efficiency of acceleration in supernova remnants is about 10 percent. Ginzburg and Syrovatskii's hypothesis is supported by the specific mechanism of "shock wave acceleration" happening in supernova remnants, which is consistent with the original theoretical picture drawn by Enrico Fermi, and is receiving support from observational data. The very high-energy neutrinos are still to be seen, but this branch of neutrino astronomy is just in its infancy. The main existing or forthcoming experiments that aim at observing very-high-energy neutrinos from our galaxy are Baikal, AMANDA, IceCube, ANTARES, NEMO and Nestor. Related information is provided by very-high-energy gamma ray observatories, such as VERITAS, HESS and MAGIC. Indeed, the collisions of cosmic rays are supposed to produce charged pions, whose decays give the neutrinos, and neutral pions, whose decays give gamma rays; the environment of a supernova remnant is transparent to both types of radiation.

Still-higher-energy neutrinos, resulting from the interactions of extragalactic cosmic rays, could be observed with the Pierre Auger Observatory or with the dedicated experiment named ANITA.

Big Bang Edit

It is thought that, just like the cosmic microwave background radiation leftover from the Big Bang, there is a background of low-energy neutrinos in our Universe. In the 1980s it was proposed that these may be the explanation for the dark matter thought to exist in the universe. Neutrinos have one important advantage over most other dark matter candidates: They are known to exist. This idea also has serious problems.

From particle experiments, it is known that neutrinos are very light. This means that they easily move at speeds close to the speed of light. For this reason, dark matter made from neutrinos is termed "hot dark matter". The problem is that being fast moving, the neutrinos would tend to have spread out evenly in the universe before cosmological expansion made them cold enough to congregate in clumps. This would cause the part of dark matter made of neutrinos to be smeared out and unable to cause the large galactic structures that we see.

These same galaxies and groups of galaxies appear to be surrounded by dark matter that is not fast enough to escape from those galaxies. Presumably this matter provided the gravitational nucleus for formation. This implies that neutrinos cannot make up a significant part of the total amount of dark matter.

From cosmological arguments, relic background neutrinos are estimated to have a density of 56 of each type per cubic centimetre and a temperature of 1.9 K ( 1.7 × 10 −4 eV ) if they are massless, and to be much colder if their mass exceeds 0.001 eV . Although their density is quite high, they have not yet been observed in the laboratory, as their energy is below the thresholds of most detection methods, and due to extremely low neutrino interaction cross-sections at sub-eV energies. In contrast, boron-8 solar neutrinos—which are emitted with a higher energy—have been detected definitively despite having a space density that is lower than that of relic neutrinos by some 6 orders of magnitude.
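The 1.9 K figure follows from the standard textbook relation that, after electron-positron annihilation, the neutrino background is colder than the photon background by a factor of (4/11)^(1/3); a short consistency check:

```python
# Consistency check of the quoted relic-neutrino temperature and energy.
T_CMB_K = 2.725        # photon (CMB) temperature today, K
K_B_EV  = 8.617e-5     # Boltzmann constant, eV/K

T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_CMB_K
print(f"relic neutrino temperature ≈ {T_nu:.2f} K")        # ≈ 1.95 K
print(f"thermal energy k_B * T ≈ {K_B_EV * T_nu:.1e} eV")  # ≈ 1.7e-4 eV
```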

Neutrinos cannot be detected directly because they do not carry electric charge, which means they do not ionize the materials they pass through. Other ways neutrinos might affect their environment, such as the MSW effect, do not produce traceable radiation. A unique reaction to identify antineutrinos, sometimes referred to as inverse beta decay, as applied by Reines and Cowan (see below), requires a very large detector to detect a significant number of neutrinos. All detection methods require the neutrinos to carry a minimum threshold energy. So far, there is no detection method for low-energy neutrinos, in the sense that potential neutrino interactions (for example by the MSW effect) cannot be uniquely distinguished from other causes. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation.

Antineutrinos were first detected in the 1950s near a nuclear reactor. Reines and Cowan used two targets containing a solution of cadmium chloride in water. Two scintillation detectors were placed next to the cadmium targets. Antineutrinos with an energy above the threshold of 1.8 MeV caused charged-current interactions with the protons in the water, producing positrons and neutrons. This is very much like β⁺ decay, where energy is used to convert a proton into a neutron while a positron (e⁺) and an electron neutrino (νe) are emitted:

energy + p → n + e⁺ + νe

In the Cowan and Reines experiment, instead of an outgoing neutrino, there is an incoming antineutrino (ν̄e) from a nuclear reactor:

ν̄e + p → n + e⁺

The resulting positron annihilation with electrons in the detector material created photons with an energy of about 0.5 MeV . Pairs of photons in coincidence could be detected by the two scintillation detectors above and below the target. The neutrons were captured by cadmium nuclei resulting in gamma rays of about 8 MeV that were detected a few microseconds after the photons from a positron annihilation event.

Since then, various detection methods have been used. Super Kamiokande is a large volume of water surrounded by photomultiplier tubes that watch for the Cherenkov radiation emitted when an incoming neutrino creates an electron or muon in the water. The Sudbury Neutrino Observatory was similar, but used heavy water as the detecting medium. It exploited the same effects, but also allowed an additional reaction: the dissociation of deuterium by a neutrino of any flavor, resulting in a free neutron which is then detected from gamma radiation after capture on chlorine. Other detectors have consisted of large volumes of chlorine or gallium which are periodically checked for excesses of argon or germanium, respectively, which are created by electron neutrinos interacting with the original substance. MINOS used a solid plastic scintillator coupled to photomultiplier tubes, while Borexino uses a liquid pseudocumene scintillator also watched by photomultiplier tubes, and the NOνA detector uses liquid scintillator watched by avalanche photodiodes. The IceCube Neutrino Observatory uses 1 km 3 of the Antarctic ice sheet near the south pole with photomultiplier tubes distributed throughout the volume.

Neutrinos' low mass and neutral charge mean they interact exceedingly weakly with other particles and fields. This feature of weak interaction interests scientists because it means neutrinos can be used to probe environments that other radiation (such as light or radio waves) cannot penetrate.

Using neutrinos as a probe was first proposed in the mid-20th century as a way to detect conditions at the core of the Sun. The solar core cannot be imaged directly because electromagnetic radiation (such as light) is diffused by the great amount and density of matter surrounding the core. On the other hand, neutrinos pass through the Sun with few interactions. Whereas photons emitted from the solar core may require 40,000 years to diffuse to the outer layers of the Sun, neutrinos generated in stellar fusion reactions at the core cross this distance practically unimpeded at nearly the speed of light. [102] [103]

Neutrinos are also useful for probing astrophysical sources beyond the Solar System because they are the only known particles that are not significantly attenuated by their travel through the interstellar medium. Optical photons can be obscured or diffused by dust, gas, and background radiation. High-energy cosmic rays, in the form of swift protons and atomic nuclei, are unable to travel more than about 100 megaparsecs due to the Greisen–Zatsepin–Kuzmin limit (GZK cutoff). Neutrinos, in contrast, can travel even greater distances barely attenuated.

The galactic core of the Milky Way is fully obscured by dense gas and numerous bright objects. Neutrinos produced in the galactic core might be measurable by Earth-based neutrino telescopes. [18]

Another important use of the neutrino is in the observation of supernovae, the explosions that end the lives of highly massive stars. The core collapse phase of a supernova is an extremely dense and energetic event. It is so dense that no known particles are able to escape the advancing core front except for neutrinos. Consequently, supernovae are known to release approximately 99% of their radiant energy in a short (10 second) burst of neutrinos. [104] These neutrinos are a very useful probe for core collapse studies.

The rest mass of the neutrino is an important test of cosmological and astrophysical theories (see Dark matter). The neutrino's significance in probing cosmological phenomena is as great as that of any other method, and is thus a major focus of study in astrophysical communities. [105]

The study of neutrinos is important in particle physics because neutrinos typically have the lowest mass, and hence are examples of the lowest-energy particles theorized in extensions of the Standard Model of particle physics.

In November 2012, American scientists used a particle accelerator to send a coherent neutrino message through 780 feet of rock. This marks the first use of neutrinos for communication, and future research may permit binary neutrino messages to be sent immense distances through even the densest materials, such as the Earth's core. [106]

In July 2018, the IceCube Neutrino Observatory announced that they had traced an extremely-high-energy neutrino that hit their Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056, located 3.7 billion light-years away in the direction of the constellation Orion. This is the first time that a neutrino detector has been used to locate an object in space and that a source of cosmic rays has been identified. [107] [108] [109]



The speed of light in vacuum is usually denoted by a lowercase c , for "constant" or the Latin celeritas (meaning "swiftness, celerity"). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant that was later shown to equal √ 2 times the speed of light in vacuum. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c , which by then had become the standard symbol for the speed of light. [7] [8]

Sometimes c is used for the speed of waves in any material medium, and c₀ for the speed of light in vacuum. [9] This subscripted notation, which is endorsed in official SI literature, [10] has the same form as other related constants: namely, μ₀ for the vacuum permeability or magnetic constant, ε₀ for the vacuum permittivity or electric constant, and Z₀ for the impedance of free space. This article uses c exclusively for the speed of light in vacuum.

The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer. [Note 5] This invariance of the speed of light was postulated by Einstein in 1905, [6] after being motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous aether; [16] it has since been consistently confirmed by many experiments. [Note 6] It is only possible to verify experimentally that the two-way speed of light (for example, from a source to a mirror and back again) is frame-independent, because it is impossible to measure the one-way speed of light (for example, from a source to a distant detector) without some convention as to how clocks at the source and at the detector should be synchronized. However, by adopting Einstein synchronization for the clocks, the one-way speed of light becomes equal to the two-way speed of light by definition. [17] [18] The special theory of relativity explores the consequences of this invariance of c with the assumption that the laws of physics are the same in all inertial frames of reference. [19] [20] One consequence is that c is the speed at which all massless particles and waves, including light, must travel in vacuum.

Special relativity has many counterintuitive and experimentally verified implications. [21] These include the equivalence of mass and energy (E = mc²), length contraction (moving objects shorten), [Note 7] and time dilation (moving clocks run more slowly). The factor γ by which lengths contract and times dilate is known as the Lorentz factor and is given by γ = (1 − v²/c²)^(−1/2), where v is the speed of the object. The difference of γ from 1 is negligible for speeds much slower than c, such as most everyday speeds—in which case special relativity is closely approximated by Galilean relativity—but it increases at relativistic speeds and diverges to infinity as v approaches c. For example, a time dilation factor of γ = 2 occurs at a relative velocity of 86.6% of the speed of light (v = 0.866 c). Similarly, a time dilation factor of γ = 10 occurs at v = 99.5% c.
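The two worked values quoted above can be verified directly from the definition of the Lorentz factor:

```python
# Quick check of the gamma values quoted in the paragraph above.
from math import sqrt

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for a given beta = v/c."""
    return 1.0 / sqrt(1.0 - beta**2)

print(f"gamma at v = 0.866 c: {lorentz_gamma(0.866):.2f}")  # ≈ 2
print(f"gamma at v = 0.995 c: {lorentz_gamma(0.995):.2f}")  # ≈ 10
```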

The results of special relativity can be summarized by treating space and time as a unified structure known as spacetime (with c relating the units of space and time), and requiring that physical theories satisfy a special symmetry called Lorentz invariance, whose mathematical formulation contains the parameter c. [24] Lorentz invariance is an almost universal assumption for modern physical theories, such as quantum electrodynamics, quantum chromodynamics, the Standard Model of particle physics, and general relativity. As such, the parameter c is ubiquitous in modern physics, appearing in many contexts that are unrelated to light. For example, general relativity predicts that c is also the speed of gravity and of gravitational waves. [25] [Note 8] In non-inertial frames of reference (gravitationally curved spacetime or accelerated reference frames), the local speed of light is constant and equal to c, but the speed of light along a trajectory of finite length can differ from c, depending on how distances and times are defined. [27]

It is generally assumed that fundamental constants such as c have the same value throughout spacetime, meaning that they do not depend on location and do not vary with time. However, it has been suggested in various theories that the speed of light may have changed over time. [28] [29] No conclusive evidence for such changes has been found, but they remain the subject of ongoing research. [30] [31]

It also is generally assumed that the speed of light is isotropic, meaning that it has the same value regardless of the direction in which it is measured. Observations of the emissions from nuclear energy levels as a function of the orientation of the emitting nuclei in a magnetic field (see Hughes–Drever experiment), and of rotating optical resonators (see Resonator experiments) have put stringent limits on the possible two-way anisotropy. [32] [33]

Upper limit on speeds

According to special relativity, the energy of an object with rest mass m and speed v is given by γmc², where γ is the Lorentz factor defined above. When v is zero, γ is equal to one, giving rise to the famous E = mc² formula for mass–energy equivalence. The γ factor approaches infinity as v approaches c, and it would take an infinite amount of energy to accelerate an object with mass to the speed of light. The speed of light is the upper limit for the speeds of objects with positive rest mass, and individual photons cannot travel faster than the speed of light. [34] [35] [36] This is experimentally established in many tests of relativistic energy and momentum. [37]

More generally, it is impossible for signals or energy to travel faster than c. One argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. If the spatial distance between two events A and B is greater than the time interval between them multiplied by c then there are frames of reference in which A precedes B, others in which B precedes A, and others in which they are simultaneous. As a result, if something were travelling faster than c relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated. [Note 9] [39] In such a frame of reference, an "effect" could be observed before its "cause". Such a violation of causality has never been recorded, [18] and would lead to paradoxes such as the tachyonic antitelephone. [40]

There are situations in which it may seem that matter, energy, or an information-carrying signal travels at speeds greater than c, but it does not. As discussed in the propagation of light in a medium section below, many wave velocities can exceed c. For example, the phase velocity of X-rays through most glasses can routinely exceed c, [41] but phase velocity does not determine the velocity at which waves convey information. [42]

If a laser beam is swept quickly across a distant object, the spot of light can move faster than c, although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed c. However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed c from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than c, after a delay in time. [43] In neither case does any matter, energy, or information travel faster than light. [44]

The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of c. However, this does not represent the speed of any single object as measured in a single inertial frame. [44]

Certain quantum effects appear to be transmitted instantaneously and therefore faster than c, as in the EPR paradox. An example involves the quantum states of two particles that can be entangled. Until either of the particles is observed, they exist in a superposition of two quantum states. If the particles are separated and one particle's quantum state is observed, the other particle's quantum state is determined instantaneously. However, it is impossible to control which quantum state the first particle will take on when it is observed, so information cannot be transmitted in this manner. [44] [45]

Another quantum effect that predicts the occurrence of faster-than-light speeds is called the Hartman effect: under certain conditions the time needed for a virtual particle to tunnel through a barrier is constant, regardless of the thickness of the barrier. [46] [47] This could result in a virtual particle crossing a large gap faster-than-light. However, no information can be sent using this effect. [48]

So-called superluminal motion is seen in certain astronomical objects, [49] such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted. [50]

In models of the expanding universe, the farther galaxies are from each other, the faster they drift apart. This receding is not due to motion through space, but rather to the expansion of space itself. [44] For example, galaxies far away from Earth appear to be moving away from the Earth with a speed proportional to their distances. Beyond a boundary called the Hubble sphere, the rate at which their distance from Earth increases becomes greater than the speed of light. [51]

In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed c with which electromagnetic waves (such as light) propagate in vacuum is related to the distributed capacitance and inductance of vacuum, otherwise respectively known as the electric constant ε0 and the magnetic constant μ0, by the equation [52]

c = 1/√(ε0 μ0).
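A minimal numerical check of this relation, using the standard SI values of the two constants:

```python
# Numerical check of the Maxwell relation c = 1 / sqrt(epsilon_0 * mu_0).
from math import sqrt, pi

mu_0  = 4e-7 * pi            # vacuum permeability, N/A² (classical defined value)
eps_0 = 8.8541878128e-12     # vacuum permittivity, F/m

c = 1.0 / sqrt(eps_0 * mu_0)
print(f"c ≈ {c:.0f} m/s")    # ≈ 299,792,458 m/s
```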

In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum.

Extensions of QED in which the photon has a mass have been considered. In such a theory, its speed would depend on its frequency, and the invariant speed c of special relativity would then be the upper limit of the speed of light in vacuum. [27] No variation of the speed of light with frequency has been observed in rigorous testing, [53] [54] [55] putting stringent limits on the mass of the photon. The limit obtained depends on the model used: if the massive photon is described by Proca theory, [56] the experimental upper bound for its mass is about 10⁻⁵⁷ grams; [57] if the photon mass is generated by a Higgs mechanism, the experimental upper limit is less sharp, m ≤ 10⁻¹⁴ eV/c² [56] (roughly 2 × 10⁻⁴⁷ g).

Another reason for the speed of light to vary with its frequency would be the failure of special relativity to apply to arbitrarily small scales, as predicted by some proposed theories of quantum gravity. In 2009, the observation of gamma-ray burst GRB 090510 found no evidence for a dependence of photon speed on energy, supporting tight constraints in specific models of spacetime quantization on how this speed is affected by photon energy for energies approaching the Planck scale. [58]

In a medium

In a medium, light usually does not propagate at a speed equal to c; further, different types of light wave travel at different speeds. The speed at which the individual crests and troughs of a plane wave (a wave filling the whole space, with only one frequency) propagate is called the phase velocity vp. A physical signal with a finite extent (a pulse of light) travels at a different speed. The largest part of the pulse travels at the group velocity vg, and its earliest part travels at the front velocity vf.

The phase velocity is important in determining how a light wave travels through a material or from one material to another. It is often represented in terms of a refractive index. The refractive index of a material is defined as the ratio of c to the phase velocity vp in the material: larger indices of refraction indicate lower speeds. The refractive index of a material may depend on the light's frequency, intensity, polarization, or direction of propagation; in many cases, though, it can be treated as a material-dependent constant. The refractive index of air is approximately 1.0003. [59] Denser media, such as water, [60] glass, [61] and diamond, [62] have refractive indices of around 1.3, 1.5 and 2.4, respectively, for visible light. In exotic materials like Bose–Einstein condensates near absolute zero, the effective speed of light may be only a few metres per second. However, this represents absorption and re-radiation delay between atoms, as do all slower-than-c speeds in material substances. As an extreme example of light "slowing" in matter, two independent teams of physicists claimed to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium. However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrarily later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. This type of behaviour is generally microscopically true of all transparent media which "slow" the speed of light. [63]
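Applying the definition just given, the phase velocity in each medium is simply c divided by its refractive index; the indices below are the approximate visible-light values quoted above.

```python
# Phase velocity v_p = c / n for the media listed in the text above.
C = 299_792_458  # speed of light in vacuum, m/s

for material, n in [("air", 1.0003), ("water", 1.3), ("glass", 1.5), ("diamond", 2.4)]:
    print(f"{material:8s}  n = {n:<6}  v_p ≈ {C / n / 1e6:,.0f} million m/s")
```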

In transparent materials, the refractive index generally is greater than 1, meaning that the phase velocity is less than c. In other materials, it is possible for the refractive index to become smaller than 1 for some frequencies; in some exotic materials it is even possible for the index of refraction to become negative. [64] The requirement that causality is not violated implies that the real and imaginary parts of the dielectric constant of any material, corresponding respectively to the index of refraction and to the attenuation coefficient, are linked by the Kramers–Kronig relations. [65] In practical terms, this means that in a material with refractive index less than 1, the absorption of the wave is so quick that no signal can be sent faster than c.

A pulse with different group and phase velocities (which occurs if the phase velocity is not the same for all the frequencies of the pulse) smears out over time, a process known as dispersion. Certain materials have an exceptionally low (or even zero) group velocity for light waves, a phenomenon called slow light, which has been confirmed in various experiments. [66] [67] [68] [69] The opposite, group velocities exceeding c, has also been shown in experiment. [70] It should even be possible for the group velocity to become infinite or negative, with pulses travelling instantaneously or backwards in time. [71]

None of these options, however, allow information to be transmitted faster than c. It is impossible to transmit information with a light pulse any faster than the speed of the earliest part of the pulse (the front velocity). It can be shown that this is (under certain assumptions) always equal to c. [71]

It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than c). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted. [72]

The speed of light is of relevance to communications: the one-way and round-trip delay times are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.

Small scales

In supercomputers, the speed of light imposes a limit on how quickly data can be sent between processors. If a processor operates at 1 gigahertz, a signal can travel only a maximum of about 30 centimetres (1 ft) in a single cycle. Processors must therefore be placed close to each other to minimize communication latencies; this can cause difficulty with cooling. If clock frequencies continue to increase, the speed of light will eventually become a limiting factor for the internal design of single chips. [73] [74]
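The 30 cm figure is simply c divided by the clock frequency, an upper bound on how far any signal can travel in one cycle:

```python
# Farthest a signal can travel in one clock cycle: c / f (an upper bound;
# on-chip signals are slower still).
C = 299_792_458  # m/s

for f_ghz in (1, 3, 10):
    print(f"{f_ghz} GHz: at most {C / (f_ghz * 1e9) * 100:.1f} cm per cycle")
```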

Large distances on Earth

Given that the equatorial circumference of the Earth is about 40 075 km and that c is about 300 000 km/s , the theoretical shortest time for a piece of information to travel half the globe along the surface is about 67 milliseconds. When light is travelling around the globe in an optical fibre, the actual transit time is longer, in part because the speed of light is slower by about 35% in an optical fibre, depending on its refractive index n. [Note 10] Furthermore, straight lines rarely occur in global communications situations, and delays are created when the signal passes through an electronic switch or signal regenerator. [76]
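The 67 ms figure can be reproduced directly, along with the corresponding fibre transit time under the assumption of a typical refractive index of about 1.5 (light about 35% slower, as noted above):

```python
# Reproducing the half-globe transit-time estimate from the paragraph above.
C_KM_S = 299_792.458
HALF_CIRCUMFERENCE_KM = 40_075 / 2

t_vacuum = HALF_CIRCUMFERENCE_KM / C_KM_S
t_fibre  = HALF_CIRCUMFERENCE_KM / (C_KM_S / 1.5)   # assumes n ≈ 1.5 in the fibre
print(f"in free space:    {t_vacuum * 1000:.0f} ms")  # ≈ 67 ms
print(f"in optical fibre: {t_fibre * 1000:.0f} ms")   # ≈ 100 ms
```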

Spaceflights and astronomy

Similarly, communications between the Earth and spacecraft are not instantaneous. There is a brief delay from the source to the receiver, which becomes more noticeable as distances increase. This delay was significant for communications between ground control and Apollo 8 when it became the first manned spacecraft to orbit the Moon: for every question, the ground control station had to wait at least three seconds for the answer to arrive. [77] The communications delay between Earth and Mars can vary between five and twenty minutes depending upon the relative positions of the two planets. [78] As a consequence of this, if a robot on the surface of Mars were to encounter a problem, its human controllers would not be aware of it until at least five minutes later, and possibly up to twenty minutes later; it would then take a further five to twenty minutes for instructions to travel from Earth to Mars.

Receiving light and other signals from distant astronomical sources can even take much longer. For example, it has taken 13 billion (13 × 10 9 ) years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra Deep Field images. [79] [80] Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old. [79] The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself.

Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. [81] A light-year is the distance light travels in one year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away. [82]

Distance measurement

Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300 000 kilometres ( 186 000 mi ) in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging Experiment, radar astronomy and the Deep Space Network determine distances to the Moon, [83] planets [84] and spacecraft, [85] respectively, by measuring round-trip transit times.
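The round-trip principle can be written in a couple of lines; the sketch below is generic (not the processing performed by any real radar or GPS receiver) and simply shows why timing precision matters so much.

```python
c = 299_792_458.0  # speed of light, m/s

def radar_distance(round_trip_s: float) -> float:
    """Target distance from an echo: half the round-trip time multiplied by c."""
    return c * round_trip_s / 2

# A 1 microsecond timing error corresponds to roughly 150 m of range error:
print(radar_distance(1e-6))            # ~150 m
print(radar_distance(2e-3) / 1000)     # a 2 ms echo -> target ~300 km away
```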

High-frequency trading

The speed of light has become important in high-frequency trading, where traders seek to gain minute advantages by delivering their trades to exchanges fractions of a second ahead of other traders. For example, traders have been switching to microwave communications between trading hubs, because of the advantage which microwaves travelling at near to the speed of light in air have over fibre optic signals, which travel 30–40% slower. [86] [87]

There are different ways to determine the value of c. One way is to measure the actual speed at which light waves propagate, which can be done in various astronomical and Earth-based setups. However, it is also possible to determine c from other physical laws where it appears, for example, by determining the values of the electromagnetic constants ε0 and μ0 and using their relation to c. Historically, the most accurate results have been obtained by separately determining the frequency and wavelength of a light beam, with their product equalling c.

Astronomical measurements

Outer space is a convenient setting for measuring the speed of light because of its large scale and nearly perfect vacuum. Typically, one measures the time needed for light to traverse some reference distance in the solar system, such as the radius of the Earth's orbit. Historically, such measurements could be made fairly accurately, compared to how accurately the length of the reference distance is known in Earth-based units. It is customary to express the results in astronomical units (AU) per day.

Ole Christensen Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. [89] [90] When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.
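Rømer's 22-minute figure can be turned into a speed with nothing more than the diameter of the Earth's orbit. The sketch below uses the modern value of the astronomical unit, so it illustrates the reasoning rather than reconstructing his actual numbers.

```python
# Rømer-style estimate: light crosses the diameter of Earth's orbit (2 AU)
# in about 22 minutes (the modern value is closer to 16 min 40 s).
AU_km = 149_597_870.7
diameter_km = 2 * AU_km
t_s = 22 * 60

print(f"{diameter_km / t_s:.0f} km/s")   # ~227,000 km/s, below the true 299,792 km/s
```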

Another method is to use the aberration of light, discovered and explained by James Bradley in the 18th century. [91] This effect results from the vector addition of the velocity of light arriving from a distant source (such as a star) and the velocity of its observer. A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position. Since the direction of the Earth's velocity changes continuously as the Earth orbits the Sun, this effect causes the apparent position of stars to move around. From the angular difference in the position of stars (maximally 20.5 arcseconds) [92] it is possible to express the speed of light in terms of the Earth's velocity around the Sun, which with the known length of a year can be converted to the time needed to travel from the Sun to the Earth. In 1729, Bradley used this method to derive that light travelled 10 210 times faster than the Earth in its orbit (the modern figure is 10 066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth. [91]
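A small sketch of Bradley's reasoning, assuming the simple non-relativistic relation tan(θ) = v/c between the maximum aberration angle and the ratio of the Earth's orbital speed to the speed of light, and a circular orbit for the light-time conversion:

```python
import math

# Aberration of light: tan(theta) = v_earth / c at maximum displacement.
theta_rad = math.radians(20.5 / 3600)      # 20.5 arcseconds in radians
ratio = 1 / math.tan(theta_rad)            # c expressed in units of Earth's orbital speed
print(f"c is about {ratio:.0f} times Earth's orbital speed")

# For a near-circular orbit of radius 1 AU, the Earth covers a distance of
# 1 AU in T / (2*pi), so the Sun-Earth light time is that time divided by c/v.
year_s = 365.25 * 86_400
light_time_s = (year_s / (2 * math.pi)) / ratio
print(f"Sun-Earth light time: {light_time_s/60:.1f} minutes")   # ~8.3 min
```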

Astronomical unit

An astronomical unit (AU) is approximately the average distance between the Earth and Sun. It was redefined in 2012 as exactly 149 597 870 700 m. [93] [94] Previously the AU was not defined in terms of the International System of Units but in terms of the gravitational force exerted by the Sun in the framework of classical mechanics. [Note 11] The current definition uses the recommended value in metres for the previous definition of the astronomical unit, which was determined by measurement. [93] This redefinition is analogous to that of the metre and likewise has the effect of fixing the speed of light to an exact value in astronomical units per second (via the exact speed of light in metres per second).

Previously, the inverse of c expressed in seconds per astronomical unit was measured by comparing the time for radio signals to reach different spacecraft in the Solar System, with their position calculated from the gravitational effects of the Sun and various planets. By combining many such measurements, a best fit value for the light time per unit distance could be obtained. For example, in 2009, the best estimate, as approved by the International Astronomical Union (IAU), was: [96] [97] [98]

light time for unit distance: τ = 499.004 783 836 (10) s
c = 0.002 003 988 804 10 (4) AU/s = 173.144 632 674 (3) AU/day

The relative uncertainty in these measurements is 0.02 parts per billion (2 × 10⁻¹¹), equivalent to the uncertainty in Earth-based measurements of length by interferometry. [99] Since the metre is defined to be the length travelled by light in a certain time interval, the measurement of the light time in terms of the previous definition of the astronomical unit can also be interpreted as measuring the length of an AU (old definition) in metres. [Note 12]
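As a quick consistency check on the figures quoted above, the three values are simply unit conversions of one another:

```python
tau = 499.004783836          # light time for unit distance, seconds per AU

c_au_per_s = 1 / tau         # speed of light in AU per second
c_au_per_day = 86_400 / tau  # speed of light in AU per day

print(f"{c_au_per_s:.15f} AU/s")     # ~0.002003988804...
print(f"{c_au_per_day:.9f} AU/day")  # ~173.144632674
```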

Time of flight techniques

A method of measuring the speed of light is to measure the time needed for light to travel to a mirror at a known distance and back. This is the working principle behind the Fizeau–Foucault apparatus developed by Hippolyte Fizeau and Léon Foucault.

The setup as used by Fizeau consists of a beam of light directed at a mirror 8 kilometres (5 mi) away. On the way from the source to the mirror, the beam passes through a rotating cogwheel. At a certain rate of rotation, the beam passes through one gap on the way out and another on the way back, but at slightly higher or lower rates, the beam strikes a tooth and does not pass through the wheel. Knowing the distance between the wheel and the mirror, the number of teeth on the wheel, and the rate of rotation, the speed of light can be calculated. [100]
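A sketch of the cogwheel arithmetic. The baseline, tooth count, and rotation rate below are illustrative round figures close to those usually quoted for Fizeau's setup, not his recorded data.

```python
# Fizeau cogwheel: the first eclipse occurs when, during the light's round
# trip, the wheel rotates by half a tooth spacing so that a tooth blocks the
# returning beam. With N teeth there are 2N alternating teeth and gaps, and
# that rotation takes 1 / (2 * N * f) seconds at f revolutions per second.
# Setting 2*d/c = 1/(2*N*f) gives c = 4*d*N*f.

d = 8_633    # one-way distance to the mirror, m (illustrative)
N = 720      # number of teeth on the wheel
f = 12.6     # rotation rate at the first eclipse, rev/s (illustrative)

c_estimate = 4 * d * N * f
print(f"{c_estimate/1000:.0f} km/s")   # ~313,000 km/s, near Fizeau's 315,000 km/s
```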

The method of Foucault replaces the cogwheel with a rotating mirror. Because the mirror keeps rotating while the light travels to the distant mirror and back, the light is reflected from the rotating mirror at a different angle on its way out than it is on its way back. From this difference in angle, the known speed of rotation and the distance to the distant mirror the speed of light may be calculated. [101]

Nowadays, using oscilloscopes with time resolutions of less than one nanosecond, the speed of light can be directly measured by timing the delay of a light pulse from a laser or an LED reflected from a mirror. This method is less precise (with errors of the order of 1%) than other modern techniques, but it is sometimes used as a laboratory experiment in college physics classes. [102] [103] [104]

Electromagnetic constants

An option for deriving c that does not directly depend on a measurement of the propagation of electromagnetic waves is to use the relation between c and the vacuum permittivity ε0 and vacuum permeability μ0 established by Maxwell's theory: c² = 1/(ε0μ0). The vacuum permittivity may be determined by measuring the capacitance and dimensions of a capacitor, whereas the value of the vacuum permeability is fixed at exactly 4π × 10⁻⁷ H⋅m⁻¹ through the definition of the ampere. Rosa and Dorsey used this method in 1907 to find a value of 299 710 ± 22 km/s. [105] [106]
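The relation c² = 1/(ε0μ0) can be checked numerically; the sketch below uses the pre-2019 exact value of μ0 and the corresponding measured ε0, consistent with the definitions described above.

```python
import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (exact before 2019)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.0f} m/s")          # ~299,792,458 m/s
```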

Cavity resonance

Another way to measure the speed of light is to independently measure the frequency f and wavelength λ of an electromagnetic wave in vacuum. The value of c can then be found by using the relation c = fλ. One option is to measure the resonance frequency of a cavity resonator. If the dimensions of the resonance cavity are also known, these can be used to determine the wavelength of the wave. In 1946, Louis Essen and A.C. Gordon-Smith established the frequency for a variety of normal modes of microwaves of a microwave cavity of precisely known dimensions. The dimensions were established to an accuracy of about ±0.8 μm using gauges calibrated by interferometry. [105] As the wavelength of the modes was known from the geometry of the cavity and from electromagnetic theory, knowledge of the associated frequencies enabled a calculation of the speed of light. [105] [107]

The Essen–Gordon-Smith result, 299 792 ± 9 km/s, was substantially more precise than those found by optical techniques. [105] By 1950, repeated measurements by Essen established a result of 299 792.5 ± 3.0 km/s. [108]

A household demonstration of this technique is possible, using a microwave oven and food such as marshmallows or margarine: if the turntable is removed so that the food does not move, it will cook fastest at the antinodes (the points at which the wave amplitude is the greatest), where it will begin to melt. The distance between two such spots is half the wavelength of the microwaves; by measuring this distance and multiplying the wavelength by the microwave frequency (usually displayed on the back of the oven, typically 2450 MHz), the value of c can be calculated, "often with less than 5% error". [109] [110]
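A sketch of the marshmallow arithmetic, with an assumed antinode spacing of about 6 cm and the typical 2450 MHz oven frequency mentioned above:

```python
# In a standing-wave pattern, adjacent antinodes (melted spots) are half a
# wavelength apart, so lambda = 2 * spacing and c = lambda * f.
spacing_m = 0.061      # measured distance between melted spots (assumed value)
f_hz = 2.45e9          # typical magnetron frequency

wavelength = 2 * spacing_m
c_estimate = wavelength * f_hz
print(f"{c_estimate/1e8:.2f} x 10^8 m/s")   # ~2.99 x 10^8 m/s
```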

Interferometry

Interferometry is another method to find the wavelength of electromagnetic radiation for determining the speed of light. [Note 13] A coherent beam of light (e.g. from a laser), with a known frequency (f), is split to follow two paths and then recombined. By adjusting the path length while observing the interference pattern and carefully measuring the change in path length, the wavelength of the light (λ) can be determined. The speed of light is then calculated using the equation c = λf.

Before the advent of laser technology, coherent radio sources were used for interferometry measurements of the speed of light. [112] However, interferometric determination of wavelength becomes less precise as the wavelength increases, and the experiments were thus limited in precision by the long wavelength (~4 mm (0.16 in)) of the radio waves. The precision can be improved by using light with a shorter wavelength, but then it becomes difficult to directly measure the frequency of the light. One way around this problem is to start with a low-frequency signal whose frequency can be precisely measured, and from this signal progressively synthesize higher-frequency signals whose frequencies can then be linked to the original signal. A laser can then be locked to the frequency, and its wavelength can be determined using interferometry. [113] This technique was developed by a group at the National Bureau of Standards (NBS), which later became NIST. They used it in 1972 to measure the speed of light in vacuum with a fractional uncertainty of 3.5 × 10⁻⁹. [113] [114]

History of measurements of c (in km/s)
<1638 Galileo, covered lanterns inconclusive [115] [116] [117] : 1252 [Note 14]
<1667 Accademia del Cimento, covered lanterns inconclusive [117] : 1253 [118]
1675 Rømer and Huygens, moons of Jupiter 220 000 [90] [119] ‒27% error
1729 James Bradley, aberration of light 301 000 [100] +0.40% error
1849 Hippolyte Fizeau, toothed wheel 315 000 [100] +5.1% error
1862 Léon Foucault, rotating mirror 298 000 ± 500 [100] ‒0.60% error
1907 Rosa and Dorsey, EM constants 299 710 ± 30 [105] [106] ‒280 ppm error
1926 Albert A. Michelson, rotating mirror 299 796 ± 4 [120] +12 ppm error
1950 Essen and Gordon-Smith, cavity resonator 299 792.5 ± 3.0 [108] +0.14 ppm error
1958 K.D. Froome, radio interferometry 299 792.50 ± 0.10 [112] +0.14 ppm error
1972 Evenson et al., laser interferometry 299 792.4562 ± 0.0011 [114] ‒0.006 ppm error
1983 17th CGPM, definition of the metre 299 792.458 (exact) [88] exact, as defined

Until the early modern period, it was not known whether light travelled instantaneously or at a very fast finite speed. The first extant recorded examination of this subject was in ancient Greece. The ancient Greeks, Muslim scholars, and classical European scientists long debated this until Rømer provided the first calculation of the speed of light. Einstein's theory of special relativity postulated that the speed of light is the same regardless of one's frame of reference. Since then, scientists have provided increasingly accurate measurements.

Early history

Empedocles (c. 490–430 BC) was the first to propose a theory of light [121] and claimed that light has a finite speed. [122] He maintained that light was something in motion, and therefore must take some time to travel. Aristotle argued, to the contrary, that "light is due to the presence of something, but it is not a movement". [123] Euclid and Ptolemy advanced Empedocles' emission theory of vision, where light is emitted from the eye, thus enabling sight. Based on that theory, Heron of Alexandria argued that the speed of light must be infinite because distant objects such as stars appear immediately upon opening the eyes. [124] Early Islamic philosophers initially agreed with the Aristotelian view that light had no speed of travel. In 1021, Alhazen (Ibn al-Haytham) published the Book of Optics, in which he presented a series of arguments dismissing the emission theory of vision in favour of the now accepted intromission theory, in which light moves from an object into the eye. [125] This led Alhazen to propose that light must have a finite speed, [123] [126] [127] and that the speed of light is variable, decreasing in denser bodies. [127] [128] He argued that light is substantial matter, the propagation of which requires time, even if this is hidden from our senses. [129] Also in the 11th century, Abū Rayhān al-Bīrūnī agreed that light has a finite speed, and observed that the speed of light is much faster than the speed of sound. [130]

In the 13th century, Roger Bacon argued that the speed of light in air was not infinite, using philosophical arguments backed by the writing of Alhazen and Aristotle. [131] [132] In the 1270s, Witelo considered the possibility of light travelling at infinite speed in vacuum, but slowing down in denser bodies. [133]

In the early 17th century, Johannes Kepler believed that the speed of light was infinite since empty space presents no obstacle to it. René Descartes argued that if the speed of light were finite, the Sun, Earth, and Moon would be noticeably out of alignment during a lunar eclipse. Since such misalignment had not been observed, Descartes concluded the speed of light was infinite. Descartes speculated that if the speed of light were found to be finite, his whole system of philosophy might be demolished. [123] In his derivation of Snell's law, Descartes assumed that even though the propagation of light was instantaneous, the denser the medium, the faster light travelled. [134] Pierre de Fermat derived Snell's law using the opposing assumption: the denser the medium, the slower light travelled. Fermat also argued in support of a finite speed of light. [135]

First measurement attempts

In 1629, Isaac Beeckman proposed an experiment in which a person observes the flash of a cannon reflecting off a mirror about one mile (1.6 km) away. In 1638, Galileo Galilei proposed an experiment, with an apparent claim to having performed it some years earlier, to measure the speed of light by observing the delay between uncovering a lantern and its perception some distance away. He was unable to distinguish whether light travel was instantaneous or not, but concluded that if it were not, it must nevertheless be extraordinarily rapid. [115] [116] In 1667, the Accademia del Cimento of Florence reported that it had performed Galileo's experiment, with the lanterns separated by about one mile, but no delay was observed. The actual delay in this experiment would have been about 11 microseconds.

The first quantitative estimate of the speed of light was made in 1676 by Rømer. [89] [90] From the observation that the periods of Jupiter's innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when receding from it, he concluded that light travels at a finite speed, and estimated that it takes light 22 minutes to cross the diameter of Earth's orbit. Christiaan Huygens combined this estimate with an estimate for the diameter of the Earth's orbit to obtain an estimate of speed of light of 220 000 km/s , 26% lower than the actual value. [119]

In his 1704 book Opticks, Isaac Newton reported Rømer's calculations of the finite speed of light and gave a value of "seven or eight minutes" for the time taken for light to travel from the Sun to the Earth (the modern value is 8 minutes 19 seconds). [136] Newton queried whether Rømer's eclipse shadows were coloured; hearing that they were not, he concluded that the different colours travelled at the same speed. In 1729, James Bradley discovered stellar aberration. [91] From this effect he determined that light must travel 10 210 times faster than the Earth in its orbit (the modern figure is 10 066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth. [91]

Connections with electromagnetism

In the 19th century Hippolyte Fizeau developed a method to determine the speed of light based on time-of-flight measurements on Earth and reported a value of 315 000 km/s . [137] His method was improved upon by Léon Foucault who obtained a value of 298 000 km/s in 1862. [100] In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the ratio of the electromagnetic and electrostatic units of charge, 1/√(ε0μ0), by discharging a Leyden jar, and found that its numerical value was very close to the speed of light as measured directly by Fizeau. The following year Gustav Kirchhoff calculated that an electric signal in a resistanceless wire travels along the wire at this speed. [138] In the early 1860s, Maxwell showed that, according to the theory of electromagnetism he was working on, electromagnetic waves propagate in empty space [139] [140] [141] at a speed equal to the above Weber/Kohlrausch ratio, and drawing attention to the numerical proximity of this value to the speed of light as measured by Fizeau, he proposed that light is in fact an electromagnetic wave. [142]

"Luminiferous aether"

It was thought at the time that empty space was filled with a background medium called the luminiferous aether in which the electromagnetic field existed. Some physicists thought that this aether acted as a preferred frame of reference for the propagation of light and that it should therefore be possible to measure the motion of the Earth with respect to this medium, by measuring the isotropy of the speed of light. Beginning in the 1880s several experiments were performed to try to detect this motion, the most famous of which is the experiment performed by Albert A. Michelson and Edward W. Morley in 1887. [143] [144] The detected motion was always less than the observational error. Modern experiments indicate that the two-way speed of light is isotropic (the same in every direction) to within 6 nanometres per second. [145] Because of this experiment, Hendrik Lorentz proposed that the motion of the apparatus through the aether may cause the apparatus to contract along its length in the direction of motion, and he further assumed that the time variable for moving systems must also be changed accordingly ("local time"), which led to the formulation of the Lorentz transformation. Based on Lorentz's aether theory, Henri Poincaré (1900) showed that this local time (to first order in v/c) is indicated by clocks moving in the aether, which are synchronized under the assumption of constant light speed. In 1904, he speculated that the speed of light could be a limiting velocity in dynamics, provided that the assumptions of Lorentz's theory are all confirmed. In 1905, Poincaré brought Lorentz's aether theory into full observational agreement with the principle of relativity. [146] [147]

Special relativity

In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time. [148] [149]

Increased accuracy of c and redefinition of the metre and second

In the second half of the 20th century, much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. These were aided by new, more precise, definitions of the metre and second. In 1950, Louis Essen determined the speed as 299 792.5 ± 3.0 km/s, using cavity resonance. [108] This value was adopted by the 12th General Assembly of the Radio-Scientific Union in 1957. In 1960, the metre was redefined in terms of the wavelength of a particular spectral line of krypton-86, and, in 1967, the second was redefined in terms of the hyperfine transition frequency of the ground state of caesium-133. [150]

In 1972, using the laser interferometer method and the new definitions, a group at the US National Bureau of Standards in Boulder, Colorado determined the speed of light in vacuum to be c = 299 792 456.2 ± 1.1 m/s. This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre. [Note 15] [114] As similar experiments found comparable results for c, the 15th General Conference on Weights and Measures in 1975 recommended using the value 299 792 458 m/s for the speed of light. [153]

Defining the speed of light as an explicit constant

In 1983 the 17th meeting of the General Conference on Weights and Measures (CGPM) found that wavelengths from frequency measurements and a given value for the speed of light are more reproducible than the previous standard. They kept the 1967 definition of the second, so the caesium hyperfine frequency would now determine both the second and the metre. To do this, they redefined the metre as: "The metre is the length of the path travelled by light in vacuum during a time interval of 1/ 299 792 458 of a second." [88] As a result of this definition, the value of the speed of light in vacuum is exactly 299 792 458 m/s [154] [155] and has become a defined constant in the SI system of units. [13] Improved experimental techniques that, prior to 1983, would have measured the speed of light no longer affect the known value of the speed of light in SI units, but instead allow a more precise realization of the metre by more accurately measuring the wavelength of krypton-86 and other light sources. [156] [157]

In 2011, the CGPM stated its intention to redefine all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. It proposed a new, but completely equivalent, wording of the metre's definition: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299 792 458 when it is expressed in the SI unit m s⁻¹." [158] This was one of the changes that was incorporated in the 2019 redefinition of the SI base units, also termed the New SI.


5. Exosphere

Unlike other layers, which are mostly distinguishable from one another, it is hard to say how far the exosphere extends from the surface of the planet. Its outer boundary is sometimes placed at around 100,000 km, but it can extend up to 190,000 km above sea level. The air here is extremely thin, and conditions are closer to those found once we leave the Earth's atmosphere entirely.


Arrhenius – Earth To Boil

And 115 years later, Stephen Hawking was parroting the same nonsense.

Arrhenius made a fundamental error in that he didn't recognize that H2O is a greenhouse gas. Knut Angstrom pointed this out in 1901, and showed experimentally that adding CO2 has very little impact on climate.

Rasool and Schneider confirmed this in 1971.

I generated this graph using the RRTM-LW model, which shows how little impact CO2 and CH4 have on earth’s radiative balance. Even a huge increase in CO2 or CH4 has minimal impact on climate. H2O is far and away the dominant greenhouse gas on earth.

48 Responses to Arrhenius – Earth To Boil

How can the Earth warm to a boil like Venus with an atmospheric pressure of 14.7 psi and a CO2 level of only 0.04%? Venus is hotter than Mercury because it has an atmosphere, a thick atmosphere. So you can't say Venus is hotter because it's closer to the Sun. Basic physics tells us that the high air pressure on Venus generates the heat.

The heat on Venus is totally accounted for by gas laws, not any GHG effect.
Yes, 100 bar – 450 °C! The IR reaching Venus's surface is very small (very high albedo, as pointed out) even though it is closer to the Sun. Mercury has no atmosphere, therefore its surface temperature is meaningless in terms of gases.

So why isn’t the nitrogen in the cylinders in my lab (2,000 psi) hot?
Near the surface of Venus the atmosphere certainly doesn’t obey any gas laws (not even the non-ideal gas ones) since the CO2 is super-critical.

Because the heat generated to get that nitrogen into the cylinder was dissipated during the work expended to get it there. Also, your gas cylinder is not a massive, gravitational body, continuously rotating in space–orbiting close to an incandescent Sun–which acts as a continuous forcing pump. Let the nitrogen quickly out of your cylinder and see the freezing cold (equal, opposite reaction) that is generated. Do you understand fluid dynamics and head pressure? By your logic, cylinders full of CO2, exposed to a heat source, would be hotter than those full of nitrogen or other gasses.

Exactly my point. RealUniverse, to whom I was responding, asserted that the temperature on Venus is "totally accounted for by gas laws", citing "100 bar – 450 °C". Like you, I know that to be incorrect.

I’m still not clear on how high pressure alone can account for the heat since after compression it would cool. Doesn’t atmosphere act more like an insulating blanket, keeping heat in?

When I pump my car tires, I fill them to about 40 PSI. The friction caused adds some warmth, and yes that warmth escapes.

The atmospheric pressure on Venus is ~1330 PSI, or about 90 times greater than the pressure at sea level here on Mama Gaia, which is about 14.5 PSI.

There is no blanket high in the stratosphere, nor at the top of the troposphere, where the predicted carbon dioxide-caused “hot spot” at the equator was supposed to be proof of the “greenhouse effect.” It does not exist. Heat is dissipating [escaping] into space as it always has.

CAGW is a massive fail. It is long past the time to throw it on top of the mountain of failed scientific hypotheses.

It isn’t the increased pressure itself that does it — it’s the increased “optical thickness” that goes along with it. That is, for a given atmospheric composition, twice the “amount” of atmosphere results in twice the surface pressure, and also twice the absorption of surface radiation.

When an atmosphere is more opaque to longwave surface radiation than it is to shortwave solar radiation (and this is the case for all planetary atmospheres that we know about), it generally gains energy near (or at) the bottom, and loses it from near the top. This is what creates the negative lapse rate (temperature decreasing with height) that we usually see.

But basic physics tells us that if the magnitude of the negative lapse rate exceeds adiabatic, convection will occur, rapidly taking warmer air from lower heights to higher heights, where it can radiate to space much more readily. Lapse rates larger than adiabatic are therefore called “unstable”.

This phenomenon sets a firm upper limit on how much “greenhouse warming” can occur in a given atmosphere. Fundamentally, it is the product of the adiabatic lapse rate and the “emission height” of the atmosphere.
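A minimal numerical sketch of the argument in this comment, treating the surface temperature as an effective emission temperature plus the adiabatic lapse rate times an assumed emission height. The numbers are illustrative round figures, not output from any climate model.

```python
# Crude "emission height" picture described above:
# T_surface ~ T_emission + lapse_rate * emission_height
T_emission_K = 255         # Earth's effective radiating temperature (approx.)
lapse_rate_K_per_km = 6.5  # typical tropospheric lapse rate
emission_height_km = 5     # assumed mean emission height

T_surface = T_emission_K + lapse_rate_K_per_km * emission_height_km
print(T_surface)           # ~287.5 K, close to the observed ~288 K global mean
```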

While it is certainly possible that increasing the concentration of absorbing gasses like CO2 could raise the emission height slightly, this is a marginal effect. Venus has so much more atmosphere than Earth that its emission height is tens of times higher than on earth. So it is not possible to get a “runaway” warming from the type of CO2 increases we are seeing.

People who claim we could face runaway warming simply do not understand the underlying physical mechanisms.

Ed Bo,
Good explanation.
I would add that if one looks at the atmosphere of Venus from the 1 bar altitude up, the temperature profile looks remarkably similar to that of Earth's from sea level up. To me, this is an indication that the hot surface temperature is largely governed by the density of the Venusian atmosphere and that the high concentration of CO2 is irrelevant. The Venusian atmosphere with an equivalent mass of nitrogen substituted for the CO2 would be similarly hot.

I don’t quite agree. You do need the absorption of longwave surface radiation to be greater than that for shortwave solar radiation to set up the negative lapse rate in the first place. Nitrogen does not provide that.

But the fact that a lapse rate larger than adiabatic cannot persist puts a firm upper limit on the warming effect. Many alarmists do not understand this.

Archie, the surface heat is virtually constant at about 460 C, night and day, pole to equator. At the highest point on Venus the temperature is about 380 C, so it is a similar response to Earth. It certainly doesn't have runaway global warming as is claimed, since the heat is constant. On Venus you have to get to an elevation of at least 50 km to approach an Earth type pressure.

So if the CO2 is trapping the heat, why isn't it increasing?

Empirical testing/refutation of CAGW (catastrophic anthropogenic global warming) is of the essence, of course.

Very important empirical results have been produced that refute CAGW, amply ferreted out and recorded by Real Climate Science.

However, there is another angle from which the subject needs to be approached: the chemistry and physics of CO2.

I have criticised my fellow sceptics for not paying enough attention to the physics and chemistry of CO2, which holds the key to direct refutation of CAGW.

I am not natural scientist enough to get involved in, let alone spearhead, such efforts; instead I am very slowly accumulating insights from that front, which actually seems to rely in large measure on long-standing knowledge. One wonders why these scientific findings did not act as deadly circuit breakers for CAGW.

Hey Georg! Facts will not refute political and emotional beliefs. CAGW was never about science.

Some countries even still have a decent news channel left.

Geologist and earth scientist Ian Plimer says the globe is not facing a climate emergency, telling Sky News that “we are actually still living in an ice age.”

Thanks Robertv. I get a big grin on my face every time I watch Mr. Plimer. Conservatives know the data. Alarmunists only know the propaganda talking points. CAGW is a dead hypedpothesis walking.

Sky News host Chris Kenny.

In a way, I am sympathetic to your view. People over here in Germany are trapped in a state of emotional conditioning such that it is enough to claim something is hurting the environment, and they will accept it without checking. In fact, they will accept any pseudo-ecological nonsense.

In this way, the Greens have managed to destroy environmental consciousness/discernment/competence/concern in Germany and replaced it with a thoroughly unecological religion.

However, it is fortunate that there are still people like Tony who care about ecological facts — and the scientific method behind it.

Relentless dissemination of the correct method of dealing with environmental issues is an important factor in reverting to proper respect for science.

Alarmism is full of contradictions and beginning to hurt people in their real lives.

Most people experience “catastrophic global warming” in a vicarious manner. But even here in Germany, their alarmist convictions prove very thin, the moment alarmism becomes a palpable event: they quickly turn against it, when there is suddenly the noise of a wind turbine to be heard in their living room.

For most, believing in CGW is a convenient way to be like everybody else, which in itself makes life easier (you can’t get closer to absolute truth than believing what everybody else believes, and you save the effort of thinking for yourself).

Alarmism is going to hurt people increasingly, and this growing pain will prove just how shallow alarmist convictions are. Political forces will arise that take up and reinforce this trend and the dominoes will start to tumble.

Manabe & Möller wrote a great article about gases active in the infrared spectrum and the heat balance of the atmosphere:

See also Kondratyev – Radiation in the Atmosphere – chap. 11, Temperature variation in the atmosphere due to radiative heat exchange.

All these studies concluded that the CO2 cools the atmosphere (see the chapter on heat budget in the Manabe & Möller article, p. 525).
The only area where some could disagree (Plass and Goody, for example) was at the tropopause at tropical latitudes (see the Manabe & Möller article).

Later, the proponents of GCM models told us that the warming effect of the CO2 should be very important in this same area and that this should be observed with the increase of the CO2 concentration in the atmosphere.

Despite the observed CO2 concentration increase in the last 40 years, 40 years of satellite data (from UAH) showed no such warming in the tropical tropopause area.

Yep! The perpetual hot spot in the upper troposphere over the tropics, which the assumptions and physics of the climate models that project out-of-control warming demand, has never shown up. And still they go on as if the projections of those models are correct.

The problem is not that water vapor is a GHG and absorbs a lot; the problem is that albedo is modulated by water condensation. Albedo is not a fixed value at 0.305, it varies during the day to reflect solar radiation as needed. The entire planet is albedo controlled, and NO gas is going to do anything about that. More heating gives exponentially more evaporation, which results in higher mean cloud coverage and higher precipitation turnover.

Shading is the first control; latent heat is second. Latent heat can go over 1 million watts/m^2 (at the highest rain rate), and it can't be radiated away at that rate, so the heat moves into adjacent "grid cells". The total size explains the total thunderhead area compared to the raining area (about 500× the area at 1″ per hour).

Another scientist who is agreed with the Arrhenius story was Albert Einstein.
Albert Einstein, in his 1917 paper:

says this about radiative heating of a gas:

During absorption and emission of radiation there is also present a transfer of momentum to the molecules. This means that just the interaction of radiation and molecules leads to a velocity distribution of the latter. This must surely be the same as the velocity distribution which molecules acquire as the result of their mutual interaction by collisions, that is, it must coincide with the Maxwell distribution. We must require that the mean kinetic energy which a molecule acquires per degree of freedom in a Planck radiation field of temperature T be kT/2; this must be valid regardless of the nature of the molecules and independent of frequencies which the molecules absorb and emit.

“Regardless if the nature of the molecules and independent of the frequencies at which molecules absorb and emit.”

Only now are people coming to understand what Einstein meant – that the absorption-emission phenomena at the heart of the CO2 warming conjecture account for a small fraction only of atmospheric heat dynamics. The majority of heat movement is by Maxwellian momentum transfer interactions, as well as convection and evaporation.

Indeed, a fundamental article from Einstein.

You surely meant “who disagreed” …

The molecule which absorbs a photon becomes excited, and can then re-radiate some short time afterwards. This is elastic in as much as no energy is lost and all the energy comes out in the same form that it went in. However entropy increases because the new radiation goes in some random direction, and the general name for this is “scattering”.

If you have an infra-red radiator (i.e. Earth) in a pure vacuum, then all radiation will travel away from the radiator. If the same infra-red radiator is surrounded by a scattering “blanket” of atmosphere, then it will radiate slightly less efficiently because scattering changes the direction of some radiation back towards the original radiator. NOTE: the radiator still cools down, thermodynamics does what it says on the box and heat moves away from the hot radiator towards the cool surrounding space. However, it simply cools down somewhat less efficiently.

However, if the molecule absorbing the photon (the excited molecule) has a collision with another molecule before it can re-radiate then it might transfer that energy into heat within the gas and never radiate out the photon. This is energy in the photon being converted to kinetic energy which then disperses out into your standard distribution of energy amongst the mechanical degrees of freedom. The reverse process can also happen: the gas does slightly radiate as by random chance the exact set of collisions manages to achieve excitation in a molecule (reverse energy conversion). This process is inelastic and also entropy increasing. Therefore in aggregate the Earth will radiate at somewhat longer wavelength than it would otherwise. How significant this effect becomes depends on whether the excited molecule is more likely to have a collision, or more likely to re-radiate. At low atmospheric density, collisions are rare so most scattering is elastic.

All of these effects are quite small and largely irrelevant. The lion’s share of the surface heat from Earth is moved by convection of water which carries the latent heat of evaporation. This has a number of effects, lifting surface heat up to the tropopause (thus increasing the surface area of the radiator, allowing it to cool more efficiently) and also shunting heat away from the place where the sun is shining and sideways to cooler regions (thus also increasing the surface area of the radiator, and redistributing heat to create a much larger part of the Earth’s surface comfortable for life).

We already have an experimental test of what temperature on Earth’s surface would be like without any atmosphere. You might be surprised but it’s nothing like what the so called “climate scientists” say it would be. The hot patch would be 400K (very hot) and the cold regions down around 120K (freezing cold), with almost zero percent of the surface being comfortable for life. This is a solid empirical result having been measured from the surface of the moon.
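The "hot patch" figure quoted for an airless body can be cross-checked with a one-line Stefan–Boltzmann estimate; the albedo below is an assumed lunar-like value, so this is a rough sanity check rather than a lunar measurement.

```python
# Subsolar equilibrium temperature of an airless surface:
# sigma * T^4 = S * (1 - albedo), solved for T.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361                 # solar constant at Earth/Moon distance, W/m^2
albedo = 0.11            # assumed, roughly lunar

T = (S * (1 - albedo) / sigma) ** 0.25
print(f"{T:.0f} K")      # ~382 K, in the ballpark of the ~400 K quoted above
```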

Thus, in terms of life on Earth, what matters almost entirely is the heat transport and temperature stabilizing effect of water convection. The infra-red scattering effect (while real) is two tenths of bugger all in the scheme of things. Water is highly non-linear, so a very small increase in the surface temperature of the sea produces a massive increase in evaporation. For this reason the surface temperature of the Earth's oceans cannot get significantly hotter than 303K and we have excellent empirical measured data for that as well, coming from the Argo buoys.

The molecule which absorbs a photon becomes excited, and can then re-radiate some short time afterwards. This is elastic in as much as no energy is lost and all the energy comes out in the same form that it went in.

Not true; it's inelastic: in the case of fluorescence, the energy of the emitted photon is lower than the energy of the exciting photon.


Cave Deposits Reveal Permafrost Thawed 400,000 Years Ago, When Temperatures Were Not Much Higher Than Today

Researchers from the US and Canada found evidence in mineral deposits from caves in Canada that permafrost thawing took place as recently as 400,000 years ago, in temperatures not much warmer than today. But they did not find evidence the thawing caused the release of predicted levels of carbon dioxide stored in the frozen terrain. Credit: Jeremy Shakun, Boston College

Cave deposits reveal Pleistocene permafrost thaw, absent predicted levels of CO2 release.

The vast frozen terrain of Arctic permafrost thawed several times in North America within the past 1 million years when the world’s climate was not much warmer than today, researchers from the United States and Canada report in today’s edition of Science Advances.

Arctic permafrost contains twice as much carbon as the atmosphere. But the researchers found that the thawings — which expel stores of carbon dioxide sequestered deep in frozen vegetation — were not accompanied by increased levels of CO2 in the atmosphere. The surprising finding runs counter to predictions that as the planet warms, the volume of these natural carbon stores can add significantly to CO2 produced by human activity, a combination that could increase the climatological toll of greenhouse gases.

The team of researchers explored caves in Canada to look for clues left in speleothems — mineral deposits accumulated over thousands of years — that might help answer when in the past Canadian permafrost thawed and how much warmer the climate was, said Boston College Associate Professor of Earth and Environmental Sciences Jeremy Shakun, a co-author of the study.

The team was following up on a 2020 study that dated samples from caves in Siberia. That research found records of permafrost thawing until about 400,000 years ago, but little since then. Since the study focused on only a single region, the researchers sought to expand the search for a more representative view of the Arctic region, said Shakun, a paleoclimatologist.

During the course of two years, the researchers dated 73 cave deposits from several now-frozen caves in Canada. The deposits offer tell-tale clues to climatological history because they only form when the ground is thawed and water is dripping in a cave. By dating the age of the speleothems, the scientists were able to determine when in the past the regions had thawed.

Shakun said the results are very similar to the earlier Siberian study, suggesting that Arctic permafrost became more stable over the ice age cycles of the past couple million years.

But he said the team was surprised to find that many of the speleothems from the high Arctic turned out to be much younger than expected. Their relatively young ages mean permafrost thawing formed mineral deposits when the world was not much warmer than it is today.

Sediment cores from the Arctic Ocean hint at what might have been going on then.

“The summers were ice free before 400,000 years ago,” Shakun said. “That would have heated the land up more during the summer and insulated it under deeper snows in the winter, causing the ground to thaw.”

That theory is cause for concern if correct, he added. “Half of the Arctic sea ice has disappeared since I was born, so this may be making the permafrost more vulnerable again.”

Second, records of the ancient atmosphere show that greenhouse gas levels were not any higher during the past intervals of permafrost thaw we identified — this is surprising because the standard view is that massive amounts of carbon should be released to the atmosphere when the permafrost thaws.

Shakun said the findings call for further research to understand what allowed the permafrost to thaw at times in the past when it was not much warmer, and why there is little evidence for a big carbon release at those times.

“These findings do not fit easily with typical global warming predictions for the future,” said Shakun. “They may mean that scientists have overlooked processes that will prevent permafrost thaw from causing a big spike in CO2 going forward. On the other hand, it might just be that the gradual thawing events in the past were slow enough that the CO2 they released could be absorbed by the oceans or plants elsewhere – a situation that may not apply to the much faster warming today.”

Reference: “Increasing Pleistocene permafrost persistence and carbon cycle conundrums inferred from Canadian speleothems” by Nicole Biller-Celander, Jeremy D. Shakun, David McGee, Corinne I. Wong, Alberto V. Reyes, Ben Hardt, Irit Tal, Derek C. Ford and Bernard Lauriol, 28 April 2021, Science Advances.
DOI: 10.1126/sciadv.abe5799

In addition to Shakun, co-authors of the report included David McGee, Ben Hardt and Irit Tal, of MIT; Alberto Reyes, of the University of Alberta; Derek Ford, of McMaster University; Bernard Lauriol, of the University of Ottawa; former BC graduate student Nicole Biller-Celander; and geologist Corinne Wong, formerly of BC.


Atmosphere Topics

Greenhouse warming is enhanced during nights when the sky is overcast. Heat energy from the Earth can be trapped by clouds, leading to higher temperatures compared to nights with clear skies. The air is not allowed to cool as much with overcast skies. Under partly cloudy skies, some heat is allowed to escape and some remains trapped. Clear skies allow for the most cooling to take place.

