Saturday, August 27, 2016

The “Earthlike” Planet of Proxima Centauri

The steady improvement of techniques for detecting extrasolar planets has borne fruit—and a low-hanging fruit at that.  The nearest star to the Solar System, Proxima Centauri, has been found to have a planet (imaginatively named Proxima Centauri B) with roughly Earth-like size and even Earth-like temperatures.  This is fascinating news, but what does it really mean?  Is this really another Earth?

The press has been all over this story, and some of what has been written makes sense.  But what reliable knowledge do we have?

First, this is indeed a rocky terrestrial-type planet.  It is the closest known planet to the Solar System, and indeed orbits the star that is the Sun’s nearest neighbor.  Interstellar distances, even for nearest neighbors, are huge: Proxima Centauri (let’s call it Proxy) is 4.24 light years away from us, a dizzying 270,000 Astronomical Units (1 AU is the mean distance of Earth from the Sun).  That’s 64,000 times as far as Jupiter is at its closest to Earth. 

The mass of the planet is estimated to be at least 1.3 times the mass of Earth (the radial-velocity technique that detected it yields only a minimum mass).  Its radius and density are unknown.  It orbits once every 11.2 Earth days at an average distance of 0.05 AU from Proxy, following a path whose eccentricity is so far unknown.  Proxy is a faint red Main Sequence star, of spectral class M6, with a mass of about 0.123 times the mass of our Sun and a luminosity only 0.17% of the Sun’s—but almost all of that light (about 86% of it) is infrared (heat) radiation invisible to the human eye.  It is a member of the family of flare stars, undependable neighbors that emit powerful and unpredictable flares.  The star’s photosphere (its visible surface) is at a mere 3000 kelvins, cool enough for “clouds” of refractory metals and oxides to form. 
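Those numbers permit a quick sanity check.  A back-of-envelope sketch (using the 0.17% luminosity and 0.05 AU distance quoted above, and assuming zero albedo for simplicity) shows why "Earth-like temperatures" is at least plausible:

```python
# Stellar flux at Proxy b and its blackbody equilibrium temperature,
# from L = 0.0017 L_sun and a = 0.05 AU (both quoted above).
# Zero albedo is assumed for simplicity.
import math

L_SUN = 3.828e26   # W, nominal solar luminosity
AU = 1.496e11      # m
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

L_star = 0.0017 * L_SUN   # Proxima's luminosity
a = 0.05 * AU             # orbital distance of the planet

flux = L_star / (4 * math.pi * a**2)    # W/m^2 arriving at the planet
T_eq = (flux / (4 * SIGMA)) ** 0.25     # zero-albedo equilibrium temperature

print(f"flux = {flux:.0f} W/m^2 (Earth receives about 1361)")
print(f"T_eq = {T_eq:.0f} K")
```

The result, roughly 250 K, is comparable to Earth's own zero-albedo equilibrium temperature of 278 K; the actual surface temperature would depend on the planet's albedo and any atmosphere it retains.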

It is likely that Proxy orbits around the common center of mass of the Alpha Centauri (α Cen) system, but far enough from α Cen that its orbital period must be on the order of a million years.

A planet forced to live in such close proximity to its star suffers a variety of indignities.  The first is that the erratic activity of the star subjects the planet to extreme brightness fluctuations and to bombardment with high fluxes of X-radiation near times of maximum activity.  The second is that tidal friction can quickly despin the planet, causing a rotational lock between the star and planet.  Third, if the planet is too close to its star, the planet may cross the Roche limit and be torn apart by the star’s tidal forces.  Proxy B certainly suffers from the first of these afflictions and certainly does not suffer from the third: if it were inside the Roche limit there would be no planet to detect, only a debris disk; a super asteroid belt.  The intermediate fate, falling into a rotational lock, is unavoidable in such close quarters, but there are several distinct outcomes with very different significance, and we presently lack the data to choose among them. 

The simplest possibility, if the orbit of Proxy B is nearly circular, is for it to simply lock directly onto Proxy and always keep the same face toward its star.  With the star shining on only one side of the planet, the sub-stellar point would be quite hot, and half of the planet would be frozen in eternal night.  Volatile gases would migrate into the darkness and freeze out on the surface, making vast deposits of water ice, carbon dioxide ice, and other gases, and perhaps generating lakes of liquid argon and the heavier inert gases krypton and xenon.  Nitrogen and oxygen, if present, would fall as snow on the night side.  Any slight eccentricity of the planet’s orbit would cause it to rock back and forth once per year (in the case of Proxy B, one year is just 11.2 Earth days). The strong tidal forces on the planet would cause the rocking to damp out and the orbit to become more perfectly circular.  This is called a 1:1 spin-orbit resonance, like the Moon around Earth or many satellites of the outer planets around their primaries.

But we have no guarantee that the planet’s orbit is closely circular.  A sobering example is provided by Mercury, a tidally despun planet locked onto its star (the Sun) but with a significant orbital eccentricity.  It actually rotates in a 3:2 spin-orbit resonance: three planetary rotations in two planet years.  At consecutive perihelion passages, opposite points on Mercury’s equator face the Sun.  Thus two regions get alternately scorched—and frozen.  Because of the gravitational stresses, the planet ends up slightly elongated with two bumps on opposite sides of the planet’s equator.  At perihelion passage the angular rate of rotation of Mercury and its angular rate of motion along its orbit are almost exactly equal, so that the “hot pole” tracks the Sun rather closely for many days near perihelion.  Other resonant relationships besides Mercury’s 3:2 resonance are also possible, but they have the potential for disaster: 2:1 and 5:2 and 3:1 resonances are associated with such large orbital eccentricities that they raise the potential for collision with other planets.  Note that 2:1 and 3:1 resonances would have the same spot on the equator being baked on each perihelion passage; 3:2 and 5:2 resonances would have the strongest heating localized alternately in two regions on opposite sides of the planetary equator. 
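In any p:q spin-orbit resonance the planet makes p rotations in q orbits, so its rotation period is q/p of its year.  A quick illustrative table for Proxy B's 11.2-day year (a sketch of the candidate states, not a claim about which one the planet actually occupies):

```python
# Rotation periods implied by the candidate spin-orbit resonances
# discussed above: p rotations per q orbits means the rotation
# period is (q/p) times the orbital period.
P_orbit = 11.2  # days, Proxy B's orbital period from the text

rotation = {f"{p}:{q}": P_orbit * q / p
            for p, q in [(1, 1), (3, 2), (2, 1), (5, 2), (3, 1)]}

for name, P_rot in rotation.items():
    print(f"{name} resonance -> rotation period {P_rot:.2f} days")
```

The same arithmetic applied to Mercury (87.97-day year, 3:2 resonance) gives its observed 58.6-day rotation period.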

We don’t yet know the orbit of Proxy B well enough to distinguish between these different states.  But we can see that some of these states would generate extreme temperature and weather behavior that would not be conducive to maintaining a biosphere—and that’s without even considering the effects of wild luminosity and flare activity by the star.

Oh, and one other thing: Proxy B is so close to its star that it is quite near the point at which the tidal forces of its star would disassemble the planet and turn it into an asteroid belt.  Bummer.

But if the planet is in, say, a 3:2 resonance--and all its volatiles don’t go gentle into that good night—the star will remain on the Main Sequence, providing heat to its planets, for another 4000 billion years.  No need to rage against the dying of the light.

Saturday, April 23, 2016

Women in Space

About three years ago, shortly after the launch of the Chinese Shenzhou 9 spacecraft in 2012 with female “Taikonaut” Liu Yang aboard, I was interviewed on television by a woman reporter who seemed quite impressed by the fact that China had a real female astronaut.  She was aware that the first female space traveler was Valentina Tereshkova, who flew a mission in the Soviet Union’s Vostok program ‘way back in 1963, and wondered why the United States didn’t have female astronauts.

I was confounded by the question: it was like being asked why gravity had stopped working, or whether I had stopped beating my wife!  Perhaps a little summary is in order here.

The first woman to travel in space was indeed Valentina Tereshkova.  I actually would hesitate to call her an astronaut; “state-sponsored space tourist” would be a better description.  Her employment as a textile worker seemed poor preparation for piloting a spacecraft: she was not trained as a pilot, engineer, or scientist.  According to my Russian friends, she was trained in space flight to the extent of being “warned not to touch anything”, which I view as a probable overstatement by jealous men.   However, she had a background as a parachutist, an important factor.  The rationale for flying a parachutist was explained as giving her the option of jumping out of the Vostok capsule “if something went wrong”.  (In reality, it was always far safer to jump out than to remain aboard, because the spherical Vostok capsule and its Voskhod successor had the nasty habit of rolling downhill upon touchdown, much to the detriment of their occupants.)

The argument that Tereshkova was pioneering the way for Soviet women astronauts is ludicrous: the next Soviet woman cosmonaut was not to fly for another 19 years!  That woman, Svetlana Savitskaya, flew on the Soyuz T-7 mission to the Salyut 7 space station in August 1982.   She was a real astronaut, well trained and competent to do far more than touch the controls.  Two years later she flew a second time, on the Soyuz T-12 mission, becoming the first woman to fly in space twice and also the first woman to go on a spacewalk. 

In 1978 NASA had selected a new class of astronauts, including several women.  It was clear that by 1983 NASA would begin launching female astronauts into orbit.  It is reasonable to interpret Savitskaya’s flight as being a preemptive strike, timed to beat NASA’s women astronauts into space-- but she was a real astronaut!

The first American woman to fly in space, Sally Ride, a Ph.D. physicist from Stanford, flew two Space Shuttle missions (STS 7 and STS 41G, in 1983 and 1984 respectively).  She was followed in quick succession by Judith Resnik (STS 41D and STS 51L in 1984 and 1986) and Kathryn Sullivan (three flights, STS 41G, STS 31, and STS 45 in 1984, 1990, and 1992, plus one spacewalk).  Anna Fisher flew on STS 51A in 1984, and Margaret Seddon flew three STS missions between 1985 and 1993. 

Shannon Lucid flew five separate space missions between 1985 and 1996, the last being a visit to the Mir space station.  She also holds the unusual distinction of being the first woman born in China to fly in space.

Bonnie Dunbar followed with five Space Shuttle missions from 1985 to 1998, and a number of other American female astronauts have flown three, four, or five missions since that time.

As of April 2016, the totals look like this:

·       Forty-four American women have flown in space, for a total of 116 missions.

·       Four Soviet/Russian women (Valentina Tereshkova, Vostok 6; Svetlana Savitskaya, Soyuz T 7, Soyuz T 12; Yelena Kondakova, Soyuz TM 20, STS-84; Yelena Serova, Soyuz TMA 14M) have flown a total of six missions.

·       Two Canadian women (Roberta Bondar, on STS 42; Julie Payette on STS 96 and STS 127) have flown a total of three Space Shuttle missions.

·       Two women from Japan (Chiaki Mukai on STS 65 and STS 95; Naoko Yamazaki, STS 131) have also flown a total of three missions.

·       Two Chinese women (Liu Yang, Shenzhou 9; Wang Yaping, Shenzhou 10) have each flown one mission. (The political significance of the launch of China’s first female space traveler can be judged by the fact that it occurred precisely on the 49th anniversary of the launch of Valentina Tereshkova.)

·       From France (Claudie Haigneré, Soyuz TM 24 and Soyuz TM 33), two missions.

·       From India (Kalpana Chawla, STS 87 and STS 107), two missions.

·       From the United Kingdom (Helen Sharman, Soyuz TM 12), one mission.

·       From Iran (Anousheh Ansari, Soyuz TMA 9), the first female space tourist, an Iranian-born US citizen, one mission.

·       From Italy (Samantha Cristoforetti, Soyuz TMA 15M), one mission.

·       From the Republic of Korea (Yi So-yeon, Soyuz TMA 12), one mission.

Soviet/Russian boosters have launched 6 American women (7 counting the Iranian-born American Anousheh Ansari), 4 Russian women, 2 French women, and one woman each from Great Britain, Iran (Ansari again), Italy, and Korea. 

If a woman wants to fly into space on a Russian booster, her best bet is to be an American citizen.

Of the 138 missions flown by women, 84% have been by Americans and 4% by Russians.
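For the record, here is the arithmetic behind those percentages, tallied from the list above (counting Ansari's flight once, under Iran):

```python
# Mission counts per country, copied from the bullet list above
# (totals as of April 2016).
missions_by_country = {
    "USA": 116, "USSR/Russia": 6, "Canada": 3, "Japan": 3,
    "China": 2, "France": 2, "India": 2, "UK": 1,
    "Iran": 1, "Italy": 1, "South Korea": 1,
}

total = sum(missions_by_country.values())
print(f"total missions flown by women: {total}")
for country in ("USA", "USSR/Russia"):
    share = 100 * missions_by_country[country] / total
    print(f"{country}: {share:.0f}% of missions")
```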

Thursday, April 21, 2016

A Reason to Want Global Warming

Those of you who do not read the Journal of Geography and Natural Disasters before breakfast each morning missed something interesting.  On 17 March that journal published a paper by M. J. Kelly of Cambridge University on the subject of “Trends in Extreme Weather Events since 1900- An Enduring Conundrum for Wise Policy Advice”.  Now, we know that human activities have added a lot of carbon dioxide to the atmosphere since 1900, and we know that CO2 has a net warming effect on the planet.  Numerous press reports have claimed that global warming must cause an increase in the frequency and severity of extreme weather events.  Interestingly, the Intergovernmental Panel on Climate Change (usually familiarly referred to as the IPCC), which has consistently warned about anthropogenic global warming (AGW), has never endorsed this position. 

Global warming, according to both model calculations and observations, causes the most warming at higher latitudes and the least warming near the equator.  In the language of meteorology, the meridional temperature gradient (the temperature contrast between equator and poles) is decreased.  But global weather is driven primarily by that gradient: when the meridional temperature gradient is large, polar air is colder relative to equatorial air, so the pole-to-equator density contrast of Earth-surface air is larger, exerting larger forces to drive dense polar air toward the equator and vice versa.  The cold air sinks and flows equator-ward, the warm air rises and flows pole-ward, and the Coriolis effect diverts these flows into giant circulation patterns, including (at the extreme) cyclones and hurricanes.  A larger temperature contrast between equator and poles causes larger density differences and pumps more energy into these global-scale motions.  More energy in the same mass of air means higher velocities.  In other words, the obvious effect of global warming is to reduce the temperature contrast and cause lower wind speeds.

And of course, we humans injected vastly less CO2 into the atmosphere in the 50 years from 1900 to 1950 than we did in the following 50 years: therefore AGW must have been much stronger in more recent history.

But so much for how things “ought” to work: Dr. Kelly has (gasp!) actually looked at the data on weather extremes to address this issue.  He found that “the weather in the first half of the 20th century was, if anything, more extreme than in the second half”.  In other words, the actual quantitative data on weather extremes confirms the common-sense understanding of a decreased meridional temperature gradient and agrees with the consensus of the IPCC, but flatly contradicts the glib prophecies of impending doom of the fear-mongers.  These prophecies, though quantitatively unfounded, have the PR virtue of being frighteningly dire and easily understood by politicians and policy makers who think and argue qualitatively.  But who gets more attention, the person who says "Tomorrow will be a little better than today", or the one who shouts "Disaster coming!"?

Dr. Kelly concludes, “The lack of public, political and policymaker appreciation of the disconnect between empirical data and theoretical constructs is profoundly worrying, especially in terms of policy advice being given.”

You don’t have to take my word for this.  The original technical publication is available online:

Tuesday, April 19, 2016

The Secrets of Jupiter’s Great Red Spot

A recent article on tells of the efforts of a team of NASA scientists at Goddard Space Flight Center to replicate the striking brick-red color of Jupiter’s famous and long-lived Great Red Spot (GRS).   Most of Jupiter is covered by alternating bands of bright clouds (zones) and dark clouds (belts), in which the GRS is embedded: however, the red color of the GRS appears to be distinctly different from the brown belts, suggesting two or more different coloring agents. 

Carl Sagan and his colleagues long argued for organic matter as the coloring agent; this suggestion, however, depends on the achingly slow destruction of methane by ultraviolet sunlight, which makes largely uncolored products at such a slow rate that the atmosphere would have to remain stable and unmixed for millions of years to accumulate a detectable tinge of brown.  Carl gave these largely imaginary organic coloring agents the name “tholins”, a name that has stuck with us while the organic coloring agents that supposedly justified the name have largely disappeared from the Jovian literature as being quantitatively indefensible: another clear example of the victory of the charmingly qualitative over the less-romantic quantitative.

The Goddard team wisely concentrates on the predicted ammonium hydrosulfide (NH4SH) cloud layer (misidentified in the article as ammonium sulfide, (NH4)2S), the level that we see when we peer into Jupiter’s belts, the next cloud layer below the white ammonia-crystal clouds that cover most of the planet, especially the bright zones.  They presumably chose that layer because fresh ammonium hydrosulfide, a colorless crystalline substance, is very sensitive to ultraviolet light and rapidly turns brown when exposed to sunlight. explains, “Studies predict that Jupiter's upper atmosphere is composed of clouds of ammonia, ammonium hydrosulfide and water”.  I’m rather partial to these cloud layers because I am the author of the generally accepted cloud models of Jupiter and its fellow giant planets [J.S. Lewis, The Clouds of Jupiter and the NH3-H2O and NH3-H2S Systems. Icarus 10, 365 (1969)].  Yes, that’s 1969. 

The article explains that the Goddard team is “baking some of the components of Jupiter's atmosphere with radiation, mimicking cosmic rays”.   They also report that their simulation “heats up hydrogen sulfide and ammonia” to make ammonium hydrosulfide, a remarkable assertion that makes no sense.  Actually, the way to make ammonium hydrosulfide, both on Jupiter and in the lab, is to cool down a mixture containing ammonia and hydrogen sulfide gases to precipitate a “snow” of the solid.  As for the “baking”, the temperature of that cloud layer is both predicted and measured to be about 225 K (−48 °C; −54 °F), a pretty bracing temperature for baking! 

OK, now they have solid NH4SH.  What next?  They blast the solids with high-energy particles, “much as cosmic rays blast Jupiter's clouds”.  Now, they have good reason to expect color changes because the much less violent and simple exposure of this cloud-stuff to sunlight has the same effect.

But wait a minute!  Doesn’t the Sun also shine on Jupiter?  How important are cosmic rays compared to the ultraviolet part of sunlight?  Good question!  The cosmic rays hitting Jupiter carry about 0.001 ergs of energy per square centimeter per second (of which only a tiny proportion actually goes to make colored products).  The energy supplied by the part of ultraviolet sunlight energetic enough to make colored sulfur compounds out of H2S (all the sunlight with wavelength less than 270 nanometers) is nearly 1000 ergs per square centimeter per second.  In other words, whatever the importance of cosmic rays, sunlight is about a million times more important! 
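That factor-of-a-million claim is easy to verify from the two order-of-magnitude fluxes just quoted:

```python
# Energy fluxes at Jupiter, order-of-magnitude values from the text.
cosmic_rays = 1e-3   # erg cm^-2 s^-1 delivered by cosmic rays
uv_sunlight = 1e3    # erg cm^-2 s^-1 of sunlight below 270 nm

ratio = uv_sunlight / cosmic_rays
print(f"sunlight/cosmic-ray energy ratio ~ {ratio:.0e}")
```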

So hydrogen sulfide makes Jupiter-colored products.  How could we have missed this, back in that earlier millennium?  Well, we didn’t.  Ron Prinn and I pointed this out long ago: J.S. Lewis and R.G. Prinn, Jupiter's Clouds: Structure and Composition. Science 169, 472 (1970).  In that article, we showed that the rate of solar UV destruction of hydrogen sulfide (and production of yellow-, orange-, and brown-colored sulfur compounds) should surpass the total rate of methane photolysis claimed by Sagan and coworkers by a factor of 100,000.  This would occur only in those regions of Jupiter where the topmost (crystalline ammonia) cloud layer was thin or absent; i.e., in the belts but not in the zones.  The zones remain white because the ammonia-snow clouds block sunlight from reaching the deeper levels where the sulfur compounds reside.

But this explanation did not apply to the Great Red Spot, which is, after all, red.  Prinn and I addressed this issue a few years later, after phosphine gas (PH3) was detected on Jupiter (R.G. Prinn and J.S. Lewis, Phosphine on Jupiter and Implications for the Great Red Spot. Science 190, 274 (1975)).  Our argument was straightforward: that the dynamically active GRS was the best place on Jupiter for accumulation of red phosphorus made by solar UV destruction of phosphine: strong vertical winds blow phosphine gas up to altitudes above the protective ammonia clouds, where it encounters UV light and makes red phosphorus; the vertical winds then help levitate the particles of red phosphorus up where we can see them.  This process would occur at a rate governed by the relatively large proportion of UV radiation that is energetically capable of destroying PH3 and NH3 compared to that capable of destroying methane: red phosphorus would be produced at a rate hundreds of times faster than the total rate of destruction of methane (the ultimate source of all organic matter, both colored and uncolored), and thousands of times faster than the rate of formation of colored organic products.

Where do cosmic rays figure in this argument?  They don’t.  The total energy input from cosmic rays is about a million times smaller than the solar UV flux that drives the production of colored sulfur compounds.  Even if the cosmic rays produced colored products with 100% efficiency, which they don’t, their effects would remain negligible.

Then there is that spectacular image of “cosmic rays blast(ing) Jupiter's clouds”.  “Blasting” at one millionth of the intensity of ultraviolet sunlight?  Really?  Sounds more like an advertising slogan to me.   

These color “secrets” haven’t been secrets for over 40 years.

China in Space 2016

Since 2005 I have had the pleasure of being an expert commentator on China Central Television (CCTV) for “civil” space missions, including both the manned flight program (Shenzhou and Tiangong) and their series of Chang-e lunar probes.  After a three-year lull in Chinese manned spaceflight activity, that program is set to resume this fall.

Since the 3-person Shenzhou 7 mission in 2008, Chinese manned spaceflight has centered on the Tiangong 1 space station module.  This module, announced on CCTV in 2008, and originally slated for flight in 2010, was launched into orbit on 29 September 2011 on a Long March 2F booster.  (The delay in launch date was further extended by a safety review occasioned by the launch failure of a Long March 2C booster in August.)  The module, similar in size and weight to a Shenzhou spacecraft, although very different in design, weighs in at about 8.5 metric tonnes. 

The first visit to TG1 was by an unmanned spacecraft (Shenzhou 8) launched on 17 November 2011, a precursor mission to test all systems before human occupation of the module.  The spacecraft remained attached for 12 days before SZ8 was recalled to Earth.  Several months later, on 16 June 2012, three Chinese astronauts (“Taikonauts”), including one woman, Liu Yang, were launched into orbit on Shenzhou 9.  The flight featured two docking events with TG1, one computer-controlled and one manually-directed, with return to Earth after 11 days.  The Shenzhou 10 mission, also with a crew of two men and one woman, Wang Yaping, flew a year later, launching on 11 June 2013.  After a 15-day flight, featuring several undocking and docking tests with Tiangong 1, SZ10 was successfully returned to Earth. 

It was originally planned that the Tiangong 1 module would be de-orbited in 2013; however, it still remains in space in April 2016, but is apparently no longer crew-rated.  To replace it, the Tiangong 2 module is scheduled for launch in the third quarter of 2016.  It is apparently a slightly modified version of Tiangong 1. 

Manned missions to Tiangong 2 are planned to begin in October or November of 2016, ending a 41-month hiatus.  

More ambitious human space endeavors await the debut of the Long March 5 booster.  The launch facilities for LM5 on Hainan Island have been completed and on-pad tests of an LM5 rocket (not necessarily a flight article) have commenced.  The first launch of the LM5, long planned for mid-2015, can be expected before the end of the year.  LM5, comparable to the Russian Proton or American Saturn I boosters, will carry payloads of up to 25 tonnes to Low Earth Orbit, permitting direct launch of a large second-generation space station module in the 2020 time frame.  LM5 will also have a trans-lunar injection capability of about 8 tonnes, allowing a manned lunar flyby or orbital mission with a crew of two or three, followed by return to Earth.  It is likely that such a mission would be preceded by Earth reentry tests of unmanned Shenzhou capsules at lunar-return velocities (11 km/s).  The energy dissipated during a return from the Moon is twice that of the same vehicle returning from LEO, so two-step reentry profiles such as skip-glide trajectories may be expected.

The development of these capabilities will mirror the Soviet Zond probe development program using the Proton booster: the Kosmos 146 and 154 tests of lunar manned-mission hardware in March and April of 1967, the Zond 4 launch into high Earth orbit in March 1968, the Zond 5 launch in September 1968, a “cabin” carrying a dummy cosmonaut for an unmanned flyby of the Moon and return to Earth, and Zond 6 in November 1968 for a similar lunar flyby mission and recovery.  By the usual conservative standards of the Soviet space program, three consecutive successful unmanned tests would be required before launching a cosmonaut on the same mission profile.  The launch pad turnaround of two months meant that the next (and final) unmanned precursor would probably be expected in January.  
But the American Apollo program was ahead of schedule, and in December 1968 the Apollo 8 mission was dispatched on a lunar-orbiting mission: 48 tonnes, three astronauts, and twenty hours in lunar orbit.  Zond 7 was not yet ready to fly its mission: one cosmonaut at best (and probably none), 6.6 tonnes, on a lunar flyby without orbiting the Moon; embarrassingly non-competitive.  In the heat of the space race, Zond 7 was simply put on indefinite hold.

Zond 7 was finally launched in August 1969 as a repeat of the same unmanned mission profile, a month after the American Apollo 11 mission landed Neil Armstrong and Buzz Aldrin on the Moon; too little and too late.  
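The factor-of-two reentry-energy figure mentioned above follows directly from kinetic energy scaling as the square of velocity (the 7.8 km/s LEO reentry speed is a typical assumed value, not from the text):

```python
# Kinetic energy per unit mass scales as v^2, so the ratio of
# lunar-return to LEO-return reentry energy is (v_lunar / v_leo)^2.
v_leo = 7.8     # km/s, typical reentry speed from low Earth orbit (assumed)
v_lunar = 11.0  # km/s, lunar-return speed quoted in the text

ratio = (v_lunar / v_leo) ** 2
print(f"reentry energy ratio = {ratio:.2f}")
```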

Upcoming Shenzhou missions to Tiangong 2, beginning with Shenzhou 11 this November, will practice rendezvous and docking activities, develop experience with longer mission durations, and prepare the way for Moon-oriented manned missions in the Long March 5 era.  Operating without the frenzied intensity of the Space Race, China can progress deliberately and cautiously, minimizing uncertainties and risks, on its own schedule—and using 21st century technology.  Watch for the emergence of a Chinese manned lunar flyby (or orbiter) mission once LM5 is operational, well before manned lunar landing hardware has been developed.  And watch for unmanned precursors, especially high-speed reentry tests!


Thursday, March 31, 2016

A Large Airburst over Iran (?)

Last summer, on 30 July 2015, a spectacular airburst occurred over northern Iran, in the mountains west of Tehran.  The site of the reported fall is near the town of Avaj, in Qazvin Province.  Press reports mention shattered windows and light structural damage to buildings.

There are reports of recovered meteorites, complete with plausible, if not wholly convincing, pictures.  I even received an email from someone in Iran offering to sell me a stone from the fall, accompanied by a putative analysis that claims 20% carbon and doesn’t mention silicon.  At least, that’s what the message appears to mean: it bears all the earmarks of a machine translation from low Martian.  So what hard facts do we have to work with?  Virtually none.

Perhaps the most notable fallout from this event has been the fuss kicked up on the internet.  There has been the most astonishing display of ignorance, prejudice, millennialist vitriol, and bigotry, liberally salted with insane conspiracy theories.  I have seen the following charges: 1) it’s a lie by the Iranian government, 2) a cover-up by NASA, 3) a harbinger of the end of the world, 4) a divine portent of unknown significance sent by Allah, 5) evidence of the God of Israel’s intent to destroy Iran, 6) a stray Russian missile, 7) an Israeli missile, 8) a baseless rumor denied by the Iranian government, etc.  A number of comments appear to have no content, simply serving as vehicles for incoherent rantings, misspellings, tortured grammar, and severe mental confusion from which no meaning can be extracted.  It’s a paranoid madhouse.  I have read close to 50 such comments, of which two show evidence of both knowledge and sanity. 

So, dear reader, here is my summary: we don’t know diddly-squat about this particular event.  Statistically, however, airbursts are not rare and reports of minor damage have many historical precedents.  As for using this natural event as an omen, well, any idiot can make up some such nonsense.  There is overwhelming evidence that they can—and do—make up exactly that.  Many such predictions of the end of the world have been issued, none of which have come true.  For your amusement, read

The question of why something happens is enormously interesting, but these examples of “man’s search for meaning” show our pathetic incompetence in this task.  It is wonderful to contemplate why something happened, but any explanation beyond physical causality is often simply baseless speculation, rarely testable by observation, and, when tested, almost invariably found to be wrong.  We would be far better engaged in studying the how, what, when and where of events, where physical evidence can be brought to bear. But speculation about underlying causes is fun!

Test yourself: The (true) given fact is, “The first German artillery shell fired on Leningrad in World War II landed in the zoo and killed the only elephant in Russia.” 

Now, propose an answer to the question, “Why?”

An Earthlike Planet

The press is full of the news that a new Earthlike planet, Kepler 452b, has been discovered by the revived Kepler planet-hunting spacecraft.  The discovery of a planet “much like Earth” garners more attention because the planet orbits in the so-called “Goldilocks zone” around its star, the range of distances within which water can exist as a liquid on its surface rather than only as ice or hot vapor.

Kepler 452b is 60% larger in diameter than Earth and is presumed to have Earthlike composition, although it is important to note that there is as yet no way of measuring its mass.  Nonetheless, the phrase “Earthlike world” has the press reeling, including suggestions that worlds like this are the places radio astronomers should look to find radio signals from intelligent aliens.

OK, let’s play this game and see what Earthlike composition would imply.  A diameter of 1.6 times Earth’s means a surface area of 1.6x1.6, or 2.56 times Earth’s, and a volume of 1.6x2.56, or 4.1 times Earth’s.  Assuming Earthlike material, this planet would have a core/mantle/crust structure very similar to Earth’s, but the internal pressures would of course be significantly higher, and the core and mantle material must be compressed to higher density than Earth’s average of 5.5 grams per cubic centimeter, probably close to 7.5 for the whole planet.  Now, that would generate a planet with 5.6 times Earth’s mass.  This mass and diameter would correspond to a surface gravity that is 5.6/2.56 times Earth’s, or 2.18 times as large: 21.4 meters per second squared. 
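The scaling arithmetic in that paragraph can be reproduced in a few lines (the 7.5 g/cm³ compressed density is, as noted, an assumption, not a measurement):

```python
# Scaling a planet of 1.6x Earth's diameter, assuming Earthlike
# composition compressed to a mean density of 7.5 g/cm^3.
r = 1.6                   # radius relative to Earth
area = r ** 2             # surface area ratio
volume = r ** 3           # volume ratio
density = 7.5 / 5.5       # mean density relative to Earth's 5.5 g/cm^3
mass = volume * density   # mass ratio
gravity = mass / area     # surface gravity ratio, since g ~ M / R^2

print(f"area x{area:.2f}, volume x{volume:.2f}, mass x{mass:.2f}")
print(f"surface gravity x{gravity:.2f} = {gravity * 9.81:.1f} m/s^2")
```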

“Similar composition” means similar abundances of radioactive elements, whose decay heats the interior; that heat is eventually radiated from the planetary surface into space.  This planet generates 5.6 times as much heat as Earth and radiates it into space through a surface with 2.56 times Earth’s area, so the heat flux (watts per square meter) is about 2.2 times Earth’s.  At steady state, with heat loss rate equal to heat production rate, this requires that the temperature gradient in the crust (the rate at which temperature increases with depth) must be 2.2 times as large as on Earth. 

Mountains can build up only to a finite height because the temperature gradient under them leads to softening and melting of the continental rock deep under them.  On Earth the Himalayas rise about 13 kilometers above the abyssal plains of the oceans.  Mars has half the radius of Earth, so its heat flow and temperature gradient should be about half that of Earth, meaning that softening of the rock should occur about twice as deep, and Martian mountains should build to about twice the height as on Earth.  In fact, the highest peak on Mars, Olympus Mons, rises about 26 km above the plains.  Of course, to calculate this precisely we need to account for the slightly lower density of Mars and its slightly lower surface temperature, but we still get a similar answer. 

Now let’s apply that to Kepler 452b.  The topography, Earth’s standard scaled down by a factor of 2.2, would have the highest mountain regions about 6 kilometers above the abyssal plains.  Now let’s think about the oceans on this world.  On Earth, there is enough water to make a layer about 4 km deep covering the entire planet.  Kepler 452b, if it has the same composition as Earth, would contain 5.6 times as much water spread out over a surface area that is 2.56 times as large.  Thus it would contain enough water to make a layer 5.6/2.56, or about 2.2, times as deep as Earth’s 4 km: roughly 8.8 km deep.  Since the highest land would rise about 6 km from the abyssal plains, this means that the tops of the highest mountains would lie roughly 3 km below mean sea level.  Thus Kepler 452b, if truly Earthlike in composition, would be a true water world.

Now consider the conditions on the sea floor.  An ocean 8.8 km deep in a gravity field of 2.2 Earth gravities would exert tremendous pressure at the ocean floor: the weight of the ocean bears down with a pressure of about 1850 atmospheres, approaching the pressure (roughly 2100 atmospheres) at which cold water freezes to make dense (sinking) ice III rather than familiar (floating) ice I.  The deepest parts of such an ocean could start to freeze from the bottom up.
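As a cross-check, the ocean depth and floor pressure follow directly from the scaling ratios (assuming incompressible water of density 1000 kg/m³ and the surface gravity derived earlier):

```python
mass_ratio, area_ratio = 5.6, 2.56
depth_km = 4.0 * mass_ratio / area_ratio     # water scales with mass, spread over area
g = (mass_ratio / area_ratio) * 9.81         # surface gravity, m/s^2
pressure_atm = 1000 * g * depth_km * 1000 / 101325   # rho * g * h, in atmospheres
print(f"depth ~{depth_km:.1f} km, floor pressure ~{pressure_atm:.0f} atm")
```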

We’re also told the planet has had liquid water for 6 billion years or more, without mentioning how the luminosity of its parent star (and the surface temperature of the planet) have changed over time.

Of course, with data on the mass of the planet we could see whether it really is a “terrestrial” planet or a sort of warmed-up ice ball: if the latter is the case, then the planet could be much less dense and the ocean much deeper.

So this is “Earth 2.0”? 

If you are interested in the wonderful game of designing planets that accord with the laws of physics, astronomy, and chemistry, you can find a number of examples in my 1998 book Worlds without End, which explores the possibilities for many types of planets allowed by nature, but not present in our Solar System.

Pluto in the Rear-View Mirror

The New Horizons flyby of the Pluto system reveals a new world in stunning detail.  The progress made over the last few decades in understanding Pluto has suddenly undergone another round of explosive growth.  Perhaps this is a good time to review just how our knowledge of Pluto has evolved since I first became interested in this outpost of the Solar System.

My first source of information about space was Stars, a Golden Nature Guide, written by Herbert S. Zim in 1951.  This little pocket guide contained, on pages 104 and 105, a table of data on “The Planets”.  The table was populated with a lot of quaintly obsolete data and a liberal sprinkling of question marks.  The column devoted to Pluto, unabashedly listed as a planet, gave good information on its distance from the Sun, but all the rest seemed designed to pique the curiosity of a child.  The diameter, which could not be measured directly, was given conjecturally as “3600 (?)” miles, comparable to Mars—but New Horizons, viewing from up close, measured only 1473 miles.  The volume of Pluto is therefore only 6.85% of what was then accepted.  The mass of Pluto was based on the wildly wrong diameter and an equally wild guess about its density; its mass relative to Earth was given as “0.8 (?)” and its volume as “0.07 (?)” that of Earth.  One need not have mastered calculus to figure that the implied density of Pluto was 11.4 times that of Earth, or 63 grams per cubic centimeter.  Considering that the densest chemical elements (osmium and iridium) have densities of 22 g/cm³, this presented an obvious problem for a budding young chemist.
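The child’s arithmetic is easy to reproduce (taking Earth’s mean density as 5.5 g/cm³):

```python
# Density implied by the 1951 table: 0.8 Earth masses in 0.07 Earth volumes.
implied_density = 0.8 / 0.07 * 5.5          # g/cm^3
# Volume shrinkage once New Horizons pinned the diameter at 1473 miles:
volume_fraction = (1473 / 3600) ** 3
print(round(implied_density), round(volume_fraction, 4))   # -> 63 0.0685
```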

Further paradoxes abounded.  The number of moons of Pluto was given as “0.1 (?)”.  I can perhaps be forgiven for wondering how a planet could have a tenth of a moon.  The length of the Pluto day was given simply as “(?)”, and its axial inclination was reported the same way.

Is it any wonder that I became intrigued with this mysterious planet?

In 1972, years before we had reliable data on the size and mass of Pluto, I published two papers on the structure of ice-plus-rock bodies, so-called “dirty snowballs”, with the large satellites of the Jovian planets in mind.  The densities of these bodies constrained their proportions of ice and rock; the rocky component, with its endowment of uranium, thorium, and potassium, contributed substantial heat to their deep interiors.  Considering the size of the heat source, the surface temperatures of the large icy satellites, and the melting behavior of ices, I predicted that bodies like Europa and Ganymede could have deep oceans covered by thin crusts of water ice.  Their surfaces would then be quite susceptible to resurfacing, and would be very poor at preserving evidence of impact cratering.  Later, in 1979, Stan Peale, Pat Cassen and Ray Reynolds (Science 203, 892) proposed another, stronger heating effect for the Galilean satellites, through flexing driven by the tidal interactions of the moons. This model became entrenched in the literature, even to the point that most scientists ignored the radioactive heating component. 

We had no way to measure the mass of Pluto until its big satellite Charon was discovered in 1978, finally letting us track the orbital motions of Charon and Pluto around their common center of mass.  Mutual eclipses of Pluto and Charon provided much-improved data on their sizes.  But the best guesses on their densities still relied on condensation theory, which could not be tested with the best Pluto data in hand.

All that changed when New Horizons flew by Pluto.  Since Pluto and Charon are locked into a 1:1:1 spin-spin-orbit resonance, heating by tidal flexing is ruled out.  The other satellites of Pluto are tiny and have almost no effect.  Yet impact craters are absent and the whole planet has been recently resurfaced.  Clearly the driving force must be radioactive decay.  But how does it work?  What could the fluid be that resurfaces so efficiently?  Pluto’s surface is far, far below the melting temperature of water ice.  Clearly this is not a place for silicate volcanism: the resurfacing must be connected with the ices that make up a third of the mass of Pluto.  But which ones?

It turns out that there is a significant difference between ices formed in the Solar Nebula and ices formed in the satellite systems around the giant planets.  The environment in a protoplanetary disk girdling Jupiter or Saturn generally has much higher gas pressure than in the nearby Solar Nebula.  Pressure strongly influences the chemistry of both nitrogen and carbon because their reactions with hydrogen (the dominant gas in the Universe) are driven to the right by higher pressures:

            3H2 + N2 → 2NH3         and       3H2 + CO → CH4 + H2O.

Thus ammonia and methane are minor constituents of ices formed in the Solar Nebula, but can be major components of ices formed in sufficiently dense and cool protoplanetary disks, such as those surrounding Saturn, Uranus, and Neptune.  These are available as the raw materials out of which their satellite systems formed.   Each disk was warmer near its center and cooler near its outer edge; in the case of Jupiter, the region inside Europa’s orbit (including Io) was too warm for even water ice to condense, thus making rocky moons.  Europa, forming close to the “snow line” in Jupiter’s nebula, retained only a small proportion of water ice and essentially none of the other, more volatile ices.  Ganymede and Callisto, formed farther out, are much more ice-rich.
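The leverage that pressure has here comes from the mole count: each reaction turns four gas molecules into two, so at a fixed equilibrium constant the mole-fraction quotient favoring the products must grow as the square of total pressure.  A toy sketch (ideal gases, and an illustrative Kp of 1, purely for the shape of the trend):

```python
# For 3H2 + N2 <-> 2NH3: Kp = [x_NH3^2 / (x_N2 * x_H2^3)] * P^(dn), with dn = -2.
# At fixed Kp the mole-fraction quotient must equal Kp * P^2, growing as P squared.
def product_quotient(pressure_bar, kp=1.0):
    """Relative equilibrium weighting of the product side (arbitrary units)."""
    return kp * pressure_bar ** 2

for p in (1e-7, 1e-4, 1.0):    # thin solar nebula vs. a dense circumplanetary disk
    print(f"P = {p:g} bar -> product weighting = {product_quotient(p):g}")
```

Going from solar-nebula pressures to subnebula pressures, six or seven orders of magnitude, shifts the product weighting by twelve or fourteen orders, which is the qualitative point of the reactions above.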

Ammonia and methane can enter solid ices at temperatures too high for direct condensation of solid ammonia or solid methane because both gases can react with water ice to make solid hydrates.  This is how Saturn’s largest moon, Titan, retained vast stores of ammonia and methane.  Heating of Titan’s interior, whether by radioactive decay or tidal flexing, caused early melting of ammonia hydrates: in fact, ammonia-water ices begin to melt at only 100 K, or −173 °C.  Once melting begins, separation of the ice component from the “dirt” proceeds to generate a muddy core and a deep water-rich ocean with an ice crust.  Interestingly, the average surface temperature of Titan is 94 K, just 6 K colder than the onset of ammonia/water melting.  One can easily imagine cold, viscous ammonia/water melt being extruded onto the surface as cryovolcanic eruptions.  At these temperatures, little ammonia is released as a gas, but methane is given off in large quantities.  Ammonia is also very vulnerable to destruction by solar ultraviolet light, producing nitrogen and hydrogen (which is so light it readily escapes from Titan).  Not surprisingly, Titan today has an atmosphere dominated by nitrogen and methane.  Neptune’s large satellite Triton should be regarded as a colder version of the same scheme.

But Pluto and other Kuiper-belt bodies, formed in the much less dense Solar Nebula, would have experienced much more limited conversion of CO and N2 into methane and ammonia.  Both CO and N2 gases readily form solid hydrates, permitting them to be important constituents of the ice.  Any heating of the interior (in Pluto’s case, by radioactive decay) will release CO and N2 into the atmosphere.  Thus low-temperature resurfacing is not only possible, but very important—and the key to the process is contained in a theory that dates back to 1972.

Oh yeah, is Pluto really a planet?  I DON’T CARE!!

Friday, February 19, 2016

More Weird News from Russia

A CNN news report this morning (19 February 2016) tells of Russian plans to modify existing ICBMs to carry warheads to intercept and blow up incoming asteroids.  This can be found at:

The story is disturbing for a host of reasons. 

First, ICBMs can carry nuclear (thermonuclear) warheads over intercontinental range, for which purpose they can achieve a terminal velocity of 8 kilometers per second. They are not designed to reach altitudes much higher than about 1000 km.  The simplest way of using an ICBM would be to intercept the incoming asteroid at an altitude of 1000 km, a move of such extreme stupidity that even the Russian Ministry of Defense would hesitate to do it.  A multi-megaton explosion in space so close to Earth would not only kill a large fraction of all the satellites operating in Earth orbit, but its EMP would knock out surface electrical power grids over a continent-sized area.  As a further bonus, fragments of the incoming asteroid would shower a wide area on the ground, likely inflicting several times as much damage as the intact asteroid would have done.  The unanimous conclusion of international Planetary Defense studies (with the participation and concurrence of leading Russian scientific experts) is that blowing up a threatening asteroid is a high-risk, damage-multiplying endeavor that should be avoided at all cost.

Second, redesigning a strategic missile for asteroid interception at a safe distance from Earth would require replacing the payload with an additional upper stage and a much smaller warhead.  The largest operational Russian ICBM, the SS-18-6, carries a 20 megaton thermonuclear warhead weighing about 9 tonnes; replacing that warhead with a new upper stage and a smaller (5 megaton?) warhead with a mass of about 2 tonnes would permit interception out to lunar distances.  But that raises another question:

Third is the question of how we deal with different kinds of targets.  There is no doubt that interception and destruction of a 10-meter diameter asteroid at the distance of the Moon would be safe: the problem is that asteroids of this size are extremely difficult to find.  Virtually all of the asteroids of this small size (>99.99% of them) remain undiscovered.  They can be found only if they approach Earth very closely.  In other words, an incoming 10-m asteroid on a collision course with Earth would almost certainly be unknown to us.  Discovery of a new asteroid of this size, even if it occurs by incredible good fortune while the asteroid is still at the distance of the Moon, would occur about one day before impact.  The asteroid would have to be discovered and tracked, and the mission would have to be planned and launched, within hours of discovery.  The asteroid would typically be traveling at 20 km/s and the interceptor rocket at 2 km/s, so interception would occur about 1/11th of the way to the Moon, at an altitude of about 35,000 km, which happens to be the altitude of the geosynchronous belt of communication satellites.  A 5 megaton explosion at that altitude would destroy most of the world’s communications assets.  An asteroid that, with incredibly bad luck, might have hit a city and caused thousands of casualties would thus be destroyed at the cost of the world’s communication capabilities.  If we chose to leave it alone (or, more likely, never saw it coming), it is overwhelmingly more probable that it would have fallen in a remote and unpopulated area, probably over the ocean, and inflicted little or no damage.  The cure would probably be more lethal than the disease.
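The intercept altitude follows from the closing geometry; here is a quick straight-line check (speeds as quoted, lunar distance 384,400 km):

```python
# Asteroid detected at lunar distance, inbound at 20 km/s; interceptor launched
# immediately, outbound at 2 km/s.  They meet where the distances covered sum
# to the lunar distance.
d_moon_km, v_ast, v_int = 384_400, 20.0, 2.0
t = d_moon_km / (v_ast + v_int)          # seconds until they meet
altitude_km = v_int * t
print(round(altitude_km))                # ~35,000 km: right in the GEO belt
```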

What about kilometer-sized asteroids, which constitute a serious threat to areas the size of a country?  Asteroids of this size and brightness are much easier to discover and track: of all the Earth-crossing asteroids larger than about 1 km in diameter, we have discovered and tracked more than 95%.  Best estimates are that there are about 980 such asteroids: of the estimated few dozen that have not yet been discovered, we are finding several new ones each year.  We know with surety that none of the ones discovered to date threatens impact with Earth in the next few centuries.  But suppose we were to discover a new one this year in an orbit that threatens Earth.  It is highly probable that we would have hundreds to thousands of years to prepare for that threat.  And the impact could be avoided by minuscule changes in the orbit of the asteroid.  As an example, suppose we find a km-sized body that would impact Earth in 300 years.  If we could change the orbit enough to miss Earth, we would buy ourselves thousands of years of additional time to deal with it.  Changing the asteroid’s orbit enough to displace its position by 10,000 km and guarantee that it would miss Earth 300 years from now requires changing the velocity of the asteroid by a minuscule 0.1 cm per second.  This could easily be effected by setting off a large nuclear explosion several km from the asteroid: the vaporized surface rock would exert a mild but entirely adequate vapor “puff” that would very slightly deflect the asteroid and change its speed, without running the risk of turning the asteroid into a deadly shower of a thousand 100-meter chunks of shrapnel.
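The deflection arithmetic at the end of that paragraph is one line (a simple straight-line estimate; the true along-track drift from the changed orbital period only improves the numbers):

```python
dv = 0.001                           # m/s: the 0.1 cm/s nudge
t = 300 * 365.25 * 86400             # 300 years in seconds
displacement_km = dv * t / 1000
print(displacement_km)               # ~9,500 km, roughly the Earth-miss margin quoted
```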

In short, this proposed “defense” scheme is sufficiently crazy that we would be well advised to look for other explanations of why Russia would want to suggest it.

Oh, by the way, the United States no longer has any operational ICBMs with multi-megaton “city buster” warheads.  These relics of the cold war survive only in Russia and China—and are effective threats only against population centers, not military targets.  This means the US doesn’t even have the option of doing something equally stupid with asteroids.

Time Travel Made Practical

What does it mean to “travel through time?”  We already travel through time at a rate beyond our control, no matter what we do. So let’s say that what we mean by time travel is that the subjective rate of passage of time for the “time traveler” is very different from the rate of time passage in the external world: we get to another time faster than the muggles do (slower is boring).  And of course it would be nice if it were perfectly safe.

In the interests of practicality, I shall neglect the possibility of traveling backward in time: this seems to work conceptually only on the level of individual quantum particles, which is very inconvenient if you happen to consist of more than one particle.  So let us concentrate on moving forward through time at variable and controllable rates. Since it is vastly harder to control the external world than it is to change our subjective experience, our search for practicality must concentrate on what we can do to ourselves, individually, to get our ticket to ride.

The first thing that comes to mind is cryogenic stasis, often referred to as “cold sleep”.  We freeze ourselves and cool our corpsicles down to liquid helium temperatures. That should put an end to destructive oxidation reactions, shouldn’t it?  All motions, and all chemical reactions, stop at absolute zero, right?  Wrong!  Quantum mechanics assures us that there is residual zero-point energy that keeps everything in motion even at zero degrees absolute.  (This follows from the uncertainty principle: the product of the uncertainty in momentum (Δp) and the uncertainty in position (Δx) of each particle can never be smaller than a fixed constant.  If any particle were absolutely at rest (Δp = 0), then Δx would be infinite: we wouldn’t know where in the Universe the particle was.)  This has interesting ramifications for the rate of chemical reactions in our bodies as we chill down toward absolute zero.  All the reactions that can damage our cells do slow down dramatically with decreasing temperature, down to the point at which quantum tunneling effects become more important than classical chemical kinetics.  Thereafter, further cooling has virtually no effect on the rates of “bad” reactions.  This means you cannot stop oxidative damage to your cells (and your DNA) even at absolute zero.  This is a real concern: you don’t want to arrive in 5,002,016 AD with a wrecked, poisoned, and embarrassingly oxidized body.
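The low-temperature plateau can be caricatured with a toy rate law: a thermally activated Arrhenius term that dies off exponentially, plus a temperature-independent tunneling floor.  Every number below is illustrative, not measured:

```python
import math

def damage_rate(T, A=1e13, Ea_over_k=5000.0, tunneling_floor=1e-20):
    """Toy reaction rate: thermal activation plus a T-independent tunneling term."""
    return A * math.exp(-Ea_over_k / T) + tunneling_floor

for T in (300, 77, 20, 4):       # room temp, liquid N2, liquid H2, liquid He (K)
    print(f"{T:3d} K -> {damage_rate(T):.3e}")
```

Cooling from 300 K to 77 K buys ten orders of magnitude; below about 20 K the rate sits on the tunneling floor and further cooling buys nothing.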

Given this concern, we need to identify what causes these destructive reactions and get rid of the cause.  Well, first of all, our bodies contain three biologically essential elements that have radioactive isotopes (tritium, carbon-14, and potassium-40) whose decay reactions produce energetic radiation.  These decay products, gamma rays as well as beta particles (high-speed electrons and positrons), tear apart water molecules to make atomic oxygen, hydroxyl radical, hydroperoxyl radical and even molecular oxygen, all of which are deadly poisons to a wide variety of essential biochemicals.  We have two ways to suppress this kind of damage: filling ourselves with antioxidants that sop up the damaging oxidative chemicals, and getting rid of the three offending radioisotopes.  The antioxidants you get from eating a flat of blueberries, even if they succeed in entering your blood stream, are bulky molecules that are immobilized at low temperatures: they can’t move to the site of the problem.  You could eat yourself blue in the face, enough to qualify for Avatar citizenship, without making yourself much safer. (Happier, perhaps, but not safer.)

Of course, we could raise people on radioisotope-free nutrients to avoid the problem altogether.  We could get our drinking water from deep aquifers where the tritium content is essentially zero (70,000-year-old groundwater has survived more than 5,000 half-lives of tritium decay).  We could source our carbon from Carboniferous coal (some 300 million years old, more than 50,000 half-lives of carbon-14).  Potassium is a much worse problem because potassium-40 has a half-life of 1.25 billion years: no potassium in nature is old enough to have had the radioisotope decay away.  We would have to separate the isotopes of natural potassium to get rid of the dangerous potassium-40. This requires huge mass spectrometers or other dedicated equipment and great expense, but could be done.
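Radioisotope survival is just repeated halving; a sketch using the standard half-lives (12.3 years for tritium, 5,730 years for carbon-14, 1.25 billion years for potassium-40):

```python
def surviving_fraction(age_years, half_life_years):
    """Fraction of a radioisotope remaining after age_years."""
    return 0.5 ** (age_years / half_life_years)

print(surviving_fraction(70_000, 12.3))      # tritium in old groundwater: underflows to 0.0
print(surviving_fraction(300e6, 5730))       # carbon-14 in Carboniferous coal: 0.0
print(surviving_fraction(4.5e9, 1.25e9))     # potassium-40 since Earth formed: ~0.08
```

The contrast is the whole point: water and carbon clean themselves up if you pick old enough sources, but a twelfth of the primordial potassium-40 is still with us, so only isotope separation will do.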

Assuming success with potassium, we would next have to deal with biologically non-essential elements that sneak into our bodies because of their chemical similarity to things we really need, such as radioactive uranium and thorium masquerading as calcium atoms in our bones. We could deal with this problem only if all the calcium entering our bodies during life were scrupulously cleaned of undesirable radioactive trace elements. Again, this is very expensive but achievable.

But we live on a planet in a galaxy: the crust of the planet contains radioactive potassium, uranium and thorium whose radiation strikes us from outside, even if our bodies are completely clean inside. And of course we are struck by cosmic rays rather often, both primary cosmic ray protons and, more importantly, cascades of charged secondary particles such as muons that are made by the impact of cosmic ray primaries on atoms in our atmosphere. So we hide our corpsicle deep underground and store it in the bottom of a mine shaft, where the effects of cosmic rays can’t reach.  Then we have to shield ourselves from radioisotopes in the surrounding rock by lining our hobbit hole with a thick layer of lead.  Once again, all this is expensive but achievable.

Have we overlooked anything? What about those pesky neutrinos? Shielding against them is simply impossible; a layer of lead light-years thick would be required. But neutrinos are uncharged and interact very poorly with matter. Is there any reason to fear them? Not normally, but we have gone to such extraordinary lengths to reduce risks that this is now the #1 problem remaining. It happens that the natural and non-radioactive isotope chlorine-37 has a tiny probability of capturing a neutrino, which converts it into argon-37, which unfortunately is radioactive, emitting an energetic charged agent of destruction, an 813 keV beta particle.  We can’t just swear off of chlorine: every human body contains cellular fluid that resembles the early oceans where the first cell originated, endowing us all with sodium chloride in every cell.  Well, at great expense we could separate the chlorine isotopes and use only chlorine-35…

 Or maybe the desire to achieve perfect guaranteed safety is actually insane…