Nuclear War is Not Good

Yes, well that is sort of obvious, but how not good? Ukraine has brought the scenario of a nuclear war to the forefront, which raises the question: what would the outcome be? You may have heard estimates from military hawks that, apart from those killed in the blasts and those who were excessively irradiated, all would be well. Americans tend to be more hawkish because the blasts would be “over there”, although if the enemy were Russia, Russia should be able to bring it to America. There is an article in Nature (579, pp 485–487) that paints a worse picture. In the worst case, the authors estimate deaths of up to 5 billion, and none of these are due to the actual blasts or the radiation; they come on top of those. The problem lies in the food supply.

Suppose there were a war between India and Pakistan. Each fires nuclear weapons, first against military targets, then against cities. Tens of millions die in the blasts. However, a band of soot rises into the air, and temperatures drop. Crop yields fall dramatically from California to China, affecting dozens of countries. Because of the limited food yields, more than a billion people would suffer from food shortages. The question then is, how valid is this sort of prediction?

Nuclear winter was first studied during the Cold War. The first efforts described how such smoke would drop the planet into a deep freeze lasting for months, even in summer. Later studies argued this effect was overdone and would not end in such a horrific chill, and unfortunately that has encouraged some politicians who are less mathematically inclined and fail to realize that “less than a horrific chill” can still be bad.

India and Pakistan each have around 150 nuclear warheads, so a study in the US looked into what would happen if the two countries set off 100 Hiroshima-sized bombs. The direct casualties would be about 21 million people. But by looking at how volcanic eruptions cool the planet, and how soot goes into the atmosphere following major forest fires, modelling can predict the outcome. An India-Pakistan war would put 5 million tonnes of soot into the atmosphere, while a US-Russia war would loft 150 million tonnes. The first would lower global temperatures by a little more than 1 degree C, but the second would lower them by 10 degrees C, temperatures not seen since the last Ice Age. One problem that may not be appreciated is that sunlight heats the soot, which heats the adjacent air, causing the soot to rise and therefore persist longer.

The oceans tell a different story. Global cooling would affect the oceans’ acidity: the pH would rise (making the water more alkaline). The model also suggested this would make it harder to form aragonite, making life difficult for shellfish. Currently, shellfish face the same danger from too much acidity; depending on aragonite is a bad option! The biggest danger would come in regions that are home to coral reefs. There are some places that cannot win. However, there is worse to come: possibly a “Nuclear Niño”, described as a “turbo-charged El Niño”. In the case of a Russia/US war, the trade winds would reverse direction and water would pool in the eastern Pacific Ocean. Droughts and heavy rain would plague different parts of the world for up to seven years.

One unfortunate aspect is that all of this comes from models. Almost immediately, another group, from Los Alamos, carried out different calculations and came to a less disastrous result. The difference depends in part on how they estimate the amount of fuel, and how that is converted to smoke. Soot comes from partial combustion, and what happens where in a nuclear blast is difficult to calculate.

The effects on food production could be dramatic. Even following the smaller India-Pakistan war, grain production could drop by approximately 12% and soya bean production by 17%. The worst effects would be felt in the mid-latitudes, such as the US Midwest and Ukraine. The trade in food would dry up because each country would be struggling to feed itself. A major war would be devastating for other reasons as well. It is all very well to say your region might survive the climate change, and somewhere like Australia might grow more grain if it gets adequate water, as at present it is the heat that is the biggest problem there. But if the war also took out industrial production and oil production and distribution, now what? Tractors are not very helpful if you cannot purchase diesel. A return to the old ways of harvesting? Even if you could find one, how many people know how to use a scythe? How do you plough? Leaving aside the problem of knowing where to find a plough that a horse could pull, and the problem of how to set it up, where do you find the horses? It really would not be easy.

Rotation, Rotation

You have probably heard of dark matter. It is the stuff that is supposed to be the predominant matter of the Universe, but nobody has ever managed to find any, which is a little embarrassing when there is supposed to be about six times more dark matter in the Universe than ordinary matter. Even more embarrassing is the fact that nobody has any real idea what it could be. Every time someone postulates what it is and works out a way to detect it, they find absolutely nothing. On the other hand, there may be a simpler reason for this. Just maybe they postulated what they thought they could find, as opposed to what it is; in other words, it was a proposal to get more funds, with uncovering the nature of the Universe a hoped-for by-product.

The first reason why there might be dark matter came from the rotation of galaxies. Newtonian mechanics makes some specific predictions. Very specifically, the periodic time for an object orbiting the centre of mass at a distance r varies as r^1.5. That means that if there are two orbiting objects, say Earth and Mars, where Mars is about 1.52 times more distant, the Martian year is about 1.88 Earth years. The relationship works very well in our solar system, and it was from the unexpected effects on Uranus that Neptune was predicted, and found to be in the expected place. However, when we take this up to the galactic level, things come unstuck. As we move out from the centre, stars move faster than predicted from the speed of those in the centre. This is quite unambiguous, and has been found in many galaxies. The conventional explanation is that enormous quantities of cold dark matter provide the additional gravitational binding.
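
If you want to check that r^1.5 relation yourself, here is a minimal Python sketch; the 1.52 figure for Mars is the one used above:

```python
# Kepler's third law for orbits around the Sun: the period T scales as r**1.5.
def period_years(r_au: float) -> float:
    """Orbital period in Earth years for an orbital radius given in AU."""
    return r_au ** 1.5

print(period_years(1.52))  # Mars: ~1.87 Earth years, as stated above
```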

However, that explanation also has problems. A study of 175 galaxies showed that the radial acceleration at different distances correlated with the amount of visible matter attracting it, but the relationship does not match Newtonian dynamics. If the discrepancies are due to dark matter, one might expect the dark matter to be present in different amounts in different galaxies, and in different parts of the same galaxy. Any such relationship should have a lot of scatter, but it doesn’t. Of course, that might be a result of dark matter being attracted to ordinary matter.

There is an alternative explanation called MOND, which stands for Modified Newtonian Dynamics, and which proposes that at large distances and small accelerations, gravity decays more slowly than the inverse square law. The correlation of the radial acceleration with the amount of visible matter would be required by something like MOND, so that is a big plus for it, although the only reason it was postulated in this form was to account for what we see. However, a further study has shown there is no simple scale factor. What this means is that if MOND is correct, the effects on different galaxies should depend essentially on the mass of visible matter, but they don’t. MOND can explain any given galaxy, but the results don’t translate to other galaxies in any simple way. This should rule out MOND without amending the underlying dynamics, in other words, altering the Newtonian laws of motion as well as gravity. This may be no problem for dark matter, as different distributions would give different effects. But wait: in the previous paragraph it was claimed there was no scatter.

The net result: there are two sides to this. One says MOND is ruled out, the other says it isn’t, and the problem is that it is observational uncertainties that suggest it might be. The two sides of the argument seem to be either using different data or interpreting the same data differently. I am no wiser.

Astronomers have also observed one of the most distant galaxies ever, MACS1149-JD1, which is over ten billion light years away, and it too is rotating, although the rotational velocity is much slower than that of galaxies that are much closer and nowhere near as old. So why is it slower? Possible reasons include that it has much less mass, hence weaker gravity.

However, this galaxy is of significant interest because its age makes it one of the earliest galaxies to form. It also has stars in it estimated to be 300 million years old, which puts the star formation at just 270 million years after the Big Bang. The problem with that is this falls in the dark period, when matter as we know it had presumably not formed, so how did a collection of stars start? For gravity to cause a star to accrete, the collapsing gas has to give off radiation, but supposedly no radiation was given off then. Again, something seems to be wrong. That most of the stars are just this age makes it appear that the galaxy formed at about the same time as the stars, or, to put it another way, something made a whole lot of stars form at the same time in places where the net result was a galaxy. How did that happen? And where did the angular momentum come from? Then again, did it happen? This is at the limit of observational techniques, so have we drawn an invalid conclusion from difficult-to-interpret data? Again, I have no idea, but I mention this to show there is still a lot to learn about how things started.

Betelgeuse Dimmed

First, I apologize for the initial bizarre appearance of my last post. For some reason, some computer decided to slice and dice. I have no idea why, or for that matter, how. Hopefully, this post will have better luck.

Some will recall that around October 2019 the red supergiant Betelgeuse dimmed, specifically from magnitude +0.5 down to +1.64. As a variable star, its brightness oscillates, but it had never dimmed like this before, at least within our records. This generated a certain degree of nervousness or excitement because a significant dimming is probably what happens initially before a supernova. There has been no nearby supernova since the one that made the Crab Nebula in 1054 AD.

To put a cool spot into perspective, if Betelgeuse replaced the sun, its size is such that it would swallow Mars, and its photosphere might almost reach Saturn. Its mass is estimated at at least ten times, or possibly up to twenty times, the mass of the sun. Such a variation sparks my interest because when I pointed out that my proposed dependence of characteristic planetary orbital semimajor axes on the cube of the mass of the star ran into trouble because the stellar masses were not known that well, I got criticised by an astronomer: they knew the masses to within a few percent. The difference between ten times the sun’s mass and twenty times is more than a few percent. This is a characteristic of science. They can measure stellar masses fairly accurately in double star systems, then they “carry over” the results to other stars.
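
To see why that uncertainty stings when the proposed relationship goes as the cube of the stellar mass, a quick illustrative calculation; the factor-of-two mass range is the one quoted above:

```python
# If a characteristic semimajor axis a scales as M**3, then a factor-of-two
# uncertainty in the stellar mass M becomes a factor-of-eight uncertainty in a.
for mass_solar in (10.0, 20.0):
    relative_a = (mass_solar / 10.0) ** 3
    print(mass_solar, relative_a)   # 10 -> 1.0, 20 -> 8.0
```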

But back to Betelgeuse. Our best guess as to distance is between 500 and 600 light years. Interestingly, we have observed its photosphere, the outer “shell” of the star that is transparent to photons, at least to a degree, and it is non-spherical, presumably due to stellar pulsations that send matter out from the star. The star may seem “stable” but actually its surface (whatever that means) is extremely turbulent. It is also surrounded by something we could call an atmosphere, an envelope of matter about 250 times the size of the star. We don’t really know its size because these asymmetric pulsations can add several astronomical units (the Earth-sun distance) in selected directions.

Anyway, back to the dimming. Two rival theories were produced: one involved the development of a large cooler cell that came to the surface and was dimmer than the rest of Betelgeuse’s surface. The other was the partial obscuring of the star by a dust cloud. Neither proposition really explained the dimming, nor why Betelgeuse was back to normal by the end of February 2020. Rather unsurprisingly, the next proposition was that the dimming was caused by both of those effects.

Perhaps the biggest problem was that telescopes could only look at the star some of the time; however, a Japanese weather satellite ended up providing just the data they needed. This was somewhat inadvertent. The weather satellite was in geostationary orbit 35,786 km above the Western Pacific. It was always looking at half of Earth, and always the same half, but the background was also always constant, and in the background was Betelgeuse. The satellite revealed that the star overall cooled by 140 degrees C. This was sufficient to reduce the heating of a nearby gas cloud, and when that cooled, dust condensed and obscured part of the star. So both theories were right, and even more strangely, both contributed roughly equally to what was called “the Great Dimming”.

It also suggested more was happening to the atmospheric structure of the star before this happened. By looking at the infrared lines, it became apparent that water molecules in the upper atmosphere that would normally create absorption lines in the star’s spectrum suddenly switched to producing emission lines. Something had made them unexpectedly hotter. The current thinking is that a shock-wave from the interior propelled a lot of gas outwards from the star, leading to a cooler surface while heating the outer atmosphere. That is regarded as the best current explanation. It is possible that there was a similar dimming event in the 1940s, but otherwise we have not noticed much; earlier events could have occurred when our detection methods were not accurate enough. People may not want to get carried away with, “I think it might be dimmer.” Anyway, for the present, no supernova. But one will occur, probably within the next 100,000 years. Keep looking upwards!

Some Scientific Curiosities

This week I thought I would try to be entertaining, to distract myself and others from what has happened in Ukraine. So to start with, how big is a bacterium? As you might guess, it depends on which one, but I bet you didn’t guess the biggest. According to a recent article in Science Magazine (doi: 10.1126/science.ada1620) a bacterium has been discovered living in Caribbean mangroves that, while a single cell, is 2 cm long. You can see it (proposed name, Thiomargarita magnifica) with the naked eye.

More than that, think of the difference between prokaryotes (most bacteria and single-cell microbes) and eukaryotes (most everything else that is bigger). Prokaryotes have free-floating DNA, while eukaryotes package their DNA in a nucleus and put various cell functions into separate vesicles, and can move molecules between the vesicles. But this bacterial cell includes two membrane sacs, only one of which contains DNA. The other accounts for 73% of the total volume and seems to be filled with water. The genome was nearly three times bigger than those of most bacteria.

Now, from Chemistry World. You go to the Moon or Mars, and you need oxygen to breathe. Where do you get it from? One answer is electrolysis, so do you see any problems, assuming you have water and you have electricity? The answer is that it will be up to 11% less efficient than on Earth. The reason is the lower gravity. We already knew that electrolysing water at zero g, such as on the space station, is less efficient because the gas bubbles have no net force on them. The force arises through different densities generating a weight difference, and the lighter gas rises, but at zero g there is no lighter gas – the gases might have different masses, but they all have no weight. So how do we know this effect will apply on Mars or the Moon? Such experiments were carried out on board free-fall flights with the help of the European Space Agency. Of course, these free-fall experiments are somewhat brief, as the pilot of the aircraft will have this desire not to fly into the Earth.
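
The underlying force law is simple enough to sketch; the densities and bubble size below are illustrative assumptions of mine, not figures from the article:

```python
# Buoyant force on a gas bubble in water: F = (rho_liquid - rho_gas) * V * g.
# At g = 0 the force vanishes, so bubbles never rise off the electrodes;
# on the Moon or Mars it is merely weaker than on Earth.
RHO_WATER = 1000.0   # kg/m^3
RHO_GAS = 1.0        # kg/m^3, rough density of the gas inside a bubble
VOLUME = 1.0e-9      # m^3, roughly a 1 mm^3 bubble

for place, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62), ("orbit", 0.0)]:
    force = (RHO_WATER - RHO_GAS) * VOLUME * g
    print(f"{place}: {force:.2e} N")
```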

The reason the electrolysis is slower is that gas bubble desorption is hindered. Getting the gas off the electrodes depends on density differences, and hence a force, but at zero gravity there is no such force. One possible solution being considered is a shaking electrolyser. The next thing we shall see is requests for funding to build different sorts of electrolysers. They have considered using centrifuges to construct models to compute what the lower gravity would do, but an alternative might be to have such a process operating within a centrifuge. It does not need to be a fast-spinning centrifuge, as all you are trying to do is generate the equivalent of 1 g. Also, one suggestion is that people on Mars or the Moon might want to spend a reasonable fraction of their time inside one such large centrifuge, to help keep their bone density up.

The final oddity comes from Physics World. As you may be aware, according to Einstein’s relativity, time, or more specifically, clocks, run slower as the gravity increases. Apparently this was once tested by taking a clock up a mountain and comparing it with one kept at the base, and General Relativity was shown to predict the correct result. However, now we have improved clocks. Apparently the best atomic clocks are so stable they would be out by less than a second after running for the age of the universe. This precision is astonishing. In 2018 researchers at the US National Institute of Standards and Technology compared two such clocks and found their precision was about 1 part in ten to the power of eighteen. It permits a rather astonishing outcome: it is possible to detect the tiny frequency difference between the two clocks if one is a centimetre higher than the other. This will permit “relativistic geodesy”, which could be used to more accurately measure the earth’s shape, and the nature of the interior, as variations in the density of the rock below would cause minute changes in gravitational potential. Needless to say, there is a catch: the clocks may be very precise but they are not very robust. Taking them outside the lab leads to difficulties, like the clock stopping.
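
That one-centimetre claim can be checked against the standard weak-field formula, in which the fractional frequency shift is g·Δh/c²; a minimal sketch:

```python
# Gravitational time dilation near Earth's surface: df/f ≈ g * dh / c**2.
g = 9.81          # m/s^2, surface gravity
c = 2.998e8       # m/s, speed of light
dh = 0.01         # m, a one-centimetre height difference

print(g * dh / c**2)  # ~1.1e-18, right at the stated clock precision
```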

Now they have done better – using strontium atoms, uncertainty to less than 1 part in ten to the power of twenty! They now claim they can test for quantum gravity. We shall see more in the not too distant future.

Did a Galactic-Scale Collision Lead to Us?

Why do we have a planet that we can walk around on, and generally mess up? As most of us know, the atoms we use, apart from hydrogen, will have originated in a nova or supernova, and some of the planet’s material possibly even came from collisions of neutron stars. These powerful events send clouds of dust into gas clouds, but then what? We call it dust, but the particle size is more like that of smoke. Telescopes like the Hubble space telescope have photographed huge clouds of gas and dust in space. These can be quite large; the Orion molecular cloud complex, for example, is hundreds of light years across. These giant clouds can sit there and do very little, or start forming stars. The question then is, what starts it? The hydrogen and helium, which are by far the predominant components (the mass of hydrogen is about ten thousand times that of anything else except helium), are always colliding with each other, and with dust particles, but they always bounce back because there is no way to lose their kinetic energy. The gas has been around for 13.6 billion years, so why does it suddenly collapse?

To make things slightly more complicated, the cloud does not collapse on itself. Rather, sections collapse to form stars. The section that formed our solar system would probably have been a few thousand astronomical units across (an astronomical unit, AU, is the distance between Earth and the Sun), and this is a trivial fraction of such giant clouds. So what happens is sections collapse, leaving the cloud with “holes”, a little like a Swiss cheese.

For us, about 4.6 billion years ago such a piece of a gigantic gas cloud started to collapse upon itself, which eventually led to the formation of the solar system, and us. Perhaps we should thank whatever caused that collapse. A common explanation is that a nearby supernova sent a shockwave through the gas, and that may well be correct for a specific situation, but there is another source of disruption: galactic collisions. We have observed these elsewhere, and invariably such collisions lead to a good deal of star formation. Major galaxies do not collide that often because they are so far away from each other. As an example, in about five billion years, Andromeda will collide with the Milky Way. That may well initiate a lot of new star formation, as long as there are plenty of gas and dust clouds left.

However, there are some galactic collisions that are a bit more frequent. There is something called the Sagittarius Dwarf Spheroidal Galaxy, which is approximately a tenth the diameter of the Milky Way. It comprises four main globular clusters and is spiralling around our galaxy on a polar orbit about 50,000 light years from the galactic core, passing through the plane of the Milky Way periodically. It apparently did this about five to six billion years ago, then about two billion years ago, and again one billion years ago. Coupled with that, a team of astronomers has argued that star formation in the Milky Way peaked at around 5.7, 1.9 and 1 billion years ago. The argument appears to be that such star formation arose at about the same times that the dwarf galaxy passed through the Milky Way. In this context, some of our nearest stars fit this hypothesis. Thus Tau Ceti, EZ Aquarii, and Alpha Centauri A and B are about 5.8 billion years old, Procyon is about 1.7 billion years old, while Epsilon Eridani is about 900 million years old.

However, if we look at other local stars, we find the Sun, Lacaille 9352 and Proxima Centauri are about 4.5 billion years old, Epsilon Indi is about 1.3 billion years old, Alpha Ophiuchi A is about 750 million years old, Sirius is about 230 million years old, and Wolf 359 is between 100 and 300 million years old. Of course, a galaxy passing through another galaxy will consume a lot of time, so it is not clear what to make of this. There is always a temptation to correlate and assume causation, and that is unsound. On the other hand, the more massive Milky Way may have stripped some gas from the smaller galaxy, and a wave of gas and dust on a different orbit could have long-term effects.

In case you think the stars in a galaxy are on well-behaved orbits around the centre, that is wrong. Because the galaxy formed from the collision and absorption of smaller galaxies, the motion is actually quite chaotic, but because stars are so far apart, by and large they ignore each other. Thus Kapteyn’s Star orbits the galactic centre and is quite close to our Sun, except it is going in the opposite direction. We “meet again” on the other side of the galaxy in about 120 million years. So to summarize, we still don’t know what caused this solar system to form, but we should be thankful that we got what we did. Our system happens to be just about right for our life to form, but as you will see when the second edition of my ebook “Planetary Formation and Biogenesis” comes out, there are a lot of things that could have gone wrong. Let’s not help more things to go wrong.

Warp Drives

“Warp drives” originated in the science fiction show “Star Trek” in the 1960s, but in 1994 the Mexican physicist Miguel Alcubierre published a paper arguing that under certain conditions exceeding light speed was not forbidden by Einstein’s General Relativity. Alcubierre reached his solution by assuming it was possible, then working backwards to see what was required, while rejecting those awkward points that arose. The concept is that the ship sits in a bubble, and spacetime in front of the ship is contracted, while that behind the ship is expanded. In terms of geometry, that means the distance to your destination has got smaller, while the distance from where you started has got longer, i.e. you moved relative to the starting point and the destination. One of the oddities of being in such a bubble is that you would not sense you are moving. There would be no accelerating forces because technically you are not moving; it is the space around you that is moving. Captain Kirk on the Enterprise is not squashed to a film by the acceleration! Since then there have been a number of proposals. General relativity is a gold mine for academics wanting to publish papers because it is so difficult mathematically.

There is one small drawback to these proposals: you need negative energy. Now we run into definitions. Before you point out that the gravitational field has negative energy, note that it is generated by positive mass, and it contracts the distance between you and the target, i.e. you fall towards it. If you like, that can be the front of your drive. The real problem is at the other end – you need a repulsive field that sends you further from where you started, and if you think gravitationally, that is the opposite field, presumably generated from negative mass.

One objection often heard to negative energy is that if quantum field theory were correct, the vacuum would collapse to negative energy, which would lead to the Universe collapsing on itself. My view is, not necessarily. The negative potential energy of the gravitational field causes mass to collapse onto itself, and while we do get black holes in accord with this, the Universe is actually expanding. Since quantum field theory assumes a vacuum energy density, and calculations of the relativistic gravitational field arising from this are in error by a factor of ten multiplied by itself 120 times, just maybe it is not a good guideline here. It predicts the Universe has long since collapsed, but here we are.

The only repulsive stuff we think might be there is dark energy, but we have no idea how to lay hands on it, let alone package it, or even whether it exists. However, all may not be lost. I recently saw an article in Physics World stating that a physicist, Erik Lentz, has claimed there is no need for negative energy. The concept is that energy could be capable of arranging the structure of space-time as a soliton. (A soliton is a wave packet that travels more like a bubble; it does not disperse or spread out, but otherwise behaves like a wave.) There is a minor problem. You may have heard that the biggest problem with rockets is the mass of fuel they have to carry before you get started. Well, don’t book a space flight yet. As Lentz has calculated it, a 100 m radius spacecraft would require energy equivalent to hundreds of times the mass of Jupiter.

There will be other problems. It is one thing to have opposite energy densities on different sides of your bubble. You still have to convert those to motion and go exactly in the direction you wish. If you cannot steer as you go, or worse, you don’t even know for sure exactly where you are and the target is, is there a point? Finally, in my science fiction novels I have steered away from warp drives. The only times my characters went interstellar distances I limited myself to a little under light speed. Some say that lacks imagination, but stop and think. You set out to do something, but suppose where you are going will have aged 300 years before you get there. Come back, and your then associates have been dead for 600 years. That raises some very awkward problems that make a story different from the usual “space westerns”.

What Happens Inside Ice Giants?

Uranus and Neptune are a bit weird, although in fairness that may be because we don’t really know much about them. Our information is restricted to what we can see in telescopes (not a lot) and the Voyager fly-bys, which, of course, also devoted a lot of their imaging attention to the moons. The planets are rather large featureless balls of gas and cloud, and you can only do so much on a “zoom-past”. One of the odd things is the magnetic fields. On Earth, the magnetic field axis corresponds with the axis of rotation, more or less, but not so much there. Earth’s magnetic field is believed to be due to a molten iron core, but that could not occur there. That probably needs explaining. The iron in the dust that is accreted to form planets is a fine powder; the particles are micron-sized. The Earth’s core arises because the iron formed lumps, melted, and flowed to the core because it is denser. In my ebook “Planetary Formation and Biogenesis” I argue that the iron actually formed lumps in the accretion disk. While the star was accreting, the region around where Earth is reached something like 1600 degrees C, above the melting point of iron, so it formed globs. We see the residues of that in the iron-cored meteorites that sometimes fall to Earth. However, Mars does not appear to have an iron core. Within that model, the explanation is simple. While on Earth the large lumps of iron flowed towards the centre, on Mars, since the disk temperature falls off with distance from the star, at 1.5 AU the large lumps did not form. As a consequence, the fine iron particles could not move through the highly viscous silicates, and instead reacted with water and oxidised, or, if you prefer, rusted.

If the lumps that formed for Earth could not form at Mars because it was too far from the star, the situation was worse for Uranus. As with Mars, the iron would be accreted as a fine dust, and as the ice giants started to warm up from gravitational collapse, the iron, once it got to about 500 degrees Centigrade, would rapidly react with the water and oxidise to form iron oxides and hydrogen. Why did that not happen in the accretion disk? Maybe it did, and maybe at Mars the iron was always accreted as iron oxides, but by the time you get to where Earth is, there would be at least ten thousand times more hydrogen than iron, and hot hydrogen reduces iron oxide to iron. Anyway, Uranus and Neptune will not have iron cores, so what could generate the magnetic fields? Basically, you need moving electric charge. The planets are moving (rotating), so where does the charge come from?

The answer recently proposed is superionic ice. You will think that ice melts at 0 degrees Centigrade, and yes, it does, but only at atmospheric pressure. Increase the pressure and it melts at a lower temperature, which is how you make snowballs. But ice is weird. You may think ice is ice, but that is not exactly correct. There appear to be about twenty ices possible from water, although there are controversial aspects, because high-pressure work is very difficult, and while you get information, it is not always clear what it refers to. You may think that, irrespective of that, ice will be liquid at the centre of these planets because it will be too hot for a solid. Maybe.

In a recent publication (Nature Physics, 17, 1233-1238, November 2021) the authors studied ice in a diamond anvil cell at pressures up to 150 GPa (which is about 1.5 million times greater than our atmospheric pressure) and about 6,500 K (near enough to Centigrade at this temperature). They interpret their observations as showing there is superionic ice there. The use of “about” is because there will be uncertainty due to the laser heating, and the relatively short times up there. (Recall diamond will also melt.)

The proposed superionic ice is one wherein, because of the pressure, the hydrogen nuclei can move about the lattice of oxygen atoms, and they are the cause of the electrical conduction. These conditions are what are expected deep in the interior, but not at the centre, of these two planets. There will presumably be zones where there is an equilibrium between ice and liquid, and convection of the liquid coupled with the rotation will generate the movement of charge necessary to make the magnetism. At least, that is one theory. It may or may not be correct.

Your Water Came from Where?

One interesting question when considering why Earth has life is, from where did we get our water? This is important because essentially it is the difference between Earth and Venus. Both are rocky planets of about the same size. They have similar amounts of carbon dioxide, with Venus having about 50% more than Earth, and four times the amount of nitrogen, but Venus is extremely short of water. If we are interested in knowing whether there is life on other planets elsewhere in the cosmos, we need to know about this water issue. The reason Venus is hell and Earth is not is not that Venus is closer to the Sun (although that would make Venus warmer than Earth) but rather that it has no water. What happened on Earth is that the water dissolved the CO2 to make carbonic acid, which in turn weathered rocks to make the huge deposits of lime, dolomite, etc. that we have on the planet, and to make the bicarbonates in the sea.

One of the more interesting scientific papers has just appeared in Nature Astronomy (https://doi.org/10.1038/s41550-021-01487-w), although the reason I find it interesting may not meet with the approval of the authors. What the authors did was examine a grain of the dust retrieved from the asteroid Itokawa by the Japanese space agency, and they “found it had water on its surface”. Note it had not evaporated after millions of years in a vacuum. The water is produced, so they say, by space weathering. What happens is that the sun sends out bursts of solar wind, which contains high-velocity protons. Space dust is made of silicates, which involve silicon bound to four oxygen atoms in a tetrahedron, with each oxygen atom bound to something else. Suppose, for the sake of argument, the something else is a magnesium atom. A high-energy hydrogen nucleus (a proton) strikes it and makes Si-OH and, say, Mg+, with the Mg ion and the silicon atom remaining bound to whatever else they were bound to. It is fairly standard chemistry that 2 Si-OH → Si-O-Si plus H2O, so we have made water. Maybe, because the difference between Si-OH on a microscopic sample of dust and dust plus water is rather small, except, of course, Si-OH is chemically bound to the rock and part of it, and rock does not evaporate. However, here is the alleged “clincher”: the ratio of deuterium to hydrogen on this dust grain was the same as in Earth’s water.

Earth’s water has about 5 times more deuterium than solar hydrogen, Venus about a hundred times. The enhancement arises because if anything is to break a bond in H-O-D, the hydrogen is slightly more likely to go, because the deuterium has a slightly stronger bond to the oxygen. Also, being slightly heavier, H-O-D is slightly less likely to get to the top of the atmosphere.

So, a light bulb moment: Earth’s water came from space dust. They calculate that this would produce twenty litres of water for every cubic meter of rock. This dust is wet! If that dust rained down on Earth it would deliver a lot of water. The authors suggest about half the water here came that way, while the rest came from carbonaceous chondrites, which have the same D/H ratio.

So, notice anything? There are two problems when forming a theory. First, the theory should account for everything of relevance. In practice this might be a little much, but there should be no obvious problems. Second, the theory should have no obvious inconsistencies. First, let us look at the “everything”. If the dust rained down on the Earth, why did the same amount not rain down on Venus? There is a slight weakness in this argument, because if it did, maybe the water was largely destroyed by the sunlight. If that happened, a high D/H ratio would result, and that is found on Venus. However, if you accept that, why did Earth’s water not also have its D/H ratio increased? The simplest explanation would be that it did, but not to the extent of Venus, because Earth had more water to dilute it. Why did the dust not rain down on the Moon? If the answer is that the dust had been blown away by the time the Moon was formed, that makes sense, except now we are asking the water to be delivered at the time of accretion, and the evidence on Mars was that water was not there until about 500 million years later. If it arrived before the disk dust was lost, then the strongest supply of water would come closest to the star, and by the time we got to Earth, it would be screened by inner dust. Venus would be the wettest, and it isn’t.

Now the inconsistencies. The strongest flux of solar wind at this distance is what bombards the Moon, and while the dust was only here for a few million years, the Moon has been there for 4.5 billion years. Plenty of time to get wet. Except it has not. The surface of the dust on the Moon shows this reaction, and there are signs of water on the Moon, especially in the more polar regions, and the average Moon rock has got some water. But the problem is these solar winds only hit the surface. Thus the top layer or so of atoms might react, but nothing inside that layer. We can see those Si-OH bonds with infrared spectroscopy, but the Moon, while it has some such molecules, cannot be described as wet. My view is this is another one of those publications where people have got carried away, more intent on getting a paper that gets cited for their CVs than on actually stopping and thinking about the problem.

Quantum weirdness, or not?

How can you test something without touching it with anything, even a single photon? Here is one of the weirder aspects of quantum mechanics. First, we need a tool, and we use the Mach-Zehnder interferometer, which works as follows.

There is a source that sends individual photons to a beam splitter (BS1), which divides the beam into two sub-beams, each of which proceeds to a mirror that redirects it to meet the other at a second beam splitter (BS2). The path lengths of the two sub-beams are exactly the same (in practice a little adjustment may be needed to get this to work). Each sub-beam (say R and T, for reflected and transmitted at BS1) is reflected once by a mirror. When reflected, a beam sustains a phase shift of π, and R sustains such a phase shift at BS1 as well. At BS2, the waves going to D1 have both had two reflections, so both have had a phase shift of 2π and they interfere constructively; therefore D1 registers the photon arrival. However, it is a little more complicated at D2. The portions of T and R that would head towards D2 have a net phase difference of π within the beam splitter, so they destructively interfere, and the light continues in the direction of net constructive interference; hence only detector D1 registers. Now, suppose we send through one photon. At BS1, it seems the wave goes both ways, but the photon, which acts as a particle, can only go one way. You get exactly the same result, because it does not matter which way the photon goes; the wave goes both ways, and the phase shift means only D1 registers.
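
A minimal numerical sketch of this, using numpy and the common convention that reflection at a 50/50 splitter contributes a factor of i (the phase bookkeeping differs from the π shifts described above, but the detector statistics come out the same):

```python
import numpy as np

# A lossless 50/50 beam splitter: reflection picks up a factor of 1j.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0], dtype=complex)   # photon enters BS1 via one port

# Both arms open: the mirrors add a common phase, which we can ignore.
both_open = BS @ BS @ photon_in
print(np.abs(both_open) ** 2)   # [0. 1.]: every photon exits the D1 port

# Block arm R: the blocker absorbs that amplitude before it reaches BS2.
after_bs1 = BS @ photon_in
after_bs1[1] = 0                # amplitude in the blocked arm is removed
blocked = BS @ after_bs1
print(np.abs(blocked) ** 2)     # [0.25 0.25]: the missing half hit the blocker
```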

Now, suppose we block one of the paths. There is now no interference at BS2, so the photons that reach it go to D1 and D2 equally. That means we can detect an obstruction on path R even if no photon goes along it.

Now, here is the weird conclusion proposed by Elitzur and Vaidman [Foundations of Physics 23, 987 (1993)]. Suppose you have a large supply of bombs, but you think some may be duds. You attach a sensor to each bomb wherein if one photon hits it, it explodes; a dud’s sensor does not respond, so it does not block the beam. (It would be desirable to have a high energy laser as a source, otherwise you will be working in the dark setting this up.) At first sight all you have to do is shine light on said bombs, but at the end all you will have are duds, the good ones having blown up. But suppose we put a bomb in the arm of such an interferometer so that it blocks the photon. Half the time a photon will strike it, and it will explode if it is good, but consider the other half. When the photon gets to the second beam splitter, it has a 50% chance of going to either D1 or D2. If it goes to D1 we know nothing, but if it goes to D2 we know the arm was blocked, because a dud leaves the interference intact and D2 never fires. So the bomb is live, yet no photon touched it. For a good bomb, then, the probability is ¼ that we learn it is good without destroying it, ½ that we destroy it, and ¼ that we learn nothing. In the last case we send another photon, and continue until we get a hit at D2 or an explosion. Summing the resulting series, the probability that we can certify a good bomb without sensing it with anything ends up at 1/3. So we end up keeping 1/3 of our good bombs and locate all the duds.
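
The 1/3 comes from treating each inconclusive D1 result as a retry, which turns the per-photon probabilities into a geometric series; a quick sketch:

```python
from fractions import Fraction

# Per photon aimed at a live bomb: 1/2 it explodes, 1/4 D2 fires (the bomb is
# certified live without a photon touching it), 1/4 D1 fires (inconclusive).
p_detect = Fraction(1, 4)
p_retry = Fraction(1, 4)

# Retrying after every inconclusive result sums to a geometric series:
# p_detect * (1 + p_retry + p_retry**2 + ...) = p_detect / (1 - p_retry).
p_saved = p_detect / (1 - p_retry)
print(p_saved)   # 1/3 of the good bombs are certified intact
```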

Of course, this is a theoretical prediction. As far as I know, nobody has ever tested bombs, or anything else for that matter, this way. In standard quantum mechanics this is just plain weird. Of course, if you accept the pilot wave approach of de Broglie or Bohm, or for that matter my guidance wave version, where there actually is a physical wave rather than the wave being a mere calculating aid, it is rather straightforward. Can you separate these versions? Oddly enough, yes, if reports are correct. If you have a version of this with an electron, the end result is that any single electron has a 50% chance of firing each detector. Of course, one electron fires only one detector. What does this mean? The beam splitter (which is a bit different for particles) will send the electron either way with 50% probability, but the wave appears to always follow the particle and is not split. Why would that happen? The mathematics of my guidance wave require the wave to be regenerated continuously. For light, this happens from the wave itself, through Maxwell’s theory of light being an oscillation of electromagnetic fields: the oscillation of the electric field causes the next magnetic oscillation, and vice versa. But an electron does not have this option, and its wave has to be tolerably localised in space around the particle.

Thus if the electron version of this Mach-Zehnder interferometer does do what the reference I saw claims it did (unfortunately, it did not cite a source), then this odd behaviour of electrons shows that the wave function, for particles at least, cannot be non-local (or the beam splitter did not work; there is always an alternative conclusion to any single observation).

Polymerised Water

In my opinion, probably the least distinguished moment in science in the last sixty years occurred in the late 1960s, and not for the seemingly obvious reason. It all started when Nikolai Fedyakin condensed steam in quartz capillaries and found it had unusual properties, including a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 degrees C, a boiling point of about 150 degrees C, and a density of 1.1–1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour:

(a) the water had dissolved quartz,

(b) the water had polymerised.

Since recently fused quartz is insoluble in water at atmospheric pressures, he concluded that the water must have polymerised. There was no other option. An infrared spectrum of the material was produced by a leading spectroscopist, from which force constants were obtained, and a significant number of papers were published on the chemical theory of polywater. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life. Then there was the inevitable wake-up call: the IR spectrum of the alleged material bore a remarkable resemblance to that of sweat. Oops. (Given what we know now, whatever they were measuring could not have been what everyone called polywater, and probably was sweat; how that happened to a very respected scientist remains unknown.)

This material brought out some of the worst in logic. A large number of people wanted to work with it, because theory validated its existence. I gather the US navy even conducted or supported research into it. The mind boggles here: did they want to encase enemy vessels in toffee-like water, or were they concerned someone might do it to them? Or even worse, turn the oceans into toffee, and thus end all life on Earth? The fact that the military got interested, though, shows it was taken very seriously. I recall one paper that argued Venus was like it is because all its water polymerised!

Unfortunately, I think the theory validated the existence because, well, the experimentalists said it did exist, so the theoreticians could not restrain themselves from “proving” why it existed. The key to the existence is that they showed through molecular orbital theory that the electrons in water had to be delocalized. Most readers won’t see the immediate problem, because we are getting a little technical here, but to put it in perspective, molecular orbital theory assumes the electrons are delocalized over the whole molecule. If you then let water molecules come together, the first assumption requires the electrons to be delocalised over the assembly, which in turn forces the system to become one molecule. If all you can end up with is what you assumed in the first place, your theoretical work is not exactly competent, let alone inspired.

Unfortunately, these calculations involve quantum mechanics. Quantum mechanics is one of the most predictive theories ever, and almost all your electronic devices have parts that would not have been developed but for knowledge of quantum mechanics. The problem is that for any meaningful problem there is usually no analytical solution from the formal quantum theory generally used, and any actual answer requires some rather complicated mathematics, and in chemistry, because of the number of particles, some approximations. Not everyone agreed on those. The same computer code in different hands sometimes produced opposite results, with no explanation of why the results differed. If there were no differences in the implied physics between methods that gave opposing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. The average computational paper gives very little insight into what was done, and these papers were actually somewhat worse than usual. It was, “Trust me, I know what I’m doing.” In general, they did not.

So, what was it? Essentially, ordinary water with a lot of dissolved silica, i.e. option (a) above. Deryagin was unfortunate in falling for the logical fallacy of the accident. Water at 100 degrees C does not dissolve quartz. If you don’t believe me, try boiling water in a pot with a piece of silica. Water does dissolve it at supercritical temperatures, but these were not involved. So what happened? Seemingly, water condensing in quartz capillaries does dissolve it. However, now I come to the worst part. Here we had an effect that was totally unexpected, so what happened next? After the debacle, nobody was prepared to touch the area. We still do not know why silica in capillaries is so eroded, yet perhaps there is some important information here; after all, water flows through capillaries in your body.

One of the last papers written on “anomalous water” was in 1973, and one of the authors was John Pople, who went on to win a Nobel Prize for his work in computational chemistry. I doubt that paper is one he is most proud of. The good news is that the co-author, who I assume was a post-doc and can remain anonymous because she almost certainly had little control over what was published, had a good career following this.

The bad news was for me. My PhD project involved whether electrons were delocalized from cyclopropane rings. My work showed they were not; however, computations from the same type of computational code said they were. Accordingly, everybody ignored my efforts to show what was really going on. More on this later.