Some Lesser Achievements of Science

Most people probably think that science is a rather dull quest for the truth, best left to the experts, who are all diligently out to find it. Well, not exactly. Here is a video link in which Sean Carroll points out that most physicists are really uninterested in understanding what quantum mechanics is about: https://youtu.be/ZacggH9wB7Y

This is rather awkward because quantum mechanics is one of the two greatest scientific advances of the twentieth century, and here we find that all but a few of its exponents neither understand what is going on nor care. They have a procedure by which they can get answers, so that is all that matters, is it not? Not in my opinion. Many of these people are university teachers, and when they don’t care, that attitude gets passed on to the students, so they don’t care either. The system is degenerating.

But, you protest, we still get the right answers. That leaves open the question, do we really? From my experience in chemistry, the only theories required to explain chemical observations (apart perhaps from what atoms are made of) are electromagnetic theory and quantum mechanics. Those in the know will point to the flood of computational papers published, so surely we must understand? Not at all. Almost all the papers calculate something that is already known, and because integrating the differential equations requires a number of constants, and because the equations cannot be solved analytically, the constants can be assigned so that the correct answers are obtained. Fortunately, for very similar problems the same constants will suffice. If you find that hard to believe, the process is called validation, and you can read about it in John Pople’s Nobel Prize lecture. Actually, I believe all the computations except those for the hydrogen molecule are wrong because everybody uses the wrong wave functions, but that is another matter.

That scientists do not care about their most important theory is bad, but there is worse, as reported in Nature (https://doi.org/10.1038/d41586-021-01436-7). Apparently, in 2005 three PhD students wrote a computer program called SCIgen for amusement. What this program does is write “scientific papers”. The research behind them? Who needs that? It cobbles together words with random titles, text and charts, and the result is essentially nonsense. Anyone can generate them. (Declaration: I did not use this software for this or any other post!) The original purpose was “maximum amusement” and the papers were generated for conferences, but because the software is freely available, various people have since sent them to scientific journals, the peer review process failed to spot the gibberish, and the journals published them. There are apparently hundreds of these nonsensical papers floating around. Further, they can appear under relatively “big names”, because apparently articles can get through under someone’s name without that someone knowing anything about it. Why give someone else an additional paper? A big name is more likely to get through peer review, and the papers can be padded with genuine references, although of course these have no relevance to the submission. The reason for doing this is simple: it pads the citation counts of the cited authors, which makes their CVs look better and improves their chances when applying for funds. With money at stake, it is hardly surprising that this sort of fraud has crept in.

Another unsettling aspect of scientific funding has been uncovered (Nature 593: 490-491). Funding panels are more likely to give EU early-career grants to applicants connected to the panelists’ institutions; in other words, the panelists have a tendency to give the money to “themselves”. Oops. A study of the grants showed that “applicants who shared both a home and a host organization with one panellist or more received a grant 40% more often than average” and “the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but there seemed to be no discernible effect in physics and engineering.” Here, physics is clean! One explanation might be that the best applicants want to go to the most prestigious institutions. Maybe, but would that not apply to physics? An evaluation designed to test for such bias in the life sciences showed that “successful and connected applicants scored worse on these performance indicators than did funded applicants without such links, and even some unsuccessful applicants.” You can draw your own conclusions, but they do not look good.

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us think. The outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the dynamics expected. Worse, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations. One is that there is additional matter we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each, although modified gravity does have the built-in escape that we might simply not yet understand how it should be modified.
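
To make the rotation problem concrete, here is a minimal sketch in Python with purely illustrative numbers (the central mass and the radii are placeholders, not data), comparing the Keplerian speed expected if essentially all the mass sits in the visible central region with the roughly flat speeds actually measured in the outer parts of galaxies.

```python
# Toy comparison of a Keplerian rotation curve with a flat observed one.
# The mass and radii below are illustrative placeholders, not data.
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1.0e41               # assumed visible mass concentrated centrally, kg
radii_kpc = [5, 10, 20, 40]      # sample galactocentric radii, kiloparsecs
kpc = 3.086e19                   # metres per kiloparsec

for r_kpc in radii_kpc:
    r = r_kpc * kpc
    v_predicted = math.sqrt(G * M_visible / r)   # falls off as 1/sqrt(r)
    print(f"r = {r_kpc:>2} kpc: predicted v = {v_predicted/1e3:6.1f} km/s "
          "(observed curves stay roughly flat instead)")
```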

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. However, so far all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies. It is rather hard to do an experiment on a galaxy, so that is not exactly helpful. So what physicists have done is to make a guess as to what it will be, and, not surprisingly, to make the guess in a form they can do something about if they are correct. The problem is that we only know it has to have mass, because it exerts a gravitational effect, and that it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will not be dark matter stars. That is not exactly much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, the target can get rid of the energy by giving off a photon. So one such detector had huge tanks containing 370 kg of liquid xenon. It was buried deep underground, and in theory massive particles of dark matter could be separated from occasional neutron events because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet, they will deposit energy as they scatter, and eventually annihilate within the planet. This additional energy will be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they want someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 Jupiter masses; temperatures above those otherwise expected can be attributed to dark matter. Further, since dark matter density should be higher near the galactic centre, and collisional velocities higher there, the difference in surface temperatures between comparable planets may signal the detection of dark matter.
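
As a rough illustration of what “temperatures above those otherwise expected” implies, the back-of-envelope sketch below (my own, not from the paper) converts an assumed extra heating power from captured dark matter into an equilibrium surface temperature via the Stefan-Boltzmann law; the heating powers and the planetary radius are placeholder assumptions.

```python
# Back-of-envelope: equilibrium temperature of a Jupiter-sized planet
# from an assumed dark-matter heating power (illustrative numbers only).
import math

sigma = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
R = 7.0e7                        # assumed planetary radius, roughly Jupiter's, m
area = 4 * math.pi * R**2

for P_dm in (1e15, 1e17, 1e19):  # assumed dark-matter heating power, W
    T = (P_dm / (sigma * area)) ** 0.25   # blackbody equilibrium temperature
    print(f"heating {P_dm:.0e} W -> surface temperature ~ {T:5.1f} K")
```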

Can you see problems? To me, the flaw lies in “what is expected?” One issue is getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat. Once a planet gets to about 16 Jupiter masses it starts fusing deuterium. Another issue lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if it had accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way, irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), then this suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. However, no doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, since so much else has failed by assuming large particles, I would suggest one plausible approach is to look for small ones. Are there any unexplained momenta in collisions at the Large Hadron Collider? What most people overlook is that about 99% of the data generated is discarded (because there is so much of it), but would it hurt to spend just a little effort examining the fine detail for things you do not expect to see?

How Many Tyrannosaurs Were There?

Suppose you were transported back to the late Cretaceous: what is the probability that you would see a Tyrannosaurus? That depends on a large number of factors, and to simplify, I shall limit myself to T. rex. There were various tyrannosaurs, but probably in different times and different places. As far as we know, T. rex was limited to what was effectively an island land mass known as Laramidia, which now survives as part of western North America. In a recent edition of Science, a calculation was made, and it starts with the premise, known as “Damuth’s Law”, that population density is negatively correlated with body mass through a power law involving the body mass and two assignable constants. What does that mean? It is an empirical relationship that says the bigger the animal, the fewer will be found in a given area. The reason is obvious: the bigger the animal, the more it will eat, and a given area has only so much food. Apparently one of the empirical constants has been assigned a value of 0.75, more or less, so now we are down to one assignable constant.
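
Expressed as a formula, Damuth’s Law is a power law of the form density = a × (body mass)^(-0.75). The sketch below (with a deliberately arbitrary constant a) just shows how steeply the expected density falls as mass rises.

```python
# Damuth's Law: population density ~ a * M**(-0.75).
# The constant a below is arbitrary; only the scaling with mass matters here.
def density_per_km2(mass_kg, a=100.0):
    """Expected individuals per square kilometre for an animal of given body mass."""
    return a * mass_kg ** -0.75

for mass in (70, 700, 7000):     # roughly human-, horse- and T. rex-sized, kg
    print(f"{mass:>5} kg -> {density_per_km2(mass):8.4f} individuals per km^2")
```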

If we concentrate on the food requirement, it depends on what the animal eats and what it does with it. To explain the last point, carnivores kill prey, so there has to be enough prey to supply the food AND to be able to reproduce. There has to be a stable population of prey, otherwise the food runs out and everyone dies. The bigger the animal, the more food it needs to generate body mass and to provide the energy to move. Mammals have a further requirement over animals like snakes: they burn food to provide body heat, so mammals need more food per unit mass. It also depends on how specialized the food is. Thus pandas, which specialize in eating bamboo, depend on bamboo growth rates (which happen to be fast) and on something else not destroying the bamboo. Tyrannosaurs presumably concentrated on eating large animals. Anything only a few centimeters high would probably be safe, apart from being accidentally stood on, because the tyrannosaur could not get its head down low enough and keep it there long enough to catch it. The smaller raptors were also probably safe because they could run faster. So now the problem is, how many large animals, and was there a restriction? My guess is it would take on any large herbivore. In terms of the probability of meeting one, it also depends on how they hunted. If they hunted in packs, which is sometimes postulated, you are less likely to meet them, but you are in more trouble if you do.

That brings us back to how many large herbivores would be in a given area, which in turn depends on the amount of vegetation and its food value. We have to make guesses about that. We also have to decide whether the tyrannosaur generated its own heat. We cannot tell exactly, but the evidence does seem to support the idea that it was concerned about heat, as it probably had feathers. The article assumed that the dinosaur was about half-way between mammals and large lizards as far as heat generation goes. Provided the temperatures were warm, something as large as a tyrannosaur would probably be able to retain much of its own heat, since its surface area is a smaller fraction of its volume than for small animals.

The next problem is assigning body mass, which is reasonably straightforward for a given skeleton, but each animal starts out as an egg. How many juveniles were there? This is important because juveniles have different food requirements; they eat smaller herbivores. The authors took a distribution somewhat similar to that for tigers. If so, an area the size of California could support about 3,800 T. rex. We now need the area over which they roamed, and with a considerable possible error range, and limiting ourselves to land that is above sea level now, they settled on 2.3 ± 0.88 million square kilometers, which at any one time would support about 20,000 individuals. If we take a mid-estimate of how long they persisted, which is 2.4 million years, we get, with a very large error range, that the total number of T. rex that ever lived was about 2.5 billion individuals. Currently, there are 32 individual fossils (essentially all partial), which shows how rare fossilization really is. Part of this, of course, arises because fossilization depends on appropriate geology and conditions. So there we are: more useless information, almost certainly erroneous, but fun to speculate on.
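
For the record, the arithmetic chain behind those headline numbers is easy to check. In the sketch below, the generation time is my own assumption (the published study used a figure of this order); everything else comes from the numbers quoted above.

```python
# Rough check of the T. rex numbers quoted above.
density = 3800 / 424_000          # ~3,800 individuals in an area the size of California (~424,000 km^2)
area_km2 = 2.3e6                  # estimated range, km^2 (quoted as 2.3 +/- 0.88 million)
standing_population = density * area_km2
print(f"standing population ~ {standing_population:,.0f}")    # roughly 20,000

duration_years = 2.4e6            # mid-estimate of how long the species persisted
generation_years = 19             # assumed generation time (illustrative)
generations = duration_years / generation_years
total_ever = standing_population * generations
print(f"total individuals ever ~ {total_ever:.2e}")            # roughly 2.5 billion
```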

How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question, how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won’t be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments, and what you get is a series of meter readings, or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on your car’s speedometer represents miles per hour or kilometers per hour, depending on where you live. That is understandable, but that is not what is measured. What is usually measured is actually something like the frequency of wheel revolutions. The revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.
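
The speedometer point can be made explicit in a few lines: the instrument counts revolutions and multiplies by an assumed wheel circumference, so the “measurement” already contains a little theory plus an assumption about the tyre. The numbers below are illustrative.

```python
# What a speedometer actually does: revolutions counted -> speed displayed.
import math

wheel_radius_m = 0.31            # assumed effective tyre radius; depends on pressure and wear
revolutions_per_second = 9.0     # what is actually measured

circumference = 2 * math.pi * wheel_radius_m
speed_kmh = revolutions_per_second * circumference * 3.6
print(f"displayed speed ~ {speed_kmh:.1f} km/h")   # the geometry is the built-in theory
```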

Measuring the rate of expansion of the universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged over many galaxies because they have random motion superimposed, and in some cases regular motion if they are in clusters. The velocity at which they are moving apart is simply change of distance divided by change of time. Measuring time is fine, but measuring distance is a little more difficult. You cannot use a ruler. So some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Using various methods of estimating cosmic distances, such as the average brightness of stars in giant elliptical galaxies, type Ia supernovae and one or two others, it can be asserted that the Universe is expanding at 73.5 ± 1.4 kilometers per second for every megaparsec. A megaparsec is about 3.3 million light years, or about thirty billion billion kilometers.
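
The quoted rate converts to a recession velocity through Hubble’s law, v = H0 × d. A quick sketch with the 73.5 km/s per megaparsec figure (the distances are illustrative):

```python
# Hubble's law with the "distance ladder" value quoted above.
H0 = 73.5                                    # km/s per megaparsec
for distance_mpc in (10, 100, 1000):         # illustrative distances, Mpc
    v = H0 * distance_mpc                    # recession velocity, km/s
    print(f"d = {distance_mpc:>4} Mpc -> v ~ {v:,.0f} km/s")
```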

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and know what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/s/Mpc. Oops. Two routes, both giving very precise answers, but well outside any overlap, and hence we have two disjoint sets of answers.
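
To see how badly the two numbers disagree, combine the quoted uncertainties in quadrature: the gap is roughly four times the combined standard error, which is why it is treated as a genuine tension rather than noise.

```python
# How discrepant are 73.5 +/- 1.4 and 67.4 +/- 0.5 km/s/Mpc?
combined_error = (1.4**2 + 0.5**2) ** 0.5
tension_sigma = (73.5 - 67.4) / combined_error
print(f"difference = {73.5 - 67.4:.1f} km/s/Mpc, roughly {tension_sigma:.1f} sigma")
```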

So what is the answer? The simplest approach is to use an entirely different method again and hope this resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb space telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances. Do you see a possible problem? All such results depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distances to the nearby elliptical galaxies accurately, but much more problematic is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. In earlier times, since the metals in stars came from supernovae, the very earliest stars will have had much less metal, so the “colour” from their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, maybe the ratio of sizes of the red dwarfs will be different. There are many traps. Perhaps the main reason for the discrepancy is that the theory used is slightly wrong somewhere along the chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message from this is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.

Unravelling Stellar Fusion

Trying to unravel many things in modern science is painstaking, as will be seen from the following example, which makes looking for a needle in a haystack seem relatively easy. Here, the requirement for careful work and analysis is clear, although less obvious is the need for assumptions during the calculations, and these are not always obviously correct. The example involves how our sun works. The problem is, how do we form the neutrons needed for fusion in the star’s interior?

In the main process, the immense pressures force two protons to form the incredibly unstable 2He (a helium isotope). Besides giving off a lot of heat, there are two options: a proton can absorb an electron and give off a neutrino (to conserve leptons), or a proton can give off a positron and a neutrino. The positron would react with an electron to give two gamma-ray photons, which would be absorbed by the star and converted to heat. Either way, energy is conserved and we get the same result, except that the neutrinos may have different energies.

This hydrogen fusion starts to operate at about 4 million degrees C. Gravitational collapse of a star reaches this sort of temperature if the star has a mass at least about 80 times that of Jupiter; these are the smaller of the red dwarfs. If it has a mass of approximately 16 – 20 times that of Jupiter, it can react deuterium with protons, and this supplies the heat of brown dwarfs. In this case, the deuterium had to come from the Big Bang, and hence is somewhat limited in supply, but again it only reacts in the centre where the pressure is high enough, so the system will continue for a very long time, even if not very strongly.

If the temperature reaches about 17 million degrees C, another reaction is possible, called the CNO cycle. This starts with 12C (standard carbon, which has to come from accreted dust). It adds a proton to make 13N, which loses a positron and a neutrino to make 13C. Then comes a sequence of proton additions to make 14N (the most stable nitrogen), then 15O, which loses a positron and a neutrino to make 15N, and when this is struck by a proton it spits out 4He and returns to 12C. We have gone around in a circle, BUT converted four hydrogen nuclei to 4He and produced about 25 MeV of energy. So there are two ways of burning hydrogen; can the sun do both? Is it hot enough at the centre? How do we tell?
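
As an aside, the 25 MeV figure can be checked from standard atomic masses: converting four hydrogen atoms into one helium-4 atom releases about 26.7 MeV, and the neutrinos carry off a small share of that, leaving roughly 25 MeV as heat. A quick consistency check:

```python
# Energy released by converting four hydrogen nuclei into helium-4,
# using standard atomic masses (atomic masses, so the electrons balance).
m_H = 1.007825      # atomic mass of 1H, atomic mass units
m_He = 4.002602     # atomic mass of 4He, atomic mass units
u_to_MeV = 931.494  # energy equivalent of one atomic mass unit, MeV

delta_m = 4 * m_H - m_He
E_total = delta_m * u_to_MeV
print(f"mass defect = {delta_m:.6f} u -> {E_total:.1f} MeV released")
# ~26.7 MeV in total; the neutrinos carry off a couple of MeV, leaving ~25 MeV as heat.
```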

Obviously we cannot see the centre of the star, but from the heat generated we know the central temperature must be close to that needed for the second cycle. However, we can, in principle, tell by observing the neutrinos. Neutrinos from the 2He positron route can have any energy up to a little over 0.4 MeV. The electron-capture neutrinos are up to approximately 1.1 MeV, while the neutrinos from 15O are anywhere up to about 0.3 MeV more energetic, and those from 13N anywhere up to 0.3 MeV less energetic, than the electron-capture ones. Since these should be of similar intensity, the energy differences allow a count. The sun puts out a flux in which the last three are about the same intensity, while the 2He neutrino intensity is at least 100 times higher. (The use of “at least” and similar terms is because such determinations are very error prone, and you will see rather different values in the literature.) So all we have to do is detect the neutrinos. That is easier said than done if they can pass through a star unimpeded. The way it is done is that if a neutrino happens to hit certain substances capable of scintillation, it may give off a momentary flash of light.

The first problem then is that anything hitting those substances with enough energy will do the same. Cosmic rays and nuclear decay are particularly annoying. So in Italy they built a neutrino detector under 1400 meters of rock (to block cosmic rays). The detector is a sphere containing 300 t of suitable liquid, and the flashes are detected by photomultiplier tubes. While there is a huge flux of neutrinos from the star, very few actually collide. The signals from spurious sources had to be eliminated, and a “neutrino spectrum” was collected for the standard process. Spurious sources included radioactivity from the rocks and the liquid. These are rare, but so are the CNO neutrinos. Apparently only a few counts per day were recorded. However, the Italians ran the experiment for 1000 hours, and claimed to show that the sun does use this CNO cycle, which contributes about 1% of its energy. For bigger stars, the CNO cycle becomes more important. This is quite an incredible effort, right at the very edge of detection capability. Just think of the patience required, and the care needed to be sure spurious signals were not counted.
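
To get a feel for why such patience matters, suppose, purely for illustration, that the detector nets about five CNO-candidate events per day once backgrounds are removed; over 1000 hours the counting statistics alone still leave an uncertainty of several percent, before any systematic effects are considered.

```python
# Poisson counting statistics for a rare-event experiment (illustrative rate).
rate_per_day = 5.0                 # assumed CNO-candidate events per day
days = 1000 / 24                   # the ~1000 hours mentioned above
expected = rate_per_day * days
relative_uncertainty = expected ** -0.5      # 1/sqrt(N) for Poisson counts
print(f"expected counts ~ {expected:.0f}, "
      f"statistical uncertainty ~ {100 * relative_uncertainty:.1f}%")
```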

That Virus Still

By now it is probably apparent that SARS-CoV-2 is making a comeback in the Northern Hemisphere. Why now? There is no good answer to that, but in my opinion a mix of three factors is partly involved. The first is a bit of complacency. People who have avoided getting infected for a few months tend to think they have dodged the bullet. They have, but soldiers know that you cannot keep dodging bullets forever; either you do something about the source or you get out of there. In the case of the virus, sooner or later someone with it will meet you. You can delay the inevitable by restricting your social life, but most people do not want to do that forever.

The second may be temperature. Our Health Department has recommended that places where people congregate and have heating systems should raise the temperature to 18 degrees C from the 16 currently advocated. Apparently even that small change restricts the lifetime of the virus adhering to objects, and exhaled viruses have to settle somewhere. This won’t help against direct contact, but it may prevent some infections arising from touching an inert object. That risk can be reduced by good hygiene, but that can be a little difficult in some social environments. My answer would be to have hands covered with a gel that has long-term antiviral activity. (Alcohol evaporates, and then has no effect.)

The third is the all-pervasive web. Unfortunately, the web is a great place for poorly analysed information. Thus you will see claims that the disease is very mild. For some it is, but you cannot cherry-pick and use that as a generalization. If you say, “Some, particularly the very young, often show only mild symptoms,” that is true, but it identifies the limits of the statement. For some others the disease is anything but mild.

A more perfidious approach is the concept of “herd immunity”. The idea is that when a certain fraction of the population has been infected, the virus runs out of new people to infect, and once the effective reproduction number falls below 1 the virus cannot replace itself and eventually dies out. Where that threshold lies depends on something called R0, the average number of people each infected person passes the virus to. This has to be guessed, but you see numbers tossed around like herd immunity arriving when 60% of people have been infected. We then have to know how many have been infected, and lo and behold, a couple of months ago you could find estimates on the web saying we were nearly there in many countries. Those infection numbers were guesses, and given the current situation, obviously wrong. It is unfortunate that many people are insufficiently sceptical about web statements, especially those with a hidden agenda.
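
For what it is worth, the 60% figure comes from the standard threshold formula: herd immunity is reached when the immune fraction exceeds 1 − 1/R0, so 60% corresponds to an assumed R0 of 2.5.

```python
# Classic herd-immunity threshold: immune fraction needed is 1 - 1/R0.
for R0 in (1.5, 2.5, 4.0):                   # assumed basic reproduction numbers
    threshold = 1 - 1 / R0
    print(f"R0 = {R0}: herd immunity at ~{100 * threshold:.0f}% infected or immune")
```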

So, what is the truth about herd immunity? An article in Nature 587, 26-28 (2020) makes a somewhat depressing point: no other virus has ever been eliminated through herd immunity, and further, to get up to the minimum required infection level in the US, say, would, according to the Nature paper, mean something like one to two million deaths. Is that a policy? Worse, herd immunity depends on those who were infected remaining immune when the next round of virus turns up, but coronaviruses, such as those found in the common cold, do not give immunity lasting over a year. To quote the Nature paper, “Attempting to reach herd immunity via targeted infections is simply ludicrous.”

The usual way to gain herd immunity is with a vaccine. If enough people get the vaccine, and if the vaccine works, there are too few left to maintain the virus, although this assumes the virus cannot be carried by symptom-free vaccinated people. The big news recently is that Pfizer has a vaccine it claims is 90% effective in a clinical trial involving 43,538 participants, half of whom were given a placebo. (Lucky them! They are the ones who have to get the infection to prove the vaccine works.) Moderna has a different vaccine with similar claims. Unfortunately, we still do not know whether long-term immunity is conferred, and indeed the clinical trial still has to run for longer to establish its overall effectiveness. If you know you have a 50% chance of having received the placebo, you may still carefully avoid the virus. Still, the sight of vaccines coming through at least parts of their stage 3 trials successfully is encouraging.
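
For what “90% effective” means: efficacy is defined from the attack rates in the two arms of the trial, efficacy = 1 − (rate in the vaccinated arm)/(rate in the placebo arm). The case counts below are hypothetical, chosen only to illustrate the formula for a trial of the quoted size.

```python
# Vaccine efficacy from trial attack rates (hypothetical case counts).
n_vaccine, n_placebo = 21_769, 21_769        # ~43,538 participants split evenly
cases_vaccine, cases_placebo = 9, 90         # hypothetical infection counts

rate_v = cases_vaccine / n_vaccine
rate_p = cases_placebo / n_placebo
efficacy = 1 - rate_v / rate_p
print(f"efficacy ~ {100 * efficacy:.0f}%")   # ~90% with these illustrative numbers
```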

No Phosphine on Venus

Some time ago I wrote a blog post about the excitement over the announcement that phosphine had been discovered in the atmosphere of Venus (https://ianmillerblog.wordpress.com/2020/09/23/phosphine-on-venus/), in which I outlined a number of reasons why I found it difficult to believe. Well, now a paper submitted to Astronomy and Astrophysics (https://arxiv.org/pdf/2010.09761.pdf) concludes that the 12th-order polynomial fit to the spectral passband used in the published study leads to spurious results. The authors conclude that the published 267-GHz ALMA data provide no statistical evidence for phosphine in the atmosphere of Venus.
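
The following toy sketch illustrates the general pitfall, and is not a reproduction of the ALMA analysis: when the baseline is allowed to be a high-order polynomial, the apparent strength of a weak, broad feature depends heavily on the order of the fit, and an over-flexible baseline can just as easily absorb a real feature as manufacture one from ripple.

```python
# Toy illustration of why a 12th-order polynomial "baseline" is dangerous
# around a weak feature (not a reproduction of the actual ALMA analysis).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 400)                    # arbitrary frequency axis
true_line = -0.03 * np.exp(-(x / 0.15) ** 2)       # a weak, broad absorption feature
spectrum = 1.0 + 0.05 * x + true_line + rng.normal(0, 0.01, x.size)

for order in (1, 4, 8, 12):
    baseline = np.polyval(np.polyfit(x, spectrum, order), x)
    residual = spectrum - baseline
    depth = residual[np.abs(x) < 0.05].mean()      # apparent depth at line centre
    print(f"polynomial order {order:>2}: apparent line depth = {depth:+.4f}"
          f"  (true depth = -0.0300)")
```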

It will be interesting to see whether this refutation gets the same press coverage as “There’s maybe life on Venus” did. Anyway, you heard it here, and more to the point, I hope I have shown why it is important, when very unexpected results come out, that they be carefully examined.

New Zealand Volcanism

If you live in New Zealand, you are aware of potential natural disasters. Where I live, there will be a major earthquake at some time, hopefully well in the future. In other places there are volcanoes, and some are huge. Lake Rotorua is part of a caldera from a rhyolite explosion about 220,000 years ago that threw up at least 340 cubic kilometers of rock. By comparison, the Mount St Helens eruption ejected in the order of 1 cubic km of rock. Taupo is even worse. It started erupting about 300,000 years ago and last erupted about 1800 years ago, when it devastated an area of about 20,000 square km with a pyroclastic surge, and its caldera left a large lake (616 square kilometers in area). Layers of ash a hundred meters deep covered nearby land. The Oruanui event, about 27,000 years ago, sent about 1100 cubic km of debris into the air, and was a hundred times more powerful than Krakatoa. Fortunately, these supervolcanoes do not erupt very often, although Taupo is uncomfortably frequent, having had up to 26 smaller eruptions between Oruanui and the latest one. However, as far as we know, nobody has died in these explosions, largely because there were no people in New Zealand until well after the last one, the Maori arriving around 1350 AD.

The most deadly eruption in New Zealand was Tarawera. Tarawera is a rhyolite dome, but apparently the explosion was basaltic. Basaltic eruptions, like those in Hawaii, while destructive if you are in the way of a flow, are fairly harmless because the lava simply flows out like a very slow-moving river. Escape should be possible, yet some basaltic eruptions, like Tarawera, become explosive too. The rhyolite eruptions like those at Taupo are explosive because molten rhyolite is often very wet, so when the pressure comes off as the magma reaches the surface, the steam simply sends it explosively upwards, but basaltic volcanoes are different. A recent article in Physics World explains why there are such different outcomes for essentially the same material.

Basaltic magmas are apparently less viscous, and as the magma comes to the surface, the gases and steam are vented and the magma simply flows out, so what you get are clouds of steam and gas, often with small lumps of molten magma which give a “fireworks” display, and a gently flowing river of magma. It turns out that the difference actually depends on the flow rate. If the flow rate is slow, or at least so the theory runs, the gases escape and the magma flows away and cools during the flow. If, however, it rises very quickly, say meters per second, it can cool at around 10 – 20 degrees per second. If it cools that quickly, the average basaltic magma forms nano-sized crystals. The theory then is that if about 5% of the magma ends up in this form, the crystals start to lock together, and when that happens the viscosity suddenly increases. Now the steam cannot escape so easily, the pressurised magma from below pushes it up, and at the surface the magma simply explodes with its steam content. That, of course, requires water, which is most likely in a subduction zone, and of course the subduction zone around New Zealand starts under the Pacific, where there is no shortage of water. It was the water content that led the Tarawera event to generate a pyroclastic surge, from which, once it starts, there is no escape, as the citizens of Pompeii would testify if they were capable of testifying. These sorts of crises are ones you cannot do anything about, other than note the warning signs and go elsewhere. The good thing about such volcanoes is that there is usually a few days’ warning. But if Taupo decided to erupt again, how far away is safe?

Phosphine on Venus

An article was published in Nature Astronomy on 14th September, 2020, that reported the detection of a signal corresponding to the 1 – 0 rotational transition of phosphine, which has a wavelength of 1.123 mm. This was a very weak signal that had to be extracted by mathematical processing to remove artefacts such as spectral “ripple” that originate from reflections. Nevertheless, the data at the end are strongly suggestive that the line is real. Therefore they found phosphine, right? And since phosphine is made by anaerobes and emitted in marsh gas, they found life, right? Er, hold on. Let us consider this in more detail.

First, is the signal real? The analysis detected the HDO signal at 1.126 mm, which is known to be the 2 – 3 rotational transition. That strongly suggests their equipment and analysis were working properly for that species, so this additional signal is likely to be real. The levels of phosphine have been estimated at between 10 and 30 ppb. However, there is a complication, because such spectral signals come from changes to the spin rate of molecules. Molecules can only spin at certain quantised energies, but there are a number of options; the phosphine signal was supposed to be from the first excited state to the ground state. There are a very large number of possible states, and higher states are more common at higher temperatures. The Venusian atmosphere ranges from about 30 °C near the top to somewhere approaching 500 °C at the bottom. Also, collisions will change spin rates. Most of our reference data come from our atmospheric pressure or lower, as doing microwave experiments in high-pressure vessels is not easy. The position of the lines depends on the moment of inertia, so different molecules have different energy levels, and there are different ways of spinning, tumbling, etc., for complicated molecules. Thus it is possible that the signal could be due to something else. However, the authors examined all the alternatives they could think of, and only phosphine remained.
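
A quick consistency check on the numbers: a wavelength of 1.123 mm corresponds to a frequency of about 267 GHz, the ALMA band mentioned above, and for a simple rigid rotor the 1 – 0 transition frequency is 2B, implying a rotational constant of about 133.5 GHz, which is indeed consistent with phosphine.

```python
# Quick consistency check on the numbers in the text (an illustrative sketch).
c = 2.998e8                      # speed of light, m/s
wavelength = 1.123e-3            # reported phosphine line wavelength, m
freq = c / wavelength            # ~2.67e11 Hz, i.e. the ~267 GHz ALMA band
# For a simple rigid rotor, the J = 1 -> 0 transition frequency is 2B,
# so the implied rotational constant is:
B = freq / 2                     # ~133.5 GHz, consistent with phosphine
print(f"frequency = {freq/1e9:.1f} GHz, implied B = {B/1e9:.1f} GHz")
```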

The paper rejected sulphur dioxide as a possibility because in the Venusian atmosphere it gets oxidised to sulphuric acid, so there is not enough of it, but phosphine is actually far more easily oxidised. If we look at our own atmosphere, there are a number of odd-looking molecules produced by photochemistry. The Venusian atmosphere will also have photochemistry, but since that atmosphere is so different from ours we cannot guess at present what it produces. For me, there is a good chance this signal comes from a molecule generated photochemically. The reason is that the signal is strongest at the equator and fades away towards the poles, where the light intensity per unit area is lower. Note that if it were phosphine generated by life and removed photochemically, you would expect the opposite result.

Phosphine is a rather reactive material, and according to the Nature article, models predict its lifetime at 80 km altitude as less than a thousand seconds owing to photodegradation. The authors argue its lifetime should be longer lower down because the UV light intensity is weaker, but they overlook chemical reactions. Amongst other things, concentrated sulphuric acid would react with it instantaneously to make a phosphonium salt, and while the phosphine is not immediately destroyed, its ability to produce this signal is.

Why does this suggest life? Calculations with some fairly generous lifetimes suggest a minimum of about a million molecules have to be made every second on every square centimeter of the planet. There is no known chemistry that can do that (a rough sense of the scale involved is sketched at the end of this section). Thus life is proposed on the basis of “What else could it be?”, which is a potential logic fallacy in the making, namely arguing from ignorance. On Earth anaerobes make phosphine and it comes out as “marsh gas”, where it promptly reacts with oxygen in the air. This is actually rather rare, and is almost certainly an accident caused by phosphate particles being in the wrong place in the enzyme system. I have been around many swamps and never smelt phosphine. What anaerobes do is take oxidised material and reduce it, taking energy and some carbon and oxygen, and spit out as waste highly reduced compounds, such as methane. There is a rather low probability they will take sulphates and make hydrogen sulphide, or make phosphine from phosphates. The problem I have is that the Venusian atmosphere is full of concentrated sulphuric acid clouds, and enzymes would not work, or last, in that environment. If the life forms were above the sulphuric acid clouds, they would also be above the phosphoric acid, so where would they get their phosphorus? Further, all life needs phosphate: it is the only functional group that meets the requirements for linking reproductive entities (two linkages to form a polymer, and one remaining ionic group to solubilize the whole and let the strands separate while reproducing); it is the basis of adenosine triphosphate, the energy transfer agent for life; and the adenosine phosphates are essential solubilizing agents for many enzyme cofactors. In short, no phosphate, no life. Phosphate occurs in rocks, so it will be very scarce in the atmosphere, so why would life waste what little there was to make phosphine?

To summarize, I have no idea what caused this signal, and I don’t think anyone else does either. I think there is a lot of chemistry associated with the Venusian atmosphere that we do not understand, but I expect this will be resolved sooner or later, as it has attracted so much attention.
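
Here is the scale check promised above: multiplying “a million molecules every second on every square centimeter” over the whole planet (taking the standard Venus radius of about 6,052 km) gives of order a few hundred grams of phosphine per second, every second, with no known chemistry to supply it.

```python
# Scale of the required phosphine production (order-of-magnitude sketch).
import math

R_venus_cm = 6.052e8                       # Venus radius, cm (standard value)
surface_cm2 = 4 * math.pi * R_venus_cm**2  # ~4.6e18 cm^2
rate_per_cm2 = 1e6                         # required molecules per cm^2 per second (from the text)

molecules_per_s = rate_per_cm2 * surface_cm2
moles_per_s = molecules_per_s / 6.022e23   # Avogadro's number
grams_per_s = moles_per_s * 34.0           # molar mass of PH3, ~34 g/mol
print(f"{molecules_per_s:.1e} molecules/s, roughly {grams_per_s:.0f} g of PH3 per second")
```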

Science is No Better than its Practitioners

Perhaps I am getting grumpy as I age, but I feel that much in science is not right. One problem lies in the fallacy ad verecundiam, the fallacy of appealing to authority. As the motto of the Royal Society puts it, nullius in verba. Now, nobody expects you to personally check everything, and if someone has measured something and either clearly shows how he or she did it, or it is something that is done reasonably often, then you take their word for it. Thus if I want to know the melting point of benzoic acid I look it up, and know that if the reported value were wrong, someone would have noticed. However, a different problem arises with theory, because you cannot measure a theory. Further, science has become so complicated that any expert is usually an expert in a very narrow field. The net result is that, because things have become so complicated, most scientists find theories too difficult to examine in detail and do defer to experts. In physics, this tends to be because the theory descends into obscure mathematics and, worse, the proponents seem to believe that mathematics IS the basis of nature. That means there is no need to think about causes. There is another problem, which also drifts over into chemistry, and that is the assumption that the results of a computer-driven calculation must be right. True, there will be no arithmetical mistake, but as was driven into our heads in my early computer lectures: garbage in, garbage out.

This post was sparked by an answer I gave to a chemistry question on Quora. Chemical bonds are usually formed by taking two atoms, each with a single electron in an orbital. Think of an orbital as a wave that can hold only one or two electrons. The reason it can hold only two is the Pauli Exclusion Principle, a very fundamental principle in physics. If each atom has only one electron in such an orbital, the orbitals can combine and form a wave with two electrons, and that binds the two atoms. Yes, oversimplified. So the question was, how does phosphorus pentafluoride form? The fluorine atoms have one such unpaired electron each, and the phosphorus has three, plus a pair in one wave. Accordingly, you expect phosphorus to form a trifluoride, which it does, but how come the pentafluoride? Without going into too many details, my answer was that the paired electrons are unpaired, one is put into another wave, and to make this legitimate an extra node is placed in the second wave, a process called hybridization. This has been a fairly standard answer in textbooks.

So, what happened next? I posted that, and also shared it to something called “The Chemistry Space”. A moderator there rejected it, and said he did so because he did not believe it: computer calculations showed there was no extra node. Eh?? So I replied and asked how this computation got around the Exclusion Principle, and then, to be additionally annoying, I asked how the computation set the constants of integration. If you look at John Pople’s Nobel lecture, you will see he set these constants for hydrocarbons by optimizing the results for 250 different hydrocarbons; leaving aside that this simply degenerates into a glorified empirical procedure, for phosphorus pentafluoride there is only one relevant compound. Needless to say, I received no answer, but I find this annoying. Sure, the issue is somewhat trivial, but it highlights the greater problem that some scientists are perfectly happy to hide behind obscure mathematics, or even more obscure computer programming.

It is interesting to consider what a theory should do. First, it should be consistent with what we see. Second, it should encompass as many different types of observation as possible. To show what I mean, in the phosphorus pentafluoride example the method I described can be transferred to other structures of different molecules. That does not make it right, but at least it is not obviously wrong. The problem with a computation is that, unless you know the details of how it was carried out, it cannot be applied elsewhere, and interestingly I saw a recent comment in a publication of the Royal Society of Chemistry that computations from a couple of decades ago cannot be checked or used because the details of the code are lost. Oops. A third requirement, in my opinion, is that it should assist in understanding what we see, and even lead to a prediction of something new.

Fundamental theories cannot be deduced; the principles have to come from nature. Thus mathematics describes what we see in quantum mechanics, but you could find an alternative mathematical description for anything else nature decided to do; classical mechanics, for example, is also fully self-consistent. For relativity, velocities are either additive or they are not, and you can find mathematics either way. The problem then is that if someone adopts a wrong premise early, mathematics can be made to fit a lot of other material to it. A major discovery and change of paradigm only occurs when a major fault is discovered that cannot be papered over.

So, to finish this post in a slightly different way to usual: a challenge. I once wrote a novel, Athene’s Prophecy, in which the main character in the first century was asked by the “Goddess” Athene to prove that the Earth went around the sun. Can you do it, with what could reasonably be seen at the time? The details had already been worked out by Aristarchus of Samos, who also worked out the size and distance of the Moon and Sun, and the huge distances are a possible clue. (Thanks to the limits of his equipment, Aristarchus’ measurements are erroneous, but good enough to show the huge distances.) So there was already a theory that showed it might work. The problem was that the alternative also worked, as shown by Claudius Ptolemy. So you have to show why one is the true one. 

Problems you might encounter are as follows. Aristotle had shown that the Earth cannot rotate. The argument was that if you threw a ball into the air so that at the top of its flight it was directly above you, then, if the Earth rotated, the ball should land to the west of you. He did it, and it didn’t, so the Earth does not rotate. (Can you see what is wrong? Hint – the argument implicitly involves the conservation of angular momentum, and that part is correct.) Further, if the Earth went around the sun, orbital motion involves falling, and since heavier things were held to fall faster than light things, the Earth would fall to pieces. Comets may well fall around the Sun. Another point was that since air rises, the cosmos must be full of air, and if the Earth went around the Sun, there would be a continual easterly wind.

So part of the problem in overturning any theory is first to find out what is wrong with the existing one. Then, to assert that yours is correct, your theory has to do something the other theory cannot do, or show that the other theory contains something that falsifies it. The point of this challenge is to show by example just how difficult forming a scientific theory actually is, until you hear the answer, and then it seems easy.