How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question: how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won’t be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments, and what you get is a series of meter readings, or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on your car’s speedometer represents miles per hour or kilometers per hour, depending on where you live. That is understandable, but that is not what is measured. What is usually measured is something like the frequency of wheel revolutions. So the revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.

Measuring the rate of expansion of the Universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged because they have random motion superimposed, and in some cases regular motion if they are in clusters. The velocity at which they are moving apart is simply change of distance divided by change of time. Measuring time is fine, but measuring distance is more difficult: you cannot use a ruler. So some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Using different techniques to estimate cosmic distances, such as the average brightness of stars in giant elliptical galaxies, type Ia supernovae and one or two others, it can be asserted the Universe is expanding at 73.5 ± 1.4 kilometers per second for every megaparsec. A megaparsec is about 3.3 million light years, or about thirty billion billion kilometers.
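For intuition, here is a minimal sketch of what that number implies, using standard conversion factors; the 100 Mpc distance is just an illustrative choice:

    # A minimal sketch: what 73.5 km/s/Mpc means in practice.
    H0 = 73.5                      # km/s per megaparsec
    KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
    SECONDS_PER_YEAR = 3.156e7

    # Recession velocity of a galaxy 100 Mpc away (ignoring peculiar motion)
    v = H0 * 100                   # km/s
    print(f"Recession velocity at 100 Mpc: {v:.0f} km/s")

    # The reciprocal of H0 gives a characteristic "Hubble time"
    H0_per_sec = H0 / KM_PER_MPC   # 1/s
    hubble_time_gyr = 1 / H0_per_sec / SECONDS_PER_YEAR / 1e9
    print(f"Hubble time: {hubble_time_gyr:.1f} billion years")

The reciprocal of the expansion rate conveniently comes out near the age of the Universe, which is one reason the number matters.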

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/sec/Mpc. Oops. Two routes, each claiming high accuracy, yet the results lie well outside any overlap, and hence we have two disjoint sets of answers.
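How serious is that disagreement? A standard, if simplified, check is to express the gap in units of the combined quoted errors, assuming the errors are independent and Gaussian:

    # How far apart are the two measurements, in units of combined error?
    h_local, err_local = 73.5, 1.4    # km/s/Mpc, distance-ladder route
    h_cmb, err_cmb = 67.4, 0.5        # km/s/Mpc, microwave-background route
    combined = (err_local**2 + err_cmb**2) ** 0.5
    print(f"Discrepancy: {(h_local - h_cmb)/combined:.1f} sigma")

A gap of around four combined error bars is far too large to shrug off as bad luck, which is why this is called a tension.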

So what is the answer? The simplest approach is to try an entirely different method again and hope that resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb space telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances. Do you see a possible problem? All such results also depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distance to the nearby elliptical galaxies accurately, but much more problematical is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. Since the metals in stars came from supernovae, the very earliest stars will have had much less metal, so the “colour” of their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, the size distribution of their red dwarfs may be different. There are many traps.

Accordingly, the main suspicion is that the theory used is slightly wrong somewhere along one chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.

Unravelling Stellar Fusion

Trying to unravel many things in modern science is painstaking, as will be seen from the following example, which makes looking for a needle in a haystack seem relatively easy. Here the need for careful work and analysis is evident, although less obvious is the need for assumptions during the calculations, and these are not always obviously correct. The example involves how our sun works. The problem is: how do we form the neutrons needed for fusion in the star’s interior?

In the main process, the immense pressures force two protons to form the incredibly unstable 2He (a helium isotope). Besides giving off a lot of heat, there are two options: a proton can absorb an electron and give off a neutrino (to conserve leptons), or a proton can give off a positron and a neutrino. The positron then reacts with an electron to give two gamma ray photons, which are absorbed by the star as heat. Either way, energy is conserved and we get the same result, except the neutrinos may have different energies.

This hydrogen fusion starts to operate at about 4 million degrees C. Gravitational collapse of a star reaches this sort of temperature if the star has a mass at least about 80 times that of Jupiter. These are the smaller of the red dwarfs. If it has a mass of approximately 16 – 20 times that of Jupiter, it can react deuterium with protons, and this supplies the heat of brown dwarfs. In this case the deuterium had to come from the Big Bang, and hence is somewhat limited in supply, but again it only reacts in the centre where the pressure is high enough, so the system will continue for a very long time, even if not very strongly.

If the temperature reaches about 17 million degrees C, another reaction is possible, called the CNO cycle. This starts with 12C (standard carbon, which has to come from accretion dust). It adds a proton to make 13N, which loses a positron and a neutrino to make 13C. Then comes a sequence of proton additions to make 14N (the most stable nitrogen isotope), then 15O, which loses a positron and a neutrino to make 15N, and when this is struck by a proton it spits out 4He and returns to 12C. We have gone around in a circle, BUT converted four hydrogen nuclei to 4He, and produced about 25 MeV of energy. So there are two ways of burning hydrogen. Can the sun do both? Is it hot enough at the centre? How do we tell?
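As a check on that figure, the net energy comes straight from the mass difference between four hydrogen atoms and one helium-4 atom; a quick sketch using standard atomic masses:

    # Checking the quoted energy yield from standard atomic masses.
    # Net CNO result: four hydrogen atoms in, one helium-4 atom out.
    M_H1 = 1.007825     # atomic mass units
    M_HE4 = 4.002602
    U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit

    q = (4 * M_H1 - M_HE4) * U_TO_MEV
    print(f"Total energy released: {q:.1f} MeV")  # ~26.7 MeV
    # A couple of MeV of this leaves with the neutrinos, so roughly
    # 25 MeV is deposited in the star, as quoted above.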

Obviously we cannot see the centre of the star, but we know from the heat generated that the central temperature should be close to what the second cycle requires. However, we can, in principle, tell by observing the neutrinos. Neutrinos from the 2He positron route can have any energy up to a little over 0.4 MeV. The electron capture neutrinos come at approximately 1.4 MeV, while the neutrinos from 15O range up to about 0.3 MeV more energetic, and those from 13N up to about 0.3 MeV less energetic, than electron capture. Since these should be of comparable intensity, the energy differences allow a count. The sun puts out a flux where the last three are about the same intensity, while the 2He neutrino intensity is at least 100 times higher. (The use of “at least” and similar terms is because such determinations are very error prone, and you will see somewhat different values in the literature.) So all we have to do is detect the neutrinos. That is easier said than done if they can pass through a star unimpeded. The way it is done is that if a neutrino happens to hit certain substances capable of scintillation, it may give off a momentary flash of light.
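To illustrate why those energy windows allow a count, here is a toy classifier using the approximate endpoints quoted above (real analyses fit the whole spectrum; the numbers here are just the rounded figures from the text):

    # A toy classifier using the (approximate) energy windows quoted above.
    def candidate_sources(e_mev):
        sources = []
        if e_mev <= 0.42:
            sources.append("pp (2He positron route)")
        if e_mev <= 1.2:
            sources.append("13N decay")
        if e_mev <= 1.73:
            sources.append("15O decay")
        if abs(e_mev - 1.44) < 0.01:
            sources.append("electron capture line")
        return sources

    print(candidate_sources(1.5))   # only 15O can reach this energy

A neutrino seen above about 1.2 MeV but below about 1.73 MeV can only be from 15O, and that is what makes the CNO cycle separately countable.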

The first problem then is that anything hitting those substances with enough energy will do the same. Cosmic rays or nuclear decay are particularly annoying. So in Italy a neutrino detector (Borexino) was built under 1400 meters of rock (to block cosmic rays). The detector is a sphere containing 300 t of suitable liquid, and the flashes are detected by photomultiplier tubes. While there is a huge flux of neutrinos from the star, very few actually collide. The signals from spurious sources, which included radioactivity from the rocks and the liquid itself, had to be eliminated, and a “neutrino spectrum” was collected for the standard process. The CNO events are rare: apparently only a few counts per day were recorded. However, the Italians ran the experiment for 1000 hours, and claimed to show that the sun does use the CNO cycle, which contributes about 1% of its energy. For bigger stars, the CNO cycle becomes more important. This is quite an incredible effort, right at the very edge of detection capability. Just think of the patience required, and the care needed to be sure spurious signals were not counted.
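To see why such a long run is needed, here is a rough Poisson estimate with made-up but plausible counting rates; the real analysis is far more involved:

    # Why so many hours? A rough Poisson signal-to-noise estimate.
    import math

    signal_per_day = 5        # assumed CNO-neutrino counts per day
    background_per_day = 10   # assumed residual radioactivity counts per day
    days = 1000 / 24

    s = signal_per_day * days
    b = background_per_day * days
    print(f"Significance ~ {s / math.sqrt(s + b):.1f} sigma after {days:.0f} days")

With only a handful of counts a day, a convincing result simply cannot be had quickly; the statistics demand patience.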

That Virus Still

By now it is probably apparent that SARS-CoV-2 is making a comeback in the Northern Hemisphere. Why now? There is no good answer to that, but in my opinion a mix of three aspects will be partly involved. The first is a bit of complacency. People who have avoided getting infected for a few months tend to think they have dodged the bullet. They may have, but soldiers know that you cannot keep dodging bullets forever; either you do something about the source or you get out of there. In the case of the virus, sooner or later someone with it will meet you. You can delay the inevitable by restricting your social life, but most people do not want to do that forever.

The second may be temperature. Our Health Department has recommended that places where people congregate and have heating systems should raise the temperature to 18 degrees C from the 16 currently advocated. Apparently even that small change restricts the lifetime of the virus adhering to objects, and exhaled viruses have to settle somewhere. This won’t help with direct contact, but it may prevent some infections arising from touching an inert object. That can be overcome by good hygiene, though hygiene can be a little difficult in some social environments. My answer to that is to have hands covered with a gel that has long-term antiviral activity. (Alcohol evaporates, and then has no effect.)

The third is the all-pervasive web. Unfortunately, the web is a great place for poorly analysed information. Thus you will see claims that the disease is very mild. For some it is, but you cannot cherry-pick and use that for a generalization. If you say, “Some, particularly the very young, often show only mild symptoms,” that is true, but it identifies the limits of the statement. For some others the disease is anything but mild.

A more perfidious approach is the concept of “herd immunity”. The idea is that when a certain fraction of the population has been infected, the virus runs out of new people to infect; once the effective reproduction rate falls below 1, the virus cannot replace itself and eventually it simply dies out. Where that threshold lies depends on something called R0, the average number of people to whom one infected person spreads the virus. This has to be guessed, but you see numbers tossed around such as herd immunity arriving when 60% of the people have been infected. We then have to know how many have been infected, and lo and behold, a couple of months ago you could find estimates on the web saying we were nearly there in many countries. The numbers of infections were guessed, and given the current situation, they were obviously wrong. It is unfortunate that many people are insufficiently sceptical about web statements, especially those with a hidden agenda.
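For what it is worth, the 60% figure comes from the standard threshold in the simplest epidemic model, herd immunity at a fraction 1 - 1/R0 of the population; a quick sketch with illustrative R0 values:

    # Herd-immunity threshold in the simplest epidemic model:
    # once a fraction 1 - 1/R0 is immune, each case infects fewer than one other.
    # The R0 values here are assumed for illustration, not measured.
    for r0 in (1.5, 2.5, 4.0):
        print(f"R0 = {r0}: threshold = {1 - 1/r0:.0%}")

Notice that the popular 60% corresponds to guessing R0 is 2.5; guess a different R0 and the target moves substantially.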

So, what is the truth about herd immunity? An article in Nature 587, 26-28 (2020) makes a somewhat depressing point: no other virus has ever been eliminated through herd immunity, and further, getting up to the minimum required infection rate in the US, say, would, according to the Nature paper, mean something like one to two million deaths. Is that a policy? Worse, herd immunity depends on those infected remaining immune when the next round of viruses turns up, but coronaviruses, such as those found in the common cold, do not confer immunity lasting over a year. To quote the Nature paper, “Attempting to reach herd immunity via targeted infections is simply ludicrous.”

The usual way to gain herd immunity is with a vaccine. If sufficient people get the vaccine, and if the vaccine works, there are too few left to maintain the virus, although this assumes the virus cannot be carried by symptom-free vaccinated people. The big news recently is that Pfizer has a vaccine it claims is 90% effective in a clinical trial involving 43,538 participants, half of whom were given a placebo. (Lucky them! They are the ones who have to get the infection to prove the vaccine works.) Moderna has a different vaccine with similar claims. Unfortunately, we still do not know whether long-term immunity is conveyed, and indeed the clinical trials still have to run longer to establish overall effectiveness. If you know you have a 50% chance of having received the placebo, you may still carefully avoid the virus. Still, the sight of vaccines coming successfully through at least parts of stage 3 trials is encouraging.

No Phosphine on Venus

Some time previously I wrote a blog post about the excitement over the announcement that phosphine had been discovered in the atmosphere of Venus (https://ianmillerblog.wordpress.com/2020/09/23/phosphine-on-venus/), in which I outlined a number of reasons why I found it difficult to believe. Now, in a paper submitted to Astronomy and Astrophysics (https://arxiv.org/pdf/2010.09761.pdf), we find the conclusion that the 12th-order polynomial fit to the spectral passband utilised in the published study leads to spurious results. The authors concluded the published 267-GHz ALMA data provide no statistical evidence for phosphine in the atmosphere of Venus.

It will be interesting to see whether this denial gets the same press coverage as “There’s maybe life on Venus” did. Anyway, you heard it here, and more to the point, I hope I have shown why it is important, when very unexpected results come out, that they are carefully examined.

New Zealand Volcanism

If you live in New Zealand, you are aware of potential natural disasters. Where I live, there will be a major earthquake at some time, hopefully well in the future. In other places there are volcanoes, and some are huge. Lake Rotorua is part of a caldera from a rhyolite explosion about 220,000 years ago that threw up at least 340 cubic kilometers of rock. By comparison, the Mount St Helens eruption ejected in the order of 1 cubic km of rock. Taupo is even worse. It started erupting about 300,000 years ago and last erupted about 1800 years ago, when it devastated an area of about 20,000 square km with a pyroclastic surge, and its caldera left a large lake (616 square kilometers in area). Layers of ash a hundred meters deep covered nearby land. The Oruanui event, about 27,000 years ago, sent about 1100 cubic km of debris into the air, and was a hundred times more powerful than Krakatoa. Fortunately, these supervolcanoes do not erupt very often, although Taupo is uncomfortably frequent, having had up to 26 smaller eruptions between Oruanui and the latest one. However, as far as we know, nobody has died in these explosions, largely because there were no people in New Zealand until well after the last one, the Maori arriving around 1350 AD.

The deadliest eruption in New Zealand was Tarawera. Tarawera is a rhyolite dome, but apparently the explosion was basaltic. Basaltic eruptions, like those in Hawaii, while destructive if you are in the way of a flow, are fairly harmless because the lava simply flows out like a very slow-moving river, and escape should be possible. But some basaltic eruptions, like Tarawera, become explosive too. The rhyolite eruptions like those at Taupo are explosive because molten rhyolite is often very wet, so when the pressure comes off as the magma comes to the surface, the steam simply sends it explosively upwards; basaltic volcanoes are different. A recent article in Physics World explains why there are different outcomes for essentially the same material.

Basaltic magmas are apparently less viscous, and as the magma comes to the surface the gases and steam are vented and the magma simply flows out. What you get are clouds of steam and gas, often with small lumps of molten magma which give a “fireworks” display, and a gently flowing river of magma. It turns out that the differences actually depend on the flow rate, or at least that is how the theory runs. If the flow rate is slow, the gases escape and the magma flows away, cooling as it goes. If, however, it rises very quickly, say meters per second, it can cool at around 10 – 20 degrees per second. If it cools that quickly, the average basaltic magma forms nano-sized crystals. The theory then is that if about 5% of the magma ends up in this form, the crystals start to lock together, and when that happens the viscosity suddenly increases. Now the steam cannot escape so easily, the pressurised magma from below pushes it up, and at the surface the magma simply explodes with its steam content. That, of course, requires water, which is most likely in a subduction zone, and the subduction zone around New Zealand starts under the Pacific, where there is no shortage of water.

It was the water content that led the Tarawera event to generate a pyroclastic surge, from which, once it starts, there is no escape, as the citizens of Pompeii would testify if they were capable of testifying. These sorts of crises are ones you cannot do anything about, other than note the warning signs and go elsewhere. The good thing about such volcanoes is that there is usually a few days’ warning. But if Taupo decided to erupt again, how far away is safe?

Phosphine on Venus

An article was published in Nature Astronomy on 14th September, 2020, that reported the detection of a signal corresponding to the 1 – 0 rotational transition of phosphine, which has a wavelength of 1.123 mm. This was a very weak signal that had to be obtained by mathematical processing to remove artefacts such as spectral “ripple” that originate from reflections. Nevertheless, the data at the end are strongly suggestive that the line is real. Therefore they found phosphine, right? And since phosphine is made by anaerobes and emitted from marsh gas, they found life, right? Er, hold on. Let us consider this in more detail.

First, is the signal real? The analysis detected the HDO signal at 1.126 mm, which is known to be the 2 – 3 rotational transition. That strongly confirms their equipment and analysis were working properly for that species, so this additional signal is likely to be real. The level of phosphine was estimated at between 10 – 30 ppb. However, there is a complication, because such spectral signals come from changes to the spin rate of molecules. All molecules can only spin at certain quantised energies, but there are a number of options; the phosphine signal was supposed to be from the first excited state to the ground state. There are a very large number of possible states, and higher states are more common at higher temperatures. The Venusian atmosphere ranges from about 30 °C near the top to somewhere approaching 500 °C at the bottom. Also, collisions will change spin rates. Most of our reference data come from our atmospheric pressure or lower, as doing microwave experiments in high-pressure vessels is not easy. The position of the lines depends on the moment of inertia, so different molecules have different energy levels, and there are different ways of spinning, tumbling, etc., for complicated molecules. Thus it is possible that the signal could be due to something else. However, the authors examined all the alternatives they could think of, and only phosphine remained.
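As a consistency check on the line position: for a molecule like phosphine the 1 – 0 transition sits at twice the rotational constant B, which in turn is set by the moment of inertia. Taking a literature value of B for PH3 of about 133.48 GHz (treat this as approximate):

    # Where should the phosphine 1-0 line sit? The J = 1 <- 0 transition
    # falls at 2B, where B is the rotational constant (set by the moment of inertia).
    C = 2.998e8            # speed of light, m/s
    B_PH3 = 133.48e9       # rotational constant of PH3, Hz (literature value, approximate)
    freq = 2 * B_PH3       # ~267 GHz, the ALMA band in the study
    print(f"Frequency: {freq/1e9:.1f} GHz, wavelength: {C/freq*1000:.3f} mm")

That lands on 267 GHz, or 1.123 mm, exactly the band in which the claimed detection was made, which shows how tightly a line position pins down a candidate molecule.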

The paper rejected sulphur dioxide as a possibility because in the Venusian atmosphere it gets oxidised to sulphuric acid, so there is not enough of it; yet phosphine is actually far more easily oxidised. If we look at our own atmosphere, there are a number of odd-looking molecules caused by photochemistry. The Venusian atmosphere will also have photochemistry, but since that atmosphere is so different from ours we cannot guess what it produces at present. For my part, I think there is a good chance this signal comes from a molecule generated photochemically. The reason is that the signal is strongest at the equator and fades away towards the poles, where the light intensity per unit area is lower. Note that if it were phosphine generated by life and removed photochemically, you would expect the opposite.

Phosphine is a rather reactive material, and according to the Nature article, models predict its lifetime at 80 km altitude as less than a thousand seconds due to photodegradation. The authors argue its life should be longer lower down because the UV light intensity is weaker, but they overlook chemical reactions. Amongst other things, concentrated sulphuric acid would react instantaneously with it to make a phosphonium salt, and while the phosphine is not initially destroyed, its ability to make this signal is.

Why does this suggest life? Calculations with some fairly generous lifetimes suggest a minimum of about a million molecules have to be made every second on every square centimeter of the planet. There is no known chemistry that can do that. Thus life is proposed on the basis of “What else could it be?”, which is a potential logic fallacy in the making, namely concluding from ignorance. On Earth anaerobes make phosphine and it comes out as “marsh gas”, where it promptly reacts with oxygen in the air. This is actually rather rare, and is almost certainly an accident caused by phosphate particles being in the wrong place in the enzyme system. I have been around many swamps and never smelt phosphine. What anaerobes do is take oxidised material and reduce it, taking energy and some carbon and oxygen, and spit out highly reduced compounds, such as methane, as waste. There is a rather low probability they will take sulphates and make hydrogen sulphide, and phosphine from phosphates.

The problem I have is that the Venusian atmosphere is full of concentrated sulphuric acid clouds, and enzymes would not work, or last, in that environment. If the life forms were above the sulphuric acid clouds, they would also be above the phosphoric acid, so where would they get their phosphorus? Further, all life needs phosphate: it is the only functional group that meets the requirements for linking reproductive entities (two bonds to link a polymer, and one to provide the ionic group that solubilizes the whole and lets the strands separate while reproducing); it is the basis of adenosine tripolyphosphate, the energy transfer agent for life; and the adenosine phosphates are essential solubilizing agents for many enzyme cofactors. In short, no phosphate, no life. Phosphate occurs in rocks, so it will be very scarce in the atmosphere; why would life waste what little there was to make phosphine?

To summarize, I have no idea what caused this signal, and I don’t think anyone else does either. There is a lot of chemistry associated with the Venusian atmosphere we do not understand, but I think this will be resolved sooner or later, as it has attracted so much attention.
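For perspective on the source strength the observations demand, the arithmetic is easy enough to sketch; the million-per-square-centimetre figure is the one quoted above, and the radius of Venus is the standard value:

    # Putting the claimed source strength in perspective: a million molecules
    # per square centimetre per second over the whole planet.
    import math

    R_VENUS_CM = 6052e5                     # Venus radius in cm
    area = 4 * math.pi * R_VENUS_CM**2      # ~4.6e18 cm^2
    rate = 1e6 * area                       # molecules per second
    AVOGADRO = 6.022e23
    print(f"~{rate/AVOGADRO:.0f} moles of phosphine per second, planet-wide")

Several moles a second, continuously, with no known chemistry to supply it: that is the scale of the problem the life hypothesis was invoked to solve.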

Science is No Better than its Practitioners

Perhaps I am getting grumpy as I age, but I feel that much in science is not right. One problem lies in the fallacy ad verecundiam, the fallacy of resorting to authority. As the motto of the Royal Society puts it, nullius in verba. Now, nobody expects you to personally check everything, and if someone has measured something and either clearly shows how he/she did it, or it is something that is done reasonably often, then you take their word for it. Thus if I want to know the melting point of benzoic acid I look it up, and know that if the reported value were wrong, someone would have noticed. However, a different problem arises with theory, because you cannot measure a theory. Further, science has got so complicated that any expert is usually an expert in a very narrow field, and the net result is that most scientists find theories too difficult to examine in detail and do defer to experts. In physics, this tends to be because the theory descends into obscure mathematics and, worse, the proponents seem to believe that mathematics IS the basis of nature. That means there is no need to think of causes. There is another problem, which also drifts over to chemistry: the belief that the results of a computer-driven calculation must be right. True, there will be no arithmetical mistake, but as was driven into our heads in my early computer lectures: garbage in, garbage out.

This post was sparked by an answer I gave to a chemistry question on Quora. Chemical bonds are usually formed by taking two atoms, each with a single electron in an orbital. Think of an orbital as a wave that can hold only one or two electrons; the reason it can hold only two is the Pauli Exclusion Principle, a very fundamental principle in physics. If each atom has only one electron in such an orbital, the orbitals can combine and form a wave with two electrons, and that binds the two atoms. Yes, oversimplified. So the question was, how does phosphorus pentafluoride form? The fluorine atoms have one such unpaired electron each, and the phosphorus has three, plus a pair in one wave. Accordingly, you expect phosphorus to form a trifluoride, which it does, but how come the pentafluoride? Without going into too many details, my answer was that the paired electrons are unpaired, one is put into another wave, and to make this legitimate an extra node is placed in the second wave, a process called hybridization. This has been a fairly standard answer in textbooks.

So, what happened next? I posted that, and also shared it to something called “The Chemistry Space”. A moderator there rejected it, saying he did so because he did not believe it: computer calculations showed there was no extra node. Eh?? So I replied and asked how this computation got around the Exclusion Principle, then, to be additionally annoying, I asked how the computation set the constants of integration. If you look at John Pople’s Nobel lecture, you will see he set these constants for hydrocarbons by optimizing the results for 250 different hydrocarbons. Leaving aside that this degenerates into a glorified empirical procedure, for phosphorus pentafluoride there is only one relevant compound. Needless to say, I received no answer, but I find this annoying. Sure, the issue is somewhat trivial, but it highlights the greater problem that some scientists are perfectly happy to hide behind obscure mathematics, or even more obscure computer programming.

It is interesting to consider what a theory should do. First, it should be consistent with what we see. Second, it should encompass as many different types of observation as possible. To show what I mean: in the phosphorus pentafluoride example, the method I described can be transferred to the structures of different molecules. That does not make it right, but at least it is not obviously wrong. The problem with a computation is that unless you know the details of how it was carried out, it cannot be applied elsewhere, and interestingly I saw a recent comment in a publication by the Royal Society of Chemistry that computations from a couple of decades ago cannot be checked or used because the details of the code are lost. Oops. A third requirement, in my opinion, is that a theory should assist in understanding what we see, and even lead to a prediction of something new.

Fundamental theories cannot be deduced; the principles have to come from nature. Thus mathematics describes what we see in quantum mechanics, but you could find a mathematical description for anything else nature decided to do; classical mechanics, for example, is also fully self-consistent. For relativity, velocities are either additive or they are not, and you can find mathematics either way. The problem then is that if someone adopts a wrong premise early, mathematics can be made to fit a lot of other material to it. A major discovery and change of paradigm only occurs when a major fault is discovered that cannot be papered over.

So, to finish this post in a slightly different way than usual: a challenge. I once wrote a novel, Athene’s Prophecy, in which the main character, in the first century, was asked by the “Goddess” Athene to prove that the Earth went around the sun. Can you do it, with what could reasonably be seen at the time? The details had already been worked out by Aristarchus of Samos, who also worked out the size and distance of the Moon and Sun, and the huge distances are a possible clue. (Thanks to the limits of his equipment, Aristarchus’ measurements were erroneous, but good enough to show the huge distances.) So there was already a theory that showed it might work. The problem was that the alternative also worked, as shown by Claudius Ptolemy. So you have to show why one is the true one.

Problems you might encounter are as follows. Aristotle had shown that the Earth cannot rotate. The argument was that if you threw a ball into the air so that when it reached the top of its flight it was directly above you, then, if the Earth rotated, the ball would land some distance away from you, the ground having moved beneath it. He did it, and it landed at his feet, so the Earth does not rotate. (Can you see what is wrong? Hint – the argument implies the conservation of angular momentum, and that part is correct; see the sketch below.) Further, if the Earth went around the sun, it would be in orbital motion; orbital motion involves falling, and since heavier things supposedly fall faster than light things, the Earth would fall to pieces. Comets may well fall around the Sun. Another point was that since air rises, the cosmos must be full of air, and if the Earth went around the Sun, there would be a continual easterly wind.
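On the first problem, a simple calculation shows how badly the ball experiment misleads. Because the ball shares the Earth's rotation, the only displacement is the tiny Coriolis drift; for a vertical throw the standard no-air-resistance result is a westward shift of (4/3) * omega * cos(latitude) * v^3 / g^2, which for a modest throw is around a millimetre, far below anything Aristotle could have measured:

    # Coriolis drift of a ball thrown straight up (no air resistance).
    import math

    OMEGA = 7.29e-5         # Earth's rotation rate, rad/s
    g = 9.81                # m/s^2
    v = 10.0                # throw speed, m/s (reaches about 5 m)
    lat = math.radians(38)  # roughly the latitude of Athens

    drift = (4/3) * OMEGA * math.cos(lat) * v**3 / g**2
    print(f"Westward drift: {drift*1000:.2f} mm")

Aristotle's experiment was sound; it was the implicit assumption that the ball would not share the Earth's motion that was wrong.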

So part of the problem in overturning any theory is first to find out what is wrong with the existing one. Then, to assert you are correct, your theory has to do something the other theory cannot do, or show the other theory contains something that falsifies it. The point of this challenge is to show by example just how difficult forming a scientific theory actually is, until you hear the answer, and then it is easy.

What are We Doing about Melting Ice? Nothing!

Over my more active years I often returned home from the UK with a flight to Los Angeles, and the flight inevitably flew over Greenland. For somewhat selfish reasons I tried to time my work visits for the northern summer, thus getting out of my winter, and the return flight left Heathrow in the middle of the day, so with any luck there was good sunshine over Greenland. My navigation was such that I always managed to be at a window somewhere at the critical time, and I was convinced that by my last flight Greenland was both dirtier and the ice was retreating. The dirt was from dust, not naughty Greenlanders, and it was turning the ice slightly browner, which made the ice less reflective, and thus would encourage melting. I was convinced I was seeing global warming in action during my last flight, which was about 2003.

As reported in The Economist, according to an analysis of 40 years of satellite data at Ohio State University, I was probably right. In the 1980s and 1990s Greenland lost approximately 400 billion tonnes of ice each summer, through ice melting and through large glaciers shedding lumps of ice as icebergs into the sea. This was not critical at the time because the loss was more or less replenished by winter snowfalls, but by 2000 the ice was no longer being replenished, and each year there was a net loss approaching 100 billion tonnes. By now the accumulated net ice loss is so great it has caused a noticeable change in the gravitational field over the island. Further, it is claimed that Greenland has hit the point of no return: even if we stopped emitting all greenhouse gases now, more ice would progressively be lost than could be replaced.

So far the ice loss is raising the oceans by about a millimetre a year, so, you may say, who cares? The problem is that the end position is a sea level rise of about 7 metres. Oops. There is worse. Apparently greenhouse gases cause more warming at high latitudes, and there is a lot more ice on land in the Antarctic. If Antarctica went, Beijing would be under water. If only Greenland goes, most of New York would be under water, and just about all port cities would be in trouble. We lose cities, but more importantly we lose prime agricultural land at a time when our population is expanding.
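That 7 metre figure can be checked on the back of an envelope from rough reference numbers (approximate ice volume and ocean area; order-of-magnitude only):

    # A quick check of the "7 metres" figure.
    ICE_VOLUME_KM3 = 2.9e6        # Greenland ice sheet volume, roughly
    ICE_DENSITY = 0.92            # relative to water
    OCEAN_AREA_KM2 = 3.6e8        # world ocean surface area

    rise_km = ICE_VOLUME_KM3 * ICE_DENSITY / OCEAN_AREA_KM2
    print(f"Equivalent sea level rise: {rise_km*1000:.1f} metres")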

So, what can be done? The obvious answer is to be prepared to move where we live. That would involve making huge amounts of concrete and steel, which would make huge amounts of carbon dioxide, which would make the overall problem worse. We could compensate for the loss of agricultural land, which is the most productive we have, by going to aquaculture, but while some marine algae are the fastest growing plants on Earth, our bodies are not designed to digest them. We could farm animal life such as prawns and certain fish, and these would help, but whether productivity would be sufficient is another matter.

The next option is geoengineering, but we don’t know how to do it, or what the effects would be, and we are seemingly not trying to find out. We could slow the rate of ice melting, but how? If you answer, with some form of space shade, the problem is that orbital mechanics do not work in your favour: you could shade the island some of the time, but so what? Slightly more promising might be to generate clouds in the summer, which would reflect more sunlight.

The next obvious answer (OK, obvious may not be the best word) is to cause more snow to fall in winter. Again, the question is, how? Generating clouds and seeding them in the winter might work, but again, how, and at what cost? The end result of all this is that we really don’t have many options. All the efforts at limiting emissions simply won’t work now, if the scientists at Ohio State are correct. Everyone has heard of tipping points. According to them, we passed one and did not notice until too late. Would anything work? Maybe, maybe not, but we won’t know unless we try, and wringing our hands and making trivial cuts to emissions is not the answer.

The Hangenberg Extinction

One problem with applying the scientific method to past events is that there is seldom enough information to reach a proper conclusion. An obvious example is the mass extinction that we know occurred at the end of the Devonian period, and in particular something called the Hangenberg event, which is linked to the extinction of 44% of high-level vertebrate clades and 97% of vertebrate species. Only smaller species survived, namely sharks smaller than a meter in length and other fish less than ten centimeters in length. This is the time when most ammonites and trilobites, which had been successful for such a long time, failed to survive. One family of trilobites survived, only to be extinguished in the Permian extinction, the one that wiped out about 90% of all species.

So why did this happen? First, it is most likely the ecosystems had already been stressed. The Hangenberg event occurred about 358 My ago, but before that, at about 382 My BP, most jawless fish disappeared, while from 372 – 359 My BP there was a series of extinctions and climate changes known as the Kellwasser event (although it was almost certainly a number of events). So for about 30 million years leading up to the Hangenberg event there had been severe difficulties for life. At this stage, leaving aside insects and plants that had left the oceans, most life was in marine or freshwater environments, and it was this life that appears to have suffered the most. That conclusion, however, may merely reflect a relative paucity of land-based fossils. Climate change was almost certainly involved, because over this period there was a series of sea level rises while the water became more anoxic. The causes are less than clear, and there have been a number of suggestions.

One possibility is an asteroid collision, and while impact craters can be found, they cannot be dated closely enough to be associated with any specific event. A more pertinent question is: why anoxic? The climate should have no direct effect on oxygenation, although the reverse is possible. The question is really whether it was only the seas that became anoxic. One possibility involves the dramatic change in land plants in the late Devonian. In the early Devonian, plants had made it to land, but they were small leafy plants like liverworts and mosses. In the late Devonian they developed stems that could move water and nutrients, and suddenly huge plants emerged. One argument is that the extensive root systems caused a flood of nutrients, through the weathering of rocks, to flow down into the sea, which caused algal blooms, which led to anoxic conditions. Meanwhile, the huge forests of the Devonian may have reduced carbon dioxide levels, which would lead to glaciation, and to the sea level fall in the very late Devonian. That, however, does not explain the earlier sea level rises. Those may have arisen from extensive volcanism around 372 My ago, which would enhance greenhouse warming. You can take your pick from these explanations, because even the experts in the field are unsure.

Accordingly, a new theory has just emerged, namely that Earth was bombarded by cosmic rays from a nearby supernova (Fields et al., arXiv:2007.01887v1, 3rd July, 2020). This has the advantage that we can see why the effect would be global. The specific event would be a core-collapse supernova. If this occurred within 33 light years of Earth, it would probably extinguish all life on Earth, but one about twice as far away, at 66 light years, would exterminate much life, but not all. The mechanism is in part ozone depletion, but there is also the possibility of enhanced nitrogen fixation in the atmosphere, which might lead to algal blooms. One of the good things about such a proposition is that it is testable. Such an event would bombard Earth with isotopes that would otherwise be difficult to obtain, and one would be plutonium-244. There is no naturally occurring plutonium on Earth, so if some atoms were found in the fossils or in accompanying rock, that would support the supernova event.
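A quick check that the plutonium test is feasible: 244Pu has a half-life of about 80 million years, so a measurable fraction of any supernova-delivered atoms would survive from the Hangenberg event to today:

    # Surviving fraction of supernova-delivered 244Pu after ~358 My.
    HALF_LIFE_MY = 80.0    # half-life of plutonium-244, million years
    AGE_MY = 358.0         # time since the Hangenberg event, million years
    fraction = 0.5 ** (AGE_MY / HALF_LIFE_MY)
    print(f"Surviving fraction: {fraction:.1%}")  # a few percent

A few percent of the original atoms is not much, but atom-counting techniques are sensitive enough that the test is at least in principle decisive.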

So, is that what happened? My personal view is that it is unlikely, and the reason I say that is that most of the damage would be done to life on land, yet, as I gather, the insects expanded into the Carboniferous period. The seas would be relatively protected because the incoming flux would be attenuated by the water. The nitrate fixation might cause an algal bloom, and while a lot of fixed nitrogen would be required to affect the world’s oceans, maybe there was sufficient. The finding of plutonium in the associated deposits would be definitive, however. The typical deposits are black shales overlaid by sandstone, and are easy to locate, so if there is plutonium in them, there is the answer. If there is not, does that mean the proposition is wrong? That is more difficult to answer, but the more samples examined from widespread sources without finding any, the more trouble for the proposition.

My preferred explanation is the ecological one, namely the development of tree ferns and other large plants. The Devonian extinction was slow, taking some 24 million years, and while most marine extinctions occurred during what is called the Hangenberg event, the word “event” may be misleading: that specific period took 100,000 – 300,000 years, which is plenty of time for an ecological disaster to kill off whatever cannot adapt. To put it into perspective, Homo sapiens has been around for only about 300,000 years, and effective for only about 10,000. Look at the ecological change. Now think what will happen if we let climate change get out of control. We are already causing serious extinction of many species, but the loss of habitat if the seas rise will dwarf what we have done so far, because our booming population has to eat. We should learn from the late Devonian.

Betelgeuse Fades

Many people will have heard about this: recently Betelgeuse became surprisingly dim. Why would that be? First, we need to understand how a star lives. Stars start by burning hydrogen and making helium. This is a relatively slow process, because enormous pressures are required: hydrogen nuclei repel each other very strongly when they get close, and the first step seems to be to make a helium nucleus with no neutrons, which is simply two bound protons. However, they are not bound very tightly, and the electric force between them makes them fly apart in an extremely short time. Somewhere within this time, the pressure/temperature has to force an electron into one of the nuclei to transform it into a neutron, giving deuterium, which is stable. The deuterium goes on to make the very stable helium nucleus, liberating a lot of energy per reaction. However, the probability of such reactions is surprisingly low, mainly because of the difficulty in making 2He. The rate is increased by temperature and pressure, but the energy liberated pushes the rest of the matter away, so an equilibrium forms. The reason the sun pours out so much energy is simply that the sun is so big. The bigger the star, the greater the central pressure, and the faster it burns its hydrogen, so paradoxically the bigger the star the sooner it runs out of fuel.

Betelgeuse is the tenth brightest star in the night sky, and, after Rigel, the second brightest in the constellation of Orion. It has a mass somewhere between 10 – 20 times that of the sun, so at its centre the pressure due to gravitation is far greater than in our sun, hydrogen burned much faster, and accordingly it ran out of hydrogen fuel much more quickly. When that happens, if the star is big enough, its core collapses somewhat, because the energy generation that held it up has faded, until it gets hot enough to burn helium, which releases so much more energy that the outer part of the star bloats. If placed at the centre of the solar system, the surface of Betelgeuse would come close to the orbit of Jupiter; all the rocky bodies would be gone. At that size, it is not really in equilibrium. If it bloats too far, the pressure drops, the outer surface collapses inwards, the pressure increases, reactions go faster, it expands again, and so on. The periods of such pulsations can be up to thousands of days, and when a star pulsates we see that as a fluctuation in brightness. It will keep pulsating and behaving erratically until, most probably within the next 100,000 years, it collapses and forms a supernova. As Betelgeuse pulsates it brightens and dims. All very expected, but recently it dimmed to about 40% of what it was before. Was this the prelude to a supernova? The short answer is, we don’t know, but the pulsations got our attention.
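To see why a star of that mass is already near the end, recall the rough scaling: luminosity climbs something like the 3.5th power of mass for such stars, so lifetime (fuel divided by burn rate) scales roughly as mass to the -2.5. A sketch, with the exponents treated as approximate:

    # Why big stars die young: lifetime ~ fuel/luminosity ~ M / M^3.5 = M^-2.5.
    # Exponents vary with mass range; this is an order-of-magnitude sketch.
    SUN_LIFETIME_GYR = 10.0

    for m in (1, 10, 15, 20):   # solar masses
        t = SUN_LIFETIME_GYR * m ** -2.5
        print(f"{m:>2} Msun: ~{t*1000:.0f} million years")

For 10 – 20 solar masses this gives lifetimes of a few tens of millions of years at most, consistent with Betelgeuse being young in years yet old in stellar terms.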

Because of its size, the gravity at the surface of Betelgeuse is weak, and huge bursts of energy send gas out into nearby space. We have our solar wind and coronal mass ejections, but those of a red supergiant are somewhat more massive, sending out massive clouds of gas and dust. One of the first “guesses” as to the cause of the dimming was the blocking of light by dust, but further studies showed that cannot be the case, because the spectral data were more consistent with a significant cooling of the mean surface. Further, Betelgeuse is close enough that major telescopes can resolve the star as a disk, and it was found that between 50 – 70% of the star’s surface was significantly cooler than the rest. The star appears to have a massive star spot! So for the time being it seems likely that Betelgeuse will last a little longer. As an aside, do not feel sorry for life on a planet around it: Betelgeuse is only about 20 million years old, and there is no time for life to develop around such massive stars.