Nuclear War is Not Good

Yes, well that is sort of obvious, but how not good? Ukraine has brought the scenario of a nuclear war to the forefront, which raises the question: what would the outcome be? You may have heard estimates from military hawks that, apart from those killed in the blasts and those who received excessive radiation, all would be well. Americans tend to be more hawkish because the blasts would be “over there”, although if the enemy were Russia, Russia should be able to bring the blasts to America. There is an article in Nature (579, pp 485 – 487) that paints a worse picture. In the worst case, it estimates up to 5 billion deaths, and none of these are due to the actual blasts or the radiation; they come on top of those. The problem lies in the food supply.

Suppose there were a war between India and Pakistan. Each fires nuclear weapons, first against military targets, then against cities. Tens of millions die in the blasts. Then a band of soot rises into the atmosphere, and temperatures drop. Crop yields fall dramatically from California to China, affecting dozens of countries. Because of the reduced yields, more than a billion people would suffer from food shortages. The question then is, how valid are these sorts of predictions?

Nuclear winter was first studied during the Cold War. The first efforts described how such smoke would drop the planet into a deep freeze lasting for months, even in summer. Later studies argued this effect was overdone and the result would not be such a horrific chill, and unfortunately that has encouraged some politicians who are less mathematically inclined and do not realize that “less than a horrific chill” can still be bad.

India and Pakistan each have around 150 nuclear warheads, so a study in the US looked into what would happen if the two countries set off 100 Hiroshima-sized bombs. The direct casualties would be about 21 million people. But if we look at how volcanic eruptions cool the planet, and how soot goes into the atmosphere following major forest fires, modelling can predict the outcome. An India-Pakistan war would put 5 million tonnes of soot into the atmosphere, while a US-Russian war would loft 150 million tonnes. The first war would lower global temperatures by a little more than 1 degree C, but the second would lower them by 10 degrees C, temperatures not seen since the last Ice Age. One problem that may not be appreciated is that sunlight would heat the soot, and by heating the adjacent air it would cause the soot to rise, and therefore persist longer.

The oceans tell a different story. Global cooling would affect the oceans’ acidity, and the pH would soar upwards (making them more alkaline). The model also suggested this would make it harder to form aragonite, making life difficult for shellfish. Currently, shellfish are in similar danger, but from too much acidity; depending on aragonite is a bad option! The biggest danger would come to regions that are home to coral reefs. There are some places that cannot win. However, there is worse to come: possibly a “Nuclear Niño”, described as a “turbo-charged El Niño”. In the case of a Russia/US war, the trade winds would reverse direction and water would pool in the eastern Pacific Ocean. Droughts and heavy rain would plague different parts of the world for up to seven years.

One unfortunate aspect is that all of this comes from models. Almost immediately, another group, from Los Alamos, carried out different calculations and came to a less disastrous result. The difference depends in part on how they simulate the amount of fuel, and how that is converted to smoke. Soot comes from partial combustion, and what happens where in a nuclear blast is difficult to calculate.

The effects on food production could be dramatic. Even following the smaller India-Pakistan war, grain production could drop by approximately 12% and soya bean production by 17%. The worst effects would come in the mid-latitudes, such as the US Midwest and Ukraine. The trade in food would dry up because each country would be struggling to feed itself. A major war would be devastating for other reasons as well. It is all very well to say your region might survive the climate change, and somewhere like Australia might grow more grain if it gets adequate water, as at present it is the heat that is its biggest problem. But if the war also took out industrial production and oil production and distribution, now what? Tractors are not very helpful if you cannot purchase diesel. A return to the old ways of harvesting? Even if you could find one, how many people know how to use a scythe? How do you plough? Leaving aside the problems of knowing where to find a plough that a horse could pull and of how to set it up, where do you find the horses? It really would not be easy.

Burying Carbon Dioxide, or Burying Cash?

In the last post, I expressed my doubts about the supply of metals for electric batteries. There is an alternative to giving up things that produce CO2, and that is to trap and sequester the CO2. The idea is that for power stations the flue gases have the CO2 removed and pumped underground. That raises the question, how realistic is this? Chemistry World has an article that casts doubt, in my mind, that this can work. First, the size of the problem. One company aims to install 70 such plants, each capable of sequestering 1 million t of CO2. If these are actually realized, we almost reach 0.2% of what is required. Oops. Basically, we need to remove at least 1 billion t/a just to stand still. This problem is large. There is also the problem of how we do it.
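To put those numbers side by side, here is a minimal arithmetic sketch in Python; note that the ~35 Gt/a figure for global CO2 emissions is my assumption about what “required” refers to, not a number from the article.

```python
# Rough scale check for carbon capture ambitions (back-of-envelope, not from the article).
# Assumption: "what is required" is read as annual global CO2 emissions of ~35 Gt.
PLANTS = 70                        # proposed plants
CAPACITY_T_PER_PLANT = 1e6         # 1 million tonnes of CO2 per plant per year
GLOBAL_EMISSIONS_T = 35e9          # ~35 Gt CO2 per year (assumed figure)
STANDSTILL_T = 1e9                 # the post's "at least 1 billion t/a to stand still"

captured = PLANTS * CAPACITY_T_PER_PLANT               # 70 Mt per year
print(f"Fraction of assumed global emissions: {captured / GLOBAL_EMISSIONS_T:.1%}")  # ~0.2%
print(f"Fraction of the 1 Gt/a target: {captured / STANDSTILL_T:.0%}")               # 7%
```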

The simplest way is to pass the flue gases through amine solvents, with monoethanolamine the most common absorbent used. Leaving aside the problem of getting enough amine, which requires a major expansion of the chemical manufacturing industry, what happens is the amine absorbs CO2 to form the amine carbamate, and the CO2 is recovered by heating the carbamate, regenerating the amine. However, the regeneration will never be perfect and there are losses. Leaving aside finding the raw materials, actually synthesizing the amine takes about 0.8 MWh of energy, and the inevitable losses mean we need up to 240 MWh every year to run a million tonne plant. We then need heat to decompose the amine carbamate, and that requires about 1 MWh per tonne of CO2 absorbed. Finally, we need a little less than 0.12 MWh per tonne of CO2 to compress it, transport it and inject it into the ground. If we wanted to inject 1 billion t of CO2, we would need to generate something like 840 TWh of electricity. That is a lot of electricity.
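To see how a figure like 840 TWh arises, scale the per-tonne energies up to a gigatonne. The sketch below is a rough scaling exercise using the post’s round numbers as inputs; the regeneration heat dominates, and taking it anywhere between about 0.7 and 1 MWh per tonne puts the answer in the high hundreds of terawatt hours.

```python
# Back-of-envelope scaling of amine-based capture energy to 1 Gt of CO2 per year.
# The per-tonne figures are the post's approximate numbers, treated as assumptions.
def capture_energy_twh(tonnes_co2, regen_mwh_per_t, compress_mwh_per_t=0.12):
    """Electricity-equivalent energy (TWh) for solvent regeneration plus
    compression, transport and injection of the captured CO2."""
    total_mwh = tonnes_co2 * (regen_mwh_per_t + compress_mwh_per_t)
    return total_mwh / 1e6   # 1 TWh = 1e6 MWh

for regen in (0.7, 1.0):     # MWh per tonne of CO2 for decomposing the carbamate
    print(f"regen = {regen} MWh/t  ->  {capture_energy_twh(1e9, regen):.0f} TWh per Gt")
# Either way the total is of order 800-1100 TWh/a, i.e. "a lot of electricity".
```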

We can do a little better with things called metal organic frameworks (MOFs). These can be made with a very high surface area to absorb CO2, and since they do not form strong chemical bonds the CO2 can be recovered at temperatures in the vicinity of 80 – 100 degrees C, which opens the possibility of using waste heat from power stations. That lowers the energy cost quite a bit. Without the waste heat the energy requirement is still significant, about half that of the amines. Then comes the sting: the waste heat approach still leaves about 60% of the absorbed CO2 unrecovered, so it is not clear the waste heat has saved much. The addition of an extra step is also very expensive.

The CO2 content of effluent gases is between 4 and 15%; for ordinary air it is 0.04%, which makes it very much more difficult to capture. One proposal is to capture CO2 by bubbling air through a solution of potassium hydroxide, then evaporating off the water and heating the potassium carbonate to its decomposition temperature, which happens to be about 1200 degrees C. One might have thought the calcium system would be easier, since calcium carbonate pyrolyses at about 600 degrees C, but what do I know? This pyrolysis takes about 2.4 MWh per tonne of CO2, and if implemented, this route for absorbing CO2 from the air would require about 1.53 TWh of electricity per year to sequester 1 million t of CO2.

When you need terawatt hours of electricity to run a plant capable of sequestering one million tonnes of CO2, and you need to sequester a billion tonnes, it becomes clear that this is going to take an awful lot of energy. That costs a lot of money. In the UK, electricity costs between £35 and £65 per MWh, and we have been talking in terms of a million times that per plant. Who pays? Note this scheme has NO income stream; it sells nothing, so we have to assume it will be charged to the taxpayer. Lucky taxpayer!
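For a feel for the money, the sketch below simply multiplies the UK price range by the roughly one million MWh (about 1 TWh) a single million-tonne plant needs each year, using the energy figures above; treat it as indicative only.

```python
# Indicative annual electricity bill for CO2 sequestration at UK prices.
PRICES_GBP_PER_MWH = (35, 65)         # the post's UK price range
ENERGY_MWH_PER_MT_PLANT = 1e6         # ~1 TWh per 1 Mt/a plant (from the figures above)

for price in PRICES_GBP_PER_MWH:
    per_plant = price * ENERGY_MWH_PER_MT_PLANT     # one 1 Mt/a plant
    per_gigatonne = per_plant * 1000                # 1000 such plants for 1 Gt/a
    print(f"At £{price}/MWh: ~£{per_plant / 1e6:.0f} million per plant per year, "
          f"~£{per_gigatonne / 1e9:.0f} billion per Gt per year")
# Roughly £35-65 million per plant and £35-65 billion per gigatonne, every year.
```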

One small-scale effort in Iceland offers a suggested route. It is not clear how they capture the CO2, but they then dissolve it in water and inject that into basalt, where the carbonic acid reacts with the olivine-type structures to form carbonates, in which form it is fixed indefinitely. That suggests that, provided the concentration of CO2 is high enough, using pressure to dissolve it in water might be sufficient, which would dramatically lower the costs. Of course, an alternative is to crush the basalt and spread it on farmland, instead of lime. My preferred option for removing CO2 from the air is to grow plants. They work for free at these low concentrations. Further, if we select seaweed, we get the added benefit of improving the ecology for marine life. But that requires us to do something with the plants, or the seaweed, which means more thinking and research. The benefit, though, is that such a scheme could at least earn revenue. The alternatives are to bankrupt the world or find some other way of solving this problem.
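As a rough check on whether pressure alone could dissolve enough CO2, Henry’s law gives the water volumes involved. The solubility figure used below (about 1.45 g of CO2 per litre of water per bar near 25 degrees C) is a textbook value I am supplying, not something from the Iceland report, and real injection schemes differ in detail.

```python
# Rough Henry's-law estimate of the water needed to dissolve CO2 for basalt injection.
# Assumption: CO2 solubility in water is ~1.45 g/L per bar near 25 C and scales
# roughly linearly with pressure up to a few tens of bar.
SOLUBILITY_G_PER_L_PER_BAR = 1.45

def water_m3_per_tonne_co2(pressure_bar):
    grams_per_litre = SOLUBILITY_G_PER_L_PER_BAR * pressure_bar
    litres = 1e6 / grams_per_litre        # 1 tonne of CO2 = 1e6 g
    return litres / 1000                  # litres to cubic metres

for p_bar in (1, 10, 25):
    print(f"{p_bar:>2} bar: ~{water_m3_per_tonne_co2(p_bar):.0f} m3 of water per tonne of CO2")
# At ~25 bar this is of order 25-30 tonnes of water per tonne of CO2.
```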

A Plan to Counter Global Warming Must be Possible to Implement

Politicians seem to think that once there is a solution to a problem in theory, the problem is solved, so they stop thinking about it. Let us look at reality. We know we have a problem with global warming and we have to stop burning fossil fuels. The transport sector is a big problem, but electric vehicles will do the trick; in theory that might be true, but as I have pointed out in previous posts there is this troublesome matter of raw materials. Now the International Energy Agency has brought a little unpleasantness to the table. It has reported that global battery and minerals supply chains need to expand ten-fold to meet the critical needs of 2030 if the plan is to at least maintain schedule. If we take the average size of a major producer as a “standard mine”, according to the IEA we need 50 more such lithium mines, 60 more nickel mines, and 17 more cobalt mines operating fully by 2030. Generally speaking, a new mine needs about ten years between starting a feasibility study and serious production. See a problem here? Because of the costs and exposure, you need feasibility studies to ensure that there is sufficient ore where you can’t see, that there is an economic way of processing the ore, and that you have a clear plan for what to do with the minerals you do not want, with materials like arsenates and other undesirables also being present. You also have to build new roads, pipe in water, provide electricity, and do a number of other things to make the mine work that are not directly part of the mine. This does not mean you cannot mine, but it does mean it won’t be quite as easy as some might have you think. We also now want our mines not to be environmental disasters. The IEA report acknowledges that ten-year lead time, and then adds several more years to get production up to capacity.

The environmental issues are not to be considered irrelevant. The major deposits of lithium tend to be around the Andes, typically in rather dry areas. The lithium is obtained by pumping down water, dissolving the salts, then bringing them up and evaporating the brine. Once most of the lithium is obtained, something has to be done with the salty residue, and of course the process needs a lot of water. The very limited water in some of these locations is already badly needed by the local population and their farms. The salt residues would poison agriculture.

If we consider nickel, one possible method to get more from poorer ores is high-pressure acid leaching. The process uses acid at high temperature and pressure and ends up with nickel at a grade suitable for batteries. But nickel often occurs as a sulphide, which means as a byproduct you get hydrogen sulphide, plus a number of other effluents that have to be treated. Additionally, the process requires a lot of heat, which means burning coal or oil. The alternative source to the sulphide deposits, as advocated by the IEA, is laterite, a clayish material that also contains a lot of iron and aluminium oxides. These metals could also be obtained, but at a cost. The estimate is that getting nickel by this process roughly doubles its cost.

The reason can be seen from the nature of the laterite (https://researchrepository.murdoch.edu.au/id/eprint/4340/1/nickel_laterite_processing.pdf), which is usually a weathered rock. At the top you have well-weathered rock, more a clay, called red limonite. The iron oxide content (the cause of the red colour) is over 50%, while the nickel content is usually less than 0.8% and the cobalt less than 0.1%. Below that is yellow limonite, where the nickel and cobalt oxides double in concentration. Below that we get saprolite/serpentine/garnierite (garnierite is like serpentine but with enhanced nickel concentration). These can have up to 3% nickel, mainly due to the garnierite, but the serpentine family are silicates, in which the ferrous iron found in minerals such as olivine has been removed. The leaching of a serpentine is very difficult simply because silicates are very resistant. Try boiling your average piece of basalt in acid. There are other approaches, and for those interested, the link above describes them. However, the main point is that much of the material does not contain nickel. Do you simply dump it, or produce iron at a very much higher cost than usual?

However, the major problem with each of these is that they are all rather energy intensive, and the whole point of the exercise is to reduce greenhouse emissions. The acid leach is very corrosive, hence maintenance is expensive, while the effluents are troublesome to dispose of. The disposal of the magnesium sulphate at sea is harmless, but the other materials with it may not be. Further, if the ore is somewhere like the interior of Australia, even finding water will be difficult.

Of course all these negatives can be overcome, with effort, if we are prepared to pay the price. Now, look around and ask yourself how much effort is going into establishing all those mines that are required. What are the governments doing? The short answer, as far as I can tell, is not much. They leave it to private industry. But private industry will be concerned that its balance sheets can only stand so much speculative expansion. My guess is that the 2030 objectives will not be fulfilled.

Space – To the Final Frontier, or Not

In a recent publication in Nature Astronomy (https://www.nature.com/articles/s41550-022-01718-8) Byers points out an obvious hazard that seems to be increasing in frequency: all those big rockets tend to eventually come down, somewhere, and the return is generally uncontrolled. Modest-sized bits of debris meet a fiery end, burning up in the atmosphere, but larger pieces hit the surface, and their kinetic energy makes an oversized bullet or cannon-ball seem relatively harmless by comparison. In May 2020, wreckage from the 18 tonne core of a Chinese Long March 5B rocket hit two villages in the Ivory Coast, damaging buildings. In July 2022, suspected wreckage from a SpaceX Crew-1 capsule landed on farmland in Australia, and another Long March 5B landed just south of the Philippines. In 1979, NASA’s Skylab fell back to Earth, scattering debris across Western Australia. So far, nobody has been injured, but that is something of a matter of luck.

According to Physics World, the US Orbital Debris Mitigation Standard Practices stipulate that all launches should have a risk of casualty from uncontrolled re-entry of less than one in 10,000, but the USAF, and even NASA, have flouted this rule on numerous occasions. Many countries may have no regulations at all. As far as I am aware my own country (New Zealand) has none, yet New Zealand launches space vehicles. The first stage always falls back into the Pacific, which is a large expanse of water, but what happens after that is less clear.

In the past thirty years, more than 1500 vehicles have fallen out of orbit, and about three quarters of these have been uncontrolled. According to Byers, there was a 14% chance someone could have been killed.
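Byers and colleagues derive their figure from actual launch records and population maps, but you can get a feel for how a double-digit probability arises with a much cruder sketch: combine roughly three quarters of those 1,500 re-entries with a per-event casualty risk at the one-in-10,000 level mentioned above. The per-event risk is simply the US threshold, borrowed here as an illustrative assumption.

```python
# Crude illustration of how per-re-entry risks accumulate over thirty years.
# This is NOT the Byers et al. calculation; it only shows the order of magnitude.
TOTAL_REENTRIES = 1500
UNCONTROLLED = int(TOTAL_REENTRIES * 0.75)    # "about three quarters" were uncontrolled
PER_EVENT_RISK = 1e-4                         # the one-in-10,000 US threshold (assumed)

p_no_casualty = (1 - PER_EVENT_RISK) ** UNCONTROLLED
print(f"Chance of at least one casualty: {1 - p_no_casualty:.0%}")   # roughly 11%
```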

So what can be done? The simplest fix is to provide each rocket with extra fuel. When it is time to end its orbit, the descent can be controlled to the extent that it lands at the point in the Pacific that is farthest from land. So far, this has not been done because of the extra cost. A further technique would be to de-orbit rocket bodies immediately following satellite deployment, although that still requires additional fuel. In principle, with proper design, the rocket bodies could be recovered and reused. Rather perversely, it appears the greatest risk falls on countries in the Southern hemisphere. The safest places are those at latitudes greater than the inclination of the orbits being launched.

Meanwhile, never mind the risk to those left behind; you want to go into space, right? Well, you may have heard of bone density loss. This effect has finally had numbers put on it (https://www.nature.com/articles/s41598-022-13461-1). Basically, after six months in space, the loss of bone density corresponded to 20 years of ongoing osteoporosis, particularly in bones that are load-bearing on Earth, such as the tibia. Worse, the losses only partially recovered, even after one year back on Earth, and the lasting effect was equivalent to ten years of aging. The effect, of course, is due to microgravity, which is why, in my SF novels, I have always insisted on ships having a rotating ring to create a centrifugal “artificial gravity”. On the other hand, the effect can vary between people. Apparently the worst cases can hardly walk on return for some time, while others apparently continue on more or less as usual and ride bikes to work rather than drive cars. And as if bone loss were not bad enough, there is a further adverse possibility: accelerated neurodegeneration (https://jamanetwork.com/journals/jamaneurology/article-abstract/2784623). By tracking the concentration of brain-specific proteins before and after a space mission, it was concluded that long-term spaceflight presents a slight but lasting threat to neurological health. However, this study concluded three weeks after landing, so it is unclear whether long-term repair is possible. Again, it is assumed that it is weightlessness that is responsible. On top of that, apparently there are long-lasting changes in the brain’s white matter volume and the shape of the pituitary gland. Apparently more than half of astronauts developed far-sightedness and mild headaches, seemingly because in microgravity the blood no longer concentrates in the legs.

Rotation, Rotation

You have probably heard of dark matter. It is the stuff that is supposed to be the predominant matter of the Universe, but nobody has ever managed to find any, which is a little embarrassing when there is supposed to be something like six times more dark matter in the Universe than ordinary matter. Even more embarrassing is the fact that nobody has any real idea what it could be. Every time someone postulates what it is and works out a way to detect it, they find absolutely nothing. On the other hand, there may be a simpler reason for this. Just maybe they postulated what they thought they could find, as opposed to what it is; in other words, it was a proposal to get more funds, with uncovering the nature of the Universe as a hoped-for by-product.

The first reason to think there might be dark matter came from the rotation of galaxies. Newtonian mechanics makes some specific predictions. Very specifically, the periodic time for an object orbiting the centre of mass at a distance r varies as r^1.5. That means that for two orbiting objects, say Earth and Mars, where Mars is about 1.52 times more distant, the Martian year is about 1.88 Earth years. The relationship works very well in our solar system, and it was from the unexpected effects on Uranus that Neptune was predicted, and found to be in the expected place. However, when we take this up to the galactic level, things come unstuck. As we move out from the centre, stars move faster than predicted from the speed of those near the centre. This is quite unambiguous, and has been found in many galaxies. The conventional explanation is that enormous quantities of cold dark matter provide the additional gravitational binding.
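The Mars example is easy to check: if the period scales as r^1.5, a one-line calculation (mine, not from the post’s source) reproduces the 1.88-year Martian year.

```python
# Kepler's third law check: orbital period scales as r**1.5.
r_ratio = 1.52                       # Mars' orbital radius relative to Earth's
period_ratio = r_ratio ** 1.5
print(f"Predicted Martian year: {period_ratio:.2f} Earth years")   # ~1.87, close to 1.88
```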

However, that explanation also has problems. A study of 175 galaxies showed that the radial acceleration at different distances correlated with the amount of visible matter attracting it, but the relationship does not match Newtonian dynamics. If the discrepancies were due to dark matter, one might expect the dark matter to be present in different amounts in different galaxies, and in different parts of the same galaxy. Any such relationship should then have a lot of scatter, but it doesn’t. Of course, that might be a result of dark matter being attracted to ordinary matter.

There is an alternative explanation called MOND, which stands for modified Newtonian dynamics, and which proposes that at large distances and small accelerations gravity decays more slowly than the inverse square law. The correlation of the radial acceleration with the amount of visible matter is exactly what something like MOND requires, so that is a big plus for it, although the only reason it was postulated in this form was to account for what we see. However, a further study has shown there is no simple scale factor. What this means is that if MOND is correct, the effects on different galaxies should depend essentially only on the mass of visible matter, but they don’t. MOND can explain any individual galaxy, but the results don’t translate to other galaxies in any simple way. This should rule out MOND without amending the underlying dynamics, in other words, altering the Newtonian laws of motion as well as gravity. This may be no problem for dark matter, as different distributions would give different effects. But wait: in the previous paragraph it was claimed there was no scatter.
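To make “gravity decays more slowly than the inverse square law” concrete, here is a minimal sketch of a rotation curve for a point mass standing in for a galaxy’s visible matter, under plain Newtonian gravity and under the commonly used “simple” MOND interpolating function. The mass, the radii and the a0 value are illustrative assumptions, and a real galaxy is of course not a point mass.

```python
import math

# Toy rotation curves: Newtonian gravity vs. MOND ("simple" interpolating function).
G = 6.674e-11                  # m^3 kg^-1 s^-2
A0 = 1.2e-10                   # MOND acceleration scale in m/s^2 (commonly quoted value)
M = 1e11 * 1.989e30            # 1e11 solar masses of visible matter (illustrative)

def v_newton(r):
    return math.sqrt(G * M / r)

def v_mond(r):
    a_n = G * M / r**2                                    # Newtonian acceleration
    a = 0.5 * (a_n + math.sqrt(a_n**2 + 4 * a_n * A0))    # solves a*mu(a/A0) = a_n, mu(x) = x/(1+x)
    return math.sqrt(a * r)

for kpc in (2, 10, 30, 60):
    r = kpc * 3.086e19                                    # kiloparsecs to metres
    print(f"{kpc:>2} kpc: Newton {v_newton(r)/1e3:5.0f} km/s, MOND {v_mond(r)/1e3:5.0f} km/s")
# The Newtonian speeds keep falling with r; the MOND curve flattens near
# (G*M*A0)**0.25, about 200 km/s for this mass, mimicking observed rotation curves.
```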

The net result: there are two sides to this, one saying MOND is ruled out and the other saying no, it isn’t, with the problem being that it is observational uncertainties that suggest it might be. The two sides of the argument seem to be either using different data or interpreting the same data differently. I am no wiser.

Astronomers have also observed one of the most distant galaxies ever, MACS1149-JD1, which is over ten billion light years away, and it too is rotating, although its rotational velocity is much slower than that of the galaxies we see much closer, which are nowhere near as old. So why is it slower? Possible reasons include that it has much less mass, and hence weaker gravity.

However, this galaxy is of significant interest because its age makes it one of the earliest galaxies to form. It also has stars in it estimated to be 300 million years old, which puts the star formation at just 270 million years after the Big Bang. The problem with that is that this falls in the dark period, when stars as we know them had presumably not yet formed, so how did a collection of stars start? For gravity to cause a star to accrete, the collapsing material has to give off radiation, but supposedly no radiation was given off then. Again, something seems to be wrong. That most of the stars are just this age makes it appear that the galaxy formed at about the same time as the stars, or to put it another way, something made a whole lot of stars form at the same time in places where the net result was a galaxy. How did that happen? And where did the angular momentum come from? Then again, did it happen at all? This is at the limit of observational techniques, so have we drawn an invalid conclusion from difficult-to-interpret data? Again, I have no idea, but I mention this to show there is still a lot to learn about how things started.