About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords, and a number of other sites. The intention is to publish a sequence in which each book stands alone, but taken together the backgrounds combine into a further story. This blog will be largely about my views on science in fiction, and about the future, including what we should be doing about it but, in my opinion, are not. In the science area, I have been working on products from marine algae and on biofuels. I also have an interest in scientific theory, where my views are usually alternatives to the standard ones. This work is also being published as ebooks under the series “Elements of Theory”.

Did Mars Have an Ocean?

It is now generally recognized that Mars has had fluid flows, and a number of riverbeds, lake beds, etc. have been identified, but there are also maps on the web of a proposed Northern Ocean. It has also been proposed that there has been polar wander, and that this Northern Ocean was closer to an equatorial one when it existed, about 3.6 billion years ago. The following is a partial summary from my ebook “Planetary Formation and Biogenesis”, where references to the scientific papers citing the information can be found.

The various options (with volumes of water in cubic kilometres in brackets) range from a northern lake (54,000), to the Utopia basin (if interconnected, each with about 1,000,000), to a body filled to a possibly identified ‘shoreline’ (14,000,000), up to a massive northern hemisphere ocean (96,000,000). Of particular interest is that the massive channels (apart from two that run into Hellas) all terminate within 60 m in elevation of this putative shoreline.

A Northern Ocean would seem to require an average temperature greater than 273 K, but the faint early sun (the sun is slowly heating, and three and a half billion years ago, when water is assumed to have flowed, it had only about two thirds of its current output) and an atmosphere restricted to CO2/H2O lead in most simulations to mean global temperatures of approximately 225 K. There is the possibility of local variations, however, and one calculation claimed that if global temperatures were thirty degrees higher, local conditions could permit Hellas to pond, provided the subsurface contained sufficient water; with sufficient water, the northern ocean would be possible and might be ice-free for maybe a few hundred years. A different simulation, assuming a 1 bar CO2 atmosphere with a further 0.1 bar of hydrogen, concluded that a northern ocean would be stable up to about three billion years. There is quite an industry of such calculations and it is hard to judge how valid they are, but this one seems implausible. If we had one bar of carbon dioxide for such a long time there would be massive carbonate deposits, such as lime or iron carbonates, and these are not found in the required volumes. Also, the gravity of Earth is insufficient to hold that amount of hydrogen, and Mars has only about 40% of Earth’s gravity. This cannot be correct.
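
To get a feel for the faint young sun problem, here is a minimal back-of-the-envelope sketch. The solar flux at Mars (~590 W/m²), the albedo (0.25), and the 70% early-sun factor are my assumed inputs, not figures from the post; greenhouse warming from a CO2/H2O atmosphere would then lift the surface temperature toward the ~225 K that the simulations quote.

```python
# Rough equilibrium temperature of early Mars under the faint young sun.
# Assumed values (not from the post): present-day solar flux at Mars ~590 W/m^2,
# Bond albedo ~0.25, early sun at ~70% of current output.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
FLUX_NOW = 590.0         # solar flux at Mars today, W/m^2 (assumed)
ALBEDO = 0.25            # assumed Bond albedo

def equilibrium_temp(flux, albedo):
    """Airless-body equilibrium temperature from simple radiative balance."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_now = equilibrium_temp(FLUX_NOW, ALBEDO)
t_early = equilibrium_temp(0.7 * FLUX_NOW, ALBEDO)   # faint young sun, ~70% output
print(f"Equilibrium T today:  {t_now:.0f} K")   # ~210 K
print(f"Equilibrium T early:  {t_early:.0f} K") # ~192 K, i.e. roughly 80 K below freezing
```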

This northern ocean has been criticized on the basis that the shoreline itself is not at a constant gravitational potential, with variations of as much as 1.8 km in altitude. This should falsify the concept, except that, because the proposed ocean is close to the Tharsis volcanic area, the deformation caused by forming these massive volcanoes could account for the differences. The magma that was ejected had to come from somewhere: where it migrated from would undergo an overall lowering of the surface, while where it migrated to would rise.

Support for a northern sea comes from the Acidalia region, where resurfacing appears to have occurred in pulses, finishing somewhere around 3.65 Gy BP. Accumulation of bright material from subsequent impacts and flow-like mantling is consistent with a water/mud northern ocean. If water flows through rock and ends up in a sea, certain water-soluble elements become concentrated in the sea, and gamma ray spectra indicate enhanced levels of potassium and possibly thorium and iron, consistent with such a northern ocean. There may, however, be other reasons for this. While none of this is conclusive, a problem with such data is that we only see the top few centimetres, and better evidence could be buried in dust.

Further possible support comes from the Zhurong rover that landed in Utopia Planitia (Liu, Y., and 11 others. 2022. Zhurong reveals recent aqueous activities in Utopia Planitia, Mars. Science Adv., 8: eabn8555). Duricrusts there form cliffs perched amid loose soil, which requires a substantial amount of water and also avoids the “buried in dust” problem. The authors considered these were formed by regolith undergoing cementation through rising or infiltrating briny groundwater; the salt cements precipitate from the groundwater in a zone where active evaporation and accumulation can occur. Further, it is suggested this happened relatively recently. On the other hand, groundwater seepage might also do it, although the water has to be salty.

All of which is interesting, but the question remains: why was the water liquid? 225 K is about fifty degrees below water’s freezing point. Second, since the sun has been putting out ever more heat, why is the water not flowing now? Or, alternatively, as generally believed, why did it flow for a brief period then stop? My answer, somewhat unsurprisingly since I am a chemist, is that it depends on chemistry. The gases had to be emitted from below the surface, such as from volcanoes or fumaroles. The gases could not have been adsorbed there as the planet accreted, otherwise there would be comparable amounts of neon and nitrogen on the rocky planets, and there are not. That implies the gases were accreted as chemical compounds; neon was not, because it has no chemistry. When the accreted compounds are broken down with water, ammonia forms. Ammonia dissolves very rapidly in water, or ice, and keeps it liquid down to about 195 K, which is well within the proposed temperature range stated above. Ammonia is decomposed slowly by sunlight to form nitrogen, but it is protected when dissolved in water. The one sample of seawater from about 3.2 billion years ago is consistent with Earth having had about 10% of its nitrogen still as ammonia. However, on Mars ammonia would slowly react with the carbon dioxide being formed, and end up as solids buried under the dust.

Does this help a northern sea? If this is correct, there should be substantial deposits of nitrogen-rich solids below the dust. If we went there to dig, we would find out.

Reducing Electricity Usage in Your Refrigerator

When thinking about battling climate change, did you know that a major electricity consumer in your house is your refrigerator? The International Institute of Refrigeration (yes, there are institutes for just about everything) estimates that about 20% of all electricity used is expended on vapour compression refrigeration. The refrigerator works by compressing a vapour, often some sort of freon or, in industry, ammonia, and when the vapour is compressed it gives off heat. When you ship it somewhere else and expand it, that somewhere else gets colder. This is also the principle of the heat pump and various air conditioning units. Compress the gas here, where it gives off heat through a heat exchanger; ship the gas there and expand it, where it cools a heat exchanger. The compression and expansion of gas moves heat from A to B, hence the name heat pump. Also, the refrigerant gases tend to be powerful greenhouse gases. One kilogram of R410a has the same greenhouse impact as about two tonnes of carbon dioxide. Refrigerants leak into the atmosphere from faulty equipment or when equipment is not properly disposed of.
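
To put a number on that last point, here is a minimal sketch using the figure quoted above (~2 tonnes CO2-equivalent per kilogram of R410a). The annual leak rate is an assumption purely for illustration.

```python
# Rough CO2-equivalent of refrigerant leakage, using the figure quoted in the post.

GWP_R410A_T_CO2_PER_KG = 2.0   # tonnes CO2-eq per kg, as quoted above
leak_kg_per_year = 0.25        # assumed annual leak from a domestic heat pump / AC unit

co2_eq_tonnes = leak_kg_per_year * GWP_R410A_T_CO2_PER_KG
print(f"{leak_kg_per_year} kg/yr of R410a ≈ {co2_eq_tonnes:.1f} t CO2-eq/yr")
# For comparison, a typical petrol car emits very roughly 2-4 t CO2 per year,
# so even a modest leak is a non-trivial fraction of that.
```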

It is possible to heat or cool without any gas through the Peltier effect. Basically, when an electric current passes across a junction between two different conductors, heating or cooling is generated, depending on the direction of the current. There are commercial solid-state cooling systems based on this, but they suffer from high cost and poor efficiency, in part because the effect is restricted to the junction itself.

There is an alternative. Some solid-state materials can cool when they are subjected to strain, which can be generated by an external field, electric or magnetic, or simply by pressure. So far most efforts have focused on magnetic fields, and one material, Mn3SnC, apparently gives significant cooling, but the magnetic field has to be greater than 2 tesla. That means expensive and bulky magnets, and additionally the refrigerator, if it used them, would have to be a “no-go” area for credit cards, and possibly for people with a pacemaker. Even aside from directly messing with the pacemaker, losing all those bitcoin just because you wanted a cool beer could lead to medical problems.

However, there has been an advance. Wu et al. (Acta Materialia 237: 118154) have taken Mn3SnC and coated it with a piezoelectric layer of lead zirconate titanate. Don’t ask me how they come up with these things; I have no idea, but this is certainly interesting. They probably do it by looking through the literature to find materials already known to have certain properties. The piezoelectric effect is where you generate an electric voltage by applying pressure. Such effects are reversible, so if you can generate a voltage by applying pressure to something, you can generate pressure by applying a potential difference. Recall that pressure can also generate a cooling effect. Accordingly, by applying an electric field to this coated material, a cooling effect was obtained equivalent to that of a 3 tesla magnetic field. When the electric field is removed, the temperature returns to where it was. How useful will this be? Hard to tell right now. The temperature drop when applying a field of 0.8 kV/cm was slightly under 0.6 of a degree Celsius, which is not a huge change, while the voltage is tolerably high. Interestingly, if you apply a magnetic field you also get a temperature change, but in the opposite direction – instead of cooling, it heats. Why that is remains unclear.

As you might guess, there is still a significant distance to go before we get to a refrigerator. First, you have to get the cooling into some other fluid that can transport it to where you need it. To do that you have to take the heat out of the cooling fluid, but that will heat up your unit, so you need another fluid to take the heat to where it can be dissipated. We have very roughly the same cycle as our present system, except we are not compressing anything, although we do have two fluids. Except I rather think we will be compressing something, because pumping a fluid involves increasing its pressure so that it flows. The alternative is to put the material across the rear wall of the refrigerator, so as to cool the interior and dissipate the heat out the back. The problem now is that the temperature change is rather small for the voltage. This is not so much a problem if fluids transport the cooling, but if the solid does, the solid is continually heated by the environment, so your temperature drop is measured from room temperature. Half a degree is not very helpful, although you could increase the electric field. Unfortunately, to get a big enough temperature change you might be into the spark-jumping region. Lightning in the kitchen! Finally, do you want the back of your refrigerator to be carrying even a kilovolt electric field? My guess is this effect may remain a curiosity for some length of time.
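
A rough feel for that “spark jumping” worry, as a sketch only. The linear scaling of the temperature drop with field is my assumption (not from the paper); the quoted ~0.6 degrees at 0.8 kV/cm is from the post, and the ~30 kV/cm breakdown field of dry air is a standard figure.

```python
# Estimate the field needed for a refrigerator-like 20-degree drop, assuming
# (purely for illustration) that the temperature change scales linearly with field,
# and compare with the dielectric breakdown of air.

DT_MEASURED = 0.6        # degrees C, from the post
FIELD_MEASURED = 0.8     # kV/cm, from the post
AIR_BREAKDOWN = 30.0     # kV/cm, approximate breakdown field of dry air

target_dt = 20.0         # degrees C, a typical fridge-to-kitchen difference
field_needed = FIELD_MEASURED * target_dt / DT_MEASURED   # linear-scaling assumption

print(f"Field needed (linear scaling): {field_needed:.0f} kV/cm")   # ~27 kV/cm
print(f"Breakdown field of air:        {AIR_BREAKDOWN:.0f} kV/cm")
# The two numbers are uncomfortably close, which is the point about lightning
# in the kitchen; in reality the scaling is unlikely to be linear.
```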

Economic Consequences of the Ukraine War

My last post mentioned the USSR collapse. One of the longer-term consequences has been this Ukraine war. Currently there have been problems with shelling of the Zaporizhzhia nuclear plant, and it does appear to have happened, in that our TV news has shown some of the smashed concrete, etc. The net result is that the plant has shut down. Each side accuses the other of doing the shelling, but it seems to me that it had to be the Ukrainians. Russia has troops there, and no military command is going to put up with its own side shelling its own troops. However, that is far from the total bad news. So far, Ukraine has been terribly lucky, but such luck cannot last indefinitely. There are consequences outside the actual war itself. The following is a summary of some of what was listed in the August edition of Chemistry World.

The Donbas is Ukraine’s heavy industry area, and this includes the chemical industry. Recently, Russian air strikes at Sievierodonetsk hit a nitric acid plant, and we saw images of the nitrogen dioxide gas spewing into the atmosphere.

Apparently, in 2017 Ukrainian shelling fell around a chemical plant that contained 7 tonnes of chlorine. Had a shell hit a critical tank, that would have been rather awkward. Right now, in the eastern Donbas there is a pipeline almost 700 km long that carries ammonia. Approximately 1.5 million people are in danger from that pipeline should it burst; exactly how many depends on where it breaks. There are also just under 200,000 t of hazardous waste stored in various places. The question now is, with all this mess generated, in addition to demolished buildings and infrastructure, who will pay what to clean it up? It may or may not be fine for Western countries to use their taxes to produce weapons to give to Ukraine, but cleaning up the mess requires the money to go to Ukraine, not to armament-making corporations at home.

The separation of the Donbas has led to many mines being closed, and these have filled with water. This has allowed mercury and sulphuric acid to be leached out and enter the water table. During 2019 a survey of industrial waste was made, and Ukraine apparently stores over 5.4 billion t of industrial waste, about half of which is in the Donbas. Ukraine has presumably inherited a certain amount of this, together with some of the attitudes, from the old Soviet Union. From experience, environmental protection was not the Soviet Union’s strong point. I recall one very warm sunny morning going for a walk around Tashkent. I turned a corner and saw rather a lot of rusty buildings, and also, unbelievably, a cloud. How could water droplets form in such a warm, dry climate? The answer was fairly clear when I got closer. One slight whiff, and I knew what it was: the building was emitting hydrogen chloride into the atmosphere, and the hydrochloric acid droplets were the reason for the rust.

Meanwhile, some more glum news. We all know that the sanctions in response to the Ukraine war have led to a gas shortage. What most people will not realize is what this is doing to the chemical industry. The problem is that, unlike most other industries apart from the very sophisticated, the chemical industry is extremely entangled and interlinked. A given company may make a very large amount of chemical A, which is then sold as a raw material to a number of other companies, which in turn may do the same thing. Many different factories depend on the same raw chemical, and a chemical product available to the public may have gone through several different steps in several different factories.

An important raw mixture is synthesis gas, a mix of carbon monoxide and hydrogen. The hydrogen may be separated and used in steps to make a variety of chemicals, such as ammonia, the base chemical for just about all nitrogen fertilizer, as well as having a number of other uses. The synthesis gas is made by heating a mixture of methane gas and steam. Further, almost all chemical processing requires heat, and by far the bulk of that heat is produced by burning gas. In Europe, the German government is asking people to cut back on gas usage. Domestic heating can survive simply by lowering the temperature, although how far down one is prepared to go during winter is another question. However, the chemical industry is not so easily handled. Many factories use multiple streams, and it is a simple matter to shut down such a stream, but you cannot easily reduce the amount going through a stream because the reactions are highly dependent on pressure, and the plant is in a delicate balance between the amount processed and the heat generated. A production unit is really only designed to operate one way, and that is continuously, at a specific production rate. If you close it down, it may take a week to get it started again and get the temperature gradients right. One possibility is the complete shutdown of the BASF plant at Ludwigshafen, the biggest chemical complex in the world. The German chemical industry uses about 135 TWh of gas, or about 15% of the country’s total. The price of such gas has risen by up to a factor of eight since Russia was sanctioned, and more price rises are likely. That means companies have to try to pass on costs, but if they face international competition, that may not be possible. This war has consequences far beyond Ukraine.
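
For reference, the chemistry behind that hydrogen chain is standard textbook material; the reactions below are not taken from the Chemistry World article, but they show why gas sits at the root of the fertilizer supply: methane and steam give synthesis gas, the water-gas shift tops up the hydrogen, and that hydrogen then feeds ammonia synthesis.

```latex
\begin{align*}
  \mathrm{CH_4 + H_2O} &\;\rightarrow\; \mathrm{CO + 3\,H_2}   && \text{(steam reforming, strongly endothermic)} \\
  \mathrm{CO + H_2O}   &\;\rightarrow\; \mathrm{CO_2 + H_2}    && \text{(water--gas shift)} \\
  \mathrm{N_2 + 3\,H_2} &\;\rightarrow\; 2\,\mathrm{NH_3}      && \text{(Haber--Bosch ammonia synthesis)}
\end{align*}
```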

Gorbachev: A Man for What Season?

Mikhail Sergeyevich Gorbachev is dead, and the eulogies are flowing thick and fast, but mainly from those outside Russia. He may well hold the record for the most praise for someone who made the worst botch-up of the job he was appointed to do. He is praised in the West for dismantling the Soviet Union, but that was the last thing he was trying to do. He believed that liberalizing the economy would improve the lot of Russians. After he was deposed and the USSR fell, the GDP of Russia almost halved, and all too much of what was left fled the country in the hands of a few oligarchs. You see comments by Bill Browder on how bad Putin is, but Browder made a billion dollars from the ignorance of ordinary Russians and, according to Russia, never paid a cent in tax there. Quite simply, Gorbachev gave the country that hated Russia (the US) what it wanted and completely failed Russia. He had a dream, but he never checked to see whether it was realistic, and he never had a workable plan to implement it.

Gorbachev’s early important jobs related to agriculture. The Soviet Union should have been a major food exporter, but in the 1970s, in part because of poor weather, it had to import grain. Gorbachev was supposed to do something about this, and his response was to blame central decision-making. That may well have been a factor, but it was not the required answer. I visited the USSR in the early 1980s, and on a drive through the countryside of Uzbekistan the problem was easy to see. There were huge areas of the steppe that had been ploughed, and then, nothing. Not even harrowed, to at least make it look as if something was being done. But in small “oases” there was extremely intense productivity. Those in the collective were given very small areas of land for themselves to use, and basically they concentrated on that. This was where almost all the local food came from, a tiny percentage of the total land. Now, if I could see this on a short visit that was really more tourism (I was getting over jet lag and enjoying a weekend before heading to Moscow for what I had actually come for), surely Gorbachev could have found this out had he wanted to.

Part of my life has been spent as a consultant fixing problems, and my first rule of fixing a problem has always been: first examine the problem in intense detail and understand it. To examine it, you actually have to go and look at it yourself. Reading a report is no good unless there is a recommendation to fix it, because if whoever wrote the report does not know how to fix it, perforce (s)he does not understand it. This is probably a major failing of leaders everywhere, but it was much worse in the USSR. If the leader cannot take the time off to look at it, he should delegate to someone who can. At this time fixing agriculture was Gorbachev’s main job; he was the delegated man, and at this point he failed. The USSR kept importing food, which was also a drain on foreign currency. One could argue that the system prevented success, but Gorbachev had Yuri Andropov as a friend, and if he could persuade Andropov, almost certainly what he recommended would be done. The simple answer is that the farmers had to have an incentive to increase the communal yield. That introduces the most significant problem in economics: how to properly reward people. Something needed to be tried, such as giving small groups of farmers (since they had to maintain some part of communism) the right to take shares of the yield from a block of the commune land.

When Gorbachev became effectively the leader of the USSR, he had learned nothing from his agrarian program, and while he recognized industry needed better output and productivity, he still relied on central planning while ignoring implementation. A plan that might work is only of use if there is a working procedure to make it work. For his plans to work, he first had to remove the many layers of bureaucrats between the major decision and its implementation. His first move was to remove the “old guard”. That was a clear mistake as an opening move. They knew someone younger was needed, which was why they put him there; many should have been potential allies. His replacements included people like Yeltsin, who ended up doing everything he could to subvert Gorbachev. Gorbachev was not a good judge of character, and he completely failed the next step: if you want to run a central system the top priority is to find people who get things done. They are seldom the people who play the political game making fine speeches praising Lenin. Gorbachev tried to introduce some sort of limited private enterprise and market economics, but because of the layer of incompetents under him and his demand that central planning be retained, that did not work.

The next major blunder was “glasnost”: giving the people the right to complain. There was too much to complain about. Freedom to criticize is all very well, but if the criticisms are well-grounded and nobody is fixing the problems, society fragments. Gorbachev apparently thought that if the people knew about all the problems they would rally behind his efforts to fix them. That was ridiculous. The noisiest dissidents are the least constructive. He needed fixers in place before allowing people to shout out what needed fixing. After all, a lot was obvious. Again, I recall going into a building in the old USSR that was supposed to be where you bought things. The shelves were empty. Fixing food production and providing a range of consumer goods should have been the first priority. If everyone had more to purchase, and higher incomes from the increased productivity, open criticism would then have been harmless.

Gorbachev made an impression on Western leaders. The nuclear disarmament treaty was an achievement, but when it came to the reunification of Germany and the USSR giving the Warsaw Pact countries their independence from Moscow control, Gorbachev badly needed to ensure that NATO did not march east. Given that he was offering a lot, he needed a signed treaty ensuring the “neutral zone”. He could have obtained that from the Eastern countries, although he probably also needed the US to agree, but for some reason he made no effort at all, which eventually brings us to the current Ukraine conflict. He was willing to permit some of the republics to leave the USSR, but he made no effort to settle the legal conditions of doing so, which later led to the Georgian and now Ukrainian problems. The USSR owned all the factories, etc., in the breakaway republics, but these ended up in the hands of a very few oligarchs. Gorbachev had great ideas, but he was seemingly uninterested in the details of achieving them, or in ensuring his decisions were not undermined by others. The end result was the rule of Yeltsin, the impoverishment of a very large fraction of the population, the transfer of virtually all of Russia’s industrial and resource assets into the hands of a few oligarchs, and the almost halving of the nation’s GDP; all this really followed from Gorbachev’s inept governance. Gorbachev seemed to think people would rally behind him to get the best outcome. That is delusional. People follow incentives, or they fly off in different directions. He failed to provide incentives or to control where things went. He achieved nothing of substance for those who depended on him. He is popular with those who took advantage of him.

Nuclear War is Not Good

Yes, well that is sort of obvious, but how not good? Ukraine has brought the scenario of a nuclear war to the forefront, which raises the question: what would the outcome be? You may have heard estimates from military hawks that, apart from those killed by the blasts and those who were excessively irradiated, all would be well. Americans tend to be more hawkish because the blasts would be “over there”, although if the enemy were Russia, Russia should be able to bring it to America. There is an article in Nature (579, pp 485–487) that paints a worse picture. In the worst case, the authors estimate deaths of up to 5 billion, and none of these are due to the actual blasts or the radiation; they are additional extras. The problem lies in the food supply.

Suppose there was a war between India and Pakistan. Each fires nuclear weapons, first against military targets, then against cities. Tens of millions die in the blasts. However, a band of soot rises into the air, and temperatures drop. Crop yields drop dramatically from California to China, affecting dozens of countries. Because of the limited food yields, more than a billion people would suffer from food shortages. The question then is, how valid are these sorts of predictions?

Nuclear winter was first studied during the Cold War. The first efforts described how such smoke would drop the planet into a deep freeze lasting for months, even in summer. Later studies argued this effect was overdone and the chill would not be so horrific; unfortunately that has encouraged some politicians who are less mathematically inclined and do not realize that “less than a horrific chill” can still be bad.

India and Pakistan each have around 150 nuclear warheads, so a study in the US looked into what would happen if the countries set off 100 Hiroshima-sized bombs. The direct casualties would be about 21 million people. But by looking at how volcanic eruptions cool the planet, and how soot goes into the atmosphere following major forest fires, modelling can predict the outcome. An India-Pakistan war would put 5 million tonnes of soot into the atmosphere, while a US-Russia war would loft 150 million tonnes. The first would lower global temperatures by a little more than 1 degree C, but the second would lower them by 10 degrees C, temperatures not seen since the last Ice Age. One problem that may not be appreciated is that sunlight heats the soot, which heats the adjacent air, causing the soot to rise and therefore persist longer.

The oceans tell a different story. Global cooling would affect the oceans’ acidity, and the pH would soar upwards (making the water more alkaline). The model also suggested that this would make it harder to form aragonite, making life difficult for shellfish. Currently, shellfish are in similar danger, but from too much acidity; depending on aragonite is a bad option! The biggest danger would come to regions that are home to coral reefs. There are some places that cannot win. However, there is worse to come: possibly a “Nuclear Niño”, which is described as a “turbo-charged El Niño”. In the case of a Russia/US war, the trade winds would reverse direction and water would pool in the eastern Pacific Ocean. Droughts and heavy rain would plague different parts of the world for up to seven years.

One unfortunate aspect is that all of this comes from modelling. Immediately, another group, from Los Alamos, carried out different calculations and came to a less disastrous result. The difference depends in part on how they estimate the amount of fuel available to burn, and how that is converted to smoke. Soot comes from partial combustion, and what burns where in a nuclear blast is difficult to calculate.

The effects on food production could be dramatic. Even following the small India-Pakistan war, grain production could drop by approximately 12% and soya bean production by 17%. The worst effects would be in the mid-latitudes, such as the US Midwest and Ukraine. The trade in food would dry up because each country would be struggling to feed itself. A major war would be devastating for other reasons as well. It is all very well to say your region might survive the climate change, and somewhere like Australia might even grow more grain if it gets adequate water, as at present it is the heat that is the biggest problem. But if the war also took out industrial production and oil production and distribution, now what? Tractors are not very helpful if you cannot purchase diesel. A return to the old ways of harvesting? Even if you could find one, how many people know how to use a scythe? How do you plough? Leaving aside the problem of knowing where to find a plough that a horse could pull, and of how to set it up, where do you find the horses? It really would not be easy.

Burying Carbon Dioxide, or Burying Cash?

In the last post, I expressed my doubt about the supply of metals for electric batteries. There is an alternative to giving up things that produce CO2, and that is to trap and sequester the CO2. The idea is that power station flue gases have the CO2 removed and pumped underground. That raises the question, how realistic is this? Chemistry World has an article that, in my mind, casts doubt on whether this can work. First, the size of the problem. One company aims to install 70 such plants, each capable of sequestering 1 million t of CO2. If these are actually realized, we almost reach 0.2% of what is required. Oops. Basically, we need to remove at least 1 billion t/a just to stand still. This problem is large. There is also the problem of how we do it.

The simplest way is to pass the flue gases through amine solvents, with monoethanolamine the most common absorbent. Leaving aside the problem of getting enough amine, which requires a major expansion of the chemical manufacturing industry, what happens is the amine absorbs CO2 to make the amine carbonate, and the CO2 is recovered by heating the carbonate, which regenerates the amine. However, the regeneration will never be perfect and there are losses. Leaving aside finding the raw materials, actually synthesizing the amine takes about 0.8 MWh of energy, and the inevitable losses mean we need up to 240 MWh every year to run a million tonne plant. We then need heat to decompose the amine carbonate, and that requires about 1 MWh per tonne of CO2 absorbed. Finally, we need a little less than 0.12 MWh per tonne of CO2 to compress it, transport it and inject it into the ground. If we wanted to inject 1 billion t of CO2, we would need to generate something like 840 TWh of electricity. That is a lot of electricity.
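
As a rough check of the scale, here is a sketch using the per-tonne figures quoted above. How the regeneration heat is supplied, and what fraction comes from waste heat rather than generated power, are assumptions; the point is only the order of magnitude.

```python
# Order-of-magnitude energy demand for capturing a billion tonnes of CO2 per year,
# using the per-tonne figures quoted in the post.

tonnes_co2 = 1.0e9            # 1 billion tonnes per year, the "stand still" figure
regen_heat_mwh_per_t = 1.0    # heat to decompose the amine carbonate (from the post)
compress_mwh_per_t = 0.12     # compression, transport, injection (from the post)

total_mwh = tonnes_co2 * (regen_heat_mwh_per_t + compress_mwh_per_t)
print(f"Energy needed: {total_mwh / 1e6:.0f} TWh per year")   # ~1120 TWh
# Same order as the ~840 TWh quoted; the exact figure depends on how much of the
# regeneration heat can come from waste heat rather than generated electricity.
```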

We can do a little better with things called metal organic frameworks (MOFs). These can be made with a very high surface area to absorb CO2, and since they do not form strong chemical bonds, the CO2 can be recovered at temperatures in the vicinity of 80–100 degrees C, which opens the possibility of using waste heat from power stations. That lowers the energy cost quite a bit. Without the waste heat the energy requirement is still significant, about half that of the amines. Then comes the sting: the waste-heat approach still leaves behind about 60% of what was absorbed, so it is not clear the waste heat has saved much. The addition of an extra step is also very expensive.

The CO2 content of effluent gases is between 4 and 15%; for ordinary air it is 0.04%, which makes it very much more difficult to capture. One proposal is to capture CO2 by bubbling air through a solution of potassium hydroxide, then evaporating off the water and heating the potassium carbonate to its decomposition temperature, which happens to be about 1200 degrees C. One might have thought calcium oxide would be easier, since its carbonate pyrolyses at about 600 degrees C, but what do I know? This pyrolysis takes about 2.4 MWh per tonne of CO2 and, if implemented, this air-capture pyrolysis route would require about 1.53 TWh of electricity per year to sequester 1 million t of CO2.

When you need terawatt hours of electricity to run a plant capable of sequestering one million tonnes of CO2, and you need to sequester a billion tonnes, it becomes clear that this is going to take an awful lot of energy. That costs a lot of money. In the UK, electricity costs between £35 and £65 per MWh, and we have been talking in terms of a million times that per plant. Who pays? Note this scheme has NO income stream; it sells nothing, so we have to assume it will be charged to the taxpayer. Lucky taxpayer!
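
To put a pound figure on the electricity alone, here is a sketch using the UK price range quoted above and the roughly 1 TWh-per-million-tonne scale from the capture figures. Capital cost, amine make-up and everything else are ignored, so this is a lower bound under those assumptions.

```python
# Rough electricity cost for sequestration at the scales discussed in the post.

price_low, price_high = 35.0, 65.0       # GBP per MWh, from the post
mwh_per_plant = 1.0e6                    # ~1 TWh = a million MWh per 1 Mt plant
plants_for_a_billion_tonnes = 1000       # 1 Gt / 1 Mt per plant

cost_per_plant = (price_low * mwh_per_plant, price_high * mwh_per_plant)
total = tuple(c * plants_for_a_billion_tonnes / 1e9 for c in cost_per_plant)

print(f"Electricity per plant: £{cost_per_plant[0]/1e6:.0f}M - £{cost_per_plant[1]/1e6:.0f}M per year")
print(f"For a billion tonnes:  £{total[0]:.0f}bn - £{total[1]:.0f}bn per year")
# Roughly £35-65 million per plant, or tens of billions of pounds a year for the
# full billion tonnes, with no product to sell against it.
```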

One small-scale effort in Iceland offers a suggested route. It is not clear how they capture the CO2, but they then dissolve it in water and inject that into basalt, where the carbonic acid reacts with the olivine-type structures to make carbonates, in which form it is fixed indefinitely. That suggests that, provided the concentration of CO2 is high enough, using pressure to dissolve it in water might be sufficient. That would dramatically lower the costs. Of course, an alternative is to crush the basalt and spread it on farmland instead of lime. My preferred option for removing CO2 from the air is to grow plants. They work for free at these low concentrations. Further, if we select seaweed, we get the added benefit of improving the ecology for marine life. But that requires us to do something with the plants, or the seaweed, which means more thinking and research. The benefit, though, is that such a scheme could at least earn revenue. The alternatives are to bankrupt the world or find some other way of solving this problem.

A Plan to Counter Global Warming Must be Possible to Implement

Politicians seem to think that once there is a solution to the problem in theory, the problem is solved, so they stop thinking about it. Let us look at a reality. We know we have a problem with global warming and we have to stop burning fossil fuels. The transport sector is a big problem, but electric vehicles will do the trick; in theory that might be true, but as I have pointed out in previous posts there is this troublesome matter of raw materials. Now the International Energy Agency has brought a little unpleasantness to the table. It has reported that global battery and minerals supply chains need to expand ten-fold to meet the critical needs of 2030 if the plan is to at least stay on schedule. If we take the average size of a major producer as a “standard mine”, then according to the IEA we need 50 more such lithium mines, 60 more nickel mines, and 17 more cobalt mines operating fully by 2030. Generally speaking, a new mine needs about ten years between starting a feasibility study and serious production. See a problem here? Because of the costs and exposure, you need feasibility studies to ensure that there is sufficient ore where you can’t see it, that there is an economic way of processing the ore, and that there is a clear plan for what to do with the minerals you do not want, since materials like arsenates or other undesirables will also be present. You also have to build new roads, pipe in water, provide electricity, and do a number of other things to make the mine work that are not directly part of the mine. This does not mean you cannot mine, but it does mean it won’t be quite as easy as some might have you think. We also now want our mines not to be environmental disasters. The IEA report notes that ten-year lead time, and then adds several more years to get production up to capacity.

The environmental issues are not irrelevant. The major deposits of lithium tend to be around the Andes, typically in rather dry areas. The lithium is obtained by pumping down water to dissolve the salts, bringing the brine up, and evaporating it. Once most of the lithium is obtained, something has to be done with the salty residue, and of course the process needs a lot of water. The very limited water in some of these locations is badly needed by the local population and their farms, and the salt residues would poison agriculture.

If we consider nickel, one possible method to get more from poorer ores is high-pressure acid leaching. The process uses acid at high temperature and pressure and ends up with nickel at a grade suitable for batteries. But nickel often occurs as a sulphide, which means that as a byproduct you get hydrogen sulphide, plus a number of other effluents that have to be treated. Additionally, the process requires a lot of heat, which means burning coal or oil. The alternative source to the sulphide deposits, as advocated by the IEA, is laterite, a clayish material that also contains a lot of iron and aluminium oxides. These metals could also be obtained, but at a cost. The estimate is that getting nickel by this process roughly doubles its cost.

The reason can be seen from the nature of the laterite (https://researchrepository.murdoch.edu.au/id/eprint/4340/1/nickel_laterite_processing.pdf), which is usually a weathered rock. At the top you have well-weathered rock, more a clay, which is red limonite. The iron oxide content (the cause of the red colour) is over 50%, while the nickel content is usually less than 0.8% and the cobalt less than 0.1%. Below that is yellow limonite, where the nickel and cobalt oxide concentrations roughly double. Below that we get saprolite/serpentine/garnierite (garnierite being like serpentine but with enhanced nickel concentration). These can have up to 3% nickel, mainly due to the garnierite, but the serpentine family are silicates, in which the ferrous iron, such as in olivine, has been removed. Leaching a serpentine is very difficult simply because silicates are very resistant. Try boiling your average piece of basalt in acid. There are other approaches and, for those interested, the link above shows them. However, the main point is that much of the material does not contain nickel. Do you simply dump it, or produce iron at a very much higher cost than usual?
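
To see what those grades mean in practice, here is a sketch of how much laterite has to be dug and leached per tonne of contained nickel, using the percentages quoted above. Recovery is assumed to be 100%, which flatters the real numbers.

```python
# Tonnes of ore per tonne of contained nickel at the grades quoted above.

grades = {
    "red limonite (<0.8% Ni)": 0.008,
    "yellow limonite (~1.6% Ni)": 0.016,   # roughly double, as stated above
    "saprolite/garnierite (up to 3% Ni)": 0.03,
}

for name, grade in grades.items():
    tonnes_ore = 1.0 / grade   # ideal case: every bit of nickel recovered
    print(f"{name}: ~{tonnes_ore:.0f} t of ore per t of Ni")
# Roughly 125, 62 and 33 tonnes of rock per tonne of nickel respectively,
# nearly all of which ends up as residue that has to be dealt with.
```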

However, the major problem with each of these routes is that they are all rather energy intensive, and the whole point of the exercise is to reduce greenhouse emissions. The acid leach is very corrosive, hence maintenance is expensive, while the effluents are troublesome to dispose of. Disposal of the magnesium sulphate at sea is harmless, but the other materials with it may not be. Further, if the ore is somewhere like the interior of Australia, even finding water will be difficult.

Of course all these negatives can be overcome, with effort, if we are prepared to pay the price. Now, look around and ask yourself how much effort is going into establishing all those required mines. What are the governments doing? The short answer, as far as I can tell, is not much. They leave it to private industry. But private industry will be concerned that its balance sheets can only stand so much speculative expansion. My guess is that the 2030 objectives will not be fulfilled.

Space – To the Final Frontier, or Not

In a recent publication in Nature Astronomy (https://www.nature.com/articles/s41550-022-01718-8) Byers and colleagues point out an obvious hazard that seems to be increasing in frequency: all those big rockets tend to eventually come down, somewhere, and the return is generally uncontrolled. Modest-sized bits of debris meet a fiery end, burning up in the atmosphere, but larger pieces hit the surface with so much kinetic energy that comparing them to an oversized bullet or cannon-ball makes the latter seem relatively harmless. In May 2020, wreckage from the 18 tonne core of a Chinese Long March 5B rocket hit two villages in the Ivory Coast, damaging buildings. In July 2022, suspected wreckage from a SpaceX Crew-1 capsule landed on farmland in Australia, and another Long March 5B landed just south of the Philippines. In 1979, NASA’s Skylab fell back to Earth, scattering debris across Western Australia. So far, nobody has been injured, but it is something of a matter of luck.
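
To give a feel for that comparison, here is a sketch with purely illustrative numbers; the masses and speeds are my assumptions, not figures from the paper.

```python
# Kinetic energy comparison between a surviving debris chunk and a cannon-ball.

def kinetic_energy_mj(mass_kg, speed_m_s):
    """Kinetic energy in megajoules: E = 1/2 m v^2."""
    return 0.5 * mass_kg * speed_m_s**2 / 1e6

debris = kinetic_energy_mj(500.0, 150.0)     # assumed: 500 kg chunk at ~150 m/s terminal speed
cannonball = kinetic_energy_mj(6.0, 300.0)   # assumed: 6 kg ball at ~300 m/s muzzle speed

print(f"Debris chunk: ~{debris:.1f} MJ")     # ~5.6 MJ
print(f"Cannon-ball:  ~{cannonball:.2f} MJ") # ~0.27 MJ
# Even falling at a modest terminal velocity, a surviving rocket fragment carries
# an order of magnitude more energy than an old-fashioned cannon-ball.
```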

According to Physics World, the US has an Orbital Debris Mitigation Standard Practices stipulation that all launches should have a risk of casualty from uncontrolled re-entry of less than one in 10,000, but the USAF, and even NASA, have flouted this rule on numerous occasions. Many countries may have no regulations at all. As far as I am aware my own country (New Zealand) has none, yet New Zealand launches space vehicles. The first stage always falls back into the Pacific, which is a large expanse of water, but what happens after that is less clear.

In the past thirty years, more than 1500 vehicles have fallen out of orbit, and about three quarters of these re-entries have been uncontrolled. According to Byers and colleagues, there was about a 14% chance that someone could have been killed.
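
A rough way to see where a figure of that size can come from is sketched below. The key assumption, which is mine and not from the paper, is that each uncontrolled re-entry independently carries roughly the 1-in-10,000 casualty risk mentioned in the US rule above; real per-event risks vary with the size of the object and the orbit.

```python
# Chance of at least one casualty over ~30 years of uncontrolled re-entries,
# assuming each event independently carries a 1-in-10,000 casualty risk.

n_reentries = 1500
uncontrolled = int(0.75 * n_reentries)       # about three quarters, as stated above
p_per_event = 1.0 / 10_000                   # assumed per-event risk

p_at_least_one = 1.0 - (1.0 - p_per_event) ** uncontrolled
print(f"Chance of at least one casualty: {p_at_least_one:.1%}")   # ~10.6%
# Same ballpark as the quoted 14%; the difference presumably comes from heavier
# objects whose individual risk exceeds the 1-in-10,000 threshold.
```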

So what can be done? The simplest option is to provide each rocket with extra fuel. When it is time to end its orbit, the descent can then be controlled so that it lands at the point in the Pacific farthest from land. So far, this has not been done because of the extra cost. A further technique would be to de-orbit rocket bodies immediately following satellite deployment. That still requires additional fuel. In principle, with proper design, the rocket bodies could be recovered and reused. Rather perversely, it appears the greatest risk is to countries in the Southern Hemisphere. The safest places are those at latitudes greater than the inclination of the orbits being launched.

Meanwhile, never mind the risk to those left behind; you want to go into space, right? Well, you may have heard of bone density loss. This effect has finally had numbers put on it (https://www.nature.com/articles/s41598-022-13461-1). Basically, after six months in space, the loss of bone density corresponded to 20 years of ongoing osteoporosis, particularly in (on Earth) load-bearing bones such as the tibia. Worse, these only partially recovered, even after one year back on Earth, and the lasting effect was equivalent to ten years of aging. The effect, of course, is due to microgravity, which is why, in my SF novels, I have always insisted on ships having a rotating ring to create a centrifugal “artificial gravity”. On the other hand, the effect can vary between people. Apparently the worst cases can hardly walk on return for some time, while others apparently continue on more or less as usual and ride bikes to work rather than drive cars. And as if bone loss were not bad enough, there is a further adverse possibility: accelerated neurodegeneration (https://jamanetwork.com/journals/jamaneurology/article-abstract/2784623). By tracking the concentration of brain-specific proteins before and after a space mission, it was concluded that long-term spaceflight presents a slight but lasting threat to neurological health. However, this study concluded three weeks after landing, so it is unclear whether longer-term repair is possible. Again, it is assumed that weightlessness is responsible. On top of that, apparently there are long-lasting changes in the brain’s white matter volume and the shape of the pituitary gland. Apparently more than half of astronauts developed far-sightedness and mild headaches. Seemingly, this could be because in microgravity the blood no longer concentrates in your legs.

Rotation, Rotation

You have probably heard of dark matter. It is the stuff that is supposed to be the predominant matter of the Universe, but nobody has ever managed to find any, which is a little embarrassing when there is supposed to be something like six times more dark matter in the Universe than ordinary matter. Even more embarrassing is the fact that nobody has any real idea what it could be. Every time someone postulates what it is and works out a way to detect it, they find absolutely nothing. On the other hand, there may be a simpler reason for this. Just maybe they postulated what they thought they could find, as opposed to what it is; in other words, it was a proposal to get more funds, with uncovering the nature of the Universe as a hoped-for by-product.

The first reason why there might be dark matter came from the rotation of galaxies. Newtonian mechanics makes some specific predictions. Very specifically, the periodic time for an object orbiting the centre of mass at a distance r varies as r^1.5. That means that for two orbiting objects, say Earth and Mars, where Mars is about 1.52 times more distant, the Martian year is about 1.88 Earth years. The relationship works very well in our solar system, and it was from the unexpected effects on Uranus that Neptune was predicted, and found to be in the expected place. However, when we take this up to the galactic level, things come unstuck. As we move out from the centre, stars move faster than predicted from the speeds of those nearer the centre. This is quite unambiguous, and has been found in many galaxies. The conventional explanation is that enormous quantities of cold dark matter provide the additional gravitational binding.
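
Two quick checks of those Newtonian expectations, as a sketch. The first applies the r^1.5 rule to Mars; the second shows the Keplerian v ∝ r^-0.5 falloff in orbital speed around a central mass, which is exactly what galaxy rotation curves refuse to do.

```python
# (1) Kepler's third law, T proportional to r^1.5, applied to Mars.
r_mars = 1.52                      # Mars' orbital radius in AU (Earth = 1)
print(f"Predicted Martian year: {r_mars**1.5:.2f} Earth years")   # ~1.87

# (2) Orbital speed around a central mass, relative to a star at radius r0.
for r in (1, 2, 4, 8):
    print(f"r = {r} r0:  v = {r**-0.5:.2f} v0")
# Newton predicts 1.00, 0.71, 0.50, 0.35, ...; observed galactic rotation curves
# stay roughly flat instead, which is the discrepancy dark matter (or MOND)
# is invoked to explain.
```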

However, that explanation also has problems. A study of 175 galaxies showed that the radial acceleration at different distances correlated with the amount of visible matter attracting it, but the relationship does not match Newtonian dynamics. If the discrepancies are due to dark matter, one might expect the dark matter to be present in different amounts in different galaxies, and in different parts of the same galaxy. Any such relationship should then have a lot of scatter, but it does not. Of course, that might be a result of dark matter being attracted to ordinary matter.

There is an alternative explanation called MOND, which stands for Modified Newtonian Dynamics, and which proposes that at large distances and small accelerations, gravity decays more slowly than the inverse square law. The correlation of the radial acceleration with the amount of visible matter would be required by something like MOND, so that is a big plus for it, although the only reason it was postulated in this form was to account for what we see. However, a further study has shown there is no simple scale factor. What this means is that if MOND is correct, the effects on different galaxies should depend essentially on the mass of visible matter, but they do not. MOND can explain any given galaxy, but the results don’t translate to other galaxies in any simple way. This should rule out MOND without amending the underlying dynamics, in other words, altering Newtonian laws of motion as well as gravity. This may be no problem for dark matter, as different distributions would give different effects. But wait: in the previous paragraph it was claimed there was no scatter.
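
For what it is worth, the standard deep-MOND limit (textbook form, not specific to the studies discussed here) makes the connection between visible mass and flat rotation curves explicit:

```latex
% Deep-MOND limit: for accelerations well below the MOND scale
% a_0 \approx 1.2\times10^{-10}\,\mathrm{m\,s^{-2}}, the true acceleration g
% is related to the Newtonian value g_N by
\[
  g \;\approx\; \sqrt{g_N\,a_0}, \qquad g_N = \frac{GM}{r^2}.
\]
% For a circular orbit g = v^2/r, so
\[
  \frac{v^2}{r} = \frac{\sqrt{G M a_0}}{r}
  \quad\Longrightarrow\quad
  v^4 = G M a_0,
\]
% i.e. the rotation speed becomes independent of radius (a flat curve) and is set
% only by the visible mass M, which is why a tight correlation with visible matter
% is natural in MOND.
```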

The net result: there are two sides to this, one saying MOND is ruled out and the other saying it isn’t, and the problem is that observational uncertainties could support either conclusion. The two sides seem either to be using different data or to be interpreting the same data differently. I am no wiser.

Astronomers have also observed one of the most distant galaxies ever, MACS1149-JD1, which is over ten billion light years away, and it too is rotating, although its rotational velocity is much lower than that of galaxies we see much closer, which are nowhere near as old. So why is it slower? One possible reason is that it has much less mass, hence weaker gravity.

However, this galaxy is of significant interest because its age makes it one of the earliest galaxies to form. It also has stars in it estimated to be 300 million years old, which puts their formation at just 270 million years after the Big Bang. The problem with that is that this falls in the dark period, when matter as we know it had presumably not formed, so how did a collection of stars get started? For gravity to cause a star to accrete, the collapsing material has to give off radiation, but supposedly no radiation was given off then. Again, something seems to be wrong. That most of the stars are just this age makes it appear that the galaxy formed at about the same time as the stars; to put it another way, something made a whole lot of stars form at the same time in places where the net result was a galaxy. How did that happen? And where did the angular momentum come from? Then again, did it happen at all? This is at the limit of observational techniques, so have we drawn an invalid conclusion from difficult-to-interpret data? Again, I have no idea, but I mention this to show there is still a lot to learn about how things started.

More music

I have now begun putting some of my music compositions on my website, and currently there are five short pieces and my third piano sonata available for listening, or, if you play piano, a link to where the scores can be purchased. If nothing else, they are different, and for skilled pianists there is the opportunity to give a world premiere performance of something. The computer-generated sound is not without problems. For some reason, while it does well with accel., it hopelessly overdoes rit. and rall. In the first movement of the sonata, it badly underdoes the sforzandos, and it ignores repeats, which is not necessarily bad. However, it gives some idea of how I amuse myself. Listen at

https://ianmiller.co.nz/music