A Solution to the Element Shortage for Lithium-Ion Batteries?

Global warming, together with a possible energy crisis, poses problems for transport. One of the proposed solutions is the battery-powered car, which raises three potential problems of its own. One is how to generate sufficient electricity to charge the batteries, but I shall leave that for the moment. The other two relate to chemistry. A battery (or fuel cell) has two basic electrodes: a cathode, which is positive, and an anode, which is negative. The difference in potential between them is the voltage, usually quoted as the voltage at half discharge. The potential arises from the ability to supply electrons at the anode while accepting them at the cathode; at each electrode a chemical oxidation/reduction reaction is going on. The anode is most easily provided by oxidising a metal. Thus if we oxidise lithium we get Li → Li+ + e-. The electron disappears off into the circuit. We need something to accept an electron at the cathode, and that is Co3+, which gets reduced to the more stable Co2+. (Everything is a bit more complicated than this – I am just trying to highlight the problem.) Superficially the cobalt could be replaced by a variety of elements, but the cobalt is embedded in a matrix, and most other ions undergo a substantial volume change on reduction; embedded in a cathode matrix, the resulting stresses make it fall to bits. Cobalt seems to give the least stress, hence the longest battery life. So we have a problem of sorts: not enough easily accessible lithium, and not enough cobalt. There are also problems that can reduce the voltage or current, including side reactions and polarization.
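As a back-of-the-envelope illustration, the open-circuit voltage of an idealised cell is just the difference between the reduction potentials of the two half-reactions. A minimal sketch in Python, using textbook aqueous standard potentials (roughly −3.04 V for Li+/Li and +1.82 V for Co3+/Co2+) – these are illustrative numbers only, not the potentials inside a real lithium-ion cell, where the ions sit in a solid oxide matrix and the cell delivers nearer 3.7 V:

```python
# Idealised cell voltage from standard reduction potentials:
# E_cell = E_cathode - E_anode (both written as reduction potentials).
# The values are textbook aqueous numbers, NOT real lithium-ion cell
# potentials; they merely show where the ~4 V scale comes from.

E_RED = {
    "Li+/Li": -3.04,     # volts, standard aqueous reduction potential
    "Co3+/Co2+": 1.82,   # volts, standard aqueous reduction potential
}

def cell_voltage(cathode_couple: str, anode_couple: str) -> float:
    """Open-circuit voltage of an idealised cell from its two couples."""
    return E_RED[cathode_couple] - E_RED[anode_couple]

v = cell_voltage("Co3+/Co2+", "Li+/Li")
print(f"idealised Li/Co cell: {v:.2f} V")  # ~4.86 V on these numbers
```

The point of the sketch is simply that lithium's very negative reduction potential is what makes the cell voltage so large; the matrix chemistry then eats some of it.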

In a fuel cell we can partly get around that. We need something at the cathode that will convert an input gas into an anion by accepting an electron; thus oxygen and water form hydroxide. At the anode we need something that “burns”, i.e. goes to a higher valence state and gives up an electron. In my ebook “Red Gold”, a science fiction story about the first attempt at permanent settlement of Mars, a portable power source was necessary. With no hidden oil fields on Mars, and no oxygen in the air to burn them with if there were, I resorted to the fuel cell. The chemistry I chose for Mars was to oxidize aluminium, each atom of which gives up three electrons, and to reduce chlorine. The reason for these was that the settlement needed to make things from Martian resources, and the most available resource was the regolith, which is powdered rock. This was torn apart by nuclear fusion power, and the elements separated by magnetohydrodynamics, similar to what happens in a mass spectrometer. The net result is piles of elements. I chose aluminium because its three electrons per atom give more capacity, and chlorine because it is a liquid at Martian temperatures, so no pressure vessel was required. Also, while oxygen might produce a slightly higher voltage, oxygen forms a protective oxide coating on aluminium, and that stops the reaction.

An aluminium battery would have aluminium at the anode, and something in the electrolyte that can deposit more aluminium onto it. Thus during charging, with chlorine as the oxidiser, you might get

4(Al2Cl7)- + 3e- → Al + 7(AlCl4)-

which deposits aluminium on the anode. During discharge the opposite happens and aluminium is burned off. Notice that the chlorine is tied up in chemical complexes: the battery contains no free chlorine. The electrolyte here is based on aluminium chloride (Al2Cl6). For the fuel cell, we would be converting the gas to a complex at the cathode. That is not very practical on Earth, but the enclosed battery would be fine.

The main advantage of aluminium is that it removes the supply problem: aluminium is extremely common on Earth, as the continents are essentially made of aluminosilicates. The cathode can be simple carbon. A battery with this chemistry was demonstrated in 2015 (Nature 520: 325 – 328) using graphite cathodes. It was claimed to manage 7,500 cycles without capacity decay, which looks good, but so far nobody seems to be taking it up.
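The “more power capacity” claim for aluminium can be put on a quantitative footing with Faraday’s law: the theoretical specific capacity of a metal anode is nF/M, where n is the electrons per atom, F the Faraday constant and M the molar mass. A short sketch (theoretical anode-metal-only figures; real cells deliver much less):

```python
# Theoretical specific capacity of a metal anode: n * F / M,
# converted from coulombs per gram to mAh per gram (divide by 3.6).
F = 96485.0  # Faraday constant, C/mol

def specific_capacity_mAh_per_g(n_electrons: int, molar_mass_g: float) -> float:
    """Theoretical gravimetric capacity of a metal anode in mAh/g."""
    return n_electrons * F / molar_mass_g / 3.6

al = specific_capacity_mAh_per_g(3, 26.98)  # aluminium: 3 e- per atom
li = specific_capacity_mAh_per_g(1, 6.94)   # lithium:   1 e- per atom
print(f"Al: {al:.0f} mAh/g, Li: {li:.0f} mAh/g")  # ~2980 vs ~3860
```

Per gram, lithium still wins; aluminium’s three electrons pay off volumetrically, since aluminium (2.70 g/cm3) is about five times denser than lithium (0.53 g/cm3), on top of the abundance advantage discussed above.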

Now for an oddity. During discharge we need to supply (AlCl4)- to the anode, as that effectively supplies the chlorine. Rather than have complicated chemistry at the cathode we can have an excess of (AlCl4)- from the start and, during charging, store it in the cathode structure; during discharge it is released. So we need something to store it in. The graphite used in lithium-ion batteries comes to mind, but here is the oddity: you reportedly get twice the specific capacity, twice the cell efficiency and a 25% increase in voltage by using carbon derived from human hair! Next time you go to the hairdresser, note that in the long term those clippings might be valuable. Of course, before we get too excited, such batteries still need to be built and tested, because so far we have no idea how hair-derived carbon stands up to repeated cycling.

What we do not know about such batteries is how much dead weight has to be carried around, and how small they can be made for a given charge. For cars, the critical questions are how far the vehicle will go on one charge, how long it takes to recharge safely, how much of the vehicle’s volume the battery takes, and whether it is flammable. The advantage of the aluminium chloride system described above is that there are probably no side reactions, and a fire is somewhat unlikely. The materials are cheap. So why hasn’t more been done on this system? My guess is that the current manufacturers know that lithium is working, so why change? The fact that eventually they will have to does not bother them. The accountants in charge consider anything beyond the next quarter to be long-term; next year can look after itself. Except we know that when the problem strikes, it takes years to solve. We should get prepared, but our economic system does not encourage that.


This and That from the Scientific World

One of the consequences of writing blogs like this is that one tends to be on the lookout for things to write about. This ends up with a collection of curiosities, some of which get used, some of which eventually get thrown away, and a few I don’t know what to do with. They tend to be too short to carry a blog post, but too interesting, at least to me, to ignore. So here is a “Miscellaneous” post.

COP 27.

They agreed that some will pay the poorer nations for damage so far, although we have yet to see the money. There was NO promise by anyone to reduce emissions and, from my point of view even worse, no promise to investigate which technologies are worth going after. Finally, while the conference was on there were a number of electric buses offering free rides; at the end of the conference these buses simply disappeared. Usual service (or lack thereof) resumed.


You may think that humans alone fight by throwing things at each other, but you would be wrong. A film has been recorded (https://doi.org/10.1038/d41586-022-03592-w) of gloomy octopuses throwing things at each other, including clam shells. Octopuses are generally solitary animals, but in Jervis Bay, Australia, the gloomy octopus lives at very high densities, and it appears they irritate each other. When an object was thrown at another octopus, the throw was far stronger than when the thrower was just clearing stuff out of the way, and it tended to come from specific tentacles, the throwing ones. Further, octopuses on the receiving end ducked! A particularly interesting tactic was to throw silt over the other octopus. I have no idea what the outcome of these encounters was.


The star HD 23472 has a mass of about 0.67 times that of our sun and a surface temperature of about 4,800 K. Accordingly, it is a mid-range K-type star, and it has at least five planets. Their properties include the semi-major axis a (the distance from the star if the orbit is circular), the eccentricity e, the mass M relative to Earth, the density ρ and the inclination i. The following table gives some of the figures, taken from the NASA exoplanet archive.

Planet   a (AU)   e      M (Earth)   ρ (g/cm3)   i (°)
b        0.116    0.07   8.32        6.15        88.9
c        0.165    0.06   3.41        3.10        89.1
d        0.043    0.07   0.55        7.50        88.0
e        0.068    0.07   0.72        7.50        88.6
f        0.091    0.07   0.77        3.0         88.1

The question then is, what to make of all that? The first thing to notice is that all the planets are much closer to the star than Earth is to the sun. What about the inclinations? In the exoplanet archive the inclination is the angle between the orbital plane and the plane of the sky, so values approaching 90 degrees simply mean we see the orbits nearly edge-on – as expected, since these planets were found by the transit method. What the near-identical inclinations do tell us is that the orbits are close to coplanar, as they should be if the planets formed in a common accretion disk, and being so close together they will probably be in some gravitational resonance with each other. What we see are two super-Earths (b and c), two very dense small planets closest to the star (d and e) – technically denser than Mercury in our system – and two planets (c and f) with densities a little lower than that of Mars.

The innermost edge of the star’s habitable zone is calculated to be at 0.364 AU, the Earth-equivalent distance (where a planet gets the same radiation as Earth) at 0.5 AU, and the outer boundary at 0.767 AU. All of these planets lie well inside the inner edge of the habitable zone, i.e. they are too hot. The authors who characterised them (Barros, S. C. C. et al., Astron. Astrophys. 665, A154 (2022)) considered the two inner planets to be Mercury equivalents, presumably based on their densities, which approximate that of pure iron. My guess is the densities are over-estimated, as the scope for error is fairly large, but they certainly look like Mercury equivalents somewhat bigger than our Mercury.
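The “too hot” conclusion can be checked with nothing more than the inverse-square law. If the Earth-equivalent distance is 0.5 AU, the star’s luminosity is about 0.25 times the sun’s (flux scales as L/a²), so each planet’s insolation relative to Earth is simply 0.25/a². A quick sketch using the semi-major axes from the table:

```python
# Insolation relative to Earth via the inverse-square law.
# An Earth-equivalent distance of 0.5 AU implies L ≈ 0.25 L_sun,
# since flux ∝ L / a^2 and Earth sits at 1 AU around 1 L_sun.
L_STAR = 0.25  # stellar luminosity, solar units

def insolation(a_au: float) -> float:
    """Stellar flux received at distance a_au, in units of Earth's."""
    return L_STAR / a_au**2

planets = {"b": 0.116, "c": 0.165, "d": 0.043, "e": 0.068, "f": 0.091}
for name, a in planets.items():
    print(f"{name}: {insolation(a):6.1f} x Earth")
# Even the outermost planet, c, receives about 9 times Earth's
# insolation; the innermost, d, well over 100 times.
```

So every planet in the table is roasted by at least nine Earth-insolations, which is why none of them is a candidate for life as we know it.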

Laughing Gas on Exoplanets

One of the targets of the search for exoplanets is to find planets that might carry life. The question is, how can you tell? At present, all we can do is examine the spectra of planetary atmospheres, and this is not without its difficulties. The most obvious problem is signal intensity. What we look for are specific signals in the infrared spectrum, arising from the vibrations of molecules. This can be done from absorptions as the planet transits across the star’s face or, better (because the stellar intensity is less of a problem), from starlight that passes through the planet’s atmosphere.

The next problem is to decide what could signify life. Something like carbon dioxide or methane is at best ambiguous. Carbon dioxide makes up a huge atmosphere on Venus, but we do not expect life there. Methane comes from anaerobic digestion (life) or geological activity (no life). So the proposal is to look for laughing gas, better known as nitrous oxide. Nitrous oxide is made by some life forms, and oddly enough it is a greenhouse gas that is becoming more of a problem here from the overuse of agricultural fertilizer, as soil microbes produce it while processing nitrogen fertilizers. If nothing else, we might find planets with civilizations fighting climate change!

COP 27 – the 27th Cop-out??

Currently, a number of parties have descended on Sharm el-Sheikh for COP 27, the 27th “Conference of the Parties” to deal with climate change. Everybody by now should be aware that a major contributor to climate change is the increased level of carbon dioxide in the atmosphere, and that we have to reduce emissions. In the previous 26 conferences various pledges were made to reduce such emissions, but what has happened? According to Nature, CO2 emissions are set to reach a record high of 37.5 billion tonnes in 2022. So much for a controlled reduction of emissions. In my opinion, the biggest effect of such conferences is the increased emissions from getting all the participants to them. In this context there are apparently over 600 fossil fuel lobbyists at this conference. So the question is, why has so little been achieved?

My answer is that the politicians took the easy way out: to be seen to be doing something, they implemented carbon trading schemes. This allows money to flow, and the idea was that economics would make people transition away from fossil-based energy by raising its price. To make this look less like a tax, they allowed the trading of credits, and politicians issued credits to “deserving causes”. There were two reasons why this had to fail: first, the politicians could sabotage it by issuing credits to those they favoured; second, there are no alternatives that everyone can switch to. Price does not alter activity much unless there is an alternative, and in most cases there is no easy one.

The second reason is that even if you can develop alternatives, there is far too much installed capacity. The economies of just about every country are highly dependent on fossil fuel. People are not going to discard their present vehicles and join a queue to purchase electric ones. Manufacturers are still selling new petroleum-powered vehicles, and a lot of energy has been invested in making them. Like it or not, the electricity supply of many countries depends on coal-fired generation, and it costs a lot to construct a new plant to generate electricity. No country can afford to throw away its existing generation capacity.

In principle, the solution for electricity is simple: nuclear. So why not? Some say it is dangerous, and there remains the problem of storing wastes. It is true that people died at Chernobyl, but that was an example of crass incompetence. Further, molten salt reactors in principle cannot melt down, and they also burn much of the waste. There is still waste that has to be stored somewhere, but the volume is very small in comparison. So why is this not used? Basically, the equipment has not been properly developed, the reason being that reactors were first designed to provide the raw material for making bombs. When the politicians recognized the problem at the end of the 1980s, what should have happened is that money was invested in developing such technology so that coal-fired power could be laid to rest. Instead, there was a lot of arm-waving and calls for solar and wind power. It is true these generate electricity, and in some cases they do it efficiently; however, they cannot handle base load in most countries. Similarly with transport fuels. Alternative technologies for advanced biofuels were developed in the early 1980s, but were never taken to the next stage because the price of crude oil dropped to very low levels and nothing could compete. The net result was that the technology was lost, and much has had to be relearned. We cannot shut down the world’s industry and transport, and the politicians have refused to fund the development of alternative fuels.

So, what will happen? We shall continue on the way we are, while taking some trivial steps that make at least some of us, usually politicians, feel good because we are “doing something”. Unfortunately, greenhouse gas levels are still rising, and consider what is happening at the levels we have already reached. The Northeast Greenland Ice Stream is melting, and the rate of melt is increasing because the floating front of the Zachariae Isstrøm glacier, which protected the coastal part of the ice stream, has broken off. Now warmer seawater is penetrating up to 300 km under the ice stream, and global ocean levels are predicted to rise by up to a meter by the end of the century from the enhanced melting of Greenland ice alone. More important still is Antarctica. There is far more ice there, and it has been calculated that if temperatures rose four degrees Celsius above pre-industrial levels, up to two thirds of that ice could go.

Unfortunately, that is not the worst of the problems. If the climate heats, food becomes more difficult to provide. The most obvious problem is that most of the very best agricultural land is close to sea level, so we lose that. Additionally, there will be regions of greatly increased drought, and others with intense floods; neither is good for agriculture. Yet there is an even worse problem: as soil gets hotter it loses carbon and becomes less productive, while winds tend to blow it away. So what can we do about this? Unfortunately, it has to be everyone. We have to not only stop venting greenhouse gases into the atmosphere, but also work out ways to take them out. Stop and ask yourself: does your local politician understand this? My guess is no. Does your local politician understand what a partial differential equation means? My guess is no.

Is Science Sometimes Settled Wrongly?

In a post two weeks ago I raised the issue of “settled science”. The idea was that there have to be things you are not persistently rechecking. Obviously, you do not want everyone wasting time rechecking the melting point of benzoic acid, but fundamental theory is different. Who settles it, and how? What is sufficient to say, “We know it must be that”? In my opinion, admittedly biased, there really is something rotten in the state of science. Once upon a time, namely in the 19th century, many mistakes were made by scientists, but they were sorted out by vigorous debate. The Solvay conferences continued that tradition in the 1920s for quantum mechanics, but something went wrong in the second half of the twentieth century. A prime example occurred in 1952, when David Bohm decided the mysticism inherent in the Copenhagen Interpretation of quantum mechanics required a causal interpretation, and published a paper in the Physical Review. He expected a storm of controversy, and he received – silence. What had happened was that J. Robert Oppenheimer, previously Bohm’s mentor, had called together a group of leading physicists to find an error in the paper. When they failed, Oppenheimer told the group, “If we cannot disprove Bohm, we can all agree to ignore him.” Some physicists are quite happy to say Bohm is wrong; they don’t actually know what Bohm said, but they know he is wrong (https://www.infinitepotential.com/david-bohm-silenced-by-his-colleagues/). If that were one isolated example, it would be wrong, but not exactly a crisis. Unfortunately, it is not an isolated case, and we cannot know how bad the problem is because we cannot sample it properly.

A complicating issue is how science works. There are millions of scientific papers produced every year, and thanks to time constraints very few are read closely by more than a handful of people. The answer to that would be to publish in-depth reviews, but nobody appears to publish reviews that analyse the logic of a field. I believe science can be “settled” by quite unreasonable means. As an example, my career went off the standard rails with my PhD thesis.

My supervisor’s projects would not work, so I selected my own. There was a serious debate at the time over whether strained ring systems could delocalize their electrons into adjacent unsaturation in the same way double bonds do. My results showed they did not, but it became obvious that cyclopropane stabilizes adjacent positive charge. Since olefins do this by delocalizing electrons, it was decided that cyclopropane did so too. When the time came for my supervisor to publish, he refused to include the most compelling results – despite this sequence of experiments being, by his own account, his only contribution – because the debate was settling down on the other side. An important point of logic must now be considered. Suppose we can say: if theory A is correct, then we shall see Q. If we see Q, we can say the result is consistent with A, but we cannot say that theory B would not predict Q also. So the question is, is there an alternative?

The answer is yes. The strain arises because the orbitals containing the bonding electrons are bent inwards towards the centre of the ring, hence the electrons come closer to each other, and electrons repel each other. But it then becomes obvious that if you put positive charge adjacent to the ring, that charge will attract the electrons, overriding some of the repulsion, and the electrons move towards the positive charge. That lowers the energy, and hence stabilizes the system. I actually used an alternative way of looking at it: if you move charge by bending the orbitals, you generate a polarization field, and that stabilizes the positive charge. Why look at it like that? Because if the cause of a changing electric field is behind a wall, say, you cannot tell the difference between charge moving and charge being added. Since the field contains the energy, the two approaches give the same strain energy, but by considering an added pseudocharge it was easy to put numbers on the effects.

However, the other side won, by “proving” delocalization through molecular orbital theory, which, as an aside, assumes delocalization in the first place. Aristotle had harsh words for people who prove what they assume after a tortuous path. As another aside, the same quantum methodology “proved” the stability of “polywater”, in which your water could turn into a toffee-like consistency. A review came out and confirmed the “other side” by showing numerous examples where the strained ring stabilized positive charge, while ignoring everything that contradicted that conclusion.

Much later I wrote a review showing that this first one had ignored up to sixty different types of experimental result contradicting its conclusion. That was never published by a journal – the reasons for rejection, in order, were: not enough pictures and too much mathematics; this is settled; and, from some other journals, “We do not publish logic analyses”.

I most certainly do not want this post to turn into a general whinge, so I shall stop there, other than to say I could make five other similar cases that I know about. If that many have come to the attention of one person, how general is the problem? Perhaps a final comment might be of interest. As those who have followed my earlier posts may know, I concluded that the argument that the present Nobel prize winners in physics found violations of Bell’s Inequality is incorrect in logic. (Their procedure violates Noether’s theorem.) When the prize was announced, I sent a polite communication to the Swedish Academy of Sciences, stating one reason why the logic was wrong and asking them, if I had missed something, to inform me where I was wrong. So far, over five weeks later, no response. It appears no response may be the new standard way of dealing with those who question the standard approach.

Fire in Space

Most of us have heard of the dangers of space flight, such as solar storms, cosmic rays, and leaks in the spacecraft, but there are some more ordinary problems too. TV programs like to have space ships in battles, whereupon “shields fail” (there is no such thing as a shield, other than the skin of the craft, but let that pass) and we then have fire. When you stop and think about it, fire on a space ship would be a nasty problem. It burns material that presumably had some use, and it overheats things like electronics, which stops them working. Then we come to the real problem: if you don’t have spares, you cannot fix it. You often see scenes where engineers run around “beating the clock”, but what do they use for parts? If they are going to make parts, out of what? If you say recycle, then at the very least they should be assiduously collecting the smashed stuff.

Accordingly, it makes sense for astronauts to prevent fires from starting in the first place. You may recall Apollo 1. The three astronauts were inside the command module practising a countdown. The module used pressurised oxygen, and somehow a fire broke out. Pure oxygen and flammable material is a bad mix, and the astronauts died of carbon monoxide poisoning. The hatch opened inwards, and the rapid increase in pressure from the fire made it impossible to open. The fire was presumed to have started by some loose wiring arcing and igniting something. We now know better than to use pure oxygen, but the problem remains: fire in space would not be good.

One obvious defence is to reduce the amount of combustible material: if there is nothing to burn, there will be no fire. But that is not entirely practical, so the next question is, how do fires burn in space? At first sight the answer is obvious: the organic matter gets hot and oxygen reacts with it to make a flame. However, there is more to it.

First, how does a fire burn on Earth? For a simple look, light a candle. The heat melts the wax, the wax runs up the wick and vaporises. Combustion breaks the wax down into a number of smaller molecules (visible as smoke if combustion is incomplete) and free radical fragments, which react with oxygen. Some of the fragments combine to form carbon (soot, if it doesn’t burn further). The carbon is important because it glows, giving the flame its orange colour, and it radiates a lot of heat; the heat radiated downwards melts more wax. You will also notice that the flame moves upwards. That is because the gas is hot, hence it has expanded and is less dense than the surrounding air; rising is simply a consequence of Archimedes’ Principle. As the flame rises, it draws in air from below, so there is a flow of gas entering the flame from below and exiting above. If you can get hold of some methanol, you could light that. Its formula is CH3OH, which means there are no carbon-carbon bonds, which means it cannot form soot. Therefore it burns with a pale blue flame and does not radiate much heat. Methanol burning on your skin will not burn you as long as the skin stays below the flame.
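The buoyancy argument can be made quantitative with the ideal gas law: at constant pressure, and for gases of similar molar mass, density scales inversely with absolute temperature, so flame gas at around 1,500 K is roughly five times less dense than room-temperature air, and Archimedes’ Principle does the rest. A rough sketch (the temperatures are illustrative, not measured values for a candle):

```python
# Density ratio of hot flame gas to ambient air at constant pressure.
# Ideal gas law: rho = P*M/(R*T). At fixed pressure P and similar
# molar mass M, rho_hot / rho_ambient = T_ambient / T_hot.
def density_ratio(t_ambient_k: float, t_flame_k: float) -> float:
    """Ratio of hot-gas density to ambient-gas density."""
    return t_ambient_k / t_flame_k

r = density_ratio(300.0, 1500.0)
print(f"flame gas at {1/r:.0f}x lower density than ambient air")
```

A parcel five times less dense than its surroundings is strongly buoyant, which is why the flame and its exhaust stream upwards on Earth, and why, as the next section describes, nothing of the sort happens without gravity.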

Which brings us to space. Since fire is possible on a space ship, NASA has done some experiments, partly to learn more about fire, but also to learn how to put fires out on a space ship. The first difference is that in the absence of gravity, flames do not go up – after all, there is no “up”. Instead, they form little spheres. Further, without gravity Archimedes’ Principle no longer operates, so there is nothing to draw fresh air in. Oxygen has to enter by diffusion, and oxygen and fuel combine in a narrow zone on the surface of the sphere. The “fire” continues with no significant flame and, while a normal fire burns at about 1,500 – 2,000 K, experimental fires using droplets of heptane eventually became cool flames, at temperatures of only 500 – 800 K. The chemistry is different too. On Earth, flames usually produce water, carbon dioxide and soot; in microgravity they were producing water, carbon monoxide and formaldehyde – a rather poisonous mix. Cool flames in space are no less dangerous, just differently dangerous, and dealing with them may be different too: extinguishers that squirt gas may simply move the fire, or supply extra air to it. So far, I doubt we have worked out our final methods for fighting fires in space, but I am sure the general principle is to have the fewest possible combustible materials in the ship.