Roman Concrete

I hope all of you had a Merry Christmas and a Happy New Year, and that 2023 is shaping up well for you. They say the end of the year is a time to look back, so why not look really far back? Quite some time ago I visited Rome, and I have always been fascinated by Roman civilization, so why not start this year by looking that far back?

Perhaps one of the most remarkable Roman buildings is the Pantheon, which still has the world’s largest unreinforced concrete dome. The original temple was commissioned by Marcus Vipsanius Agrippa, the “get-things-done” man for Augustus, although the present domed building is a reconstruction completed under Hadrian. No reinforcement, and it has lasted nearly two millennia. Look at modern concrete and as often as not you will find it cracks and breaks up. Concrete is a mix of aggregate (stones and sand), which provides the bulk, and a cement that binds the aggregate together. We use Portland cement, made by heating limestone and clay (usually with some other minor materials, which need not concern us) in a kiln at about 1450 degrees Celsius. The product depends to some extent on what the clay is, but the main components are belite (Ca2SiO4) and alite (Ca3SiO5). If the clays contain aluminium, which most clays do, various calcium aluminosilicates also form. Most standard cement is mainly calcium silicate, to which a little gypsum is added at the end to slow the initial set.

Exactly what happens during setting is unknown. The first thing to note is that stone does not have a smooth surface at anywhere near the molecular level, and further, stones are silicates, in which the polymer structure is perforce terminated at the surface, which means there are incomplete bonds. An element like carbon would fix this problem by forming double bonds, but silicon cannot do that, so these “awkward” surface molecules react with water to form hydroxides. What I think happens is that the water in the mix hydrolyses the calcium silicate and forms silica with surface hydroxyls, and these eliminate water with hydroxyls on the stone, the calcium hydroxide also taking part, in effect forming microscopic junctions between cement and stone. All of this is slow, particularly since polymeric solids cannot move easily. So to make a good concrete, besides getting the correct mix you have to let it cure for quite some time before it is at its best.

So what did the Romans do? They could not easily make cement by heating clay and lime to that temperature, but there were places where nature had done it for them: the volcanic material around volcanoes like Vesuvius. The Roman architect and engineer Vitruvius used a hot mix of quicklime (calcium oxide) that was hydrated and mixed with volcanic tephra. Interestingly, this also introduces some magnesium silicates, which are themselves cements, and magnesium may fit better than calcium onto basaltic material. For aggregate Vitruvius used fist-sized pieces of rock, including “squared red stone or brick or lava laid down in courses”. In short, Vitruvius was selecting aggregate that was much better than ordinary stone in the sense of having surface hydroxyl groups available to react. That Roman concrete has lasted so long may in part be due to a better choice of aggregate.

A second point was the use of hot mixing. One possibility is that they used a mix of freshly slaked lime and quicklime, and through the slaking the mix became very hot. This speeds up the chemical reactions, allows compound formation that is not possible at low temperatures, and reduces setting times. Even more interestingly, it appears to allow self-healing. If cracks begin to form, they are more likely to form around lime clasts, which can then react with water to make a calcium-rich solution, which in turn reacts with pozzolanic components to strengthen the composite material. To support this, Admir Masic, who has been studying Roman concrete, made concrete samples using both the Roman recipe and a modern method, deliberately cracked them, and ran water through them. The Roman concrete self-healed completely within two weeks, while the cracks in the modern concrete never healed.

A Solution to the Elements Shortage for Lithium Ion Batteries?

Global warming, together with a possible energy crisis, gives us problems for transport. One of the proposed solutions is battery-powered cars. This raises three potential problems. One is how to generate sufficient electricity to charge the batteries, but I shall leave that for the moment. The other two relate to chemistry. A battery (or fuel cell) has two basic electrodes: a cathode, which is positive during discharge, and an anode, which is negative. The difference in potential between these is the voltage, usually quoted as the voltage at half discharge. The potential arises from the ability to supply electrons at the anode while taking them in at the cathode; at each electrode a chemical oxidation/reduction reaction is going on. The anode is most easily made by oxidising a metal. Thus if we oxidise lithium we get Li ➝ Li+ + e-. The electron disappears off into the circuit. We need something to accept an electron at the cathode, and in the standard lithium-ion cell that is cobalt: in the lithium cobalt oxide cathode, Co4+ is reduced to the more stable Co3+. (Everything is a bit more complicated – I am just trying to highlight the problem.) Superficially the cobalt could be replaced by a variety of elements, but the cobalt is embedded in a matrix, and most alternative ions undergo a substantial volume change on cycling; embedded in a cathode matrix, the resulting stresses make it fall to bits. Cobalt seems to give the least stress, hence the longest battery life. So we have a problem of sorts: not enough easily accessible lithium, and not enough cobalt. There are also problems that can reduce the voltage or current, including side reactions and polarization.

In a fuel cell we can partly get over that. We need something at the cathode that will convert an input gas into an anion by accepting an electron; thus oxygen plus water forms hydroxide. At the anode we need something that “burns”, i.e. goes to a higher valence state and gives up an electron. In my ebook “Red Gold”, a science fiction story relating to the first attempt at permanent settlement of Mars, a portable power source was necessary. With no hidden oil fields on Mars, and no oxygen in the air to burn them if there were, I resorted to the fuel cell. The fuel cell chemistry I chose for Mars was to oxidize aluminium, which gives up three electrons per atom, and to reduce chlorine. The reason for these was that the settlement on Mars needed to make things from Martian resources, and the most available resource was the regolith, which is powdered rock. This was torn apart by nuclear fusion power, and the elements separated by magnetohydrodynamics, similar to what happens in a mass spectrometer. The net result is you get piles of elements. I chose aluminium because each atom supplies three electrons and hence more charge, and I chose chlorine because it is a liquid at Martian temperatures, so no pressure vessel was required. Also, while oxygen might produce a slightly higher voltage, oxygen forms a protective coating on aluminium, and that stops the reaction.
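The claim about three electrons per atom can be put in numbers with Faraday’s law. A minimal sketch (standard atomic masses and densities; the Faraday constant in coulombs per mole):

```python
# Theoretical capacity of a metal anode from Faraday's law:
#   mAh/g = n * F / (3.6 * M)
# with n electrons per atom, F the Faraday constant (C/mol),
# and M the molar mass (g/mol).
F = 96485.0  # Faraday constant, C/mol

# metal: (electrons per atom, molar mass g/mol, density g/cm^3)
metals = {"Li": (1, 6.94, 0.534), "Al": (3, 26.98, 2.70)}

for name, (n, M, rho) in metals.items():
    grav = n * F / (3.6 * M)  # gravimetric capacity, mAh/g
    vol = grav * rho          # volumetric capacity, mAh/cm^3
    print(f"{name}: {grav:.0f} mAh/g, {vol:.0f} mAh/cm^3")
```

Lithium actually wins per gram (about 3860 against 2980 mAh/g) because it is so light, but aluminium’s three electrons and much higher density give it roughly four times the capacity per unit volume, on top of its abundance.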

An aluminium battery would have aluminium at the anode, and might have something in the electrolyte that could deposit more aluminium on it. Thus during a charge, you might get, if chlorine is the oxidiser,

4(Al2Cl7)-   + 3e-  → Al  +  7(AlCl4)-   

which deposits aluminium on the anode. During discharge the opposite happens and you burn aluminium off. Notice here the chlorine is actually tied up in chemical complexes and the battery has no free chlorine. Here, the electrolyte is aluminium chloride (Al2Cl6). For the fuel cell, we would be converting the gas to a complex at the cathode. That is not very practical on Earth, but the enclosed battery would be fine.
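As a sanity check, the charge reaction above balances in atoms and charge; a few lines of bookkeeping confirm it:

```python
# Bookkeeping check on the charge reaction
#   4 (Al2Cl7)- + 3 e-  ->  Al + 7 (AlCl4)-
# Each side must carry the same aluminium, chlorine and charge.
lhs = {"Al": 4 * 2, "Cl": 4 * 7, "charge": 4 * (-1) + 3 * (-1)}
rhs = {"Al": 1 + 7 * 1, "Cl": 7 * 4, "charge": 7 * (-1)}
assert lhs == rhs
print("balanced:", lhs)  # {'Al': 8, 'Cl': 28, 'charge': -7}
```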

The main advantage of aluminium is that it gets rid of the supply problem. Aluminium is extremely common on Earth, as the continents are essentially made of aluminosilicates. The cathode can be simple carbon. A battery with this technology was proposed in 2015 (Nature 520: 325 – 328) that used graphite cathodes. It was claimed to manage 7,500 cycles without capacity decay, which looks good, but so far nobody seems to be taking this up.

Now, for an oddity. For discharge, we need to supply (AlCl4)- to the anode, as it effectively supplies the chlorine. Rather than have complicated chemistry at the cathode, we can have an excess of (AlCl4)- from the start and, during charging, store it in the cathode structure; during discharge it is released. So now we need something to store it in. The graphite used for lithium-ion batteries comes to mind, but here is an oddity: you get twice the specific capacity, twice the cell efficiency and a 25% increase in voltage by using human hair! Next time you go to the hairdresser, note that in the long term that might be valuable. Of course, before we get too excited, such batteries still need to be constructed and tested, because so far we have no idea how hair stands up to repeated cycles.

What we do not know about such batteries is how much dead weight has to be carried around and how small they can be made for a given charge. For cars, the critical points are eventually how far it will go on one charge, how long it takes to recharge safely, how much of the vehicle’s volume the battery takes, and whether it is flammable. The advantage of the aluminium chloride system described above is that there are probably no side reactions, and a fire is somewhat unlikely. The materials are cheap. So the question is, why hasn’t more been done on this system? My guess is that the current manufacturers know that lithium works, so why change? The fact that eventually they will have to does not bother them. The accountants in charge think beyond the next quarter is long-term; next year can look after itself. Except we know that when the problem strikes, it takes years to solve it. We should get prepared, but our economic system does not encourage that.

This and That from the Scientific World

One of the consequences of writing blogs like this is that one tends to be on the lookout for things to write about. This ends up with a collection of curiosities, some of which can be used, some of which eventually get thrown away, and a few I don’t know what to do about. They tend to be too short to write a blog post, but too interesting, at least to me, to ignore. So here is a “Miscellaneous” post.

COP 27.

They agreed that some will pay the poorer nations for damage so far, although we have yet to see the money. There was NO promise by anyone to reduce emissions, and from my point of view even worse, no promise to investigate which technologies are worth going after. Finally, while during the conference there were a number of electric buses offering free rides, at the end of the conference these buses simply disappeared. Usual service (or lack thereof) resumed.

Fighting!

You may think that humans alone fight by throwing things at each other, but you would be wrong. A film has been recorded (https://doi.org/10.1038/d41586-022-03592-w) of two gloomy octopuses throwing things at each other, including clam shells. Octopuses are generally solitary animals, but in Jervis Bay, Australia, the gloomy octopus lives at very high densities, and it appears they irritate each other. When an object was thrown at another octopus, the throw was far stronger than when just clearing stuff out of the way, and it tended to come from specific tentacles, the throwing ones. Further, octopuses on the receiving end ducked! A particularly interesting tactic was to throw silt over the other octopus. I have no idea what the outcome of these encounters was.

Exoplanets

The star HD 23472 has a mass of about 0.67 times that of our sun and a surface temperature of about 4,800 K. Accordingly, it is a mid-range K-type star, and it has at least five planets. Some of their properties include the semi-major axis a (the distance from the star if the orbit is circular), the eccentricity e, the mass relative to Earth (M), the density ρ, and the inclination i. The following table gives some of the figures, taken from the NASA exoplanet archive.

Planet   a (AU)   e      M (Earth)   ρ (g/cm³)   i (°)
b        0.116    0.07   8.32        6.15        88.9
c        0.165    0.06   3.41        3.10        89.1
d        0.043    0.07   0.55        7.50        88.0
e        0.068    0.07   0.72        7.50        88.6
f        0.091    0.07   0.77        3.0         88.1

The question then is, what to make of all that? The first thing to notice is that all the planets are much closer to the star than Earth is to the sun. Another point is that the inclinations all approach 90 degrees. In the exoplanet archive the inclination is the angle the orbital plane makes with the plane of the sky, so values near 90 degrees simply mean we view the orbits edge-on, which is why these planets transit and could be characterised at all. That the inclinations are all so similar shows the system is very flat, as expected for planets formed in a common accretion disk, and since the planets are close together they are probably in gravitational resonance with each other. What we see are two super-Earths (b and c) and two small but very dense planets closest to the star (d and e); technically, the latter are denser than Mercury in our system. There are also two planets (c and f) with densities a little lower than that of Mars.
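One way to see how close-in these planets are is to convert the semi-major axes in the table into orbital periods with Kepler’s third law (in solar units, P² = a³/M; near-circular orbits assumed, which the small eccentricities justify):

```python
from math import sqrt

# Kepler's third law in solar units: P^2 = a^3 / M,
# with P in years, a in AU, M in solar masses.
M_star = 0.67  # stellar mass from the text, solar masses

# semi-major axes in AU, from the table above
a_AU = {"b": 0.116, "c": 0.165, "d": 0.043, "e": 0.068, "f": 0.091}

for planet, a in a_AU.items():
    P_days = sqrt(a**3 / M_star) * 365.25
    print(f"{planet}: {P_days:.1f} day orbit")
```

Every orbit comes out at under a month; planet d circles the star in about four days.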

The innermost edge of the habitable zone of that star is calculated to be at 0.364 AU, the Earth-equivalent distance (where a planet gets the same radiation as Earth) at 0.5 AU, and the outer boundary at 0.767 AU. All of these planets lie well inside the inner edge of the habitable zone. The authors who characterised these planets (Barros, S. C. C. et al. Astron. Astrophys. 665, A154 (2022)) considered the two inner planets to be Mercury equivalents, presumably based on their densities, which approximate pure iron. My guess is that the densities are over-estimated, as the scope for error is fairly large, but they certainly look like Mercury equivalents somewhat bigger than our Mercury.
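The Earth-equivalent distance of 0.5 AU implies a stellar luminosity of about a quarter of the sun’s, and from that the radiation each planet receives follows from the inverse-square law. A quick sketch:

```python
# An Earth-equivalent distance of 0.5 AU implies a luminosity of
# about 0.5**2 = 0.25 L_sun (radiation falls off as 1/r^2).
L_star = 0.5 ** 2  # solar luminosities, inferred from the text

a_AU = {"b": 0.116, "c": 0.165, "d": 0.043, "e": 0.068, "f": 0.091}

for planet, a in a_AU.items():
    flux = L_star / a ** 2  # stellar flux relative to Earth's
    print(f"{planet}: ~{flux:.0f}x Earth's irradiation")
```

Even the outermost of the five, c, receives roughly nine times what Earth does, and d around 135 times, which is why all of them sit well interior to the habitable zone.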

Laughing Gas on Exoplanets

One of the targets of the search for exoplanets is to find planets that might carry life. The question is, how can you tell? At present, all we can do is examine the spectra of atmospheres around the planet, and this is not without its difficulties. The most obvious problem is signal intensity. What we look for are specific signals in the infrared spectrum, which arise from the vibrations of molecules. This can be done from absorptions as the planet transits across the star’s face, or better (because the stellar intensity is less of a problem) from starlight that passes through the planet’s atmosphere.

The next problem is to decide on what could signify life. Something like carbon dioxide or methane will be at best ambiguous. Carbon dioxide makes up a huge atmosphere on Venus, but we do not expect life there. Methane comes from anaerobic digestion (life) or geological activity (no life). So, the proposal is to look for laughing gas, better known as nitrous oxide. Nitrous oxide is made by some life forms, and oddly enough, it is a greenhouse gas that is becoming more of a problem from the overuse of agricultural fertilizer, as it is a decomposition product of ammonium nitrate. If nothing else, we might find planets with civilizations fighting climate change!

Is Science Sometimes Settled Wrongly?

In a post two weeks ago I raised the issue of “settled science”. The concept was that there have to be things you are not persistently rechecking. Obviously, you do not want everyone wasting time rechecking the melting point of benzoic acid, but fundamental theory is different. Who settles that, and how? What is sufficient to say, “We know it must be that!”? In my opinion, admittedly biased, there really is something rotten in the state of science. Once upon a time, namely in the 19th century, there were many mistakes made by scientists, but they were sorted out by vigorous debate. The Solvay conferences continued that tradition in the 1920s for quantum mechanics, but something went wrong in the second half of the twentieth century. A prime example occurred in 1952, when David Bohm decided the mysticism inherent in the Copenhagen Interpretation of quantum mechanics required a causal interpretation, and he published a paper in the Physical Review. He expected a storm of controversy, and he received – silence. What had happened was that J. Robert Oppenheimer, previously Bohm’s mentor, had called together a group of leading physicists to find an error in the paper. When they failed, Oppenheimer told the group, “If we cannot disprove Bohm we can all agree to ignore him”. Some physicists are quite happy to say Bohm is wrong; they don’t actually know what Bohm said, but they know he is wrong. (https://www.infinitepotential.com/david-bohm-silenced-by-his-colleagues/) If that were one isolated example, it would be wrong, but not exactly a crisis. Unfortunately, it is not an isolated case. We cannot know how bad the problem is because we cannot sample it properly.

A complicating issue is how science works. There are millions of scientific papers produced every year. Thanks to time constraints, very few are read by several people. The answer to that would be to publish in-depth reviews, but nobody appears to publish logic analysis reviews. I believe science can be “settled” by quite unreasonable means. As an example, my career went “off the standard rails” with my PhD thesis.

My supervisor’s projects would not work, so I selected my own. There was a serious debate at the time over whether strained systems could delocalize their electrons into adjacent unsaturation in the same way double bonds do. My results showed they did not, but it became obvious that cyclopropane stabilized adjacent positive charge. Since olefins do this by delocalizing electrons, it was decided that cyclopropane did that too. When the time came for my supervisor to publish, he refused to publish the most compelling results, even though he had suggested that sequence of experiments (his only contribution), because the debate was settling on the other side. An important point of logic must now be considered. Suppose we can say, if theory A is correct, then we shall see Q. If we see Q we can say the result is consistent with A, but we cannot say that theory B would not predict Q also. So the question is, is there an alternative?

The answer is yes. The strain arises from orbitals containing electrons being bent inwards towards the centre of the system, hence coming closer to each other, and electrons repel each other. But it also becomes obvious that if you put positive charge adjacent to the ring, that charge will attract the electrons, overriding the repulsion, and the electrons move towards the positive charge. That lowers the energy, and hence stabilizes the system. I actually used an alternative way of looking at it: if you move charge by bending the orbitals, you should generate a polarization field, and that stabilizes the positive charge. Why look at it like that? Because if the cause of a changing electric field is behind a wall, say, you cannot tell the difference between charge moving and charge being added. Since the field contains the energy, the two approaches give the same strain energy, but by considering an added pseudocharge it was easy to put numbers on the effects.

However, the other side won, by “proving” delocalization through molecular orbital theory, which, as an aside, assumes delocalization at the outset. Aristotle had harsh words for people who prove what they assume after a tortuous path. As another aside, the same quantum methodology “proved” the stability of “polywater”, where your water could turn into a toffee-like consistency. A review came out and confirmed the “other side” by showing numerous examples where the strained ring stabilized positive charge, while ignoring everything that contradicted that conclusion.

Much later I wrote a review showing that this first one had ignored up to sixty different types of experimental result that contradicted its conclusion. That was never published by a journal – the three reasons for rejection, in order, were: not enough pictures and too much mathematics; this is settled; and, from some other journals, “We do not publish logic analyses”.

I most certainly do not want this post to turn into a general whinge, so I shall stop there, other than to say I could cite five other similar cases that I know about. If that many have come to the attention of one person, how general is the problem? Perhaps a final comment might be of interest. As those who have followed my earlier posts may know, I concluded that the argument that the present Nobel prize winners in physics found violations of Bell’s Inequality is incorrect in logic. (Their procedure violates Noether’s theorem.) When the prize was announced, I sent a polite communication to the Royal Swedish Academy of Sciences, stating one reason why the logic was wrong, and asking them, if I had missed something, to inform me where I was wrong. So far, over five weeks later, no response. It appears no response may be the new standard way of dealing with those who question the standard approach.

Fire in Space

Most of us have heard of the dangers of space flight such as solar storms, cosmic rays, leaks in the spacecraft, and so on, but there are some more ordinary problems too. TV programs like to have space ships in battles, whereupon “shields fail” (there is no such thing as a shield, except the skin of the craft, but let that pass) and we then have fire. When you stop and think about it, fire on a space ship would be a nasty problem. It burns material that presumably had some use, it overheats things like electronics, which stops them working, and then we come to the real problem: if you don’t have spares, you cannot fix it. You often see scenes where engineers are running around “beating the clock”, but what do they use for parts? If they are going to make parts, out of what? If you say recycle, then at the very least they should be assiduously collecting the smashed stuff.

Accordingly, it would make sense for astronauts to prevent fires from starting in the first place. You may recall Apollo 1. The three astronauts were inside the command module practising a countdown. The module used pressurised oxygen, and somehow a fire broke out. Pure oxygen and flammable material is a bad mix, and the astronauts died from carbon monoxide poisoning. The hatch opened inwards, and the rapid increase of pressure from the fire meant that it was impossible to open the hatch. The fire was presumed to have started by some loose wiring arcing, and igniting something. Now, we know better than to have pure oxygen, but the problem remains. Fire in space would not be good.

One obvious defence is to reduce the amount of combustible materials. If there is nothing to burn, there will not be a fire, but that is not entirely practical, so the next question is, how do fires burn in space? At first sight that is obvious: the organic matter gets hot and oxygen reacts with it to make a flame. However, there is more to it.

First, how does a fire burn on Earth? For a simple look, light a candle. The heat melts the wax, the wax runs up the wick and vaporises. Combustion involves breaking the wax down into a number of smaller molecules (which can be seen as smoke if combustion is incomplete) and free radical fragments, which react with oxygen. Some of the fragments combine to form carbon (soot, if it doesn’t burn further). The carbon is important because it glows, giving off the orange colour, but it also radiates a lot of heat, and the heat that radiates downwards melts the wax. What you will notice is that the flame moves upwards. That is because the gas is hot, hence it has expanded and is less dense than the surrounding air; going up is simply a consequence of Archimedes’ Principle. As it goes up, it draws in air from below, so there is a flow of gas entering the flame from below and exiting above. If you can get hold of some methanol, you could light that. Its formula is CH3OH, which means there are no carbon-carbon bonds, which means it cannot form soot. Therefore it burns with a light blue flame and does not radiate much heat. Methanol burning on your skin will not burn you as long as the skin is below the flame.
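The buoyancy argument can be put in numbers with the ideal gas law: at constant pressure, gas density scales inversely with absolute temperature. A sketch, with an assumed flame temperature:

```python
# At constant pressure an ideal gas's density scales as 1/T,
# so the buoyancy of a flame follows from its temperature alone.
T_air = 293.0     # K, room temperature
T_flame = 1900.0  # K, assumed typical hydrocarbon flame temperature

ratio = T_air / T_flame  # flame gas density relative to surrounding air
print(f"flame gas density is ~{ratio:.2f}x that of the surrounding air")
```

At roughly a sixth of the density of the surrounding air, the hot gas rises strongly, which is exactly the Archimedes’ Principle flow described above.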

Which brings us to space. Since fire is possible on a space ship, NASA has done some experiments, partly to learn more about fire, but also to learn how to put fires out on a space ship. The first difference is that in the absence of gravity, flames do not go up; after all, there is no “up”. Instead, they form little spheres. Further, without gravity Archimedes’ Principle no longer operates, so there is nothing to draw fresh air in. Oxygen has to enter by diffusion, and oxygen and fuel combine in a narrow zone on the surface of the sphere. The “fire” continues with no significant flame, and further, while a normal fire burns at about 1500 – 2000 K, these experimental fires using droplets of heptane eventually formed cool flames, at temperatures of 500 – 800 K. The chemistry is also different. On Earth, flames usually produce water, carbon dioxide and soot. In microgravity they were producing water, carbon monoxide and formaldehyde: in short, a rather poisonous mix. Cool fires in space are no less dangerous, just differently dangerous, and dealing with them may be different too. Extinguishers that squirt gas may simply move the fire, or supply extra air to it. So far, I doubt we have worked out our final methods for fighting fires in space, but I am sure the general principle is to have the fewest possible combustible materials in the space ship.

Trees for Carbon Capture, and Subsequent Problems

A little over fifty years ago, a 200-page book called The Limits to Growth was published; its conclusion was that unless something was done, continued economic and population growth would deplete our resources and lead to global economic collapse around 2070. Around 1990, it was predicted that greenhouse gases would turn our planet into something we would not like. So, what have we done? In an organized way, not much. One hazard with problem solving is that focusing on one aspect and fixing that often simply shifts the problem, and sometimes even makes it worse. Currently, we are obsessed with carbon dioxide, but all we appear to be doing is complacently patting ourselves on the back because we shall be burning somewhat less Russian gas and oil in the future, oblivious to the fact that the substitute is likely to be coal.

One approach to mitigating global warming involves using biomass for carbon capture and storage (see Nature vol. 609, p299 – 305). The authors note that the adverse effects of climate change on crop yields may reduce the capacity of biomass to do this, as well as threaten food security. There are two approaches to overcoming the potential food shortage: increase agricultural land by using marginal land and cutting down forests, or increase nitrogen fertilizer. Now we see what “shifting the problem” means. If we use marginal land, we still have to increase the use of nitrogen fertilizer. That leads to the production of nitrous oxide, and in these authors’ model the nitrous oxide produced would have roughly three times the greenhouse effect of the carbon dioxide saved. This is serious. All we have done is generate a worse problem, to say nothing of the damage done to the environment. We have to leave some land for animals and wild plants.

There is a further issue: nitrogen fertiliser is currently made by reacting natural gas with steam to make hydrogen, so for every tonne of ammonia we make something like a tonne of CO2. Much the same happens if we make hydrogen from coal. Rather interestingly for such a paper, the authors concede they may have over-estimated the problems of food shortages, on the grounds that new technology and practices may increase yields.
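The “tonne of CO2 per tonne of ammonia” figure can be checked from the stoichiometry of steam reforming and the Haber process; this is the chemical minimum only, since the heat and compression the process needs add more CO2 on top:

```python
# Stoichiometric check on the "tonne of CO2 per tonne of ammonia":
#   steam reforming: CH4 + 2 H2O -> CO2 + 4 H2
#   Haber process:   N2 + 3 H2   -> 2 NH3
# Each mole of NH3 needs 1.5 mol H2, hence 1.5/4 mol of CO2.
M_NH3, M_CO2 = 17.03, 44.01  # molar masses, g/mol

t_co2_per_t_nh3 = (1.5 / 4) * M_CO2 / M_NH3
print(f"~{t_co2_per_t_nh3:.2f} t CO2 per t NH3 (stoichiometric minimum)")
```

The answer comes out just under one, so “something like a tonne” is, if anything, conservative.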

Suppose we make hydrogen by electrolysing water? Ammonia is currently made by heating nitrogen and hydrogen together at about 200 times atmospheric pressure. This is by no means optimal, but higher-pressure plants cost a lot more to construct, and there are increasing problems with corrosion, etc. Hydrogen made by electrolysis is also more expensive, partly because electricity is in demand for other purposes, and worse, electricity is still made at least in part by burning fossil fuels, with only about a third of the energy recovered as electricity. When considering a new use, it is important to note that the new demand must be costed against the most adverse source of supply: even if there are friendlier ways of generating electricity, you would gain their benefit anyway by doing nothing and turning off the worst supply, so the worst supply is what the new use effectively draws on.

There is, however, an alternative: electricity can directly reduce nitrogen to nitride in the presence of lithium, and if a proton-donating substance is present (technically an acid, though not one you would probably recognize) you directly make ammonia, with no high pressure. So far this has been basically a laboratory curiosity, because the yields and yield rates have been just too small, but a recent paper in Nature (vol. 609, p722 – 727) claims a good increase in efficiency. The authors write, “We anticipate that these findings will guide the development of a robust, high-performance process for sustainable ammonia production”, so they do not feel they are there yet, but it is encouraging that improvements are being made.

Ammonia would be a useful means of carrying hydrogen for transport uses, but nitrogen fertilizer is important for maintaining food production. So can we reduce the nitrous oxide production? Nitrous oxide is a simple decomposition product of ammonium nitrate, the usual fertilizer, but could we use something else, such as urea? Enzymes do convert urea to ammonium nitrate, but slowly, and maybe more nitrogen would end up in the plants. Would it? We don’t know, but we could try finding out. The alternative might be to put lime, or even crushed basalt, in with the fertilizer. The slightly alkaline nature of these materials would react in part with ammonium nitrate to make metal nitrate salts, which would still be good fertilizer, and ammonia, which hopefully could also be used by plants, but now the degradation to nitrous oxide would stop. Would it? We don’t know for sure, but simple chemistry strongly suggests it would. So does it hurt to do the research and find out? Or do we sit on our backsides and eventually wail when we cannot stop the disaster?

Energy Sustainability

Sustainability is the buzzword. Our society must use solar energy, lithium-ion batteries, etc. to save the planet, at least that is what they say. But have they done their sums? Lost in this debate is the fact that many of the technologies use relatively difficult-to-obtain elements. In a previous post I argued that battery technology was in trouble because there is a shortage of cobalt, required to make the cathode last a reasonable number of cycles. Others argue that we could obtain sufficient elements. But if we are going to be sustainable, we have to be sustainable for an indefinite length of time, and mining is not sustainable; you can only dig up the ore once. Of course, there are plenty of elements left. There is more gold in the sea than has ever been mined; the problem is that it is too dilute. Similarly, most elements are present in a lump of basalt; there is just not much of anything useful, and it is extremely difficult to get out. The original copper mines of Cyprus, where lumps of native copper could occasionally be found, are all worked out, at least to the extent that mining is no longer profitable there.

The answer is to recycle, right? Well, according to an article [Charpentier Poncelet, A. et al. Nature Sustain. https://doi.org/10.1038/s41893-022-00895-8 (2022)] there is trouble. The problem is that even if we recycle, every time we do something we lose some of the metal. Losses here refer to material emitted into the environment, stored in waste-disposal facilities, or diluted in material where the specific characteristics of the element are no longer required. The authors define a lifetime as the average duration of an element’s use, from mining through to being entirely lost. As with any such definition-dependent study, there will be some points where you disagree.

The first loss for many elements lies in the production stage. Quite often it is only economical to recover one or two elements, and the remaining minor components of the ore disappear in slag. These losses are mainly important for specialty elements. Production losses account for 30% of rare earth metals, 50% of cobalt, 70% of indium, and greater than 95% of arsenic, gallium, germanium, hafnium, selenium and tellurium. So most of those elements critical for certain electronic and photo-electric effects are simply thrown out. We are a wasteful lot.

Manufacturing and use incur very few losses. There are some, but because materials are purified ready for use, pieces that are not immediately used can be remelted and reused. There are exceptions: 80% of barium is lost through use, because it is used in drilling muds.

The largest losses come from the waste management and recycling stage. Metals undergoing multiple life cycles are still lost this way; it just takes longer to lose them. Recycling losses occur when the metal accumulates in a dust (zinc) or slag (e.g. chromium or vanadium), or gets lost in another stream; thus copper is prone to dissolve in an iron stream. The longest lifetimes occur for non-ferrous metals (8 to 76 years), precious metals (4 to 192 years), and ferrous metals (8 to 154 years). The longest lifetimes of all are for gold and iron.

Now for the problem areas. Lithium has a life-cycle use of 7 years, then it is all gone. But lithium-ion batteries last about this long, which suggests that as yet (if these data are correct) there is very little real recycling of lithium. Elements like gallium and tellurium last less than a year, while indium manages a year. Before you protest that most of the indium goes into swipeable mobile phone screens, and mobile phones last longer than a year, at least for some of us, remember that losses occur through being discarded at the mining stage, where the miner/processor can’t be bothered recovering it. Of the fifteen metals most lost during mining/processing, thirteen are critical for sustainable energy, such as cobalt (lithium-ion batteries), neodymium (permanent magnets), indium, gallium, germanium, selenium and tellurium (solar cells) and scandium (solid oxide fuel cells). If we look at the recycled content of “new material”, lithium is less than 1%, as is indium. Gallium and tellurium are seemingly not recycled at all. Why not? The metals that are recycled tend to be iron, aluminium, the precious metals and copper, for which it is reasonably easy to find uses where purity is not critical. Recycling and purifying most of the others requires technical skill and significant investment. Think of lithium-ion batteries: the lithium reacts with water, and if a battery starts burning it is unlikely to be put out. Some items may contain over a dozen elements, some of them highly toxic and not to be in the hands of the amateur. What we see happening is that the “easy” metals are recycled by what are really low-technology organizations. The area is not attractive to the highly skilled because the economic risk/return is just not worth it, while the less-skilled simply cannot do it safely.

Banana-skin Science

Every now and again we find something that looks weird, but just maybe there is something in it. And while reading it, one wonders, how on Earth did they come up with this? The paper in question is Silva et al. 2022, Chemical Science 13: 1774. What they did was take dried biomass powder and expose it to a flash of 14.5 ms duration from a high-power xenon flash lamp. That type of chemistry was first developed to study the very short-lived intermediates generated in photochemistry, when light excites a molecule to a high-energy state from which it can decay through unusual rearrangements. This type of study has been going on since the 1960s, and the equipment has steadily improved and become more powerful. However, it is most unusual to find it used for something that ordinary heat would do far more cheaply. Anyway, 1 kg of such dried powder generated about 100 litres of hydrogen and 330 g of biochar. So, what else was weird? The biomass was dried banana skin! Ecuador, sit up and take notice. But before you do, note that xenon flash lamps are not going to be an exceptionally economical way of providing heat. That is the point: this very expensive source of light was actually merely providing heat.

There are three ways of doing pyrolysis. In the previous post I pointed out that if you took cellulose and eliminated all the oxygen in the form of water, you were left with carbon. If you eliminate the oxygen as carbon monoxide you are left with hydrogen. If you eliminate it as carbon dioxide you get hydrogen and hydrocarbon. In practice what you get depends on how you do it. Slow pyrolysis at moderate heat mainly makes charcoal and water, with some gas. It may come as a surprise to some but ordinary charcoal is not carbon; it is about 1/3 oxygen, some minor bits and pieces such as nitrogen, phosphorus, potassium, and sulphur, and the rest carbon.
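To make the oxygen-elimination argument concrete, here is a minimal sketch that checks the atom balances for two of those idealized routes, using the cellulose monomer C6H10O5. These are limiting cases of my own construction, not product slates from any actual experiment; note that in the carbon monoxide route the ideal balance still leaves some carbon behind.

```python
# Atom-balance check for idealized pyrolysis routes of the
# cellulose monomer (C6H10O5). Species are {element: count} dicts.

def total(side):
    """Sum atom counts over a list of (coefficient, species) pairs."""
    out = {}
    for coeff, species in side:
        for el, n in species.items():
            out[el] = out.get(el, 0) + coeff * n
    return out

CELLULOSE = {"C": 6, "H": 10, "O": 5}
H2O = {"H": 2, "O": 1}
CO = {"C": 1, "O": 1}
H2 = {"H": 2}
C = {"C": 1}

# Route 1: oxygen out as water -> charcoal (idealized as pure carbon)
route1 = ([(1, CELLULOSE)], [(6, C), (5, H2O)])
# Route 2: oxygen out as carbon monoxide -> hydrogen, plus leftover carbon
route2 = ([(1, CELLULOSE)], [(5, CO), (5, H2), (1, C)])

for lhs, rhs in (route1, route2):
    assert total(lhs) == total(rhs)  # both routes conserve atoms
```

Real charcoal, as noted above, retains a good deal of oxygen, so route 1 is very much an idealization.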

If you do very fast pyrolysis, called ablative pyrolysis, you can get almost all liquids and gas. I once saw this done in a lab in Colorado where a tautly held (like a hacksaw blade) electrically heated hot wire cut through wood like butter, the wire continually moving so the uncondensed liquids (which most would call smoke) and gas were swept out. There was essentially no sign of “burnt wood”, and no black. The basic idea of ablative pyrolysis is you fire wood dust or small chips at a plate at an appropriate angle to the path so the wood sweeps across it and the gas is swept away by the gas stream (which can be recycled gas) propelling the wood. Now the paper I referenced above claimed much faster pyrolysis, but got much more charcoal. The question is, why? The simple answer, in my opinion, is nothing was sweeping the product away so it hung around and got charred.

The products varied depending on the power from the lamp, which depended on the applied voltage. At what I assume was maximum voltage, the major products were (apart from carbon) hydrogen and carbon monoxide. About 100 litres of hydrogen and a bit more carbon monoxide were formed, which is a good synthesis gas mix. There were also 10 litres of methane, and about 40 litres of carbon dioxide that would have to be scrubbed out. The biomass had to be reduced to 20 μm particle size and spread on a surface as a layer 50 μm thick. My personal view is that it is near impossible to scale this up to useful sizes. It uses light as an energy source, which is difficult to generate, so almost certainly the process is a net energy consumer. In short, this so-called “breakthrough” could have been carried out, with better yields of whatever was required, far more cheaply by people a hundred years ago.
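A back-of-envelope conversion of those volumes to masses puts the yields in perspective. This assumes an ideal-gas molar volume of 22.4 L/mol (0 °C, 1 atm); the paper may report volumes at other conditions, so treat these as order-of-magnitude figures only.

```python
# Rough mass yields per kg of dried biomass from the reported gas
# volumes, assuming ideal-gas molar volume at 0 degC and 1 atm.

MOLAR_VOLUME = 22.4  # L/mol, ideal gas at STP (assumption)

def grams(volume_litres, molar_mass):
    """Convert a gas volume in litres to a mass in grams."""
    return volume_litres / MOLAR_VOLUME * molar_mass

h2_g  = grams(100, 2.016)  # ~9 g of hydrogen
ch4_g = grams(10, 16.04)   # ~7 g of methane
co2_g = grams(40, 44.01)   # ~79 g of carbon dioxide to scrub out

print(round(h2_g, 1), round(ch4_g, 1), round(co2_g, 1))
```

So roughly 9 g of hydrogen per kilogram of feedstock, i.e. under 1% by mass, which rather underlines why an expensive light source is a poor way to drive this.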

Perhaps the idea of using light, however, is not so retrograde. The trick would be to devise apparatus that will pyrolyse wood ablatively (or not, if you want charcoal) using light focused by large mirrors. The source, the sun, is free until it hits the mirrors. Most of us will have ignited paper with a magnifying glass. Keep the oxygen out and just maybe you have something that will make chemical intermediates you can call “green”.

Biofuels to Power Transport

No sooner do I post something than someone says something to contradict the post. In this case, immediately after the last post, an airline came out and said it would be zero carbon by some time in the not-too-distant future. They talked about, amongst other things, hydrogen. There is no doubt hydrogen could power an aircraft, as it also powers rockets that go into space. That is liquid hydrogen, and once the craft takes off, it burns for a matter of minutes. I still think it would be very risky for aircraft to try to hold the pressures that could be generated for hours. If you do contain it, the extra weight and volume occupied would make such travel extremely expensive, while sitting above a tank of hydrogen is risky.

Hydrocarbons make by far the best aircraft fuel, and one alternative source of them is biomass. I should caution that I have been working in this area of scientific research on and off for decades (more off than on, because of the need to earn money). With that caveat, I ask you to consider the following:

C6H12O6 -> 2 CO2 + 2 H2O + “C4H8”

That is idealized, but the message is that a molecule of glucose (from cellulose plus water) can give two molecules each of CO2 and water, plus two thirds of the starting carbon as a hydrocarbon, which would be useful as a fuel. If you were to add enough hydrogen to convert the CO2 to fuel as well, you get more fuel. Actually, you do not need much hydrogen, because we usually get quite a few aromatics: if we took two “C4H8” units and made xylene or ethylbenzene (both products made in simple liquefactions), which total C8H10, we would have a surplus of three H2 molecules. The point here is that in each of these cases we could imagine the energy coming from solar, but if you use biomass, much of the energy is collected for you by nature. Of course, if you take the oxygen out as water you are left with carbon. In practice there are a lot of options, and what you get tends to depend on how you do it. Biomass also contains lignin, which is a phenolic material. This is much richer in hydrocarbon material, but it is also much harder to remove the oxygen from it.
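The arithmetic above can be sanity-checked mechanically. This small sketch verifies that the idealized glucose equation balances and that two “C4H8” units do indeed leave three H2 molecules over when combined into C8H10 (xylene or ethylbenzene).

```python
# Element-balance check for the idealized liquefaction equation
# and the aromatic hydrogen-surplus arithmetic above.

def total(side):
    """Sum atom counts over a list of (coefficient, species) pairs."""
    out = {}
    for coeff, species in side:
        for el, n in species.items():
            out[el] = out.get(el, 0) + coeff * n
    return out

GLUCOSE = {"C": 6, "H": 12, "O": 6}
CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}
C4H8 = {"C": 4, "H": 8}
C8H10 = {"C": 8, "H": 10}  # xylene or ethylbenzene
H2 = {"H": 2}

# C6H12O6 -> 2 CO2 + 2 H2O + "C4H8"
assert total([(1, GLUCOSE)]) == total([(2, CO2), (2, H2O), (1, C4H8)])

# 2 "C4H8" -> C8H10 + 3 H2: the three-molecule hydrogen surplus
assert total([(2, C4H8)]) == total([(1, C8H10), (3, H2)])
```

Four of the six glucose carbons end up in “C4H8”, which is the two-thirds figure quoted above.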

In my opinion, there are four basic approaches to making hydrocarbon fuels from biomass. The first, which everyone refers to, is pyrolysis. You heat the biomass, you get a lot of charcoal, but you also get liquids. These still tend to have a lot of oxygen in them, and I do not approve of this because the yields of anything useful are too low unless you want to make charcoal, or carbon, say for metal refining, steel making, electrodes for batteries, etc. There is an exception to that statement, but that needs a further post.

The second is to gasify the biomass, preferably by forcing oxygen into it and partially burning it. This gives you what chemists call synthesis gas, and you can make fuels from that through a further step called the Fischer-Tropsch process. Germany used that during the war, as did Sasol in South Africa, but in both cases coal was the source of carbon. Biomass would work, and in the 1970s Union Carbide built such a gasifier, but that came to nothing when the oil price collapsed.

The third is high-pressure hydrogenation. The biomass is slurried in oil and heated to something over 400 degrees Centigrade in the presence of a nickel catalyst and hydrogen. A good quality oil is obtained, and in the 1980s there was a proposal to use the refuse of the town of Worcester, Mass. to operate a 50 t/d plant. Again, this came to nothing when the price of oil slumped.

The fourth is hydrothermal liquefaction. Again, what you get depends on what you put in, but basically there are two main fractions from woody biomass: hydrocarbons and phenolics. The phenolics (which include aromatic ethers) need to be hydrogenated, but the hydrocarbons are directly usable after distillation. The petrol fraction is high octane, and the heavier hydrocarbons qualify as very high-quality jet fuel. If you use microalgae or animal residues, you also end up with a high-cetane diesel cut, and nitrogenous chemicals. Of particular interest from the point of view of jet fuel: in New Zealand they once planted Pinus radiata that grew very quickly and had up to 15% terpene content, most of which would make excellent jet fuel, but to improve the quality of the wood, they bred the terpenes more or less out of the trees.

The point of this is that growing biomass could help remove carbon dioxide from the atmosphere and make the fuels needed to keep a realistic number of heritage cars on the road and power long-distance air transport, while being carbon neutral. This needs plenty of engineering development, but in the long run it may be a lot cheaper than just throwing everything we have away and then finding we can’t replace it because there are shortages of elements.

Molecular Oxygen in a Comet

There is a pressure, these days, on scientists to be productive. That is fair enough – you don’t want them slacking off in a corner, but a problem arises when this leads to the publication of papers: there are so many of them that nobody can keep up with even a small fraction of them. Worse, many of them do not seem to say much. Up to a point, this has an odd benefit: if you leave a lot unclear, all your associates can publish away and cite you, which has the effect of making you seem more important, because funders like to count citations. In short, with obvious exceptions, the less you advance the science, the more important you can seem to second-level funders. I am going to pick, maybe unfairly, on one paper from Nature Astronomy (https://www.nature.com/articles/s41550-022-01614-1) as an illustration.

One of the most unexpected findings in the coma of comet 67P/Churyumov-Gerasimenko was “a large amount” of molecular oxygen. Something to breathe! Potential space pilots should not get excited; “a large amount” is only large with respect to what was expected, which was none. At the time, this was a surprise to astronomers because molecular oxygen is rather reactive and it is difficult to see why it would be present. Now there is a “breakthrough”: it has been concluded there is not that much oxygen in the comet at all, and this oxygen came from a separate small reservoir. The “clue” came from the molecular oxygen being associated with water when emitted from a warm site; as the site got cooler, the oxygen was instead associated with carbon dioxide or carbon monoxide. Now, you may well wonder what sort of clue that is. My question is: given there is oxygen there, what would you expect? The comet is half water, so when the surface gets warm, the water sublimes. When cooler, only gases that sublime at the lower temperature get emitted. What is the puzzle?

However, the authors of the paper came to a different conclusion. They decided that there had to be a deep reservoir of oxygen within the comet, and a second reservoir close to the surface that is made of porous frozen water. According to them, oxygen in the core works its way to the surface and gets trapped in the second reservoir. Note that this is an additional proposition to the obvious one that oxygen was trapped in ice near the surface. We knew there was gas trapped in ice that was released with heat, so why postulate multiple reservoirs, other than to get a paper published?

So, where did this oxygen come from? There are two possibilities. The first is that it was accreted with the gas from the disk when the comet formed. This is somewhat difficult to accept. Ordinary chemistry suggests that if oxygen molecules were present in the interstellar dust cloud they should react with hydrogen and form water. Maybe that conclusion is somehow wrong, but we can find out. We can estimate the probability by observing the numerous dust clouds from which stars accrete. As far as I am aware, nobody has ever found rich amounts of molecular oxygen in them. The usual practice when you are proposing something unusual is to find some sort of supporting evidence. Seemingly, not this time.

The second possibility is that we know how molecular oxygen could be formed at the surface. High energy photons and solar wind smash water molecules in ice to form hydrogen and hydroxyl radicals. The hydrogen escapes to space but the hydroxyl radicals unite to form hydrogen peroxide or other peroxides or superoxides, which can work their way into the ice. There are a number of other solids that catalyse the degradation of peroxides and superoxides back to oxygen, which would be trapped in the ice, but released when the ice sublimed. So, from the chemist’s point of view there is a fairly ordinary explanation why oxygen might be formed and gather near the surface. From my point of view, Occam’s Razor should apply: you use the simplest explanation unless there is good evidence. I do not see any evidence about the interior of the comet.

Does it matter? From my point of view, when someone with some sort of authority/standing says something like this, there is the danger that the next paper will say “X established that . . .” and it becomes almost gospel. This is especially so when the assertion cannot easily be challenged with evidence, as you cannot get inside that comet. Which gives the perverse realization that you need strong evidence to challenge an assertion, but maybe no evidence at all to assert it in the first place. Weird?