Phosphine on Venus

An article published in Nature Astronomy on 14th September, 2020, reported the detection of a signal corresponding to the 1 – 0 rotational transition of phosphine, which has a wavelength of 1.123 mm. This was a very weak signal that had to be extracted by mathematical processing to remove artefacts such as spectral “ripple” that originate from reflections. Nevertheless, the processed data are strongly suggestive that the line is real. Therefore they found phosphine, right? And since phosphine is made by anaerobes and is a component of marsh gas, they found life, right? Er, hold on. Let us consider this in more detail.

First, is the signal real? The analysis also detected the HDO signal at 1.126 mm, which is known to be the 2 – 3 rotational transition. That strongly suggests their equipment and analysis were working properly for that species, so this additional signal is likely to be real. The level of phosphine has been estimated at between 10 and 30 ppb. However, there is a complication, because such spectral signals come from changes to the spin rate of molecules. Molecules can only spin at certain quantised energies, but there are a number of options; the phosphine signal was assigned to the transition from the first excited rotational state to the ground state. There are a very large number of possible states, and higher states are more common at higher temperatures. The Venusian atmosphere ranges from about 30 °C near the top to somewhere approaching 500 °C at the bottom. Also, collisions will change spin rates. Most of our data come from atmospheric pressure or lower, as doing microwave experiments in high-pressure vessels is not easy. The position of the lines depends on the moment of inertia, so different molecules have different energy levels, and there are different ways of spinning, tumbling, etc., for complicated molecules. Thus it is possible that the signal could be due to something else. However, the authors examined all the alternatives they could think of and only phosphine remained.
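To get a feel for how temperature shifts the population of rotational states, here is a minimal sketch. It treats phosphine as a simple rigid rotor, which is a simplification (PH3 is really a symmetric top), derives the rotational constant from the 1.123 mm wavelength quoted above, and uses purely illustrative temperatures.

```python
# Relative Boltzmann populations of rigid-rotor levels for a PH3-like molecule.
# Simplification: PH3 is actually a symmetric top; this is illustrative only.
import numpy as np

c = 2.998e8           # speed of light, m/s
k = 1.381e-23         # Boltzmann constant, J/K
h = 6.626e-34         # Planck constant, J s

nu_10 = c / 1.123e-3  # 1 - 0 transition frequency from the quoted wavelength, ~267 GHz
B = nu_10 / 2         # rigid-rotor rotational constant, Hz (nu(1-0) = 2B)

def populations(T, J_max=60):
    """Relative populations of rotational levels J at temperature T (kelvin)."""
    J = np.arange(J_max + 1)
    E = h * B * J * (J + 1)                 # rigid-rotor energy of level J
    w = (2 * J + 1) * np.exp(-E / (k * T))  # degeneracy times Boltzmann factor
    return w / w.sum()

for T in (250.0, 300.0, 750.0):  # illustrative temperatures, K
    p = populations(T)
    print(f"T = {T:5.0f} K: fraction in J=0 = {p[0]:.3f}, most populated J = {p.argmax()}")
```

The point is simply that at higher temperatures the molecules spread over many more rotational states, so which lines are strong depends on where in the atmosphere the molecules are.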

This paper rejected sulphur dioxide as a possibility because in the Venusian atmosphere it gets oxidised to sulphuric acid, so there is not enough of it, but phosphine is actually far more easily oxidised. If we look at our own atmosphere, there are actually a number of odd-looking molecules caused by photochemistry. The Venusian atmosphere will also have photochemistry, but since its composition is so different from ours we cannot guess what that produces at present. For my part, I think there is a good chance this signal comes from a molecule generated photochemically. The reason is the signal is strongest at the equator and fades away at the poles, where the light intensity per unit area is lower. Note that if it were phosphine generated by life and removed photochemically, you would get the opposite result.

Phosphine is a rather reactive material, and according to the Nature article, models predict its lifetime at 80 km altitude to be less than a thousand seconds due to photodegradation. The authors argue its lifetime should be longer lower down because the UV light intensity is weaker, but they overlook chemical reactions. Amongst other things, concentrated sulphuric acid would react with it effectively instantaneously to make a phosphonium salt, and while the phosphine is not initially destroyed, its ability to produce this signal is.

Why does this suggest life? Calculations with some fairly generous lifetimes suggest a minimum of about a million molecules have to be made every second on every square centimetre of the planet. There is no known chemistry that can do that. Thus life is proposed on the basis of, “What else could it be?”, which is a potential logical fallacy in the making, namely arguing from ignorance. On Earth, anaerobes make phosphine and it comes out in “marsh gas”, where it promptly reacts with oxygen in the air. This is actually rather rare, and is almost certainly an accident caused by phosphate being in the wrong place in the enzyme system. I have been around many swamps and never smelt phosphine. What anaerobes do is take oxidised material and reduce it, taking energy and some carbon and oxygen, and spit out highly reduced compounds, such as methane, as waste. They can take sulphates and make hydrogen sulphide, and there is a rather low probability they will make phosphine from phosphates. The problem I have is that the Venusian atmosphere is full of concentrated sulphuric acid clouds, and enzymes would not work, or last, in that environment. If the life forms were above the sulphuric acid clouds, they would also be above the phosphoric acid, so where would they get their phosphorus? Further, all life needs phosphate: it is the only functional group that meets the requirements for linking reproductive entities (two bonds to link the polymer, and one to provide the ionic group that solubilizes the whole and lets the strands separate while reproducing); it is the basis of adenosine triphosphate, the energy transfer agent for life; and the adenosine phosphates are essential solubilizing agents for many enzyme cofactors. In short, no phosphate, no life. Phosphate occurs in rocks, so it will be very scarce in the atmosphere, so why would life waste what little was there to make phosphine?

To summarize, I have no idea what caused this signal and I don’t think anyone else has either. I think there is a lot of chemistry associated with the Venusian atmosphere we do not understand, but I think this will be resolved sooner or later, as it has got so much attention.

Science is No Better than its Practitioners

Perhaps I am getting grumpy as I age, but I feel that much in science is not right. One problem lies in the fallacy ad verecundiam, the fallacy of resorting to authority. As the motto of the Royal Society puts it, nullius in verba: take nobody’s word for it. Now, nobody expects you to personally check everything, and if someone has measured something and either clearly shows how he/she did it, or it is something that is done reasonably often, then you take their word for it. Thus if I want to know the melting point of benzoic acid, I look it up, and know that if the reported value were wrong, someone would have noticed. However, a different problem arises with theory, because you cannot measure a theory. Further, science has got so complicated that any expert is usually an expert in a very narrow field. The net result is that most scientists find theories too difficult to examine in detail and defer to experts. In physics, this tends to be because the theory descends into obscure mathematics and, worse, the proponents seem to believe that mathematics IS the basis of nature. That means there is no need to think of causes. There is another problem, which also drifts over to chemistry: the assumption that the results of a computer-driven calculation must be right. True, there will be no arithmetical mistake, but as was driven into our heads in my early computer lectures: garbage in, garbage out.

This post was sparked by an answer I gave to a chemistry question on Quora. Chemical bonds are usually formed by taking two atoms, each with a single electron in an orbital. Think of an orbital as a wave that can hold only one or two electrons. The reason it can hold only two electrons is the Pauli Exclusion Principle, which is a very fundamental principle in physics. If each atom has only one electron in such an orbital, the orbitals can combine and form a wave with two electrons, and that binds the two atoms. Yes, oversimplified. So the question was, how does phosphorus pentafluoride form? The fluorine atoms have one such unpaired electron each, and the phosphorus has three, plus a pair in one wave. Accordingly, you expect phosphorus to form a trifluoride, which it does, but how come the pentafluoride? Without going into too many details, my answer was that the paired electrons are unpaired, one is put into another wave, and to make this legitimate, an extra node is placed in the second wave, a process called hybridization. This has been a fairly standard answer in textbooks.

So, what happened next? I posted that, and also shared it to something called “The Chemistry Space”. A moderator there rejected it, and said he did so because he did not believe it: computer calculations showed there was no extra node. Eh?? So I replied and asked how this computation got around the Exclusion Principle, then, to be additionally annoying, I asked how the computation set the constants of integration. If you look at John Pople’s Nobel lecture, you will see he set these constants for hydrocarbons by optimizing the results for 250 different hydrocarbons. Leaving aside that this simply degenerates into a glorified empirical procedure, for phosphorus pentafluoride there is only one relevant compound. Needless to say, I received no answer, but I find this annoying. Sure, this issue is somewhat trivial, but it highlights the greater problem that some scientists are perfectly happy to hide behind obscure mathematics, or even more obscure computer programming.

It is interesting to consider what a theory should do. First, it should be consistent with what we see. Second, it should encompass as many different types of observation as possible. To show what I mean: in the phosphorus pentafluoride example, the method I described can be transferred to the structures of other molecules. That does not make it right, but at least it is not obviously wrong. The problem with a computation is that, unless you know the details of how it was carried out, it cannot be applied elsewhere. Interestingly, I saw a recent comment in a publication by the Royal Society of Chemistry that computations from a couple of decades ago cannot be checked or used because the details of the code are lost. Oops. A third requirement, in my opinion, is that it should assist in understanding what we see, and even lead to a prediction of something new.

Fundamental theories cannot be deduced; the principles have to come from nature. Thus mathematics describes what we see in quantum mechanics, but you could find an alternative mathematical description for anything else nature decided to do; classical mechanics, for example, is also fully self-consistent. For relativity, velocities are either additive or they are not, and you can find mathematics either way. The problem then is that if someone draws a wrong premise early, mathematics can be made to fit a lot of other material to it. A major discovery and change of paradigm only occurs if a major fault is discovered that cannot be papered over.

So, to finish this post in a slightly different way to usual: a challenge. I once wrote a novel, Athene’s Prophecy, in which the main character, living in the first century, was asked by the “Goddess” Athene to prove that the Earth went around the sun. Can you do it, with what could reasonably be seen at the time? The details had already been worked out by Aristarchus of Samos, who also worked out the size and distance of the Moon and Sun, and the huge distances are a possible clue. (Thanks to the limits of his equipment, Aristarchus’ measurements were erroneous, but good enough to show the huge distances.) So there was already a theory that showed it might work. The problem was that the alternative also worked, as shown by Claudius Ptolemy. So you have to show why one is the true one.

Problems you might encounter are as follows. Aristotle had “shown” that the Earth cannot rotate. The argument was that if you threw a ball into the air so that when it reached the top of its flight it was directly above you, then when the ball fell to the ground it would land to the east of you. He did the experiment, the ball was not displaced, so he concluded the Earth does not rotate. (Can you see what is wrong? Hint: the argument implies the conservation of angular momentum, and that is correct.) Further, orbital motion involves falling, and since it was believed that heavier things fall faster than light things, if the Earth went around the sun it would fall to pieces. Comets may well fall around the Sun. Another point was that since air rises, the cosmos must be full of air, and if the Earth went around the Sun, there would be a continual easterly wind.

So part of the problem in overturning any theory is first to find out what is wrong with the existing one. Then, to assert you are correct, your theory has to do something the other theory cannot do, or you must show the other theory contains something that falsifies it. The point of this challenge is to show by example just how difficult forming a scientific theory actually is; it only looks easy once you have heard the answer.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was this robot that could turn into a liquid then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal that, with sufficient force, can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 °C, a temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed, the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.
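Purely as a sketch of the cycle just described, here is a toy state model. Everything in it apart from the 62 °C melting point is my own illustration, not anything from Zhang’s work, and the real physics is of course continuous rather than two-state.

```python
# Toy state model of the composite described above. Illustrative only:
# the real physics involves stored elastic energy and gradual melting;
# this just captures the crush / warm-past-62-C / restore cycle.
MELT_POINT_C = 62.0   # melting point of Field's alloy

class ShapeMemoryComposite:
    def __init__(self):
        self.deformed = False          # is the alloy locked in a crushed shape?

    def crush(self):
        # Alloy pieces deform and hold the new shape; the silicone shell
        # stretches and stores elastic energy, like a rubber band.
        self.deformed = True

    def warm_to(self, temp_c):
        # Above the melting point the alloy flows, so the stretched silicone
        # pulls everything back; on cooling the alloy recrystallises.
        if self.deformed and temp_c >= MELT_POINT_C:
            self.deformed = False

    @property
    def shape(self):
        return "crushed" if self.deformed else "original"

part = ShapeMemoryComposite()
part.crush()
print(part.shape)      # crushed
part.warm_to(40.0)
print(part.shape)      # still crushed: below the melting point
part.warm_to(70.0)
print(part.shape)      # original: alloy melted and the silicone restored it
```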

According to Physics World, Zhang and his colleagues made several demonstration structures, such as a honeycomb, a spider’s web-like structure and a hand; these were all crushed, and when warmed they sprang back to life in their original form. At first sight this might seem designed to put panel beaters out of business. You have a minor prang, but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but those are single use: a spacecraft with such landing cushions can land once, whereas cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start, there are interpretive issues to sort out. The first issue is that either there is a physical wave or there is not. While there is no definitive answer to this, I argue there is, because something has to go through both slits in the two-slit experiment, and the evidence is that the particle always goes through only one slit. That means something else should be there.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means a value that includes the square root of minus one, sometimes called an imaginary number for obvious reasons.) However, Euler, the mathematician who really invented complex numbers, showed that if the phase evolves, as waves always do by definition of an oscillation and as action does with time, there are two points that are real, and these are at the antinodes of the wave. That means the amplitudes of the quantal matter wave should have real values there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm’s quantum potential, except that, unlike Bohm’s, it has a clearly defined value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, which is a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory would say that A^2 = 1 per particle, that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle’s motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular, the chemical bond. The usual way of calculating this involves calculating the probabilities of where all the electrons will be, then calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, and so on; and throughout this procedure, getting the positional probabilities requires double integrations because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to being between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero, otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added to the antinode. For a hydrogen molecule, the electrons are indistinguishable, so the energies of the two new interactions are equivalent to the energies of the two from the linear addition, and therefore the bond energy of H2 is 1/3 the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and introducing electric field renormalisation to comply with Maxwell’s equations. Needless to say, this did not generate much excitement in the conference audience. You cannot dazzle with obscure mathematics doing this.
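That 1/3-Rydberg figure is easy to check numerically. A minimal sketch, assuming the comparison value is the standard tabulated H–H bond energy of about 436 kJ/mol:

```python
# Quick numerical check of the 1/3-Rydberg claim for the H2 bond energy.
# Assumption: the comparison is with the standard thermochemical H-H bond
# energy of about 436 kJ/mol.
RYDBERG_EV = 13.6057        # Rydberg energy, eV
EV_TO_KJ_PER_MOL = 96.485   # 1 eV per molecule = 96.485 kJ/mol

predicted = RYDBERG_EV / 3 * EV_TO_KJ_PER_MOL   # guidance-wave prediction
experimental = 436.0        # tabulated H-H bond energy, kJ/mol (assumed comparison value)

print(f"1/3 Rydberg  = {predicted:.1f} kJ/mol")
print(f"experimental = {experimental:.1f} kJ/mol")
print(f"difference   = {100 * (predicted / experimental - 1):.2f} %")
# ~437.6 vs 436 kJ/mol, i.e. out by roughly 0.3-0.4%, consistent with the text
```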

The first shock for the audience came when I introduced my Certainty Principle, which, roughly stated, is that the action of a stationary state must be quantized, and further, because of the force relationship I introduced above, the action must be that of the sum of the participating atoms. Further, the periodic time of the initial electron interactions must be constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extra interactions, as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron–electron repulsion compensates. This action relationship is also the first time a real cause has been offered for the bond radius being constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect. The waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.

The next piece of “shock” comes with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation. However, if you do that, you find that the calculated energies are too small. By caesium, the calculated energy is out by an order of magnitude. The standard answer: the electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be, because the energy is determined at the major antinode. (If you accept Maxwell’s equations, I argue it cannot be either, but that is too complicated to put here.) So my answer is that the wave is actually a sum of component waves that have different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers, and I produced tables that show the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory argues that heavy elements like gold have unusual properties because these “penetrations” lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that the properties of elements like francium, gold, thallium, and lead fitted my functions as well as or better than those of any of the other elements.

What I did not tell them was that the tables of data I showed them had been published in a peer-reviewed journal over thirty years before, but nobody took any notice. As someone said to me, when I mentioned this as an aside, “If it isn’t published in the Physical Review or J. Chem. Phys., nobody reads it.” So much for publishing in academic journals. As to why I skipped those two, I published that while I was starting my company, and $US1,000 per page did not strike me as a good investment.

So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid-January, as this is the summer holiday season here.

The Sociodynamics of Science

The title is a bit of an exaggeration as to the importance of this post; nevertheless, since I was at what was probably my last scientific conference (NZ Institute of Chemistry, at Christchurch), I could not resist looking around at behaviour as well as the science. I also gave two presentations. Speaking to an audience gives the speaker an opportunity to order the presentation so as to give the most force to its surprising parts, not that many took advantage of this. Overall, very few, if any (apart from yours truly), seemed to want to provide their audience with something that might be uncomfortable for their preconceived notions.

First, the general part provided great support for Thomas Kuhn’s analysis. Most of the invited and keynote speakers illustrated an interesting question: why are they speaking? Very few actually wished to educate or convince anyone of anything in particular, and personally, I found the few who did to be by far the most interesting. Most of the presentations from academics could be summarised as, “I have a huge number of research students and here is what they have done.” What then followed was a very large amount of results, but there was seldom an interesting unifying principle. Chemistry tends to be susceptible to this, as a very common student research program is to try to make a variety of related compounds. This may well be very useful, but if we do not see why the approach was taken, it tends to feel like filling up some compendium of compounds or, as Rutherford put it rather acidly, “stamp collecting”. These types of talks are characterised by the speaker trying to get in as many compounds as they can, so they keep talking and use up the allocated question time. I suspect that one of the purposes of these presentations is to say, “Look at what we have done. This has given our graduate students a good number of scientific publications, so if you are thinking of being a grad student, why not come here?” I can readily understand that line of thinking, but its relevance for older scientists is questionable. There were a few presentations where the output would be of more general interest, though. I found the odd presentation that showed how to do something new, with quite wide potential applications, to be of particular interest.

Now to the personal. My first presentation was a summary of my biogenesis approach. It may have had too much information across too wide a field, but the interesting point was that it generated a discussion at the end relating to my concept of how homochirality was generated. My argument is that reproduction depends on it because the geometry prevents the formation of a second strand if the first strand is not entirely left-handed or entirely right-handed in its pitch. So the issue then was that it was pure chance that D-ribose-containing helices predominated, in part because the chance of getting a long-enough homochiral strand is very remote, and when one arises, it takes up all the resources and predominates. The legitimate question then is, why doesn’t the other-handed helix eventually arise? It may be slower to do so, but it is not necessarily impossible. My partial answer is that the mer units are also used to bind to some other units important for life, to give them solubility, and the wrong sort gets used up and does not build up concentration. Maybe that is so, but there is no evidence.

It was my second presentation that would be controversial, and it was interesting to watch the expressions. Part of the problem for me was that it was the last such presentation (there were some closing speakers after me, after morning tea), and there is something about the end of a conference: everyone is busy thinking about how to get to the airport, etc., so they tend to lose concentration. My first slide put up three propositions: the wave functions everyone uses for atomic orbitals are wrong; because of that, the calculation of the chemical bond requires the use of a hitherto unrecognised quantum effect (which is a very specific expression involving only universally recognised quantum numbers); and finally, the commonly held belief that relativistic effects on the inner electrons have a major effect on the valence electrons of the heaviest elements is wrong.

As you might expect, this was greeted initially with yawns and disinterest: this was going to be wrong. At least that seemed to be written over their faces. I then diverted to explain my guidance wave interpretation, which is essentially the de Broglie pilot wave concept, but with two additions: an application of Euler’s complex number theory that everyone seems to have missed, and secondly, the argument that if the wave really causes diffraction in the two-slit-type experiment, it has to travel at the same speed as the particle. These two points lead to serious simplifications in the calculation of the properties of chemical bonds. The next step was to put up a lot of evidence for the different wave functions, with about 70 data points spanning a selection of atoms, of which about twenty supported the absence of any significant relativistic effect. (This does not say relativity is wrong, but merely that its effects on valence electrons are too small to be noticed at this level of analysis.) What this was effectively saying was that most of the current calculations only give agreement with observation when liberal use is made of assignable constants, which conveniently can be adjusted so you get the “right” answer.

So, question time. One question surprised me: does my new approach do anything new? I argued that the facts that everyone is using the wrong wave functions, that there is a quantum effect nobody has recognised, and that everyone is wrong about those relativistic effects could be considered new. Yes, but have you got a prediction? This was someone difficult to satisfy. Well, if you have access to a good physics lab, I suggested, here is where you can show that, assuming my theory is correct, if you make an adjustment to the delayed choice quantum eraser experiment (and I outlined the simple change) then you will reach the opposite conclusion. If you don’t agree with me, then you should do the experiment to prove I am wrong. The stunned expressions were worth the cost of going to the conference. Not that anyone will do the experiment. That would show interest in finding the truth, and in fairness, it is more a job for a physicist.

Recycling Lithium Ion Batteries

One of the biggest contributors to greenhouse warming is transport, and the solution usually advocated is to switch to electric vehicles, as they do not release CO2, with the lithium ion battery as the usual option. A problem that I highlighted in a previous blog is that we don’t have enough cobalt, and we run out of a lot of other things if we do not recycle. A recent review in Nature (https://doi.org/10.1038/s41586-019-1682-5) covered recycling, and the following depends on that review. The number of vehicles in the world is estimated to reach 2 billion by 2035, and if all were powered by lithium ion batteries, the total pack waste would be 500 million tonnes and would occupy a billion cubic metres. Since the batteries last about nine years, we eventually get drowned in dead batteries unless we recycle. Also, dead lithium ion batteries are a fire hazard.
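Those two figures are consistent with simple arithmetic. A back-of-envelope sketch (the per-pack mass and packing density below are my own assumptions, not numbers from the review):

```python
# Rough check of the review's waste figures. Illustrative assumptions only:
# the pack mass and bulk density are inferred, not quoted in the review.
vehicles = 2e9            # projected world fleet by 2035 (from the review)
pack_mass_kg = 250        # assumed mass of one lithium ion battery pack
density_kg_m3 = 500       # assumed bulk packing density of dead packs

total_tonnes = vehicles * pack_mass_kg / 1000
total_volume_m3 = vehicles * pack_mass_kg / density_kg_m3

print(f"total pack waste ~ {total_tonnes:.2e} tonnes")  # ~5e8 t, i.e. 500 million tonnes
print(f"occupying       ~ {total_volume_m3:.2e} m^3")   # ~1e9 cubic metres
```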

There are two initial approaches, assuming we get the batteries cleanly out of the vehicle. One is to crush the whole thing and burn off the graphite, plastics, electrolyte, etc., which gives an alloy of Co, Cu, Fe and Ni, together with a slag that contains aluminium and manganese oxides, and some lithium carbonate. This loses over half the mass of the batteries and contributes to more greenhouse warming, which was what we were trying to avoid. Much of the lithium is often lost this way too, and finally, we generate a certain amount of hydrogen fluoride, a very toxic gas. The problem then is to find a use for an alloy of unknown composition. Alternatively, the alloy can be treated with chlorine, or acid, to dissolve it and obtain the salts of the elements.

The alternative is to disassemble the batteries, in which case some remaining electricity can be salvaged. It is imperative to avoid short-circuiting the pack, to prevent thermal runaway, which produces hydrofluoric acid and carcinogenic materials, while fire is a continual hazard. A further complication is that total discharge is not desirable, because copper can dissolve into the electrolyte, contaminating the materials that could be recycled. There is a further problem that bedevils recycling and arises from free market economics: different manufacturers offer different batteries with different physical configurations, cell types and even different chemistries. Some cells have planar electrodes, others are tightly coiled, and there are about five basic chemistries in use. All have lithium, but additionally: cobalt oxide, iron phosphate, manganese oxide, nickel/cobalt/aluminium oxide, and then there are a variety of cell manufacturers that use oxides of nickel/manganese/cobalt in various mixes.

Disassembly starts with removing the wiring, bus bars, and miscellaneous external electronics without short-circuiting the battery, and this gets you to the modules. These may have sealants that are difficult to remove, and then you may find the cells inside stuck together with adhesive, the components may be soldered, and we cannot guarantee zero charge. Then if you get to the cell, clean separation of the cathode, anode, and electrolyte may be difficult; we might encounter nanoparticles, which pose a real health risk; the electrolyte may generate hydrogen fluoride; and the actual chemistry of the cell may be unclear. The metals in principle available for recycling are cobalt, nickel, lithium, manganese and aluminium, and there is also graphite.

Suppose we try to automate? Automation requires a precisely structured environment, in which the robot makes a pre-programmed repetitive action. In principle, machine sorting would be possible if the batteries had some sort of label that specified precisely what each was. Reading it and directing the pack to a suitable processing stream would be simple, but as yet there are no such labels, which, perforce, must be readable at end of life. It would help recycling if there were some standardised designs, but good luck trying to get that in a market economy. If you opt for manual disassembly, it is very labour-intensive and not a particularly healthy occupation.
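To show how trivial the sorting step would be if such labels existed, here is a sketch. No such end-of-life labels exist yet; the label format and stream names are invented for illustration.

```python
# Hypothetical routing of labelled battery packs to processing streams.
# The label schema ("CHEMISTRY;form;year") and stream names are invented.
CHEMISTRY_STREAMS = {
    "LCO": "cobalt-oxide line",
    "LFP": "iron-phosphate line",
    "LMO": "manganese-oxide line",
    "NCA": "nickel/cobalt/aluminium line",
    "NMC": "nickel/manganese/cobalt line",
}

def route(label: str) -> str:
    """Parse a hypothetical label like 'NMC;coiled;2024' and pick a stream."""
    chemistry = label.split(";")[0].strip().upper()
    try:
        return CHEMISTRY_STREAMS[chemistry]
    except KeyError:
        return "manual inspection"   # unknown chemistry: a human has to look

print(route("NMC;coiled;2024"))      # -> nickel/manganese/cobalt line
print(route("mystery-pack"))         # -> manual inspection
```

The hard part, of course, is not the software but getting manufacturers to agree on, and fit, a label that survives nine years of service.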

If the various parts can be separated, metal recovery can be carried out chemically, usually by treating the parts with sulphuric acid and hydrogen peroxide. The next task is to separate the metals, and how you go about that depends on what you think the mixture is. Essentially, you wish to precipitate one material and leave the others, or maybe precipitate two. Perhaps easier is to try to reform the most complex cathode by taking a mix of Ni, Mn, and Co that has been recovered as hydroxides, analysing it, making up what is deficient with new material, then heat treating to make the desired cathode material. This assumes you have physically separated the anodes and cathodes beforehand.

If the cathodes and anodes have been recovered, in principle they can be directly recycled to make new anodes and cathodes; however, the old chemistry is retained. Cathode strips are soaked in N-methyl-2-pyrrolidone (NMP), then ultrasonicated to make the powder used to reformulate a cathode. Here, it is important that only one cell type is used, and it means new improved versions are not made. This works best when the state of the battery before recycling was good; direct recycling is less likely to work for batteries that are old and of unknown provenance. NMP is a rather expensive solvent and somewhat toxic. Direct recycling is the most complicated process.

The real problem is costs. As we reduce the cobalt content, we reduce the value of the metals. Direct recycling may seem good, but if it results in an inferior product, who will buy it? Every step in a process incurs costs and produces its own waste stream, which has to be dealt with and usually includes a high level of greenhouse gases. If we accept the Nature review, 2% of the world’s cars would eventually represent a stream of waste that would encircle the planet, so we have to do something, but the value of the metals in a lithium ion battery is less than 10% of the cost of the battery, and with all the toxic components, the environmental cost of such electric vehicles is far greater than people think. The problem with recycling is that it usually makes products of inferior quality because of the cost of separating out all the “foreign” material, so economics means that in a market economy, only a modest fraction actually gets recycled.

Where to Find Life? Not Europa

Now that we have found so many exoplanets, we might start to wonder whether they have life. It so happens I am going to give a presentation on this at a conference in about three weeks’ time, hence the temptation to focus attention on the topic. My argument is that whether a place could support life is irrelevant; the question is, could it get started there? For the present, I am not considering panspermia, i.e., that life came from somewhere else, on the grounds that if it did, the necessities for reproduction still had to be present, and if they were, life would probably evolve anyway.

I consider the ability to reproduce to be critical because, from the chemistry point of view, it is the hardest to get right. One critical problem is that reproduction itself is not enough; it is no use using all the resources to make something that merely reproduces a brown sludge. It has to guess right, and the only way to do that is to make lots of guesses, and the only way to do that is to tear to bits whatever was a wrong guess, try again, and re-use the bits. But then, when you get something useful that might eventually work, you have to keep the good bits. So reproduction and evolution have opposite requirements, yet they have to go through the same entity. Reproduction requires the faithful transmission of information; evolution requires the information to change on transmission, though eventually not by much. Keep what is necessary, reject that which is bad. But how?

Information transfer requires a choice of entities attached to some polymer, which can form specific links either with the same entity only (positive reproduction) or with a specific complementary entity (to make a negative copy). To be specific, they have to have a strongly preferred attachment, but to separate them later, the attachment has to be convertible to near zero energy. This can be done with hydrogen bonds, because solvent water can make up the energy during separation. One hydrogen bond is insufficient; there are too many other things that could get in the road. Adenine forms two hydrogen bonds with uracil, guanine three with cytosine, and most importantly, guanine and uracil both have ring N–H bonds while adenine and cytosine have none; the wrong pairing either leads to a steric clash that pushes the bases apart or ends up with only one hydrogen bond, which is not strong enough. Accordingly, we have the condition for reliable information transfer. Further good news is that these bases form themselves from ammonium cyanide, urea and cyanoacetylene, all of which are expected on an earth-like planet from my concept of planetary formation.
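As a toy illustration of that information-transfer condition (just the pairing logic, nothing about the chemistry): complementary pairing makes a negative copy, and copying the copy returns the original.

```python
# Complementary base pairing as pure information transfer.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}   # 2 H-bonds for A-U, 3 for G-C

def complement(strand: str) -> str:
    """Make the negative copy of an RNA-like strand."""
    return "".join(PAIR[base] for base in strand)

original = "AUGGCAU"
negative = complement(original)   # the negative copy
restored = complement(negative)   # a copy of the copy

print(negative)                   # UACCGUA
print(restored == original)       # True: the information came through faithfully
```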

The next problem is to form two polymer strands that can separate in water. First, to link them, something must have two links. For evolution to work, these have to be strong, but breakable under the right conditions. To separate, the strands need a solubilizing agent, which means an ionic bond. In turn, this means three functional valences; phosphate alone can do this. The next task is to link the phosphate to the base that carries the information code. That something must also determine a fixed shape for the strands, and for this nature chose ribose. If we link adenine, ribose and phosphate at the 5-position of ribose we get adenosine monophosphate (AMP); if we do the same for uracil we get uridine monophosphate (UMP). If we put dilute solutions of AMP and UMP into vesicles (made by a long-chain hydrocarbon-based surfactant), let them lie on a flat rock in the sun, and splash them from time to time with water, we end up with what are effectively random “RNA” strands of over eighty units in a few hours. At this point, useful information is unlikely, but we are on the way.

Why ribose? Because the only laboratory synthesis of AMP from just the three constituents involves shining ultraviolet light on the mixture, and to me, this shows why ribose was chosen, even though ribose is one of the least likely sugars to be formed. As I see it, the reason is we have to form a phosphate ester specifically on the 5-hydroxyl. That means there has to be something unique about the 5-hydroxyl of ribose compared with all other sugar hydroxyl groups. To form such an ester, a hydroxyl has to hit the phosphate with an energy equivalent to the vibrations it would have at about 200 °C. Also, if any water were around at that temperature, it would immediately destroy the ester, so black smokers are out. The point about a furanose is that it is a flexible molecule, and when it receives energy (indirectly) from the UV light it will vibrate vigorously, and UV light has energy to spare for this task. Those vibrations will, from the geometry, focus on the 5-hydroxyl. Ribose is the only sugar that has a reasonable fraction present as the furanose; the rest are all in the rigid pyranose form. Now, an interesting point about ribose is that while it is usually present in only microscopic amounts in a non-specific sugar synthesis, it is much more common if the synthesis occurs in the presence of soluble silica/silicic acid. That suggests life actually started at geothermal vents.

Now, back to evolution. RNA has an unusual property amongst polymers: strands, once they reach a certain length and can be bent into a certain configuration, presumably held there with magnesium ions, can catalyse the hydrolysis of other strands. It does that seemingly by first attacking the O2 of ribose, which breaks the polymer by hydrolysing the adjacent phosphate ester. The next interesting point is that if the RNA can form a double helix, the O2 is more protected. DNA is, of course, much better protected because it has no O2. So RNA can build itself, and it can reorganise itself.

If the above is correct, then it places strong restrictions on where life can form. There will be no life in under-ice oceans on Europa (if they exist), for several reasons. First, Europa seemingly has no (or extremely small amounts of) nitrogen or carbon. In the very thin atmosphere of Europa (at lower pressures than most vacuum pumps can reach on Earth), the major gas is the hydroxyl radical, which is made by sunlight acting on ice. It is extremely reactive, which is why there is not much of it. There is 100,000 times less sodium in the atmosphere, and nitrogen was undetected. The next reason is that the formation of the nucleic acid appears to require sunlight, and the ice will stop that. Then, there is no geothermal activity to make the surfactants, and no agitation to convert them into the vesicles needed to contain the condensation products, the ice effectively preventing that; there is no sign of hydrocarbon residues on the surface. Next, phosphates are essentially insoluble in water and would sink to the bottom of an ocean. (The phosphate for life in oceans on Earth tends to come from water washed down through erosion.) Finally, there is no obvious way to make ribose if there is no silicic acid to orient the formation of the sugar.

All of which suggests that life essentially requires an earth-like planet. To get the silicic acid you need geothermal activity, and that may mean you need felsic continents. Can you get silica deposits from volcanism/geothermal activity when the land is solely basalt? I don’t know, but if you cannot, this proposed mechanism makes it somewhat unlikely there was ever life on Mars because there would be no way to form nucleic acids.

The Year of Elements, and a Crisis

This is the International Year of the Periodic Table, and since it is almost over, one can debate how useful it was. I wonder how many readers were aware of this, and how many really understand what the periodic table means. Basically, it is a means of ordering elements with respect to their atomic number in a way that allows you to make predictions of properties. Atomic number counts how many protons and electrons a neutral atom has. The number of electrons and the way they are arranged determines the atom’s chemical properties, and thanks to quantum mechanics, these properties repeat according to a given pattern. So, if it were that obvious, why did it take so long to discover it?

There are two basic reasons. The first is that it took a long time to discover what the elements actually were. John Dalton, who put the concept of atoms on a sound footing, made a list that contained twenty-one, and some of those, like potash, were not elements, although they did contain atoms different from the others, so he inferred a new element was present. The problem is that some elements are difficult to isolate from the compounds they are in, so Dalton, unable to break these down but seeing from their effect on flames that they were different, labelled them as elements. The second reason is that although the electron configurations have common features, and there are repeats in behaviour, they are not exact repeats, and sometimes quite small differences in electron behaviour make very significant differences to chemical properties. The most obvious example involves the very common elements carbon and silicon. Both form dioxides of formula XO2. Carbon dioxide is a gas; you see silicon dioxide as quartz. (Extreme high pressure forces CO2 to adopt a quartz structure, though, so the similarity does emerge when forced.) Both are extremely stable, and silicon does not readily form a monoxide, while carbon monoxide has an anomalous electronic structure. At the other end of the “family”, lead does not behave particularly like carbon or silicon, and while it forms a dioxide, this is not at all colourless like the others. The main oxide of lead is the monoxide, and the dioxide’s instability is used to make the anode work in lead acid batteries.
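The broad repetition itself does follow from the electron arrangements. A minimal sketch of the idea, filling subshells in the standard Madelung order (real atoms have exceptions, such as chromium and copper, which this toy ignores):

```python
# Why chemical properties repeat: filling subshells in Madelung (n + l) order
# reproduces the broad periodic pattern. A toy model that ignores the
# exceptions (e.g. Cr, Cu) where real electron-electron effects intervene.
def configuration(Z):
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),   # Madelung rule: by n+l, then n
    )
    letters = "spdfghi"
    config, remaining = [], Z
    for n, l in subshells:
        if remaining <= 0:
            break
        cap = 2 * (2 * l + 1)        # Pauli: two electrons per orbital
        e = min(cap, remaining)
        config.append(f"{n}{letters[l]}{e}")
        remaining -= e
    return " ".join(config)

for name, Z in (("C", 6), ("Si", 14), ("Ge", 32)):
    print(name, configuration(Z))
# All three end in s2 p2, which is why they sit in the same column -- and yet,
# as the text notes, their actual chemistry differs dramatically.
```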

The reason I have gone on like this is to explain that while elements have periodic properties, these are only indicative of the potential, and in detail each element is unique in many ways. If you number them on the way down a column, there may be significant changes depending on whether the number is odd or even, superimposed on a general trend. As an example: copper, silver, gold. Copper and gold are coloured; silver is not. The properties of silicon are wildly different from those of carbon; there is an equally dramatic change in properties from germanium to tin. What this means is that it is very difficult to find a substitute material for an element that is used for a very specific property. Further, the amounts of given elements on the planet depend partly on how the planet accreted, thus we do not have much helium or neon, despite these being extremely common elements in the Universe as a whole, and partly on the fact that nucleosynthesis gives variable yields for different elements. The heavier elements in a periodic column are generally formed in lower amounts, while elements with a greater number of stable isotopes, or particularly stable isotopes, tend to be made in greater amounts. On the other hand, their general availability tends to depend on what routes there are for their isolation during geochemical processing. Some elements, such as lead, form a very insoluble sulphide that separates from the rock during geothermal processing, but others are much more resistant and remain distributed throughout the rock in highly dilute forms, so even though they are there, they are not available in concentrated forms. The problem arises when we need some of these more difficult-to-obtain elements for specific uses. Thus a typical mobile phone contains more than thirty different elements.

The Royal Society of Chemistry has found that at least six elements used in mobile phones are set to be mined out within the next 100 years. These have other uses as well. Gallium is used in microchips, but also in LEDs and solar panels. Arsenic is also used in microchips, and additionally in wood preservation and, believe it or not, poultry feed. Silver is used in microelectrical components, but also in photochromic lenses, antibacterial clothing, mirrors, and other applications. Indium is used in touchscreens and microchips, but also in solar panels and specialist ball bearings. Yttrium is used for screen colours and backlighting, but also for white LED lights, camera lenses, and anticancer drugs, e.g. against liver cancer. Finally, there is tantalum, used for surgical implants, turbine blades, hearing aids, pacemakers, and nosecaps for supersonic aircraft. Thus mobile phones will put a lot of stress on other manufacturing. To add to the problems, cell phones tend to have a life averaging two years. (There is the odd dinosaur like me who keeps using them until technology makes it difficult to keep doing so. I am on my third mobile phone.)

A couple of other facts: 23% of UK households have an unused mobile phone, and in the UK, 52% of 16–24-year-olds have TEN or more electronic devices in their home. The RSC estimates that in the UK there are as many as 40 million old and unused such devices in people’s homes. I have no doubt that many other countries, including the US, have the same problem. So, is the obvious answer that we should promote recycling? There are recycling schemes around the world, but it is not clear what is being done with what is collected. Recovering the above elements from such a mixture is anything but easy. I suspect that the recyclers go for the gold and one or two other materials, and then discard the rest. I hope I am wrong, but from the chemical point of view, getting such small amounts of so many different elements from such a mix is anything but easy. Different elements tend to be in different parts of the phone, so the phones can be dismantled and the parts chemically processed separately, but this is labour-intensive. They can be melted down and separated chemically, but that is a very complicated process. No matter how you do it, the recovered elements will be very expensive. My guess is most are still not recovered. All we can hope is that they are discarded somewhere where they will lie inertly until they can be used economically.

Biofuels from Algae

In the previous post, I described work that I had done on making biofuels from lignin-related materials, but I have also looked at algae, and here hydrothermal processing makes a lot of sense, if for no other reason than algae are always gathered wet. There are two distinct classes: microalgae and macroalgae. The reason for distinguishing them has nothing to do with size, but rather with composition. Microalgae have the rather unusual property of comprising up to 25% nucleic acid, with the bulk of the rest being lipid or protein, and the mix is adjustable. The reason is that microalgae are primarily devoted to reproduction; if the supply of nitrogen and phosphate exceeds requirements, they absorb what they can and reproduce, mainly making more nucleic acid and protein. Their energy storage medium is the lipid fraction, so given nutrient-rich conditions, they contain very few free lipids. Lipids are glycerol esterified by three fatty acids, in microalgae primarily palmitic (C16) and stearic (C18), with some other interesting acids like the omega-3 acids. In principle, microalgae would be very nutritious, but the high levels of nucleic acid give them some unfortunate side effects. Maybe genetic engineering could reduce this amount. Macroalgae, on the other hand, are largely carbohydrate in composition. Their structural polysaccharides are of industrial interest, although they also contain a lot of cellulose. The lipid nature of microalgae makes them very interesting for diesel fuel, where straight-chain hydrocarbons are optimal.

Microalgae have been heavily researched, and those researching biofuels usually grow them in various tubes. Occasionally they have been grown in ponds, which in my opinion is much preferable, if for no other reason than it is cheaper. The ideal way to grow them seems to be to feed them plenty of nutrients, which leads them to reproduce while producing little in the way of hydrocarbons (but see below), then starve them. They cannot shut down their photosystems, so they continue to take in carbon dioxide and reduce the carbon all the way to lipids. The unimaginative thing to do then is to extract the microalgae and make “biodiesel”, a process that involves extracting the lipids, usually with a solvent such as a volatile hydrocarbon, distilling off the solvent, then reacting the lipids with methanolic potassium hydroxide to make the methyl esters plus glycerol; if you do this right, an aqueous phase separates out and you can recover your esters and blend them with diesel. The reason I say “unimaginative” is that when you get around to doing this, you find there are problems, and you get ferocious emulsions. These can be avoided by drying the algae, but now the eventual fuel is starting to get expensive, especially since the microalgae are very difficult to harvest in the first place. To move around in the water, they have to have a density essentially the same as water’s, so centrifuging is difficult, and since they are by nature somewhat slimy, they clog filters. There are ways of harvesting them, but they start to get more expensive. The reason hydrothermal processing makes so much sense is that it is not necessary to dry the algae; the process works well if they are merely concentrated.
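For what the ester-making step amounts to in mass terms, here is a stoichiometry sketch, taking tristearin (a C18 triglyceride) as a representative lipid; the 100 kg feed figure is arbitrary.

```python
# Transesterification mass balance:
#   triglyceride + 3 CH3OH -> 3 methyl ester + glycerol
# Tristearin is used as a stand-in for the mixed C16/C18 microalgal lipids.
M_TRISTEARIN = 891.48        # g/mol
M_METHANOL = 32.04           # g/mol
M_METHYL_STEARATE = 298.50   # g/mol
M_GLYCEROL = 92.09           # g/mol

lipid_kg = 100.0                                 # arbitrary feed
mol = lipid_kg * 1000 / M_TRISTEARIN             # moles of triglyceride
methanol_kg = 3 * mol * M_METHANOL / 1000
ester_kg = 3 * mol * M_METHYL_STEARATE / 1000
glycerol_kg = mol * M_GLYCEROL / 1000

print(f"{lipid_kg:.0f} kg lipid + {methanol_kg:.1f} kg methanol ->")
print(f"{ester_kg:.1f} kg methyl esters + {glycerol_kg:.1f} kg glycerol")
# The balance closes: roughly 1 kg of biodiesel ester per kg of lipid fed.
```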

The venture I was involved in helping had the excellent idea of using microalgae that grow in sewage treatment plants, where besides producing the products from the algae, the pollution is also cleaned up, at least if the microalgae are not simply sent out into the environment. (We can also recover phosphate, which may be important in the future.) There are problems here, in that because the medium is so nutrient-rich, the fraction of extractable lipids is close to zero. However, if hydrothermal liquefaction is used, the yield of hydrocarbons goes up to over 20%, of which about half are aromatic, and thus suitable for high-octane petrol. Presumably, the lipids were in the form of lipoprotein, or maybe only partially substituted glycerol, which would produce emulsifying agents. Also made are some nitrogen-rich chemicals that are about an order of magnitude more valuable than diesel. The hydrocarbons are C15 and C17 alpha-unsaturated hydrocarbons, which could be used directly as a high-cetane diesel (if one hydrogenated the one double bond, you would have a linear saturated hydrocarbon with presumably a cetane rating of 100), and some aromatic hydrocarbons that would give an octane rating well over a hundred. The lipid fraction can be increased by growing the algae under nutrient-deprived conditions. They cannot reproduce, so they make lipids and swell, until eventually they die. Once swollen, they are easier to handle as well. And if nothing else, there will be no shortage of sewage in the future.

Macroalgae would process a little like land plants. They are a lot easier to handle and harvest, but there is a problem in obtaining them in bulk: by and large, they only grow in a narrow band around the coast, and only on some rocks, and then only under good marine conditions. If the wave action is too strong, often there are few present. However, they can live in the open ocean. An example is the Sargasso Sea, and it appears there are about twenty million tonnes of them in the Atlantic where the Amazonian nutrients get out to sea. Further, in the 1970s the US Navy showed they could be grown on rafts in the open ocean with a little nutrient support. It may well also be that free-floating macroalgae can be grown, although of course the algae will move with the currents.

The reason for picking on algae is partly that some are the fastest-growing plants on the planet. They take carbon dioxide from the atmosphere more quickly than any other plant, the sunlight absorbed by the plant is converted to chemical energy rather than heat, and finally, using the oceans does not compete with any other use, and in fact may assist fish growth.

Alternative​ Sources for Fuel: Rubbish

As most people have noticed, there is finally some awakening relating to climate change and the need to switch from fossil fuels, not that politicians are exactly acting on such trends; indeed, they seem to have their heads firmly buried in the sand. The difficulty is there are no easy solutions, and as I remarked in a previous post, we need multiple solutions.

So what to do? I got into the matter after the first “energy crisis” in the 1970s. I worked for the New Zealand national chemistry laboratory, and I was given the task of looking at biofuels. My first consideration was that because biomass varies so much, oil would always be cheaper than anything else, and because the problem was ultimately so big, one needed to start by solving two problems at once. My concept was that a good place to start was with municipal rubbish: they pay you to take it away, and they pay a lot. Which leads to the question, how can you handle rubbish and get something back from it? The following is restricted to municipal rubbish. Commercial waste is different because it is usually one rather awkward material with specific disposal issues. For example, demolition waste that is basically concrete rubble is useless for recovering energy.

The simplest way is to burn it. You can take it as is, burn it, use the heat in part to generate electricity, and dump the resultant ash, which will include metal oxides, and maybe even metals. The drawback is you should take the glass out first, because it can make a slag that blocks air inlets and messes with the combustion. If you are going to do that, you might as well take out the cans as well, because they can be recycled. The other drawback is the problem of noxious fumes, etc. These can be caught, or their generators can be separated out first. There are a number of such plants operating throughout the world, so they work, and they could be considered a base case. There have also been quite satisfactory means developed for separating the components of municipal refuse, and there is plenty of operational experience, so having to separate is not a big issue. Citizens can also separate, although their accuracy and cooperation are variable.

There are three other technologies that have similarities, in that they basically involve pyrolysis. Simple pyrolysis of waste gives an awful mix, although pyrolysis of waste plastics is a potential source of fuel. Polystyrene gives styrene, which if hydrogenated gives ethylbenzene, a very high-octane petrol. Pyrolysis of polyethylene gives a very good diesel, but PVC and polyurethanes give noxious fumes. Pyrolysis always leaves carbon, which can either be burned or buried to fix the carbon. (The charcoal generator is a sort of wood pyrolysis system.)

The next step up is the gasifier. In this, the pyrolysis is carried out at extreme heat, usually generated by burning some of the waste in air or oxygen. The most spectacular option I ever saw was the “Purox” system, which used oxygen to maintain the heat by burning the char that got to the bottom. It took everything and ended up with a slag that could be used as road fill. I went to see the plant, but it was down for maintenance. I was a little suspicious at the time because nobody was working on it, which is not what you expect during maintenance. Its product was supposed to be synthesis gas. Other plants tended to use air to burn waste to provide the heat, but the problem is that the product gas is then full of nitrogen, which makes it a low-quality gas.

The route that took my interest was high-pressure liquefaction, using hydrogen to upgrade the product. I saw a small bench-top unit working, and the product looked impressive. It was supposed to be scaled up to a 35 t/d pilot plant, to take all of a small city’s rubbish, but the company decided not to proceed, largely because OPEC suddenly lost its cohesion and the price of oil dropped like a stone. Which is why biofuels will never stand up in their own right: it is always cheaper to pump oil from the ground than to make fuel, and it is always cheaper to refine it in a large refinery than in a small-scale plant. This may seem to present engineering difficulties, but the process is essentially the same as the Bergius process that helped keep German synthetic fuels going in WW II. The process works.

So where does that leave us? I still think municipal waste is a good way to start an attack on climate change, except that what some places seem to be doing is shipping their wastes to dump somewhere else, like Africa. The point is, it is possible to make hydrocarbon fuels, and the vehicles being sold now will need to be fuelled for a number of years. The current feedstock price for a municipal waste processing plant is about MINUS $100/t. Coupled with a tax on oil, that could lead to money being made. The technologies are there on the bench scale, we need more non-fossil fuel, and we badly need to get rid of rubbish. So why don’t we do something? Because our neo-liberal economics says, let the market decide. But the market cannot recognise long-term options. That is our problem with climate change. The market sets prices, but that is ALL it does, and it does not care if civilization eradicates itself in five years’ time. The market is an economic form of evolution, and evolution leads to massive extinction events when life forms are unsuitable for changing situations. The dinosaurs were just too big to support themselves when food supplies became too difficult to obtain after a rather abrupt climate change. Our coming climate change won’t be as abrupt or as devastating, but it will not be pleasant either. And it won’t be avoided by the market because the market, through the fact that fossil fuels are the cheapest, is the CAUSE of what is coming. But that needs its own post.