Energy Sustainability

Sustainability is the buzzword. Our society must use solar energy, lithium-ion batteries, and so on to save the planet, or at least that is what they say. But have they done their sums? Lost in this debate is the fact that many of these technologies use relatively difficult-to-obtain elements. In a previous post I argued that battery technology was in trouble because there is a shortage of cobalt, required to make the cathode work for a reasonable number of cycles. Others argue that we could obtain sufficient elements. But if we are going to be sustainable, we have to be sustainable for an indefinite length of time, and mining is not sustainable: you can only dig up the ore once. Of course, there are plenty of elements left. There is more gold in the sea than has ever been mined; the problem is that it is too dilute. Similarly, most elements are present in a lump of basalt, just not much of anything useful, and it is extremely difficult to get any of it out. The original copper mines of Cyprus, where lumps of native copper could occasionally be found, are all worked out, at least to the extent that mining is no longer profitable there.

The answer is to recycle, right? Well, according to an article [Charpentier Poncelet, A. et al. Nature Sustain. https://doi.org/10.1038/s41893-022-00895-8 (2022)] there is trouble. The problem is that even when we recycle, every time we do something with a metal we lose some of it. Losses here refer to material emitted into the environment, stored in waste-disposal facilities, or diluted into material where the specific characteristics of the element are no longer required. The authors define a lifetime as the average duration of an element's use, from mining through to being entirely lost. As with any such definition-dependent study, there will be points where you disagree.

The first loss for many elements lies in the production stage. Quite often it is only economical to extract one or two elements, and the remaining minor components of the ore disappear into slag. These losses are mainly important for specialty elements. Production losses account for 30% of the rare earth metals, 50% of the cobalt, 70% of the indium, and greater than 95% of the arsenic, gallium, germanium, hafnium, selenium and tellurium. So most of the elements critical for certain electronic and photo-electric applications are simply thrown out. We are a wasteful lot.

Manufacturing and use incur very few losses. There are some, but because materials are purified ready for use, pieces that are not immediately used can be remelted and reused. There are exceptions: 80% of barium is lost through use, because it is used in drilling muds.

The largest losses come at the waste-management and recycling stage. Metals undergoing multiple life cycles are still lost this way; it just takes longer to lose them. Recycling losses occur when the metal accumulates in a dust (zinc) or a slag (e.g. chromium or vanadium), or gets lost in another stream; thus copper is prone to dissolve into an iron stream. The longest lifetimes occur for non-ferrous metals (8 to 76 years), precious metals (4 to 192 years), and ferrous metals (8 to 154 years). The longest lifetimes of all are for gold and iron.

Now for the problem areas. Lithium has a life-cycle use of 7 years, then it is all gone. But lithium-ion batteries last about that long, which suggests that as yet (if these data are correct) there is very little real recycling of lithium. Elements like gallium and tellurium last less than a year, while indium manages a year. Before you protest that most of the indium goes into swipeable mobile phone screens, and mobile phones last longer than a year, at least for some of us, remember that losses also occur through material being discarded at the mining stage, where the miner/processor can't be bothered recovering it. Of the fifteen metals most lost during mining/processing, thirteen are critical for sustainable energy, such as cobalt (lithium-ion batteries), neodymium (permanent magnets), indium, gallium, germanium, selenium and tellurium (solar cells) and scandium (solid oxide fuel cells). If we look at the recycled content of “new material”, lithium is less than 1%, as is indium. Gallium and tellurium are seemingly not recycled at all.

Why are they not recycled? The metals that are recycled tend to be like iron, aluminium, copper and the precious metals: it is reasonably easy to find uses for them where purity is not critical. Recycling and purifying most of the others requires technical skill and significant investment. Think of lithium-ion batteries: the lithium reacts with water, and if it starts burning it is unlikely to be put out. Some items may contain over a dozen elements, some highly toxic, and not to be in the hands of the amateur. What we see happening is that the “easy” metals are recycled by what are really low-technology organizations. The area is not attractive to the highly skilled because the economic risk/return is just not worth it, while the less skilled simply cannot do it safely.

Banana-skin Science

Every now and again we find something that looks weird, but just maybe there is something in it. And while reading it, one wonders: how on Earth did they come up with this? The paper in question is Silva et al. 2022. Chemical Science 13: 1774. What they did was take dried biomass powder and expose it to a 14.5 ms flash from a high-power xenon lamp. That type of chemistry was first developed to study the very short-lived intermediates generated in photochemistry, when light excites a molecule to a high-energy state from which it can decay through unusual rearrangements. Such studies have been going on since the 1960s, and the equipment has steadily been improved and made more powerful. However, it is most unusual to find it used for something that ordinary heat would do far more cheaply. Anyway, 1 kg of such dried powder generated about 100 litres of hydrogen and 330 g of biochar. So, what else was weird? The biomass was dried banana skin! Ecuador, sit up and take notice. But before you do, note that xenon flash lamps are not an exceptionally economical way of providing heat. That is the point: this very expensive source of light was actually merely providing heat.

There are three ways of doing pyrolysis. In the previous post I pointed out that if you take cellulose and eliminate all the oxygen in the form of water, you are left with carbon. If you eliminate the oxygen as carbon monoxide you are left with hydrogen. If you eliminate it as carbon dioxide you get hydrogen and hydrocarbon. In practice, what you get depends on how you do it. Slow pyrolysis at moderate heat mainly makes charcoal and water, with some gas. It may come as a surprise to some, but ordinary charcoal is not carbon; it is about one-third oxygen, some minor bits and pieces such as nitrogen, phosphorus, potassium and sulphur, and the rest carbon.

If you do very fast pyrolysis, called ablative pyrolysis, you can get almost all liquids and gas. I once saw this done in a lab in Colorado, where a tautly held (like a hacksaw blade) electrically heated hot wire cut through wood like butter, the wire continually moving so the uncondensed liquids (which most would call smoke) and gas were swept out. There was essentially no sign of “burnt wood”, and no black. The basic idea of ablative pyrolysis is that you fire wood dust or small chips at a plate set at an appropriate angle to their path, so the wood sweeps across it and the volatiles are carried away by the gas stream (which can be recycled gas) propelling the wood. Now, the paper I referenced above claimed much faster pyrolysis, but got much more charcoal. The question is, why? The simple answer, in my opinion, is that nothing was sweeping the product away, so it hung around and got charred.

The products varied with the power from the lamp, which depended on the applied voltage. At what I assume was maximum voltage, the major products were (apart from carbon) hydrogen and carbon monoxide: 100 litres of hydrogen, and a bit more carbon monoxide, which is a good synthesis-gas mix. There were also 10 litres of methane, and about 40 litres of carbon dioxide that would have to be scrubbed out. The biomass had to be ground to 20 μm size and spread on a surface as a layer 50 μm thick. My personal view is that it is near impossible to scale this up to useful sizes. It uses light as an energy source, which is expensive to generate, so almost certainly the process is a net energy consumer. In short, this so-called “breakthrough” could have been carried out, with better yields of whatever was required, far more cheaply by people a hundred years ago.
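
To see why I doubt the economics, it helps to put rough numbers on those gas yields. The sketch below converts the quoted volumes per kilogram of feed into an energy content, using standard molar volume and lower heating values (my assumed figures, not from the paper); the carbon monoxide volume, quoted only as "a bit more" than the hydrogen, is taken here as 110 litres.

```python
# Back-of-envelope energy content of the reported gas yields per kg of dried
# banana-skin powder. Molar volume and lower heating values are standard
# figures assumed here; the CO volume (110 L) is my guess at "a bit more".
MOLAR_VOL = 24.0                                    # L/mol, ~room temperature, 1 atm
LHV = {"H2": 120.0, "CO": 10.1, "CH4": 50.0}        # MJ/kg, lower heating values
MOLAR_MASS = {"H2": 2.0, "CO": 28.0, "CH4": 16.0}   # g/mol

yields_L = {"H2": 100.0, "CO": 110.0, "CH4": 10.0}  # litres per kg of feed

total_MJ = 0.0
for gas, litres in yields_L.items():
    grams = litres / MOLAR_VOL * MOLAR_MASS[gas]    # ideal-gas moles -> mass
    total_MJ += grams / 1000.0 * LHV[gas]
print(f"{total_MJ:.2f} MJ of fuel gas per kg of feed")
```

That comes to only a few megajoules of fuel gas per kilogram of feed, which a xenon flash system, with its electrical and optical inefficiencies, is most unlikely to beat on the input side.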

Perhaps the idea of using light, however, is not so retrograde. The trick would be to devise apparatus that will pyrolyse wood ablatively (or not, if you want charcoal) using light focused by large mirrors. The source, the sun, is free until it hits the mirrors. Most of us have ignited paper with a magnifying glass. Keep the oxygen out and just maybe you have something that will make chemical intermediates you can call “green”.

Biofuels to Power Transport

No sooner do I post something than someone says something to contradict it. In this case, immediately after the last post, an airline came out and said it would be zero carbon by some time in the not-too-distant future. They talked about, amongst other things, hydrogen. There is no doubt hydrogen can power an aircraft; it also powers rockets that go into space. But that is liquid hydrogen, and once the craft takes off, it burns for a matter of minutes. I still think it would be very risky for an aircraft to try to hold, for hours, the pressures that could be generated. If you do contain the hydrogen, the extra weight and volume occupied would make such travel extremely expensive, and sitting above a tank of hydrogen is risky.

Hydrocarbons make by far the best aircraft fuel, and one alternative source of them is biomass. I should caution that I have worked in this area of scientific research on and off for decades (more off than on, because of the need to earn money). With that caveat, I ask you to consider the following:

C6H12O6 -> 2 CO2 + 2 H2O + “C4H8”

That is idealized, but the message is that a molecule of glucose (from cellulose plus water) can give two molecules each of CO2 and water, plus two-thirds of the starting carbon as a hydrocarbon, which would be useful as a fuel. If you were to add enough hydrogen to convert the CO2 to a fuel as well, you would get more fuel. Actually, you do not need much hydrogen, because we usually get quite a few aromatics: if we take two “C4H8” units and make xylene or ethylbenzene (both products that are made in simple liquefactions), these total C8H10, which gives us a surplus of three H2 molecules. The point here is that in each of these cases we could imagine the energy coming from solar, but if you use biomass, much of the energy is collected for you by nature. Of course, if you take the oxygen out as water you are left with carbon. In practice there are a lot of options, and what you get tends to depend on how you do it. Biomass also contains lignin, which is a phenolic material. This is much richer in hydrocarbon material, but it is also much harder to remove the oxygen from it.
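
The atom bookkeeping behind that equation, and the hydrogen surplus from making xylene, can be checked mechanically. A minimal sketch:

```python
# Atom-balance check of the idealized stoichiometry in the text:
#   C6H12O6 -> 2 CO2 + 2 H2O + "C4H8"
# and of turning two "C4H8" units into xylene (C8H10) plus hydrogen.
from collections import Counter

def formula(**atoms):
    """Represent a molecular formula as an atom-count Counter."""
    return Counter(atoms)

glucose = formula(C=6, H=12, O=6)
# 2 CO2 + 2 H2O + C4H8, written as combined atom counts:
products = formula(C=2, O=4) + formula(H=4, O=2) + formula(C=4, H=8)
assert glucose == products          # the equation balances atom for atom

# Two C4H8 units (C8H16) rearranged to xylene (C8H10) leave 6 H, i.e. 3 H2:
two_units = formula(C=8, H=16)
xylene = formula(C=8, H=10)
surplus_H = two_units["H"] - xylene["H"]
print(surplus_H // 2)  # -> 3 molecules of H2 per two C4H8 units
```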

In my opinion, there are four basic approaches to making hydrocarbon fuels from biomass. The first, which everyone refers to, is pyrolysis. You heat the biomass; you get a lot of charcoal, but you also get liquids. These still tend to have a lot of oxygen in them, and I do not favour this route because the yields of anything useful are too low, unless you want to make charcoal or carbon, say for metal refining, steel making, electrodes for batteries, and so on. There is an exception to that statement, but that needs a further post.

The second is to gasify the biomass, preferably by forcing oxygen into it and partially burning it. This gives you what chemists call synthesis gas, from which you can make fuels through a further step, the Fischer-Tropsch process. Germany used it during the war, and Sasol has used it in South Africa, but in both cases coal was the source of carbon. Biomass would work, and in the 1970s Union Carbide built such a gasifier, but that came to nothing when the oil price collapsed.

The third is high-pressure hydrogenation. The biomass is slurried in oil and heated to something over 400 degrees Centigrade in the presence of a nickel catalyst and hydrogen. A good-quality oil is obtained, and in the 1980s there was a proposal to use the refuse of the town of Worcester, Mass. to operate a 50 t/d plant. Again, this came to nothing when the price of oil slumped.

The fourth is hydrothermal liquefaction. Again, what you get depends on what you put in, but basically there are two main fractions from woody biomass: hydrocarbons and phenolics. The phenolics (which include aromatic ethers) need to be hydrogenated, but the hydrocarbons are directly usable after distillation. The petrol fraction is high octane, and the heavier hydrocarbons qualify as very high-quality jet fuel. If you use microalgae or animal residues, you also end up with a high-cetane diesel cut, and nitrogenous chemicals. Of particular interest from the point of view of jet fuel: in New Zealand they once planted Pinus radiata that grew very quickly and had up to 15% terpene content, most of which would make excellent jet fuel, but to improve the quality of the wood, they bred the terpenes more or less out of the trees.

The point of this is that growing biomass could help remove carbon dioxide from the atmosphere and make the fuels needed to keep a realistic number of heritage cars on the road and power long-distance air transport, while being carbon neutral. This needs plenty of engineering development, but in the long run it may be a lot cheaper than just throwing everything we have away and then finding we can’t replace it because there are shortages of elements.

Molecular Oxygen in a Comet

There is pressure, these days, on scientists to be productive. That is fair enough – you don't want them slacking off in a corner – but a problem arises when this leads to the publication of papers: there are so many of them that nobody can keep up with even a small fraction. Worse, many of them do not seem to say much. Up to a point, this has an odd benefit: if you leave a lot unclear, all your associates can publish away and cite you, which has the effect of making you seem more important, because funders like to count citations. In short, with obvious exceptions, the less you advance the science, the more important you seem when second-tier funding is handed out. I am going to pick, maybe unfairly, on one paper from Nature Astronomy (https://www.nature.com/articles/s41550-022-01614-1) as an illustration.

One of the most unexpected findings in the coma of comet 67P/Churyumov-Gerasimenko was “a large amount” of molecular oxygen. Something to breathe! Potential space pilots should not get excited; “a large amount” is only large with respect to what was expected, which was none. At the time, this surprised astronomers because molecular oxygen is rather reactive and it is difficult to see why it would be present. Now there is a “breakthrough”: it has been concluded that there is not that much oxygen in the comet at all; rather, the oxygen came from a separate small reservoir. The “clue” came from the molecular oxygen being associated with water when emitted from a warm site. As it got cooler, any oxygen was associated with carbon dioxide or carbon monoxide. Now, you may well wonder what sort of clue that is. My question is: given there is oxygen there, what would you expect? The comet is half water, so when the surface gets warm, the ice sublimes. When it is cooler, only gases volatile at that lower temperature get emitted. What is the puzzle?

However, the authors of the paper came to a different conclusion. They decided there had to be a deep reservoir of oxygen within the comet, and a second reservoir close to the surface, made of porous frozen water. According to them, oxygen from the core works its way to the surface and gets trapped in the second reservoir. Note that this is an additional proposition to the obvious one, that oxygen was simply trapped in ice near the surface. We already knew there was gas trapped in ice that is released with heat, so why postulate multiple reservoirs, other than to get a paper published?

So, where did this oxygen come from? There are two possibilities. The first is that it was accreted with the gas from the disk when the comet formed. This is somewhat difficult to accept. Ordinary chemistry suggests that if oxygen molecules were present in the interstellar dust cloud they should react with hydrogen and form water. Maybe that conclusion is somehow wrong, but we can find out: we can estimate the probability by observing the numerous dust clouds from which stars accrete. As far as I am aware, nobody has ever found significant amounts of molecular oxygen in them. The usual practice when you are proposing something unusual is to find some sort of supporting evidence. Seemingly, not this time.

The second possibility arises because we know how molecular oxygen can be formed at the surface. High-energy photons and the solar wind smash water molecules in ice to form hydrogen atoms and hydroxyl radicals. The hydrogen escapes to space, but the hydroxyl radicals unite to form hydrogen peroxide or other peroxides or superoxides, which can work their way into the ice. There are a number of solids that catalyse the degradation of peroxides and superoxides back to oxygen, which would be trapped in the ice but released when the ice sublimed. So, from the chemist's point of view there is a fairly ordinary explanation of why oxygen might be formed and gather near the surface. From my point of view, Occam's Razor should apply: you use the simplest explanation unless there is good evidence otherwise. I do not see any evidence about the interior of the comet.

Does it matter? From my point of view, when someone with some sort of authority or standing says something like this, there is the danger that the next paper will say “X established that . . .” and it becomes almost gospel. This is especially so when the assertion cannot easily be challenged with evidence, as you cannot get inside that comet. Which gives the perverse realization that you need strong evidence to challenge an assertion, but maybe no evidence at all to assert it in the first place. Weird?

Some Scientific Curiosities

This week I thought I would try to be entertaining, to distract myself and others from what has happened in Ukraine. So, to start with: how big is a bacterium? As you might guess, it depends on which one, but I bet you didn't guess the biggest. According to a recent article in Science Magazine (doi: 10.1126/science.ada1620) a bacterium has been discovered living in Caribbean mangroves that, while a single cell, is 2 cm long. You can see it (proposed name Thiomargarita magnifica) with the naked eye.

More than that, think of the difference between prokaryotes (most bacteria and single-celled microbes) and eukaryotes (most everything else that is bigger). Prokaryotes have free-floating DNA, while eukaryotes package their DNA in a nucleus, put various cell functions into separate vesicles, and can move molecules between the vesicles. But this bacterium's cell includes two membrane sacs, only one of which contains DNA. The other sac contains 73% of the total volume and seems to be filled with water. The genome is nearly three times bigger than those of most bacteria.

Now, from Chemistry World. You go to the Moon or Mars, and you need oxygen to breathe. Where do you get it from? One answer is electrolysis, so do you see any problems, assuming you have water and you have electricity? The answer is that it will be up to 11% less efficient. The reason is the lower gravity. We already knew that electrolysing water at zero g, such as on the space station, is less efficient, because the gas bubbles have no net force on them. The force arises because different densities generate a weight difference and the lighter gas rises, but at zero g there is no lighter gas – the gases may have different masses, but they all have no weight. So how do we know this effect will apply on Mars or the Moon? Such experiments were carried out on board free-fall flights with the help of the European Space Agency. Of course, these free-fall experiments are somewhat brief, as the pilot of the aircraft will have this desire not to fly into the Earth.

The reason the electrolysis is slower is that gas-bubble desorption is hindered. Getting the gas off the electrodes relies on density differences, and hence a force, but in zero gravity there is no such force. One possible solution being considered is a shaking electrolyser; the next thing we shall see is requests for funding to build different sorts of electrolysers. Researchers have considered running electrolysers in centrifuges to construct models for computing what lower gravity would do, but an alternative might be to have the process itself operate within a centrifuge. It does not need to be a fast-spinning centrifuge, as all you are trying to do is generate the equivalent of 1 g. Also, one suggestion is that people on Mars or the Moon might want to spend a reasonable fraction of their time inside one such large centrifuge, to help keep their bone density up.
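
For those who like to see the physics, here is a minimal sketch of the bubble problem. The buoyancy force on a bubble scales linearly with g, so the Stokes terminal rise velocity of a small bubble, v = (2/9)(ρ_liq − ρ_gas)gr²/μ, does too; all the numbers below are illustrative assumptions, not figures from the article.

```python
# Stokes terminal rise velocity of a small gas bubble in water-like liquid.
# Bubble radius, densities and viscosity are illustrative assumptions.
def rise_velocity(g, radius=50e-6, rho_liq=1000.0, rho_gas=1.0, mu=1.0e-3):
    """Terminal velocity (m/s) of a bubble of given radius (m) at gravity g (m/s^2)."""
    return (2.0 / 9.0) * (rho_liq - rho_gas) * g * radius**2 / mu

for name, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62)]:
    print(f"{name}: {rise_velocity(g) * 1000:.2f} mm/s")
# At zero g the velocity is zero: the bubbles simply sit on the electrode.
```

The velocities scale in direct proportion to g, so on the Moon bubbles clear the electrode about six times more slowly than on Earth, which is the qualitative result the free-fall experiments confirmed.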

The final oddity comes from Physics World. As you may be aware, according to Einstein's relativity, time – or, more specifically, clocks – runs slower as gravity increases. This was once tested by taking a clock up a mountain and comparing it with one kept at the base, and General Relativity was shown to predict the correct result. However, now we have improved clocks. The best atomic clocks are so stable they would be out by less than a second after running for the age of the universe. This precision is astonishing. In 2018, researchers at the US National Institute of Standards and Technology compared two such clocks and found their precision was about 1 part in ten to the power of eighteen. That permits a rather astonishing outcome: it is possible to detect the tiny frequency difference between the two clocks if one is a centimetre higher than the other. This will permit “relativistic geodesy”, which could be used to measure the Earth's shape more accurately, and the nature of the interior, as variations in rock density would cause minute changes in gravitational potential. Needless to say, there is a catch: the clocks may be very precise, but they are not very robust. Taking them outside the lab leads to difficulties, like stopping.
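
That centimetre claim is easy to check from the first-order gravitational redshift formula, df/f = g·dh/c², using standard values for g and c:

```python
# Fractional frequency shift between two clocks separated by height dh
# in the Earth's field, to first order: df/f = g * dh / c^2.
g = 9.81        # m/s^2, standard gravity
c = 2.998e8     # m/s, speed of light
dh = 0.01       # m, one centimetre

shift = g * dh / c**2
print(f"{shift:.2e}")   # about 1.1e-18, right at the quoted clock precision
```

So a clock precision of 1 part in ten to the power of eighteen is exactly what you need to resolve a one-centimetre height difference.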

Now they have done better – using strontium atoms, uncertainty to less than 1 part in ten to the power of twenty! They now claim they can test for quantum gravity. We shall see more in the not-too-distant future.

Seaweed and Climate Change

A happy and prosperous New Year to you all. The Great New Zealand Summer Vacation is coming to an end, so I have made an attempt at returning to normality. I hope all is well with you all.

Last year a paper in Nature Communications (https://doi.org/10.1038/s41467-021-22837-2) caught my eye, for two reasons. First, it was so littered with similar abbreviations that I found it difficult to follow. Second, the authors seemed to conclude that the idea of growing seaweed to absorb carbon dioxide would not work, yet they seemed to refuse to consider any option by which it might work. We know that much of seaweed biomass arises from photo-fixing CO2, as does the biomass of all other plants. So there are problems. There were also problems ten thousand years ago for our ancestors in Anatolia, or in the so-called Fertile Crescent, wanting to grow some of those slightly bulky grass seeds for food. They addressed those problems and got to work. It might have been slow, but soon they had the start of a wheat industry.

So, what was the problem? The paper considered the Sargasso Sea as an example of massive seaweed growth. One of the first objections the paper presented was that the old seaweed fronds get coated with life forms such as bryozoans that have calcium carbonate coatings. The authors then state that by making this solid lime (Ca++ + CO3-- -> CaCO3, a solid) the process releases CO2 by reducing seawater alkalinity. The assertion came from a reference, and no evidence was supplied that it is true in the Sargasso. What this does is deflect the obvious: for each molecule of lime formed, a molecule of CO2 was removed from the environment, not added to it, as seemingly claimed. Associated with this is the statement that the lime shields the fronds from sunlight and hence reduces photosynthesis. Can we do anything about this? We could try harvesting the old fronds and keep growing new ones. Further, just as our ancestors found that by careful management they could improve the grain size (wild wheat is not very impressive), we could “weed” to improve the quality of the stock.

I don't get the next criticism. While calcification on seaweed was bad because it liberated CO2 (so they say), they then go on to say that growing seaweed reduces the phytoplankton, so the calcification of the phytoplankton gets reduced, which liberates more CO2. Here we have increased calcification and decreased calcification both increasing CO2. Really?

Another criticism is that the seaweeds release other dissolved carbon, which is not particulate carbon. That is true, but so what? The dissolved sugars are not acidic. Microalgae will gobble them up, but again, so what?

The next criticism is that if we manage to reduce the CO2 levels in the ocean, we cannot calculate what is going on, and the atmosphere may not be able to replenish the levels for up to a hundred years. Given the turbulence during storms, I find this hard to believe, but if it is true, again, so what? We are busy saving the ocean food chains. Ocean acidification is on the verge of wiping out all shellfish that rely on forming aragonite for their shells. Reducing that acidity should be a good thing.

They then criticise the proposal on the grounds that growing forests on land reduces the albedo, and by making the land darker, makes the locality warmer. They then say the Sargasso floating seaweed increases the albedo of that part of the ocean, and hence reflects more light back to space, which reduces heating. Surely this is good? But wait. They then point out that other proposals have seaweed growing in deep water, where this won't happen. In other words, some aspect of some completely different proposal is a reason not to proceed with this one. They conclude by saying they need more money to get more detailed information. I agree more detailed information would be helpful, but they should acknowledge possible solutions to their problems. Thus ocean fertilization and harvesting mature seaweed could change their conclusions completely. I suspect the problem is that they want to measure things, possibly remotely, but they do not want to actually do things, which involves a lot more effort, specifically on location.

But for me, the real annoyance is that everyone by now knows that global warming is a problem. Growing seaweed might help solve it. We need to know whether it will contribute to a solution or merely transfer the problem. The authors may not have the answers, but they should at least identify the questions that need answers.

My Introduction to the Scientific Method – as actually practised

It is often said that if you can't explain something to a six-year-old, you don't understand it.

I am not convinced, but maybe I don't understand. Anyway, I thought I would follow my previous post with an account of my PhD thesis. It started dramatically. My young supervisor gave me a choice of projects, but only one looked sensible. I started that, then found the answer had just been published. Fortunately, only a month was wasted, but I had no project, and my supervisor was off on vacation. The Head of Department suggested I find myself a project, so I did. There was a great debate going on over whether the electrons in cyclopropane could delocalize into other parts of a molecule. To explain: carbon forms bonds at an angle of 109.5 degrees, but the three carbons of cyclopropane have to be formally at 60 degrees. In bending the bonds around, the electrons come closer together, and the resultant electric repulsion means the overall energy is higher. This energy difference is called strain energy. One theory was that the strain energy could be relieved if the electrons could get out and spread themselves over more space. Against that, there was no evidence of single bonds being able to do this.

My proposal was to put a substituted benzene ring on one corner of the ring, and an amine on the other. The idea was this: amines are bases and react with acid, and when they do, the electrons on the amine are trapped. If the cyclopropane ring could delocalize electrons, there was one substituent I could put on the benzene ring that would have different effects on that basicity depending on whether the cyclopropane ring did delocalize electrons or not. There was a test through something called the Hammett equation. My supervisor had published on this, but this would be the first time the equation might be used to do something of significance. Someone had tried that scheme with carboxylic acids, but with an extra carbon atom they were not very responsive, and there were two reports with conflicting answers. My supervisor, when he came back, was not very thrilled with this proposal, but his best alternative was to measure the rates of a sequence of reactions for which I had found a report saying the reaction did not go. So he agreed. Maybe I should have been warned. Anyway, I had some long-winded syntheses to do.
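
Since the Hammett equation carries the weight of the argument, a minimal numerical sketch may help: it relates a substituent's electronic effect to the change in an equilibrium constant via log10(K/K0) = ρσ. The σ values below are standard published constants; the reaction constant ρ is purely illustrative, not the value from the work described.

```python
# Hammett equation sketch: log10(K / K0) = rho * sigma.
# sigma_p values are standard published constants; rho is illustrative only.
sigma_para = {"H": 0.00, "NO2": 0.78, "OCH3": -0.27}
rho = 1.0   # assumed reaction sensitivity, for illustration

def relative_K(substituent):
    """K/K0 for a given para substituent under the assumed rho."""
    return 10 ** (rho * sigma_para[substituent])

for s in ("H", "NO2", "OCH3"):
    print(f"{s}: K/K0 = {relative_K(s):.2f}")
```

The point of the technique is that if the cyclopropane ring transmitted delocalized electrons, the measured ρ (or the fit to a particular σ scale) would come out differently than if it transmitted only an inductive effect, which is what makes it a test.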

When it came to the end-game, my supervisor went to North America on sabbatical, and then went from place to place looking for a new position in North America, so I was on my own. The amine results did not yield the desired result because the key substituent, a nitro group, reacted with the amine very quickly. That was a complete surprise. I could make the salt, but a solution with some free amine quickly discoloured. However, on a fleeting visit my supervisor made a useful suggestion: react the acids in toluene with a diazo compound. While the acids had previously been too similar in properties in water, it turned out that toluene greatly amplified the differences. The results were clear: the cyclopropane ring did not delocalize electrons.

However, all did not go well. The quantum mechanical people who had shown the extreme stability of polywater through electron delocalization turned their techniques to this problem and asserted that it did delocalize. In support, they showed that the cyclopropane ring stabilized adjacent positive charge. However, if the strain energy arose through increased electron repulsion, a positive charge would reduce that repulsion. There would be extra stability with a positive charge adjacent, BUT a negative charge would destabilize it. So there were two possible explanations, and a clear means of telling the difference.

Anions on a carbon atom are common in organic chemistry, yet all attempts at making such an anion adjacent to a cyclopropane ring failed. A single carbon atom with two hydrogen atoms and a benzene ring attached forms a very stable anion (called a benzyl anion). A big name replaced one of the hydrogen atoms of a benzyl anion with a cyclopropane ring, and finally made something that existed, although only barely. He published a paper and stated it was stabilized by delocalization. Yes, it was, but the stabilization would have come from the benzene ring. Compared with any other benzyl anion it was remarkably unstable. But the big names had spoken.

Interestingly, there is another test, from certain spectra. In what is called an n->π* transition (don't worry if that means nothing to you) there is a change of dipole moment, with the negative end becoming stronger close to a substituent. I calculated the change based on the polarization theory and came up with almost the correct answer. The standard theory using delocalization puts the spectral shift due to the substituent in the opposite direction.

My supervisor, who never spoke to me again and was not present during the thesis write-up, wrote up a paper on the amines, which was safe because it never showed anything that would annoy the masses, but he never published the data that came from his only contribution!

So, what happened? Delocalization won. A review came out that ignored every paper that disagreed with its interpretation, including mine. Another review dismissed the unexpected spectral shift I mentioned by saying "it is unimportant". I ended up writing an analysis showing that there were approximately 60 different sorts of observation that were not in accord with the delocalization proposition. It was rejected by review journals with "This is settled" (that it was settled wrongly was apparently irrelevant) and "We do not publish logic analyses." Well, no, it seems they do not, and they do not care that much.

The point I am trying to make here is that while this particular question could be regarded as not exceptionally important, if this sort of wrong behaviour happens to one person, how much happens across the board? I believe I now know why science has stopped making big advances: none of the established want to hear anyone question their work. The sad part is, that is not the only example I have.

Polymerised Water

In my opinion, probably the least distinguished moment in science in the last sixty years occurred in the late 1960s, and not for the seemingly obvious reason. It all started when Nikolai Fedyakin condensed steam in quartz capillaries and found it had unusual properties, including a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈ 150 °C, and a density of 1.1–1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour:

(a) the water had dissolved quartz,

(b) the water had polymerised.

Since recently fused quartz is insoluble in water at atmospheric pressures, he concluded that the water must have polymerised. There was no other option. An infrared spectrum of the material was produced by a leading spectroscopist from which force constants were obtained, and a significant number of papers were published on the chemical theory of polywater. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life. Then there was the inevitable wake-up call: the IR spectrum of the alleged material bore a remarkable resemblance to that of sweat. Oops. (Given what we know now, whatever they were measuring could not have been what everyone called polywater, and probably was sweat, and how that happened from a very respected scientist remains unknown.)

This material brought out some of the worst in logic. A large number of people wanted to work with it, because theory validated its existence. I gather the US Navy even conducted or supported research into it. The mind boggles here: did they want to encase enemy vessels in toffee-like water, or were they concerned someone might do it to them? Or, even worse, turn the oceans into toffee and thus end all life on Earth? The fact that the military got interested, though, shows it was taken very seriously. I recall one paper that argued Venus is the way it is because all its water polymerised!

Unfortunately, I think the theory validated the existence because, well, the experimentalists said it existed, so the theoreticians could not restrain themselves from "proving" why. The key to the existence is that they showed through molecular orbital theory that the electrons in water had to be delocalized. Most readers won't see the immediate problem because we are getting a little technical here, but to put it in perspective, molecular orbital theory assumes the electrons are delocalized over the whole molecule. If you further assume water molecules come together, the first assumption requires the electrons to be delocalized over the assembly, which in turn forces the system to become one molecule. If you can only end up with what you assumed in the first place, your theoretical work is not exactly competent, let alone inspired.

Unfortunately, these calculations involve quantum mechanics. Quantum mechanics is one of the most predictive theories ever, and almost all your electronic devices have parts that would not have been developed but for knowledge of quantum mechanics. The problem is that for any meaningful problem there is usually no analytical solution from the formal quantum theory generally used; any actual answer requires some rather complicated mathematics and, in chemistry, because of the number of particles, some approximations. Not everyone agreed on those. The same computer code in different hands sometimes produced opposite results, with no explanation of why the results differed. If there were no differences in the implied physics between methods that gave opposing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. The average computational paper gives very little insight into what was done, and these papers were somewhat worse than usual. It was, "Trust me, I know what I'm doing." In general, they did not.

So, what was it? Essentially, ordinary water with a lot of dissolved silica, i.e. option (a) above. Deryagin was unfortunate in falling for the logical fallacy of the accident. Water at 100 °C does not dissolve quartz. If you don't believe me, try boiling water in a pot with a piece of silica. Water does dissolve quartz at supercritical temperatures, but those were not involved. So what happened? Seemingly, water condensing in quartz capillaries does dissolve it. And now I come to the worst part. Here we had an effect that was totally unexpected, so what happened? After the debacle, nobody was prepared to touch the area. We still do not know why silica in capillaries is so eroded, yet perhaps there is some important information here; after all, water flows through capillaries in your body.

One of the last papers written on "anomalous water" came out in 1973, and one of the authors was John Pople, who went on to win a Nobel Prize for his work in computational chemistry. I doubt that paper is one he is most proud of. The good news is that the co-author, who I assume was a post-doc and can remain anonymous because she almost certainly had little control over what was published, had a good career following this.

The bad news was for me. My PhD project involved whether electrons were delocalized from cyclopropane rings. My work showed they were not; however, computations from the same type of computational code said they were. Accordingly, everybody ignored my efforts to show what was really going on. More on this later.

Venus with a Watery Past?

In a recent edition of Science magazine (372, p1136-7) there is an outline of two NASA probes to determine whether Venus had water. One argument is that Venus and Earth formed from the same material, so they should have started off very much the same, in which case Venus should have had about the same amount of water as Earth. That logic is false because it omits the issue of how planets get water. However, the article argued that Venus would have had a serious climatic difference. A computer model showed that when planets rotate very slowly, the near absence of a Coriolis force means that winds flow uniformly from equator to pole. On Earth, the Coriolis effect splits the lower atmosphere into three cells on each side of the equator: tropical, subtropical and polar circulations. Venus would have had a more uniform wind pattern.

A further model then argued that massive water clouds would form, blocking half the sunlight, and that "in the perpetual twilight, liquid water could have survived for billions of years." Since Venus gets about twice the light intensity Earth does, Venusian "perpetual twilight" would be a good sunny day here. The next part of the argument was that since water is considered to lubricate plates, Venus could then have had plate tectonics. Thus NASA has a mission to map the surface in much greater detail. That, of course, is a legitimate mission irrespective of the issue of water.

A second aim of these missions is to search for reflectance spectra consistent with granite. Granite is thought to be accompanied by water, although that correlation could be suspect because it is based on Earth, the only planet where granite is known.

So what happened to the "vast oceans"? Their argument is that massive volcanism liberated huge amounts of CO2 into the atmosphere, "causing a runaway greenhouse effect that boiled the planet dry." Ultraviolet light then broke down the water, producing hydrogen, which was lost to space. This is the conventional explanation for the very high ratio of deuterium to hydrogen in the Venusian atmosphere. The concept is that water containing deuterium is heavier and has a slightly higher boiling point, so it would be the last to be "boiled off". The effect is real, but it is a very small one, which is why a lot of water has to be postulated. The problem with this explanation is that while hydrogen easily escapes to space, massive amounts of oxygen should have been retained. Where is it? Their answer: the oxygen would be "purged" by more ash. No mention of how.

In my ebook "Planetary Formation and Biogenesis" I proposed that Venus probably never had any liquid water on its surface. The rocky planets accreted their water by binding it to silicates, which helped cement aggregates together and got the planets growing. Earth accreted at a place that was hot enough during stellar accretion to form calcium aluminosilicates, which make very good cements, and these would have absorbed their water from the gas disk. Mars got less water because the material that formed Mars had been too cool to separate out aluminosilicates, so it had to settle for simple calcium silicate, which does not bind anywhere near as much water. Venus probably had the same aluminosilicates as Earth, but being closer to the star it was hotter, so fewer aluminosilicates formed and less water bonded to them.

What about the deuterium enhancement? Surely that is evidence of a lot of water? Not necessarily. How did the gases accrete? My argument is they would accrete as solids such as carbides, nitrides, etc., and the gases would be liberated by reaction with water. Thus, on the road to making ammonia from a metal nitride:

M–N + H2O → M–OH + N–H; then M(OH)2 → MO + H2O, and this is repeated until ammonia is made. An important point is that one hydrogen atom is transferred from each molecule of water, while one is retained by the oxygen attached to the metal. Now, the bond between deuterium and oxygen is stronger than that between hydrogen and oxygen, the reason being that the lighter hydrogen atom vibrates more vigorously, giving the O–H bond a higher zero-point energy. Therefore the deuterium is more likely to remain on the oxygen atom and end up in further water. This is known as the chemical isotope effect, and it is much more effective at concentrating deuterium than preferential evaporation. Thus, as I see it, too much of the water was used up making gas, and eventually also making carbon dioxide. Venus may never have had much surface water.
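To put rough numbers on that vibration argument, here is a back-of-the-envelope sketch. It assumes a simple harmonic-oscillator model of the bond, takes the measured O–H stretch frequency as input, and treats the force constant as identical for O–H and O–D (the masses and the wavenumber-to-energy conversion are standard reference values, not anything from this post):

```python
import math

# Atomic masses in unified atomic mass units (amu)
M_O, M_H, M_D = 15.999, 1.008, 2.014

def reduced_mass(m1, m2):
    """Reduced mass of a diatomic oscillator."""
    return m1 * m2 / (m1 + m2)

# Harmonic oscillator: frequency scales as 1/sqrt(reduced mass),
# assuming the O-H and O-D force constants are identical.
mu_OH = reduced_mass(M_O, M_H)
mu_OD = reduced_mass(M_O, M_D)
ratio = math.sqrt(mu_OH / mu_OD)   # omega_OD / omega_OH

freq_OH = 3657.0                   # measured O-H stretch, cm^-1
freq_OD = freq_OH * ratio          # predicted O-D stretch, cm^-1

# Zero-point energy is half the fundamental frequency; the O-D bond
# sits deeper in the potential well, so it is harder to break.
zpe_gap_cm = 0.5 * (freq_OH - freq_OD)   # cm^-1
zpe_gap_kJ = zpe_gap_cm * 0.011963       # ~0.011963 kJ/mol per cm^-1

print(f"omega_OD/omega_OH = {ratio:.3f}")
print(f"predicted O-D stretch = {freq_OD:.0f} cm^-1")
print(f"zero-point energy gap = {zpe_gap_kJ:.1f} kJ/mol")
```

The gap comes out at roughly 6 kJ/mol in favour of keeping deuterium bound, which is why a chemical pathway can fractionate isotopes far more strongly than the tiny boiling-point difference can.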

Free Will

You will see many discussions regarding free will. The question is, do you have it, or are we in some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands the Universe work like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The occurrence of future eclipses is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so do snooker or pool balls, even if we cannot in practice calculate all that will happen on a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. You might bring in chaos theory, but that does nothing for you: the calculations may be utterly impossible to carry out, yet they are governed solely by deterministic physics, so ultimately what happens was determined and we simply do not know how to calculate it. Electrodynamics and quantum theory are deterministic too, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws, and the Schrödinger equation is actually deterministic: if you know ψ and know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them, and if necessary heat them according to some procedure, then every time I follow exactly the same procedure I shall end up with the same result.
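The determinism of the Schrödinger equation can be stated explicitly; this is just the standard textbook form, nothing beyond it:

i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi
\quad\Longrightarrow\quad
\psi(t) = e^{-i\hat{H}t/\hbar}\,\psi(0)

Given ψ(0) and the Hamiltonian Ĥ, the future ψ(t) follows uniquely; probability only enters when a measurement outcome is drawn from |ψ|².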

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brain works on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nuclei have angular uncertainty, so it follows that you cannot know in what direction a neutron will be emitted; according to the laws of physics, that is not determined until it is emitted. You may say, so what? That is trivial. No, the "so what" is that when you find one exception, you falsify the overall premise that everything was determined at the big bang. Which means something else introduces causes. Also, the emitted neutron may now generate new causes that could not be predetermined.
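The decay point can be illustrated numerically. In this hypothetical Monte Carlo sketch (the half-life and sample size are arbitrary choices for illustration, not data), each individual decay time is drawn at random and is unpredictable, yet the ensemble behaviour is fixed by the decay constant:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

HALF_LIFE = 10.0                       # arbitrary, in minutes
decay_const = 0.693147 / HALF_LIFE     # lambda = ln(2) / half-life

# Each nucleus decays at a time drawn from an exponential distribution:
# a deterministic law for the ensemble, unpredictable for any one nucleus.
times = [random.expovariate(decay_const) for _ in range(100_000)]

mean_life = sum(times) / len(times)
print(f"sample mean lifetime = {mean_life:.2f} (theory: {1/decay_const:.2f})")
print(f"first three decay times: {[round(t, 2) for t in times[:3]]}")
```

Run it twice with different seeds and the individual decay times change completely, while the mean lifetime stays pinned near 1/λ; that is exactly the split between a deterministic statistical law and an undetermined individual event.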

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking, some quantum effect, such as molecular vibration, will momentarily offer weaker and stronger spots. The one with the greatest weakness will go, but thanks to the Uncertainty Principle, which spot that is cannot be predicted.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain the variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the "ideal wire". Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines where which mutation will arise; it is a random event.

Once you cannot determine everything, even in principle, it follows that you must accept that not every cause is determined by previous events. Once you accept that, and since we have no idea how the mind works, you cannot insist that the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily in anything else. I have free will, and so do you. Use it well.