About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords, and a number of other sites. The intention is to publish a sequence of novels, each stand-alone, but which, taken together, tell a further story through their combined backgrounds. This blog will be largely about my views on science in fiction, and about the future, including what we should be doing about it but, in my opinion, are not. In the science area, I have been working on products from marine algae, and on biofuels. I also have an interest in scientific theory, where my views are usually alternatives to what others think. This work is also being published as ebooks under the series "Elements of Theory".

More on Disruptive Science

No sooner do I publish a blog on disruptive science than what does Nature do? It publishes an editorial https://doi.org/10.1038/d41586-023-00183-1 questioning whether science really is becoming less disruptive (which is fair enough) and then, rather bizarrely, whether it matters. Does the journal not actually want science to advance, or does it merely wish to restrict itself to the comfortable? Interestingly, the disruptive examples they cite come from Crick (1953, the structure of DNA) and the first planet found orbiting another star. In my opinion, these are not disruptive. In Crick’s case, he merely used Rosalind Franklin’s data, and in the second case, such a discovery had been expected for years; indeed, I had seen a claim about twenty years earlier for a Jupiter-style planet around Epsilon Eridani. (Unfortunately, I did not write down the reference because I was not working in that area yet.) That result was rubbished because the data was claimed to be too inaccurate, yet the values I wrote down comply quite well with what we now accept. I am always suspicious of discounting a result that not only got a good value for the semimajor axis but also proposed a significantly eccentric orbit. For me, these two papers are merely obvious advances on previous theory or logic.

The test proposed by Nature for a disruptive paper is based on citations, the idea being that once a disruptive paper is cited, its predecessors are less likely to be cited; if the paper is consolidating, the previously disruptive papers continue to be cited. By this criterion, probably one of the most disruptive papers would be that on the EPR paradox (Einstein, A., Podolsky, B., Rosen, N. 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47: 777-780). Yet the remarkable thing about this paper is that people fall over themselves to point out that Einstein “got it wrong”. (That they do not actually answer Einstein’s point seems to be irrelevant to them.)
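Out of interest, that citation test is easy to state concretely. The sketch below is a toy version of such a measure written for this post; it is not the actual CD index of Park et al., and the function name and scoring are my own simplification. Later papers citing the focal paper but none of its references count as evidence of disruption; those citing both count as evidence of consolidation.

```python
# Toy citation-based "disruption" score, in the spirit of the test
# described in the editorial. An illustrative sketch only, not the
# CD index actually used by Park et al.
def disruption_score(focal_refs, citing_papers):
    """focal_refs: set of papers the focal paper cites.
    citing_papers: list of sets; each set is the reference list of a
    later paper that cites the focal paper."""
    only_focal = sum(1 for refs in citing_papers if not (refs & focal_refs))
    cites_both = sum(1 for refs in citing_papers if refs & focal_refs)
    total = len(citing_papers)
    if total == 0:
        return 0.0
    # +1 for each citer that ignores the predecessors, -1 for each that
    # still cites them; a disruptive paper scores towards +1.
    return (only_focal - cites_both) / total

# A "disruptive" focal paper: later work cites it but not its references.
print(disruption_score({"A", "B"}, [{"X"}, {"Y"}, {"Z"}]))       # 1.0
# A "consolidating" paper: later work cites it and its references.
print(disruption_score({"A", "B"}, [{"A"}, {"B"}, {"A", "B"}]))  # -1.0
```

On this scoring the EPR paper would score highly, which rather makes my point.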

Nature spoke to a number of scholars who study science and innovation. Some were worried by Park’s paper, one concern being that declining disruptiveness could be linked to the sluggish productivity and economic growth seen in many parts of the world. Sorry, but I find that quite strange. It is true that an absence of discoveries is not exactly helpful, but economic use of a scientific discovery usually comes decades after the discovery. There is prolonged engineering and, if the product is novel, a market for it has to be developed; then it usually has to displace something else. Very little economic growth follows quickly from scientific discovery. No need to go down this rabbit hole.

Information overload was considered a reason, and it was suggested that artificial intelligence could sift and sort useful information to identify projects with breakthrough potential. I completely disagree with this where disruption is concerned. Anyone who has done a computer search of scientific papers will know that unless you have a very clear idea of what you are looking for, you get a bewildering amount of irrelevant material. Thus, if I want the specific value of some measurement, the computer search will give me in seconds what previously could have taken days. But if the search constraints are abstract, almost anything can come out, including erroneous material, examples of which were in my previous post. The computer, so far, cannot make value judgments because it has no criteria for doing so. What it will do is comply with established thinking, because established thinking supplies the constraints for the search. Disruption is something you did not expect. How can a computer search for what is neither expected nor known, particularly when the unexpected is usually mentioned as an uncomfortable aside in papers and omitted from abstracts and keywords? The computer would have to thoroughly understand the entire subject to appreciate the anomaly, and artificial intelligence is still a long way from that.

In a similar vein, Nature published a news item dated January 18. Apparently, people have been analysing advertisements and have come across something both remarkable and depressing: there are hundreds of advertisements offering authorship, for sale, on papers in reputable scientific journals. Prices range from hundreds to thousands of US dollars depending on the research area and the journal’s prestige, and the advertisement often cites the title of the paper, the journal, when it will be published (how do they know that?) and the position of the authorship slots. This is apparently a multimillion-dollar industry. Such advertising, specifying title and journal, immediately raises suspicion, and a number of papers have been retracted. Another flag is when further authors are added after peer review: if the authors actually contributed to the paper, they should have been known at the start. The question then is, why would anyone pay good coin for this? Unfortunately, the reason is depressingly simple: you need more papers to your name to get more money, promotion, prizes, tenure, etc. It is a scheme to make money from those whose desire for position exceeds their skill level, and it works because nobody ever reads these papers anyway. The chances of being asked by anyone for details are so low it would be extremely unlucky to be caught out that way. Such an industry, of course, will be anything but disruptive. It only works as long as nobody with enough skill to recognize an anomaly actually reads the papers, because then the paper would become famous, and thoroughly examined. This industry thrives because counting citations, rather than understanding content, is the method of evaluating science. In short, evaluation by ignorant committee.

Why is there no Disruptive Science Being Published?

One paper (Park et al. Nature 613: 138) that caught my attention over the post-Christmas period made the proposition that scientific papers are getting less disruptive over time, to the point that in the physical sciences there is now essentially very little disruption at all. First, what do I mean by disruption? To me, it is a publication that is at cross-purposes with established thought. Thus the recent claim that there is no sterile neutrino is at best barely disruptive, because the sterile neutrino was merely a “maybe” solution to another problem. So why has this happened? One answer might be that we know everything, so there is neither room nor need for disruption. I can’t accept that. I feel that scientists do not wish to change: they wish to keep the current supply of funds coming their way. Disruptive papers keep getting rejected, because what reviewer who has spent decades on his research wants to let through a paper that essentially says he is wrong? Who is the peer reviewer for a disruptive paper?

Let me give a personal example. I made a few efforts to publish my theory of planetary formation in scientific journals. The standard theory is that the accretion disk dust formed planetesimals by some totally unknown mechanism, and these eventually collided to form planets. There is a small industry in running computer simulations of such collisions. My paper was usually rejected, the only stated reason being that it did not have computer simulations. However, my proposition was that the growth was chemically driven, and it used the approximation that there were no collisions. There was no evidence the reviewer read past the absence of any mention of simulations in the abstract, and no comment on the fact that here was the very first stated mechanism for how accretion started, together with a testable mathematical relationship for planetary spacing.

If that is bad, there is worse. The American Physical Society has published a report of a survey relating to ethics (Houle, F. H., Kirby, K. P. & Marder, M. P. Physics Today 76, 28 (2023)). In a 2003 survey, 3.9% of early-career physicists admitted that they had been required to falsify data, or did so anyway, to get to publication faster and get more papers. By 2020, that number had risen to 7.3%. Now, falsifying data will only occur to get a result that fits in with standard thinking, because if it doesn’t, someone will check it.

There is an even worse problem: that of assertion. The correct data are obtained, any reasonable interpretation says they contradict standard thinking, but they are reported in a way that makes them appear to comply. This will be a bit obscure for some, but I shall try to make it understandable. The paper is: Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759. At the time there was a debate over whether cyclopropane could delocalize electrons. Strange effects were observed, and there were two possible explanations: (1) it did delocalize electrons; (2) there were electric field effects. The difference was that both would stabilize positive charge on an adjacent centre, but the electric field effects would reverse if the charge were reversed. So while it was known that putting a cyclopropyl ring adjacent to a cationic centre stabilized it, what happened at an anionic centre? The short answer is that most efforts to make R – (C-) – Δ, where Δ means cyclopropyl, failed, whereas R – (C-) – H is easy to make. Does that look like stabilization? Nevertheless, if the cyclopropyl group is put on a benzylic carbon by changing R to a phenyl group φ, so we have φ – (C-) – Δ, an anion could just be made if potassium was the counterion. Accordingly, the formation of the anion was attributed to the stabilizing effect of cyclopropyl. No thought was given to the fact that any chemist who cannot make the benzyl anion φ – (C-) – H should be sent home in disgrace. One might at least compare like with like, but apparently not if you would get the answer you don’t want. What is even more interesting is that this rather bizarre conclusion has gone unremarked (apart from by me) ever since.

This issue was once the source of strong debate, but a review came out and “settled” it. How? By ignoring every paper that disagreed and citing the authority of “quantum mechanics”. I would not disagree that quantum mechanics is correct, but computations can be wrong. In this case, they used the same computer programmes that “proved” the exceptional stability of polywater. Oops. As for the overlooked papers, I later wrote a review with a logic analysis. Chemistry journals do not publish logic analyses. So in my view, the reason there are no disruptive papers in the physical sciences is quite clear: nobody really wants them. Not enough to ask for them.

Finally, some examples of papers that in my opinion really should have done better. Weihs et al. (1998) arXiv:quant-ph/9810080 v1 claimed to demonstrate clear violations of Bell’s inequality, but the analysis involved only 5% of the photons. What happened to the other 95% is not disclosed. The formation of life is critically dependent on reduced chemicals being available, and a large proportion of ammonia was found in ancient seawater trapped in rocks at Barberton (de Ronde et al. Geochim. Cosmochim. Acta 61: 4025-4042). Thus information critical to an understanding of biogenesis was obtained, but it was not even mentioned in the abstract or the keywords, so it is not searchable by computer. This would have disrupted the standard thinking on the ancient atmosphere, but nobody knew about it. In another paper, spectroscopy coupled with the standard theory predicted strong bathochromic shifts (to longer wavelengths) for a limited number of carbenium ions, but strong hypsochromic shifts were observed, without comment (Schmitze, L. R.; Sorensen, T. S. J. Am. Chem. Soc. 1982, 104, 2600-2604). So why was no fuss made about these things by the discoverers? Quite simply, they wanted to be in with the crowd. Be good, get papers, get funded. Don’t rock the boat! After all, nature does not care whether we understand or not.

Roman Concrete

I hope all of you had a Merry Christmas and a Happy New Year, and 2023 is shaping up well for you. They say the end of the year is a time to look back, so why not look really back? Quite some time ago, I visited Rome, and I have always been fascinated by the Roman civilization, so why not start this year by looking that far back?

Perhaps one of the more remarkable buildings is the Pantheon, which has the world’s largest unreinforced concrete dome. It was originally commissioned by Marcus Vipsanius Agrippa, the “get-things-done” man for Augustus, although the domed building we see today is Hadrian’s rebuild. No reinforcement, and it has lasted nearly two millennia. Look at modern concrete and as often as not you will find it cracks and breaks up. Concrete is a mix of aggregate (stones and sand) that provides the bulk, and a cement that binds the aggregate together. We use Portland cement, which is made by heating limestone and clay (usually with some other material, but that is not important here) in a kiln to about 1450 degrees Centigrade. The product depends to some extent on what the clay is, but the main components are belite (Ca2SiO4) and alite (Ca3SiO5). If the clays contain aluminium, which most do, various calcium aluminosilicates form as well. Most standard cement is mainly calcium silicate, to which a little gypsum is added at the end, which makes the end surface smoother.

Exactly what happens during setting is not fully understood. The first thing to note is that stone does not have a smooth surface at anywhere near the molecular level; further, stones are silicates, in which the polymer structure is perforce terminated at the surface, which means there are incomplete bonds. An element like carbon would fix this problem by forming double bonds, but silicon cannot, so these “awkward” surface molecules react with water to form hydroxides. What I think happens is that the water in the mix hydrolyses the calcium silicate and forms silica with surface hydroxyls, and these eliminate with hydroxyls on the stone, the calcium hydroxide also taking part, in effect forming microscopic junctions between cement and stone. All of this is slow, particularly since polymeric solids cannot move easily. So to make a good concrete, besides getting the correct mix you have to let it cure for quite some time before it is at its best.

So what did the Romans do? They could not easily make cement by heating clay and lime to such temperatures, but there were sources where nature had done it for them: the silicates around volcanoes like Vesuvius. The Roman architect and engineer Vitruvius used a hot mix of quicklime (calcium oxide) that was hydrated and mixed with volcanic tephra. Interestingly, this also introduces some magnesiosilicates, which are themselves cements, and magnesium may fit better than calcium onto basaltic material. For aggregate Vitruvius used fist-sized pieces of rock, including “squared red stone or brick or lava laid down in courses”. In short, Vitruvius was selecting aggregate that was much better than ordinary stone in the sense of having surface hydroxyl groups available to react. That Roman concrete lasted so long may in part be due to a better choice of aggregate.

A second point was the use of hot mixing. One possibility is that they used a mix of freshly slaked lime and quicklime, and the fresh slaking made the mix very hot. This speeds up chemical reactions, allows compound formation that is not possible at low temperatures, and reduces setting times. But even more interestingly, it appears to allow self-healing. If cracks begin to form, they are more likely to form around lime clasts, which can then react with water to make a calcium-rich solution, which in turn reacts with pozzolanic components to strengthen the composite material. To support this, Admir Masic, who had been studying Roman cement, made concretes using both the Roman recipe and a modern method, deliberately cracked the samples, and ran water through them. The Roman cement self-healed completely within two weeks, while the cracks in the modern cement never healed.

Fusion Energy on the Horizon? Again?

The big news recently, as reported in Nature (13 December), was that researchers at the US National Ignition Facility carried out a reaction that made more energy than was put in. That looks great, right? The energy crisis is solved. Not so fast. What actually happened was that 192 lasers delivered 2.05 MJ of energy onto a pea-sized gold cylinder containing a frozen pellet of deuterium and tritium. The reason for all the lasers was to ensure that the input energy was symmetrical, and that caused the capsule to collapse under pressures and temperatures seen only in stars and thermonuclear weapons. The hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory claimed the reaction released 3.15 MJ of energy, roughly 54% more than was delivered by the lasers, and more than double the previous record of 1.3 MJ.

Unfortunately, the situation is a little less rosy than that might appear. While the actual reaction was a net energy producer based on the energy input to the hydrogen, the lasers were consuming power even when not firing at the hydrogen, and between start-up and shut-down they consumed 322 MJ of energy. So while more energy came out of the target than went in to compress it, if we count the additional energy consumed elsewhere but necessary to do the experiment, then slightly less than 1% of what went in came out. That is not such a roaring success. However, before we get critical, the setup was not designed to produce power. Rather it was designed to produce data to better understand what is required to achieve fusion. That is the platitudinal answer. The real reason was to help nuclear weapons scientists understand what happens with the intense heat and pressure of a fusion reaction. So the first question is, “What next?” Weapons research, or contribute towards fusion energy for peaceful purposes?

The next question is, will this approach contribute to an energy program? If we stop and think, the gold cylinder with its frozen pellet has to be inserted, then everything lined up for a concentrated burst. You get a burst of heat, but still only about 3 MJ of it, and you may be quite fortunate to convert that to 1 MJ of electricity. Now, if it takes, say, a thousand seconds before you can fire up the next capsule, you have 1 kW of power. Would what you could sell that for pay for the consumption of gold capsules?
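The arithmetic in these last two paragraphs is easy to check (the shot figures come from above; the thousand-second interval and 1 MJ of electricity per shot are the illustrative guesses from the text, not measured values):

```python
# Energy accounting for the NIF shot, using the figures quoted above.
laser_in = 2.05e6        # J delivered to the target by the lasers
fusion_out = 3.15e6      # J released by the fusion reaction
wall_plug = 322e6        # J consumed by the whole facility per shot

print(f"Target gain: {fusion_out / laser_in:.2f}")            # ~1.54, i.e. ~54% more out than in
print(f"Facility gain: {fusion_out / wall_plug * 100:.2f}%")  # just under 1%

# Hypothetical power plant: perhaps 1 MJ of electricity per shot,
# one shot every 1000 s (both illustrative figures, as in the text).
electricity_per_shot = 1e6   # J
shot_interval = 1000         # s
print(f"Average power: {electricity_per_shot / shot_interval / 1000:.0f} kW")  # 1 kW
```

One kilowatt is roughly one domestic heater, which puts the gold-capsule economics in perspective.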

That raises the question, how do you convert the heat to electricity? The most common answer appears to be to boil water and use the steam to drive a turbine. A smarter way might be magnetohydrodynamics. The concept is that the hot gas is made into a high-velocity plasma, and as the plasma is slowed down, its kinetic energy is converted to electricity. The Russians tried to make electricity this way by burning coal in oxygen to make a plasma at about 4000 degrees K. The theoretical maximum fraction of energy extracted, U, is given by

    U  =  (T – T*)/T

where T is the maximum temperature and T* is the temperature at which the plasma degrades and the extraction of further electricity becomes impossible. With such a hot plasma it was possible to get approximately 60% energy conversion; in theory, the energy could be drawn at almost 100% efficiency. Ultimately, this power source failed, mainly because the coal produced a slag which damaged the electrodes.
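As a rough check, a minimal sketch of that formula: with T = 4000 K, the quoted 60% conversion corresponds to the plasma degrading at about 1600 K. (That value of T* is my back-calculation from the quoted efficiency, not a figure from the original work.)

```python
# Maximum theoretical MHD conversion efficiency, U = (T - T*) / T.
def mhd_efficiency(t_max, t_star):
    """t_max: plasma temperature (K); t_star: temperature (K) at which
    the plasma degrades and no further electricity can be extracted."""
    return (t_max - t_star) / t_max

# T = 4000 K as in the Russian coal-burning experiments; T* = 1600 K
# is inferred here from the quoted ~60% figure.
print(mhd_efficiency(4000, 1600))  # 0.6
```

Note the resemblance to a Carnot limit: the hotter the plasma relative to its degradation temperature, the better.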

Once the recovery of energy is solved, there remains the problem of increasing the burn rate. Waiting for everything to cool down before adding another pellet cannot work, but expecting a pellet of hydrogen to remain condensed when inserted into a temperature of, say, a million degrees is asking a lot.

This will be my last post for the year, so let me wish you all a Very Merry Christmas, and a prosperous and successful New Year. I shall post again in mid-January, after a summer vacation.

Meanwhile, for any who feel they have an interest in physics, in the Facebook Theoretical Physics group I am posting a series that demonstrates why this year’s Nobel Prize was wrongly assigned, as Alain Aspect did not demonstrate violations of Bell’s inequality. Your challenge for the Christmas period is to prove me wrong and stand up for the Swedish Academy. If it is too difficult to find, I may post the sequence here if there is interest.

Ebook Discount

From December 15 to January 1, during the summer Smashwords sale, my ebooks at Smashwords will be significantly discounted. The fictional ebooks include:

Puppeteer:  A technothriller where governance is breaking down due to government debt, and where a terrorist attack threatens to kill tens to hundreds of millions of people and destroy billions of dollars worth of infrastructure.

http://www.smashwords.com/books/view/69696

‘Bot War: A technothriller set about 8 years later, in which a more concerted series of terrorist attacks by stolen drones leads to a partial breakdown of governance.

https://www.smashwords.com/books/view/677836

Troubles: Dystopian, set about 10 years later still. The world is emerging from anarchy, and there is a scramble to control the assets. Some are just plain greedy, some think corporate efficiency should rule, some think the individual should have the right to thrive, some think democracy should prevail as long as they can rig it, while the gun is the final arbiter.

https://www.smashwords.com/books/view/174203

There is also the non-fictional “Biofuels”, which gives an overview of the issues involved in biofuels having an impact on climate change. Given that electric vehicles over their lifetime probably have an environmental impact equivalent to or greater than that of the combustion motor, given that we might want to continue to fly, and given that the carbon in a combustion exhaust adds nothing to atmospheric carbon levels if it came from biofuel, you might be interested to see what potential this approach has. The author was involved in research on biofuels intermittently (i.e. when there was a crisis and funding was available) for over thirty years. https://www.smashwords.com/books/view/454344

Finally, I am offering a discount on “Planetary Formation and Biogenesis”. This explains why our solar system has the properties it has, it has about 1280 scientific references, and this will probably be the only time it is discounted. https://www.smashwords.com/books/view/1148194

Solar Cycles

As you may know, our sun has a solar cycle of about 11 years, during which the sun’s magnetic field oscillates between a strong maximum and a minimum, and then back for the next cycle. During the minimum there are far fewer sunspots, and the power output is also at a minimum. The last minimum started about 2017, so we can now expect increased activity. It may come as something of a disappointment that some of our peak temperatures happened during solar minima, because we can expect the next few years to be even hotter and the effects of climate change more dramatic, but that is not what this post is about. The question is, is our sun a typical star, or is it unusual?

That raises the question: if it were unusual, how could we tell?

The power output may vary, but not extremely; it is generally reasonably constant. The variation in the solar output we receive over different years is about 0.1%, perhaps enough to shift temperatures here by something like a tenth of a degree Kelvin (or Centigrade). The changes may appear greater because the frequency and strength of aurorae vary far more noticeably. So how do we tell whether other stars have similar cycles? As you might guess, the power we receive from other stars is trivial compared even with that small variation. Any variation in their total power output would be extremely difficult to detect, especially over time, since instrument calibration could easily drift by more. A non-scientist may have trouble with this statement, but it would be extremely difficult to make a sensitive instrument that would record a dead flat line for a tiny constant power source over an eleven-year period. Over shorter time periods the power received from a star does vary in a clearly detectable way, and that has been the basis of the Kepler telescope’s planet detection.

However, as outlined in Physics World (April 5), there is a way to detect changes in magnetic fields. Stars are so hot they ionize elements, and some absorption lines in the spectrum due to ionized calcium happen to be sensitive to the stellar magnetic field. One survey showed that about half the stars surveyed appeared to have such starspot cycles, and for half of those the periodic time could be measured. It should be noted that failure to detect the lines does not mean a star has no such cycle; it may mean that, working at the limits of detection, the signals were too weak to be certain of their presence.

The average length of such cycles was about ten years, which is similar to our sun’s eleven-year cycle, although one star had a cycle lasting only four years. Another, HD 166620, had a cycle seventeen years long, although “had” is the operative word: somewhere between 1995 and 2004, HD 166620’s starspot cycle simply turned off. (The uncertainty in the timing arose because the study was discontinued during a change of observatories, to one receiving an upgrade that was not completed until 2004.) We now await its starting up again.

Maybe that could be a long wait. In 1645 the Sun entered what we call the Maunder minimum. During the bottom of a solar cycle we would expect at least a dozen or so sunspots per year, and at the maximum, over 100. Between 1672 and 1699 fewer than 50 sunspots were observed. It appeared that for about 70 years the sun’s magnetic field was mostly turned off. So maybe HD 166620 is sustaining a similar minimum. Maybe there is a planet with citizens complaining about the cold.

What causes that? Interestingly, Metcalfe et al. (Astrophys. J. Lett. 826: L2, 2016), by correlating stellar rotation with age for stars older than the sun, showed that while stars start out spinning rapidly, magnetic braking gradually slows them down. It is argued that as they slow, Maunder Minimum events become more frequent, until eventually the star slows sufficiently that the dynamo effect fails and it enters a grand minimum. So eventually the Sun’s magnetic dynamo may shut down completely. Some stars display somewhat chaotic activity, and some have spells of lethargy: HD 101501 shut down between 1980 and 1990 before reactivating, a rather short Maunder Minimum of its own.

So when you hear people say the sun is just an average sort of star, they are fairly close to the truth. But when you hear them say the power output will steadily increase, that may not be exactly correct.

Ancient Birds

You have probably never heard of Janavis finalidens. It was a bird with webbed feet and a body vaguely reminiscent of a wild hen, approximately the size of a grey heron, with a mass estimated at 1.5 kg. I use the past tense because it lived in the late Cretaceous, so birds had evolved away from the theropods well before the extinction. This bird was, in shape, very similar to modern birds, which is hardly surprising because flight puts limits on them, but there is one notable difference: the beak was lined with teeth. We do not know a lot about such birds, in part because their bones are rather fragile and less readily fossilized, but maybe also because they are not as spectacular as the monster dinosaurs of the time. This particular example was found a couple of decades ago in a quarry, and once it was worked out what it was, it was filed away.

However, more modern equipment, specifically micro-computed tomography, has allowed the samples to be re-examined. Originally, the finders thought they had a handful of bones from the spine, wings, shoulders and legs. However, one of the bones thought to be a shoulder bone was a pterygoid, a bone from the bony palate of the skull.

Most current birds belong to a group called neognaths, which means “new jaws”. The key bones here are mobile, and they allow the birds to move the upper beak independently of the skull. A small group of birds, comprising the emu, cassowary, ostrich, kiwi and tinamous (47 species, ground-dwelling, though some can fly), have the bones in the upper palate fused together. These are called paleognaths, or “ancient jaws”. You will probably suspect from this naming that it was believed birds originally came with fused jaws, and that most subsequently evolved the ability to move the upper beak. In this context, non-avian dinosaurs also have fused palates, and the last common ancestor of all modern birds lived some 80 million years ago, so it would seem reasonable to assume it had a fixed palate like the other dinosaurs. Unfortunately, this is one of those theories that is hard to test, because the small, delicate pterygoid is usually missing from the fossils.

However, a recent article in Nature (Benito et al., vol. 612, pp 100-105) indicated that Janavis‘ pterygoid “probably formed part of an unfused bony palate”. That means the upper beak was probably mobile. Note the uncomfortable “probably”. The resemblance of the pterygoid to that of modern chickens now suggests that the mobile upper beak evolved first, and the fused palate arose later. That, of course, raises the questions: how did mobility evolve, and why did some birds revert to the fused palate?

How the beak functions is crucially dependent on the bones of the upper palate. Unfusing them increases the flexibility of the beak and improves its usefulness. However, fused palates are not necessarily a drawback, and might give the beaks of larger birds additional support. For the kiwi, the beak is extremely long compared with the bird, and fusing the upper beak to the skull might give it more strength as it probes for food (often grubs in decaying logs). It might also be of interest that these flightless birds tend to get most of their food from the ground, including plant material, but then again so do hens.

Accordingly, if you are concerned with the evolution of birds, admittedly not a common concern, you now have a problem, and the question is, how do you solve it? One way is to find plenty of fossils, but the difficulties are, first, that they are rare, and second, that we have many samples from times when both palate types were present, including now; how do you know you are not being misled? An important aspect of science is that once you have a reasonably well-defined problem and a possible solution, you can devise ways of testing it. One of the peculiarities of evolution is that as an animal develops from egg to adult, it often gives clues to the evolutionary path it took, the most obvious example being the frog, which first goes through the tadpole stage. In the case of modern paleognaths, one approach being considered is to look at their developmental stages. If there are differences between species, this would be a clue that the trait arose independently more than once.

A Solution to the Elements Shortage for Lithium Ion Batteries?

Global warming, together with a possible energy crisis, gives us problems for transport. One of the proposed solutions is battery-powered cars. This raises three potential problems. One is how to generate sufficient electricity to charge the batteries, but I shall leave that for the moment. The other two relate to chemistry. A battery (or fuel cell) has two basic electrodes: a cathode, which is positive, and an anode, which is negative. The difference in potential between these is the voltage, usually taken as the voltage at half discharge. The potential arises from the ability to supply electrons at the anode while accepting them at the cathode; at each electrode there is a chemical oxidation/reduction reaction going on. The anode is most easily supplied by oxidising a metal. Thus if we oxidise lithium we get Li → Li+ + e-, and the electron disappears off to the circuit. We need something to accept an electron at the cathode, and in the standard lithium-ion battery that is Co4+, which gets reduced to the more stable Co3+. (Everything is a bit more complicated – I am just trying to highlight the problem.) Now superficially the cobalt could be replaced by a variety of elements, but the cobalt is embedded in a matrix, and most other candidate ions undergo a substantial volume change on cycling; if they are embedded in a cathode matrix, the stresses make it fall to bits. Cobalt seems to give the least stress, hence the longest battery life. So we have a problem of sorts: not enough easily accessible lithium, and not enough cobalt. There are also problems that can reduce the voltage or current, including side reactions and polarization.
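As a back-of-envelope illustration of why giving up more electrons per atom matters, Faraday’s law gives the theoretical specific capacity of a metal anode. The sketch below uses standard atomic masses; these are theoretical ceilings, not measured battery performance.

```python
# Theoretical specific capacity of a metal anode from Faraday's law:
# capacity (mAh/g) = n * F / (3.6 * M), where n is electrons per atom,
# F the Faraday constant (C/mol) and M the molar mass (g/mol).
F = 96485.0  # Faraday constant, C/mol

def specific_capacity_mAh_per_g(n_electrons, molar_mass_g):
    """Charge delivered per gram of metal oxidised at the anode."""
    return n_electrons * F / (3.6 * molar_mass_g)

li = specific_capacity_mAh_per_g(1, 6.94)   # Li -> Li+ + e-
al = specific_capacity_mAh_per_g(3, 26.98)  # Al -> Al3+ + 3e-
print(f"Li: {li:.0f} mAh/g")  # ~3862 mAh/g
print(f"Al: {al:.0f} mAh/g")  # ~2980 mAh/g
```

Per gram, aluminium comes surprisingly close to lithium despite being four times heavier per atom, precisely because each atom gives up three electrons instead of one.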

In a fuel cell we can partly get around that. We need something at the cathode that will convert an input gas into an anion by accepting an electron; thus oxygen and water form hydroxide. At the anode we need something that “burns”, i.e. goes to a higher valence state and gives up an electron. In my ebook “Red Gold”, a science fiction story relating to the first attempt at permanent settlement of Mars, a portable power source was necessary. With no hidden oil fields on Mars, and no oxygen in the air to burn the oil even if there were, I resorted to the fuel cell. The chemistry I chose for Mars was to oxidize aluminium, which gives up three electrons, and to reduce chlorine. The reason for these was that the settlement needed to make things from Martian resources, and the most available resource was the regolith, which is powdered rock. This was torn apart by nuclear fusion power, and the elements were separated by magnetohydrodynamics, similar to what happens in a mass spectrometer; the net result is piles of elements. I chose aluminium because its three electrons give more capacity per atom, and chlorine because it is a liquid at Martian temperatures, so no pressure vessel is required. Also, while oxygen might produce a slightly higher voltage, oxygen forms a protective coating on aluminium, and that stops the reaction.

An aluminium battery would have aluminium at the anode, and might have something in the electrolyte that could deposit more aluminium on it. Thus during a charge, you might get, if chlorine is the oxidiser,

4(Al2Cl7)-   + 3e-  → Al  +  7(AlCl4)-   

which deposits aluminium on the anode. During discharge the opposite happens and aluminium is burned off. Notice that the chlorine is tied up in chemical complexes: the battery contains no free chlorine. Here the electrolyte is aluminium chloride (Al2Cl6). For the fuel cell, we would be converting the gas to a complex at the cathode; that is not very practical on Earth, but the enclosed battery would be fine.
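As a small bookkeeping check, a few lines of Python (a sketch, simply tallying atoms and charge on each side) confirm that the charging reaction above balances in both mass and charge:

```python
# Check that 4 (Al2Cl7)- + 3 e- -> Al + 7 (AlCl4)- balances.
from collections import Counter

def side(species):
    """species: list of (count, atoms-dict, charge) tuples.
    Returns total atom tally and total charge for that side."""
    atoms, charge = Counter(), 0
    for n, a, q in species:
        for element, k in a.items():
            atoms[element] += n * k
        charge += n * q
    return atoms, charge

lhs = side([(4, {"Al": 2, "Cl": 7}, -1),   # 4 (Al2Cl7)-
            (3, {}, -1)])                   # 3 e-
rhs = side([(1, {"Al": 1}, 0),             # Al deposited on the anode
            (7, {"Al": 1, "Cl": 4}, -1)])  # 7 (AlCl4)-

print(lhs == rhs)  # True: 8 Al, 28 Cl, charge -7 on each side
```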

The main advantage of aluminium is that it removes the supply problem: aluminium is extremely common on Earth, as the continents are essentially made of aluminosilicates, and the cathode can be simple carbon. A battery of this type, using graphite cathodes, was proposed in 2015 (Nature 520: 325–328). It was claimed to manage 7,500 cycles without capacity decay, which looks good, but so far nobody seems to be taking it up.

Now for an oddity. For discharge, we need to supply (AlCl4)- to the anode, as it effectively supplies the chlorine. Rather than have complicated chemistry at the cathode, we can have an excess of (AlCl4)- from the start and, during charging, store it in the cathode structure; during discharge it is released. So we need something to store it in. The graphite used for lithium-ion batteries comes to mind, but here is the oddity: it is claimed you get twice the specific capacity, twice the cell efficiency and a 25% increase in voltage by using human hair! Next time you go to the hairdresser, note that in the long term those clippings might be valuable. Of course, before we get too excited, such batteries still need to be constructed and tested, because so far we have no idea how hair stands up to repeated cycling.

What we do not know about such batteries is how much dead weight has to be carried around and how small they can be made for a given charge. For cars, the critical points are how far the vehicle goes on one charge, how long it takes to recharge safely, how much of the vehicle’s volume the battery takes, and whether it is flammable. The advantage of the aluminium chloride system described above is that there are probably no side reactions, and a fire is somewhat unlikely. The materials are cheap. So why hasn’t more been done on this system? My guess is that the current manufacturers know lithium works, so why change? The fact that eventually they will have to does not bother them. The accountants in charge think beyond the next quarter is long-term; next year can look after itself. Except we know that when the problem strikes, it takes years to solve. We should get prepared, but our economic system does not encourage that.

This and That from the Scientific World

One of the consequences of writing blogs like this is that one tends to be on the lookout for things to write about. This ends up as a collection of curiosities, some of which get used, some of which eventually get thrown away, and a few I don’t know what to do with. They tend to be too short for a blog post of their own, but too interesting, at least to me, to ignore. So here is a “Miscellaneous” post.

COP 27.

They agreed that some countries will pay the poorer nations for damage so far, although we have yet to see the money. There was NO promise by anyone to reduce emissions and, worse from my point of view, no promise to investigate which technologies are worth going after. Finally, while the conference was on there were a number of electric buses offering free rides; at the end of the conference these buses simply disappeared. Usual service (or lack thereof) resumed.

Fighting!

You may think that humans alone fight by throwing things at each other, but you would be wrong. A film has been recorded (https://doi.org/10.1038/d41586-022-03592-w) of gloomy octopuses throwing things at each other, including clam shells. Octopuses are generally solitary animals, but in Jervis Bay, Australia, the gloomy octopus lives at very high densities, and it appears they irritate each other. When an object was thrown at another octopus, the throw was far stronger than when just clearing stuff out of the way, and it tended to come from specific tentacles, the throwing ones. Further, octopuses on the receiving end ducked! A particularly interesting tactic was to throw silt over the other octopus. I have no idea what the outcomes of these encounters were.

Exoplanets

The star HD 23472 has a mass of about 0.67 times that of our sun and a surface temperature of about 4,800 K. Accordingly, it is a mid-range K-type star, and it has at least five planets. The properties tabulated below are the semi-major axis a in AU (the distance from the star if the orbit were circular), the eccentricity e, the mass M relative to Earth, the density ρ in g/cm³, and the inclination i in degrees. The figures are taken from the NASA exoplanet archive.

Planet     a          e          M          ρ          i

b          0.116      0.07       8.32       6.15       88.9
c          0.165      0.06       3.41       3.10       89.1
d          0.043      0.07       0.55       7.50       88.0
e          0.068      0.07       0.72       7.50       88.6
f          0.091      0.07       0.77       3.0        88.1

The question then is, what to make of all that? The first thing to notice is that all the planets are much closer to the star than Earth is to the sun. Another point is that the quoted inclinations all approach 90 degrees. Here the inclination is the angle between the orbital plane and the plane of the sky, so values near 90 degrees simply mean we view the system almost edge-on, which is why these planets transit. The inclinations being so similar means the orbits are close to coplanar, as expected if they formed in the same accretion disk, and since the planets are close together they will probably be in some gravitational resonance with each other. What we see are two super-Earths (b and c) and, closest to the star, two small but very dense planets; technically, they are denser than Mercury in our system. There are also two planets (c and f) with densities a little lower than that of Mars.

The innermost part of the habitable zone of that star is calculated to be at 0.364 AU, the Earth-equivalent (where a planet gets the same radiation as Earth) at 0.5 AU, and the outer boundary of the habitable zone at 0.767 AU. All of these planets lie well inside the habitable zone. The authors who characterised these planets (Barros, S. C. C. et al. Astron. Astrophys. 665, A154 (2022)) considered the two inner planets to be Mercury equivalents, presumably based on their densities, which approximate to pure iron. My guess is the densities are over-estimated, as the scope for error is fairly large, but they certainly look like Mercury equivalents that are somewhat bigger than our Mercury.
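To put “well inside the habitable zone” in numbers, here is a quick sketch. Assuming the quoted Earth-equivalent distance of 0.5 AU (where a planet receives Earth’s insolation), and that flux falls off as the inverse square of distance, each planet’s flux relative to Earth is (0.5/a)²:

```python
# Rough stellar flux relative to Earth for each HD 23472 planet,
# using the Earth-equivalent distance of 0.5 AU quoted above:
# flux relative to Earth = (0.5 / a)^2, by the inverse-square law.
a_AU = {"b": 0.116, "c": 0.165, "d": 0.043, "e": 0.068, "f": 0.091}

flux = {p: (0.5 / a) ** 2 for p, a in a_AU.items()}
for p, f in sorted(flux.items(), key=lambda kv: -kv[1]):
    print(f"planet {p}: ~{f:.0f}x Earth's insolation")
```

Every planet receives many times Earth’s insolation, from roughly 9× for the outermost (c) to well over 100× for the innermost (d), consistent with the inner pair being roasted Mercury equivalents.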

Laughing Gas on Exoplanets

One of the aims of the search for exoplanets is to find planets that might carry life. The question is, how can you tell? At present, all we can do is examine the spectrum of a planet’s atmosphere, and this is not without its difficulties. The most obvious problem is signal intensity. What we look for are specific signals in the infrared spectrum, which arise from the vibrations of molecules. This can be done from absorptions when the planet transits the star’s face or, better (because the stellar intensity is less of a problem), from the starlight that passes through the planet’s atmosphere during such a transit.

The next problem is to decide what could signify life. Something like carbon dioxide or methane is at best ambiguous: carbon dioxide makes up a huge atmosphere on Venus, yet we do not expect life there, while methane comes from either anaerobic digestion (life) or geological activity (no life). So the proposal is to look for laughing gas, better known as nitrous oxide. Nitrous oxide is made by some life forms, and oddly enough it is also a greenhouse gas that is becoming more of a problem through the overuse of agricultural fertilizer, as it is a decomposition product of ammonium nitrate. If nothing else, we might find planets with civilizations fighting climate change!

COP 27 – the 27th Cop-out??

Currently, a number of parties have descended on Sharm el-Sheikh for COP 27, the 27th “Conference of the Parties” to deal with climate change. Everybody by now should be aware that a major contributor to climate change is the increased level of carbon dioxide in the atmosphere, and that we have to reduce emissions. In the previous 26 conferences various pledges were made to reduce such emissions, but what has happened? According to Nature, CO2 emissions are set to reach a record high of 37.5 billion tonnes in 2022. So much for a controlled reduction of emissions. In my opinion, the biggest effect of such conferences is the increased emissions from getting all the participants to them; in this context there are apparently over 600 fossil fuel lobbyists at this conference. So the question is, why has so little been achieved?

My answer is that the politicians took the easy way out: to be seen to be doing something, they implemented a carbon trading scheme. This allows money to flow, and the idea was that economics would make people transition to non-fossil-based energy by raising the price of fossil fuels. To make this look less like a tax, they then allowed the trading of credits, and politicians issued credits to “deserving causes”. There were two reasons why this had to fail: first, the politicians could sabotage it by issuing credits to those they favoured, and secondly, there are no alternatives that everyone can switch to. Therein lies the first problem: price does not alter activity much unless there is an alternative, and in most cases there is no easy alternative.

The second one is that even if you can develop alternatives, there is far too much installed capacity. The economies of just about every country are highly dependent on using fossil fuel. People are not going to discard their present vehicles and join a queue to purchase an electric one. They are still selling new petroleum-powered vehicles, and a lot of energy has been invested in making them. Like it or not, the electricity supply of many countries is dependent on coal-fired generation, and it costs a lot to construct a new plant to generate electricity. No country can afford to throw away their existing generation capacity.

In principle, the solution for electricity is simple: nuclear. So why not? Some say it is dangerous, and there remains the problem of storing waste. It is true that people died at Chernobyl, but that was an example of crass incompetence. Further, in principle molten salt reactors cannot melt down, while they also burn much of the waste; there is still waste to be stored somewhere, but the volume is very small in comparison. So why is this not used? Basically, the equipment has not been properly developed, the reason being that the first reactors were designed so they could provide the raw material for making bombs. When the politicians recognized the problem at the end of the 1980s, what should have happened is that money was invested in developing such technology so that coal-fired power could be laid to rest. Instead, there was a lot of arm-waving and calls for solar and wind power. It is true these generate electricity, and in some cases they do it efficiently; however, they cannot handle base load in most countries. Similarly with transport fuels. Technologies for advanced biofuels were developed in the early 1980s but were never taken to the next stage, because the price of crude oil dropped to very low levels and nothing could compete. The net result was that the technology was lost, and much had to be relearned. We cannot shut down the world’s industries and transport, and the politicians have refused to fund the development of alternative fuels.

So, what will happen? We shall continue on the way we are, while taking some trivial steps that make at least some of us, usually politicians, feel good because we are doing something. Unfortunately, greenhouse gas levels are still rising, and consider what is already happening at current levels. The Northeast Greenland Ice Stream is melting, and the rate of melt is increasing because the part of the Zachariae Isstrøm glacier that protected the coastal section of the ice stream broke off. Now warmer seawater is penetrating up to 300 km under the ice stream. Global ocean levels are now predicted to rise by up to a meter by the end of the century from the enhanced melting of Greenland ice alone. More important still is Antarctica: there is far more ice there, and it has been calculated that if temperatures rose four degrees Celsius above pre-industrial levels, up to two thirds of that ice could go.

Unfortunately, that is not the worst of the problems. If the climate heats, food becomes harder to provide. The most obvious problem is that much of the very best agricultural land lies close to sea level, so we lose that. Additionally, there will be regions of greatly increased drought and others with intense floods; neither is good for agriculture. Worse still, as soil gets hotter it loses carbon and becomes less productive, while winds tend to blow it away. So, what can we do about this? Unfortunately, it has to be everyone. We have to not only stop venting greenhouse gases into the atmosphere but also work out ways to take them out. Stop and ask yourself: does your local politician understand this? My guess is no. Does your local politician understand what a partial differential equation means? My guess is no.