The Year of Elements, and a Crisis

This is the International Year of the Periodic Table, and since it is almost over, one can debate how useful it was. I wonder how many readers were aware of this, and how many really understand what the periodic table means. Basically, it is a means of ordering the elements by atomic number in a way that allows you to predict their properties. The atomic number counts how many protons, and hence how many electrons, a neutral atom has. The number of electrons and the way they are arranged determine the atom’s chemical properties, and thanks to quantum mechanics, these properties repeat according to a given pattern. So, if it is that straightforward, why did it take so long to discover?

There are two basic reasons. The first is that it took a long time to work out which substances were actually elements. John Dalton, who put the concept of atoms on a sound footing, made a list of twenty-one, and some of those, like potash, were not elements at all, although they did contain atoms different from those of anything else on the list, from which he inferred a new element was present. The problem is that some elements are difficult to isolate from the compounds they are in, so Dalton, unable to break them down but knowing from effects such as the colours they gave to flames that they were different, labelled them as elements. The second reason is that although the electron configurations have common features, and there are repeats in behaviour, they are not exact repeats, and sometimes quite small differences in electron behaviour make very significant differences to chemical properties. The most obvious example involves the very common elements carbon and silicon. Both form dioxides of formula XO2. Carbon dioxide is a gas; you see silicon dioxide as quartz. (Extreme high pressure forces CO2 into a quartz-like structure, though, so the similarity does emerge when forced.) Both dioxides are extremely stable, yet silicon does not readily form a monoxide, while carbon monoxide has an anomalous electronic structure. At the other end of the “family”, lead does not behave particularly like carbon or silicon; it forms a dioxide, but one that is not at all colourless like the others. The main oxide of lead is the monoxide, and the relative instability of the dioxide is what makes the positive electrode of a lead-acid battery work.

The reason I have gone on like this is to explain that while elements have periodic properties, these only indicate the potential; in detail each element is unique in many ways. If you number the elements going down a column, there may be significant changes, superimposed on the general trend, depending on whether that number is odd or even. As an example, take copper, silver and gold: copper and gold are coloured; silver is not. The properties of silicon are wildly different from those of carbon, and there is an equally dramatic change in properties from germanium to tin. What this means is that it is very difficult to find a substitute material for an element that is used for a very specific property. Further, the amounts of given elements on the planet depend partly on how the planet accreted, which is why we do not have much helium or neon despite these being extremely common elements in the Universe as a whole, and partly on the fact that nucleosynthesis gives variable yields for different elements. The heavier elements in a periodic column are generally formed in lower amounts, while elements with a greater number of stable isotopes, or particularly stable isotopes, tend to be made in greater amounts. On the other hand, their general availability tends to depend on what routes exist for their isolation during geochemical processing. Some elements, such as lead, form a very insoluble sulphide that separates from the rock during geothermal processing, but others are much more resistant and remain distributed throughout the rock in highly dilute form, so even though they are there, they are not available in concentrated form. The problem arises when we need some of these more difficult-to-obtain elements for specific uses. Thus a typical mobile phone contains more than thirty different elements.

The Royal Society of Chemistry has warned that at least six elements used in mobile phones could effectively run out within the next 100 years. These have other uses as well. Gallium is used in microchips, but also in LEDs and solar panels. Arsenic is also used in microchips, as well as in wood preservation and, believe it or not, poultry feed. Silver is used in microelectrical components, but also in photochromic lenses, antibacterial clothing, mirrors, and more. Indium is used in touchscreens and microchips, but also in solar panels and specialist ball bearings. Yttrium is used for screen colours and backlighting, but also for white LED lights, camera lenses, and anticancer drugs, e.g. against liver cancer. Finally, there is tantalum, used for surgical implants, turbine blades, hearing aids, pacemakers, and nose caps for supersonic aircraft. Thus mobile phones will put a lot of stress on other manufacturing. To add to the problems, cell phones tend to have a life averaging two years. (There is the odd dinosaur like me who keeps using them until technology makes it difficult to keep doing so. I am on my third mobile phone.)

A couple of other facts: 23% of UK households have an unused mobile phone, and 52% of UK 16 – 24 year olds have ten or more electronic devices in their home. The RSC estimates that in the UK there are as many as 40 million old and unused such devices in people’s homes. I have no doubt that many other countries, including the US, have the same problem. So, is the obvious answer to promote recycling? There are recycling schemes around the world, but it is not clear what is being done with what is collected. Recovering the above elements from such a mixture is anything but easy. I suspect that the recyclers go for the gold and one or two other materials, and then discard the rest. I hope I am wrong, but from the chemical point of view, extracting such small amounts of so many different elements from such a mix is anything but easy. Different elements tend to be in different parts of the phone, so the phones can be dismantled and the parts chemically processed separately, but this is labour intensive. They can be melted down and separated chemically, but that is a very complicated process. No matter how you do it, the recovered elements will be very expensive. My guess is most are still not recovered. All we can hope is that they are discarded somewhere where they will lie inertly until they can be recovered economically.

An Ugly Turn for Science

I suspect there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, “The Structure of Scientific Revolutions”, that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and if a result cannot be, it tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported without comment in a very small section two-thirds of the way through the published paper, where nobody will notice it. I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment. This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc, and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on and they want to be comfortable. One can criticise that but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong. 

The bad news is that much deeper rot is appearing, as highlighted by an article in the journal “Science”, vol 365, p 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin. Two days after the attack, Trump asserted that he knew unquestionably that Assad had done it, and fired 59 cruise missiles at a Syrian base.

It then appeared that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned by what he found. If his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor at MIT and a world expert on missile defence, and after discussion he, Postol, and five other scientists carried out an investigation. The end result was a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal “Science and Global Security” (SGS), and following peer review it was accepted for publication. So far, science working as it should. The next step, if people do not agree, is that they should either dispute the evidence by providing contrary evidence, or dispute the analysis of it, but that is not what happened.

Apparently the manuscript was put online as an “advance publication”, and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard had been a major in the US military, had served in the Middle East, and had visited Syria, so she had a more realistic idea than most of what went on there. She has stated that she believed the evidence indicated Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated, and if evidence is found he should be accused of war crimes, but if evidence is not found he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She found the preprint and put it on her blog, which she is using in her Presidential run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Eliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: “You must approach this latest analysis with great caution”, the paper would be “misused to cover up the [Assad] regime’s crimes” and would “permanently stain the reputation of your journal”. The journal then pulled the paper from its publication queue, at first saying it would be edited, but then backtracking completely. The editor of the journal is quoted in Science as saying, “In hindsight we probably should have sent it to a different set of reviewers.” I find this comment particularly abhorrent. The editor should not select reviewers on the grounds that they will deliver the verdict the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider suppressing an analysis solely on grounds of political expediency, with no interest in finding the truth. For what it is worth, I hold a similar view of the incident itself. I saw a TV clip, taken within a day of the event, of people taking samples from the hole where the sarin was allegedly delivered without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the site to do serious damage, but nobody was affected. But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some “don’t approve”. The grounds for rejecting a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if the paper is demonstrably wrong, then it should be easy to demonstrate where. What do you all think?

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, objects of about asteroid size, and that mutual gravity then brings these together to form planets. A small industry has sprung up in the scientific community doing computerised simulations of this sort of thing, with the output of a very large number of scientific papers, which results in a number of grants to keep the industry going, lots of conferences to attend, and a strong “academic reputation”. The mere fact that nobody knows how the initial planetesimals form appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, etc. have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try to understand.

Initially, these simulations predicted solar systems similar to ours in that there were planets in circular orbits around their stars, although most simulations actually showed a different number of planets, usually more in the rocky planet zone. The outer zone has been strangely ignored, in part because simulations indicate that, because of the greater separation of planetesimals there, everything is extremely slow. The Grand Tack simulations indicate that planets cannot form further than about 10 A.U. from the star. That is demonstrably wrong, because giants larger than Jupiter and very much further out are observed. What some simulations have argued is that planetary formation activity is limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the Nice model, or the Grand Tack model (which is very close to being the same thing), is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets in near-circular orbits around the star, because whatever happened was happening more or less equally at all angles to some fixed background. The gas was spiralling into the star, so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when another star came by too close – a rather uncommon occurrence.

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of “star-burners”: planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow, because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel out each other’s effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if it did not, why did others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer was that they moved in and stopped, but that raises the question, why did they stop? In the Grand Tack model, Jupiter is argued to have migrated in towards Mars, throwing a whole lot of planetesimals out as it went, then Saturn did much the same, then for some reason Saturn turned around and began throwing planetesimals inwards, at which point Jupiter did the same and moved back out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why it would. The reason Saturn began throwing planetesimals in is that Uranus and Neptune are supposed to have started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn’s and Jupiter’s outward movement. Note that this depends on a particular starting point, and it is not clear to me why, if planetesimals are supposed to collide and form planets, and there was something equivalent to the masses of Jupiter and Saturn in planetesimals available, they did not simply form another planet.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significant elliptical orbits. In an elliptical orbit the planet comes closer to the star, and moves faster, on one side of the orbit than on the other; there is a directional preference. How did that come about? The answer appears to be simple. A circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form by collecting a huge number of planetesimals, or a large amount of gas, and these collections occur more or less continuously as the planet orbits the star. An elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that as planets grow, if they get big enough, their gravity alters their orbits, and if two planets come quite close, they exchange energy: one goes outwards, usually leaving the system altogether, and the other moves towards the star, or even into it. If it comes close enough to the star, the star’s tidal forces circularise the orbit and the planet remains close to the star; if it is moving prograde, like our Moon, the tidal forces will push the planet out. Equally, if the orbit is highly elliptical, the planet might “flip” and become circularised with a retrograde orbit. If so, it is eventually doomed, because the tidal forces cause it to fall into the star.

All of which may seem somewhat speculative, but the more interesting point is that we have now found evidence this happens, namely evidence that the star M67 Y2235 has ingested a “superearth”. The technique goes by the name “differential stellar spectroscopy”, and it works provided you can realistically estimate what the composition of a star should be, which can be done with reasonable confidence for stars that formed in the same cluster and can reasonably be assumed to have started from the same gas. M67 is a cluster with over 1200 known members, and it is close enough that reasonable detail can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun’s. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. The enhancement corresponds to an amount of rocky planet 5.2 times the mass of the Earth dissolved in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star, such that the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.
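To get a feel for why such high-precision differential spectroscopy is needed, here is a minimal back-of-the-envelope sketch of how much dissolving 5.2 Earth masses of rock would raise the heavy-element content of the outer convective envelope. The envelope mass fraction and the initial metal fraction Z are round values I have assumed for a Sun-like star; they are not figures from the study.

```python
# Rough consistency check: dissolve a 5.2 Earth-mass rocky planet into the
# outer convective envelope of a Sun-like star and see how much the heavy
# element content rises. Envelope fraction and Z are assumed round numbers.
M_sun = 1.989e30        # kg
M_earth = 5.972e24      # kg

envelope_mass = 0.02 * M_sun      # assumed convective envelope ~2% of the stellar mass
Z_envelope = 0.014                # assumed initial metal mass fraction (solar-like)

metals_before = Z_envelope * envelope_mass
metals_added = 5.2 * M_earth      # the ingested rocky planet, treated as all heavy elements
relative_increase = metals_added / metals_before
print(f"heavy elements up by ~{100 * relative_increase:.0f}%")   # a few percent
```

A few percent change in metal content is only a few hundredths of a dex in abundance, which is why the comparison has to be made against cluster siblings measured in exactly the same way.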

For those who wish to go deeper, being biased I recommend my ebook “Planetary Formation and Biogenesis.” Besides showing what I think happened, it analyses over 600 scientific papers covering different aspects of the topic.

Gravitational Waves, or Not??

On February 11, 2016, LIGO reported that on September 14, 2015, they had verified the existence of gravitational waves, the “ripples in spacetime” predicted by General Relativity. In 2017 the LIGO/Virgo laboratories announced the detection of a gravitational wave signal from merging neutron stars, which was verified by optical telescopes, and that year the Nobel Prize was awarded to three physicists for the detection of gravitational waves. This was science in action, and while I suspect most people had no real idea what it all meant, the items were big news. The detectors were then shut down for an upgrade to make them more sensitive, and when they started up again it was apparently predicted that dozens of events would be observed by 2020, with automated detection meaning information could be relayed immediately to optical telescopes. Lots of scientific papers were expected. So, with the program having run for three months, or essentially half the time of the prediction, what have we found?

Er, despite a number of alerts, nothing has been confirmed by optical telescopes. This has led to some questions as to whether any gravitational waves have actually been detected, and led a group at the Niels Bohr Institute in Copenhagen to review the data so far. The detectors at LIGO comprise two “arms” at right angles to each other, running four kilometres from a central building. Lasers are beamed down each arm and reflected from a mirror, and the use of wave interference effects lets the laboratory measure these distances to within (according to the LIGO website) 1/10,000 the width of a proton! Gravitational waves will change these lengths on this scale. So, of course, will local vibrations, so there are two laboratories 3,002 km apart, such that if both detect the same event, it should not be local. The first sign that something might be wrong was that besides the desired signals, a lot of additional vibrations are present, which we shall call noise. That is expected, but what was suspicious was that there seemed to be inexplicable correlations in the noise signals. Two laboratories that far apart should not have the “same” noise.
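To put the quoted sensitivity into the dimensionless form physicists actually use, here is a trivial conversion to strain (change in length divided by arm length). The proton width of roughly 1.7 femtometres is an assumption on my part rather than a LIGO figure.

```python
# Convert "1/10,000 of a proton width over a 4 km arm" into a strain h = dL/L.
proton_width_m = 1.7e-15      # assumed approximate proton diameter
arm_length_m = 4_000.0        # LIGO arm length quoted above

dL = proton_width_m / 10_000  # ~1.7e-19 m
strain = dL / arm_length_m
print(f"dL ~ {dL:.1e} m, strain h ~ {strain:.1e}")  # roughly 4e-23
```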

Then came a bit of embarrassment: it turned out that the figure published in Physical Review Letters that claimed the detection (and led to Nobel Prize awards) was not actually the original data; rather, the figure was prepared for “illustrative purposes”, with details added “by eye”. Another piece of “trickery” claimed by that institute is that the data are analysed by comparison with a large database of theoretically expected signals, called templates. If so, for me there is a problem. If there is a large number of such templates, then the chances of fitting any data to one of them start to get uncomfortably large. I recall the comment attributed to the mathematician John von Neumann: “Give me four constants and I shall map your data to an elephant. Give me five and I shall make it wave its trunk.” When they start adjusting their best-fitting template to fit the data better, I have real problems.
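To make the template idea concrete, here is a toy sketch of matched filtering: a small bank of chirp templates is correlated against a noisy record and the best-scoring template is reported. This is only an illustration of the principle, not LIGO’s actual pipeline, and all the waveforms and numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4000)

def chirp(f0, f1):
    """A toy chirp whose frequency sweeps from f0 to f1 over the record."""
    return np.sin(2 * np.pi * (f0 + (f1 - f0) * t / 2) * t)

def score(x, tpl):
    """Matched-filter score: magnitude of the normalised inner product."""
    return abs(np.dot(x, tpl)) / np.linalg.norm(tpl)

# A weak "signal" buried in noise, and a bank of candidate templates.
data = 0.3 * chirp(30, 300) + rng.normal(0.0, 1.0, t.size)
templates = {(f0, f1): chirp(f0, f1) for f0 in (20, 30, 40) for f1 in (200, 300, 400)}

best = max(templates, key=lambda k: score(data, templates[k]))
print("best-matching template (f0, f1):", best)

# The worry in one line: even pure noise gives some template a respectable
# score, and the more templates there are, the higher that best score tends to be.
noise_only = rng.normal(0.0, 1.0, t.size)
print("best score on pure noise:", max(score(noise_only, tpl) for tpl in templates.values()))
```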

So apparently those at the Niels Bohr Institute made a statistical analysis of the data allegedly seen by the two laboratories, and found no signal was verified by both, except the first. However, even the LIGO researchers were reported to be unhappy about that one. The problem: their signal was too perfect. In this context, when the system was set up there was a procedure to deliver artificially produced dummy signals, just to check that the procedure following signal detection at both sites was working properly. In principle, this perfect signal could have been the accidental delivery of such an artificial signal, or even a deliberate insertion by someone. Now, I am not saying that did happen, but it is uncomfortable that we have only one signal, and it is in “perfect” agreement with theory.

A further problem is that the collision of two neutron stars, as required by that one discovery and as the source of the gamma ray signals detected along with the gravitational waves, is apparently unlikely in an old galaxy where star formation has long since ceased. One group of researchers claims the gamma ray signal is more consistent with the merging of white dwarfs, and white dwarf mergers should not produce gravitational waves of the detected strength.

Suppose by the end of the year no further gravitational waves are observed. Now what? There are three possibilities: there are no gravitational waves; there are such waves, but the detectors cannot detect them for some reason; or there are such waves, but they are much less common than models predict. Apparently there have been attempts to find gravitational waves for the last sixty years, and with every failure it has been argued that they are weaker than predicted. The question then is, when do we stop spending increasingly large amounts of money on seeking something that may not be there? One issue that must be addressed, not only in this matter but in any scientific exercise, is how to get rid of confirmation bias, that is, when looking for something we shall call A and a signal is received that more or less fits the target, it is only too easy to say you have found it. In this case, when a very weak signal is received amidst a lot of noise and there is a very large number of templates to fit the data to, it is only too easy to take what is actually just unusually reinforced noise to be the signal you seek. Modern science seems to have descended into a situation where exceptional evidence is required to persuade anyone that a standard theory might be wrong, but only quite a low standard of evidence is needed to support an existing theory.

The Apollo Program – More Memories from Fifty Years Ago.

As most will know, it is fifty years since the first Moon landing. I was doing a post-doc in Australia at the time, and instead of doing any work that morning, when the word got around on that fateful day we all downed tools and headed to anyone with a TV set. The Parkes radio telescope allowed what it received to be relayed live to Australian TV stations. This was genuine reality TV. Leaving aside the set’s picture resolution, we were seeing what Houston was seeing, at exactly the same time. There was the Moon, in brilliant grey, and we could watch the terrain get better defined as the lander approached, then at some point it seemed as if the on-board computer crashed. (As computers go, it was primitive. A few years later I purchased a handheld calculator that would leave that computer for dead in processing power.) Anyway, Armstrong took control, and there was real tension amongst the viewers in that room because we all knew that if anything else went wrong, those guys would be dead. There was no possible rescue. The ground got closer, Armstrong could not fix on a landing site, the fuel supply was getting lower, then, with little choice because of the fuel, the ground got closer faster, the velocity dropped, and to everyone’s relief the Eagle landed and stayed upright. Armstrong was clearly an excellent pilot with excellent nerves. Fortunately, the lander’s legs did not drop into a hole, and as far as we could tell, Armstrong had chosen a good site. There was light relief somewhat later in the day as we watched them bounce around on the lunar surface. (I think they were ordered to take a four-hour rest first. Why they hadn’t rested before trying to land I don’t know. I don’t know about you, but if I had just successfully landed on the Moon, and would be there for not very long, a four-hour rest would not seem desirable.)

In some ways that was one of America’s finest moments. The average person probably has no idea how much difficult engineering went into it, and how much had to go right. This was followed up by five further successful landings, and the ill-fated Apollo 13, which was nevertheless a triumph of a different kind in that, despite a near-catastrophic situation, the astronauts returned safely to Earth.

According to the NASA website, the objectives of the Apollo program were:

  • Establishing the technology to meet other national interests in space.
  • Achieving preeminence in space for the United States.
  • Carrying out a program of scientific exploration of the Moon.
  • Developing human capability to work in the lunar environment.

The first two appear to have been met, but obviously there is an element of opinion there. It is debatable whether the last one achieved much, because there has been no effort to return to the Moon or to use it in any way, although that may well change now. Charles Duke turns 84 this year and still claims the title of “youngest person to walk on the Moon”.

So how successful was the scientific program? In some ways remarkably successful, yet in others there is a surprising reluctance to notice the significance of what was found. The astronauts brought back a large amount of lunar rock, but there were difficulties here in that, until Apollo 17, the samples were collected by astronauts with no particular geological training. Apollo 17 changed that, but it was still one site, albeit one with remarkable geological variety. Of course, the astronauts did their best and selected for variety, but we do not know what was overlooked.

Perhaps the most fundamental discovery was that the isotope ratios in lunar rocks are essentially identical to those in Earth rocks, and that means they came from the same place. To put this in context, the ratio of the oxygen isotopes 16O/17O/18O varies between bodies, seemingly according to distance from the star, although this cannot easily be represented as a simple function. The usual interpretation is that the Moon was formed when a small planet, maybe up to the size of Mars, called Theia crashed into Earth and sent a deluge of matter into space at a temperature well over ten thousand degrees Centigrade, and some of this eventually aggregated into the Moon. Mathematical modelling has had some success at showing how this happened, but I for one am far from convinced. One of the big advantages of this scenario is that it explains why the Moon has no significant water and no atmosphere, and never had any, apart from some water and other volatiles frozen in deep craters near the South Pole that almost certainly arrived from comets and condensed there thanks to the cold. As an aside, you will often read that the lunar gravity is too weak to hold air. That is not exactly true; it cannot hold it indefinitely, but if the Moon had started with an amount of carbon dioxide proportional in mass, or better still in cross-sectional area, to what Earth has, it would still have an atmosphere.
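As an illustration of that last claim, here is a crude sketch built entirely on assumptions of my own: give the Moon a CO2 inventory equal to Earth’s present atmospheric mass scaled down by the ratio of cross-sectional areas, and ask what surface pressure that would produce.

```python
import math

# Assumed round numbers, not figures from any lunar study.
M_atm_earth = 5.15e18   # kg, Earth's present atmosphere
R_earth = 6.371e6       # m
R_moon = 1.737e6        # m
g_moon = 1.62           # m/s^2

# Scale the gas inventory by cross-sectional area, as suggested in the text.
area_ratio = (R_moon / R_earth) ** 2                  # ~0.074
M_gas = M_atm_earth * area_ratio                      # ~3.8e17 kg

# Surface pressure = weight of the gas spread over the Moon's surface area.
pressure_pa = M_gas * g_moon / (4 * math.pi * R_moon ** 2)
print(f"~{pressure_pa/1000:.0f} kPa, i.e. ~{pressure_pa/101325:.2f} of an Earth atmosphere")
```

Roughly 0.16 of an Earth atmosphere is thin, but it is certainly an atmosphere, which is the point; holding it indefinitely is, as noted, another matter.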

One of the biggest disadvantages of this scenario is the question of where Theia came from. The models show that if Theia arrived with a velocity much above the escape velocity from Earth, the Moon cannot form, and the collision happened about 60 million years after the Earth formed. Any impactor acquires the escape velocity simply by falling down the Earth’s gravitational field, but if Theia started far enough away to have survived for 60 million years, its velocity would have been increased by falling down the solar gravitational field, and that would be enhanced by the eccentricity of its trajectory (needed for it to collide at all). Then there is the question of why the isotope ratios are the same as Earth’s when the models show that most of the Moon came from Theia. There has been one neat alternative: Theia accreted at the Earth-Sun fourth or fifth Lagrange point, which gives it indefinite stability as long as it stays small. That Theia might have grown just too big to stay there explains why the collision took so long to happen, and starting at the same radial distance as Earth explains why the isotope ratios are the same.
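To see the velocity problem in numbers, here is a minimal sketch of the energy bookkeeping: whatever approach velocity Theia had relative to Earth before the encounter (v_inf) adds in quadrature to the escape velocity, so a co-orbiting body with v_inf near zero hits at essentially escape velocity, while anything falling in from a different orbit hits harder. The sample v_inf values are purely illustrative.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

# Minimum possible impact speed: just falling down Earth's gravity well.
v_esc = math.sqrt(2 * G * M_earth / R_earth)          # ~11.2 km/s

def impact_speed(v_inf):
    """Impact speed for an approach velocity v_inf 'at infinity', in m/s.
    Energy conservation: v_impact^2 = v_esc^2 + v_inf^2."""
    return math.sqrt(v_esc ** 2 + v_inf ** 2)

for v_inf_kms in (0.0, 1.0, 3.0, 5.0):   # illustrative approach speeds
    print(f"v_inf = {v_inf_kms:3.1f} km/s -> impact at {impact_speed(v_inf_kms * 1e3) / 1e3:5.2f} km/s")
```

A Lagrange-point origin keeps v_inf close to zero, which is exactly the low-velocity collision the models need.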

So why did the missions stop? In part, the cost, but that is not a primary reason because most of the costs were already paid: the rockets had already been manufactured, the infrastructure was there and the astronauts had been trained. In my opinion, it was two-fold. First, the public no longer cared, and second, as far as science was concerned, all the easy stuff had been done. They had brought back rocks, and they had done some other experiments. There was nothing further to do that was original. This program had been a politically inspired race, the race was run, let’s find something more exciting. That eventually led to the shuttle program, which was supposed to be cheap but ended up being hideously expensive. There were also the deep space probes, and they were remarkably successful.

So overall? In my opinion, the Apollo program was an incredible technological program, bearing in mind from where it started. It established the US as firmly the leading scientific and engineering centre on Earth, at least at the time. Also, it got where it did because of a huge budget dedicated to one task. As for the science, more on that later.

The Electric Vehicle as a Solution to the Greenhouse Problem

Further to the discussion on climate change, the argument in New Zealand now is that we must reduce our greenhouse emissions by converting our vehicle fleet to electric vehicles. So, what about the world? Let us look at the details. Currently, there are estimated to be 1.2 billion vehicles on the roads, and by 2035 there will be two billion, assuming current trends continue. However, let us forget about such trends and look at what it would take to switch 1.2 billion vehicles to electric. Obviously, at the price of them, that is not going to happen overnight, but how feasible is it in the long run?

For a scoping analysis we need numbers, and the following is a “back of the envelope” analysis. It is designed not to give answers, but at least to visualise the size of the problem. To start, we have to assume a battery size per vehicle, so I am going to assume each vehicle has an 85 kWh battery assembly. A number of vehicles now have more than this, but equally many have less; for initial scoping, such details are ignored. For current purposes, then, I shall assume an 85 kWh battery assembly and focus on the batteries.

First, we need a graphite anode which, from web-provided data, will require approximately 40 million t of graphite. Since Turkey alone has reserves of about 90 million t, strictly speaking graphite is not a problem, although from a chemical point of view, what might be called graphite is not necessarily suitable. However, if there are impurities, they can be cleaned up. So far, not a limiting factor.

Next, each battery assembly will use about 6 kg of lithium and, using the best figures from Tesla, at least 17 kg of cobalt. This does not look too serious until we multiply by 1.2 billion, which gets us to 7.2 million tonnes of lithium and 20.4 million tonnes of cobalt. World production of lithium is about 43,000 t/a, while that of cobalt is about 110,000 t/a, and most of the cobalt already goes to other uses. So overnight conversion is not possible. The world reserves of lithium are about 16 million t, so there is enough lithium, although since most of the reserves are not actually in production, presumably because of the difficulty in purifying the material, we can assume a significant price increase would be required. Worse, the known reserves of cobalt are about 7,100,000 t, so it is simply not possible to power these vehicles with our current “best battery technology”. There are alternatives, such as manganese-based cathodes, but with current technology they have only about 2/3 the energy density and last only about half the number of charge cycles, so maybe this is not an answer.
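Here is the arithmetic of the last two paragraphs in one small Python sketch. The per-vehicle lithium and cobalt figures are those quoted above; the per-vehicle graphite figure is simply the 40 million t total divided back over the fleet, so treat it as an inference rather than a datum.

```python
FLEET = 1.2e9   # vehicles to convert

per_vehicle_kg = {"lithium": 6, "cobalt": 17, "graphite": 33}      # kg per 85 kWh pack
annual_production_t = {"lithium": 43_000, "cobalt": 110_000}        # t/a (from the text)
reserves_t = {"lithium": 16e6, "cobalt": 7.1e6, "graphite": 90e6}   # t (graphite: Turkey alone)

for element, kg in per_vehicle_kg.items():
    needed_t = FLEET * kg / 1000   # tonnes for the whole fleet
    line = f"{element:8s}: {needed_t/1e6:5.1f} Mt needed vs {reserves_t[element]/1e6:5.1f} Mt reserves"
    if element in annual_production_t:
        line += f"; {needed_t/annual_production_t[element]:.0f} years at current production"
    print(line)
```

The cobalt line is the one that kills the overnight-conversion idea: the fleet would need roughly three times the known reserves.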

Then comes the problem of how to power these vehicles. Let us suppose they use about ¼ of their stored energy on high-use days and recharge for the next day. That requires about 24 billion kWh of electricity to be generated that day for this purpose. World electricity production is currently a little over 21,000 TWh per year. Up to a point, that indicates “no problem”, except that over 1/3 of that came from coal, while gas and oil burning brought the fossil fuel contribution up to about 2/3 of world electricity production, and coal burning was the fastest-growing contribution to meeting demand. Also, of course, this is additional electricity we need: global electricity demand rose by about 900 TWh in 2018. (Electricity statistics from the International Energy Agency.) So switching to electric vehicles on today’s grid will increase coal burning, which increases the emission of greenhouse gases, counter to the very problem you are trying to solve. Electricity supply as such is not a problem for transport, but electricity generation clearly overwhelms transport as a contributor to the greenhouse gas problem. Germany closing its nuclear power stations is not a useful contribution to solving it.
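The daily charging figure comes straight from the assumptions above; my arithmetic lands slightly above the 24 billion kWh quoted, which is just down to rounding of the per-vehicle numbers.

```python
FLEET = 1.2e9           # vehicles
PACK_KWH = 85           # assumed battery size per vehicle
DAILY_FRACTION = 0.25   # fraction of the pack used on a high-use day

daily_kwh = FLEET * PACK_KWH * DAILY_FRACTION
print(f"~{daily_kwh / 1e9:.1f} billion kWh of charging needed on a high-use day")
```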

It is frequently argued that solar power is the way to collect the necessary transport electricity. According to Wikipedia, the most productive solar power plant is in China’s Tengger desert, which produces 1.547 GW from 43 square kilometres. If we assume it can operate at that level for 6 hours per day, we have about 9.3 GWh per day. The Earth has plenty of area, but the roughly 110,000 square kilometres of solar farm required is a significant commitment, and most places do not have such a conveniently sunny desert close by. Many have proposed that solar panels on the roofs of houses could collect power through the day and charge the vehicle at night, but to do that we have just doubled the battery requirements, and these are strained already. The rooftop panels could instead feed the grid through the day, with the vehicles charged through the night when peak power demand has fallen away, which would solve part of the problem, but now the solar panels have to make sense in terms of generating electricity for general purposes. Note that if we develop fusion power, which would solve a lot of energy requirements, it is most unlikely that a fusion plant could vary its output very much, which means it would have to run continuously through the night. At that point, charging electric cars overnight would greatly assist the use of fusion power.
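And the solar-farm arithmetic, using the Tengger figures quoted above; the 6 hours of effective full output is the same assumption as in the text.

```python
daily_demand_gwh = 24_000     # ~24 billion kWh/day of charging, from above
plant_power_gw = 1.547        # Tengger desert plant, nameplate capacity
plant_area_km2 = 43
full_output_hours = 6         # assumed effective full-output hours per day

gwh_per_plant_day = plant_power_gw * full_output_hours        # ~9.3 GWh/day
plants = daily_demand_gwh / gwh_per_plant_day
print(f"~{plants:,.0f} Tengger-sized plants, ~{plants * plant_area_km2:,.0f} km^2 of solar farm")
```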

To summarise the use of electricity to power road transport using independent vehicles: there would need to be a significant increase in electricity production, but it is still a modest fraction of what we already generate. The reason this is so significant to New Zealand is that much of New Zealand’s electricity is renewable anyway, thanks to the heavy investment in hydropower. Unfortunately, that does not count under the Kyoto-style rules, which take 1990 as the baseline, because it was all installed prior to 1990. Those who turned off coal plants to switch to gas that suddenly became available around 1990 did well out of these protocols, while those who had to resort to thermal generation because their hydro was fully utilised did not. However, in general the real greenhouse problem lies with the much bigger thermal power station emissions, especially from the coal-fired stations. The limits to growth of electric vehicles currently lie with battery technology, and for electric vehicles to make more than a modest contribution to the transport problem we need a fundamentally different form of battery or fuel cell. However, to power them, we need to develop far more productive electricity generation that does not emit greenhouse gases.

Finally, I have yet to mention the contribution of biofuels. I shall do that later, but if you want a deeper perspective than in my blogs, my ebook “Biofuels” is 99c this week at Smashwords, in all formats. (https://www.smashwords.com/books/view/454344.)  Three other fictional ebooks are also on discount. (Go to https://www.smashwords.com/profile/view/IanMiller)

The Roman “Invisibility” Cloak – A Triumph for Roman Engineering

I guess the title of this post is designed to be a little misleading, because you might be thinking of Klingons and invisible space ships, but let us stop and consider what an “invisibility” cloak actually means. In the case of the Klingons, light coming from somewhere else is not reflected off their ship back to your eyes. One way to achieve something like that is to construct metamaterials, which contain structures designed to divert waves around an object. The key is matching the structural variation to the wavelength, and this is easier with longer wavelengths, which is why a certain amount of fuss has been made when microwaves have been diverted around objects to produce an “invisibility” cloak. As you might gather, there is a general problem with overall invisibility, because electromagnetic radiation covers a huge range of wavelengths.

Sound is also a wave, and here it is easier to create “invisibility”, because most sources generate sound over a reasonably narrow range of wavelengths. So, time for an experiment. In 2012 Stéphane Brûlé et al. demonstrated the potential by drilling a two-dimensional array of boreholes into topsoil, each 5 m deep. They then placed an acoustic source nearby and found that much of the waves’ energy was reflected back towards the source by the first two rows of holes. What happens is that, depending on the spacing of the holes, when waves within a certain range of wavelengths pass through the lattice, there are multiple reflections. (Note this is of no value to Klingons, because you have just amplified the return radar signal.)

The reason is that when waves strike a different medium, some of the energy is reflected and some is refracted, and reflection becomes more likely as the angle of incidence increases; and of course the angle of incidence equals the angle of reflection. A round hole provides quite chaotic reflections, especially recalling that refraction also changes the angle, and a change of medium occurs both when the wave strikes the hole and when it tries to leave it. If the holes are spaced properly with respect to the wavelength, there is considerable destructive wave interference, which is why, in Brûlé’s experiment, so much of the wave energy ended up reflected back towards the source. It is not necessary to use holes; it is merely necessary to have objects with different wave impedance, i.e. the waves travel at different speeds through the different media, and the bigger the difference in speeds, the better the effect. Brûlé apparently played around with hole sizes and positions and found the arrangement that gave maximum reflection.
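For anyone who wants a feel for the spacing involved, the condition for strong back-reflection from a periodic lattice is roughly the Bragg condition: a row spacing of about half a wavelength. The soil wave speed and source frequency below are round numbers I have assumed for soft topsoil, not figures from Brûlé’s experiment.

```python
def bragg_spacing(wave_speed_m_s: float, frequency_hz: float) -> float:
    """First-order Bragg condition: strong back-reflection when the row
    spacing is about half the wavelength in the medium."""
    wavelength = wave_speed_m_s / frequency_hz
    return wavelength / 2.0

# e.g. surface waves at ~150 m/s in soft soil and a 50 Hz source (assumed values)
print(f"~{bragg_spacing(150.0, 50.0):.1f} m between rows of boreholes")   # ~1.5 m
```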

So, what has this got to do with Roman engineering? Apparently Brûlé went on holiday to Autun in central France, and while being a tourist he saw a photograph of the foundations of a Gallo-Roman theatre. Although the image showed the foundation features only faintly, he had a spark of inspiration and postulated that the semi-circular structure bore an uncanny resemblance to half of an invisibility cloak. So he got a copy of the photograph, superimposed it on an image of his own borehole array, and found there was indeed a very close match.

The same thing apparently applied to the Colosseum in Rome, and a number of other amphitheatres. He found that the radii of neighbouring concentric circles (or, more generally, ellipses) followed the required pattern very closely.

The relevance? Well, obviously we are not trying to defend against stray noise, but earthquakes are also wave motion. The hypothesis is that the Romans may have arrived at this structure by watching which buildings survived earthquakes and which did not, and then settling on the design most likely to withstand them. The ancients did have surprising experience with earthquake-resistant design. The great temple at Karnak was built on materials that, when sodden (which happened with the annual floods, and the effect lasted roughly a year), absorbed or reflected such shaking and acted as “shock absorbers”. The thrilling part of this study is that just maybe we could take advantage of this to design our cities so that they too reflect seismic energy away. And if you think earthquake wave reflection is silly, you should study the damage done in the Christchurch earthquakes. The quake centres were largely to the west, but the waves were reflected off Banks Peninsula, and there was significant wave interference. In places where the interference was constructive the damage was huge, but nearby, where the interference was destructive, there was little or no damage. Just maybe we can still learn something from Roman civil engineering.