About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords, and a number of other sites. The intention is to publish a sequence of novels, each of which stands alone, but which, taken together, tell a further story through their combined backgrounds. This blog will be largely about my views on science in fiction, and about the future, including what we should be doing about it but, in my opinion, are not. In the science area, I have been working on products from marine algae, and on biofuels. I also have an interest in scientific theory, which is usually alternative to what others think. This work is also being published as ebooks under the series "Elements of Theory".

Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions, or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime?? Anyway, they then observed that metals did something when heated in air:

M + air + heat → M(calx) + ???  (A calx was what we call an oxide.)

They deduced there had to be a metallic principle that gave the metallic properties, such as ductility, lustre, malleability, etc., but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture; thus rust, unlike iron, was not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint you get green, and although the paints cannot readily be unmixed, it is still a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials. Even more important was the work of Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalized that acid plus metal, or acid plus metal calx, gave salts, and each salt was specifically different, depending only on the acid and metal used. He also recognized that what we call chemical compounds were individual entities that could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (though some did not), and the calx was usually heavier than the metal. The theory was that the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was lighter than the metal. For consistency, they asserted it should have gained weight, and as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have gained weight was correct, and indeed it did, but this habit of avoiding the “data you don’t like” leads to many problems, not the least of which is “inventing” reasons why observations do not fit the theory without taking the trouble to abandon the theory. This time they were right, but that only encourages the habit. As to why there was a problem, zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?

Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases, could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back from where it came that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of this theory. He then went wrong by arguing the more phlogiston, the more flammable the body, which is odd, because if he said some but not all such materials could release phlogiston, he might have thought that some might release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He also realized that respiration was essentially the same process, and fire or breathing releases phlogiston, to make phlogisticated air, and he also realized that plants absorbed such phlogiston, to make dephlogisticated air.

For those who know the chemistry, this is all reasonable, but it happens to be a strange mix of good and bad conclusions. The big problem for Stahl was that he did not know “air” was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships to which numerical values can be attached, and at least some of these are known in advance. Stahl’s theory was interesting because it got chemistry going in a systematic way, but because we don’t believe in phlogiston, Stahl is essentially forgotten.

People have blind spots. Priestley also carried out Lavoisier’s experiment, 2HgO + heat ⇌ 2Hg + O2, and found that mercury was lighter than the calx, so he argued phlogiston was lighter than air. He knew there was a gas there, but the fact it must also have weight eluded him. Lavoisier’s explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion, the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get “exactly”. Measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions the various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston, simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded it.
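
To see the sort of bookkeeping Lavoisier’s balance made possible, here is a minimal sketch (my own illustration, using modern atomic masses, not anything from his notebooks) checking the mass balance for the mercury calx reaction:

```python
# Conservation of mass for 2 HgO -> 2 Hg + O2, using modern atomic masses (g/mol).
# Illustration only; Lavoisier of course had no atomic masses to work with.
M_Hg = 200.59   # mercury
M_O = 16.00     # oxygen

mass_calx = 2 * (M_Hg + M_O)    # 2 HgO
mass_metal = 2 * M_Hg           # 2 Hg
mass_gas = 2 * M_O              # O2

print(f"calx {mass_calx:.2f} g -> metal {mass_metal:.2f} g + gas {mass_gas:.2f} g")
print(f"metal + gas = {mass_metal + mass_gas:.2f} g")   # matches the calx exactly
```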

So, looking back, did phlogiston have merit as a concept? Most certainly! “The metal gives off a weightless substance that sticks to a particular gas” can be replaced with “the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion”. Opposite charges attract and try to bind together. This is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could have determined the weight difference between a metal and the same metal less one electron, even if they could have worked out how to make it. Of course the next step is to say that the phlogiston is a discrete particle, and now valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. There are very few ever who are comparable to Newton.

Is Science in as Good a Place as it Might Be?

Most people probably think that science progresses through all scientists diligently seeking the truth, but that illusion was shattered when Thomas Kuhn published “The Structure of Scientific Revolutions.” Two quotes:

(a) “Under normal conditions the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition.”

(b) “Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are the men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

Is that true, and if so, why? I think it follows from the way science is learned and then funded. In general, scientists gain their expertise by learning from a mentor, and if you do a PhD, you work for several years in a very narrow field, and most of the time the student follows the instructions of the supervisor. He will, of course, discuss issues with the supervisor, but basically the young scientist will have acquired a range of techniques when finished. He will then go on a series of post-doctoral fellowships, generally in the same area, because he has to persuade the new team leaders he is sufficiently skilled to be worth hiring. So he gains more skill in the same area, but invariably he also becomes more deeply submerged in the standard paradigm. At this stage of his life, it is extremely unusual for the young scientist to question whether the foundations of what he is doing are right, and since most continue in this field, they have the various mentors’ paradigm well ingrained. To continue, either they find a company or other organization to provide an income, or they stay in a research organization, where they need funding. When they apply for it they keep well within the paradigm; first, it is the easiest route to success, and also boat rockers generally get sunk right then. To get funding, you have to show you have been successful; success is measured mainly by the number of scientific papers and the number of citations. Accordingly, you choose projects that you know will work and should not upset any apple-carts. You cite those close to you, and they will cite you; accuse them of being wrong and you will be ignored, and with no funding, tough. What all this means is that the system seems to have been designed to generate papers that confirm what you already suspect. There will be exceptions, such as “discovering dark matter”, but all that has done so far is to provide a parking place for what we do not understand. Because we do not understand, all we can do is make guesses as to what it is, and the guesses are guided by our current paradigm, and so far our guesses are wrong.

One small example follows to show what I mean. By itself, it may not seem important, and perhaps it isn’t. There is an emerging area of chemistry called molecular dynamics. What this tries to do is work out how energy is distributed in molecules, as this distribution alters chemical reaction rates, and that can be important for some biological processes. One such feature is to try to relate how molecules, especially polymers, can bend in solution. I once went to hear a conference presentation where this was discussed, and the form of the bending vibrations was assumed to be simple harmonic because for that the maths are simple, and anything wrong gets buried in various “constants”. All question time was taken up by patsy questions from friends, but I got hold of the speaker later and pointed out that I had published a paper a long time previously that showed the vibrations were not simple harmonic, although that was a good approximation for small vibrations. The problem is that small vibrations are irrelevant if you want to see significant chemical effects; they come from large vibrations. Now the “errors” can be fixed with a sequence of anharmonicity terms, each with their own constant, and each constant is worked around until the desired answer is obtained. In short, you get the answer you need by adjusting the constants.
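
To illustrate the point about large vibrations, here is a sketch of my own (not the conference work, and with arbitrary parameter values) comparing a simple harmonic potential with a Morse potential, a common anharmonic form; the two agree near the minimum and part company badly at large displacements:

```python
import math

# Harmonic approximation vs a Morse (anharmonic) potential.
# Parameters are arbitrary illustrative values, not fitted to any real molecule.
D_e = 4.0                 # well depth
a = 1.0                   # width parameter
k = 2.0 * D_e * a ** 2    # harmonic force constant matching the Morse curvature at the minimum

def morse(x):
    return D_e * (1.0 - math.exp(-a * x)) ** 2

def harmonic(x):
    return 0.5 * k * x ** 2

for x in [0.05, 0.1, 0.5, 1.0, 2.0]:   # displacement from equilibrium
    print(f"x = {x:4.2f}: Morse = {morse(x):6.3f}, harmonic = {harmonic(x):6.3f}")
# At small x the two agree closely; at large x the harmonic form wildly
# overestimates the restoring energy, and large vibrations are precisely
# the regime that matters for chemical effects.
```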

The net result is that good agreement with observation is claimed once the “constants” are found for the given situation. The “constants” appear to be constant only for a given situation, so arguably they are not constants at all, and worse, it can be near impossible to find out what they are from the average paper. Now, there is nothing wrong with using empirical relationships, since if they work, they make it a lot easier to carry out calculations. The problem starts when, because you do not know why it works, you use it under circumstances where it no longer works.

Now, before you say that surely scientists want to understand, consider the problem for the scientist: maybe there is a better relationship, but to change to use it would involve re-writing a huge amount of computer code. That may take a year or so, in which time no publications are generated, and when the time for applications for further funding comes up, besides having to explain the inactivity, you have to explain why you were wrong before. Who is going to do that? Better to keep cranking the handle because nobody is going to know the difference. Does this matter? In most cases, no, because most science involves making something or measuring something, and most of the time it makes no difference, and also most of the time the underpinning theory is actually well established. The NASA rockets that go to Mars very successfully go exactly where planned using nothing but good old Newtonian dynamics, some established chemistry, some established structural and material properties, and established electromagnetism. Your pharmaceuticals work because they have been empirically tested and found to work (at least most of the time).

The point I am making is that nobody has time to go back and check whether anything is wrong at the fundamental level. Over history, science has been marked by a number of debates, and a number of treasured ideas overthrown. As far as I can make out, since 1970, far more scientific output has been made than in all previous history, yet there have been no fundamental ideas generated during this period that have been accepted, nor have any older ones been overturned. Either we have reached a stage of perfection, or we have ceased looking for flaws. Guess which!

What is Dark Matter?

First, I don’t know what dark matter is, or even if it is, and while others might have ideas, nobody else knows either. However, the popular press tells us that there is at least five times more of this mysterious stuff in the Universe than ordinary matter, and we cannot see it. As an aside, it is not “dark”; rather it is transparent, like perfect glass, because light does not interact with it. Nevertheless, there are good reasons for thinking that something is there: assuming our physics are correct, certain things should happen, and they do not happen as calculated. The following is a very oversimplified attempt at explaining the problem.

All mass exerts a force on other mass called gravity. Newton produced laws on how objects move according to forces, and he outlined an equation for how gravity operates. If we think about energy, follow Aristotle as he considered throwing a stone into the air. First we give the stone kinetic energy (that is, the energy of motion), but as it goes up, it slows down, stops, and then falls back down. So what happened to the original energy? Aristotle simply said it passed away, but we now say it got converted to potential energy. That permits us to say that the energy always stayed the same. Note we can never see potential energy; we say it is there because it makes the conservation of energy work. The magnitude of the potential energy for a mass m under the gravitational effect of a mass M is given by V = GmM/r, where G is the gravitational constant and r is the distance between them.

When we have three bodies, we cannot solve the equations of motion exactly, so we have a problem. However, the French mathematician Lagrange showed that any such system has a function, which we call the Lagrangian in his honour, given by the difference between the total kinetic and potential energies. Further, provided we know the basic form of the potential energy, we can derive the virial theorem from this Lagrangian, and for gravitational interactions the average kinetic energy has to be half the magnitude of the potential energy.
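
For those who want the statement spelt out, here is the standard textbook form (nothing original to this post); U is the signed potential energy, whose magnitude V = GmM/r is the form quoted above:

```latex
% Lagrangian and virial theorem for a 1/r potential (standard result)
\mathcal{L} = T - U, \qquad U = -\frac{GmM}{r}
\;\;\Longrightarrow\;\;
2\langle T \rangle = -\langle U \rangle
\;\;\Longrightarrow\;\;
\langle T \rangle = \tfrac{1}{2}\bigl|\langle U \rangle\bigr| .
```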

So, to the problem. As the potential energy drops off with distance from the centre of mass, so must the kinetic energy, which means that the velocity of a body orbiting a central mass must fall as the distance from the centre increases. In our solar system Jupiter travels much more slowly than Earth, and Neptune is far slower still. However, when measurements of the velocity of stars moving in galaxies were made, there was a huge surprise: the stars moving around the galaxy have an unexpected velocity distribution, being slowest near the centre of the galaxy, then speeding up and becoming roughly constant in the outer regions. Sometimes the outer parts are not quite constant, and a plot of speed against distance from the centre rises, then instead of flattening, has wiggles. Thus the stars have far too much velocity in the outer regions of the galactic disk. Then it was found that galaxies in clusters had too much kinetic energy for any reasonable account of the gravitational potential energy. There are other reasons why things could be considered to have gone wrong, for example gravitational lensing, with which we can also discover new planets, and there is a problem with the cosmic microwave background, but I shall stick mainly with galactic motion.
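
To see how sharp the conflict is, here is a minimal numerical sketch of the Keplerian expectation (my own illustration with round numbers, treating the whole galactic mass as a central point, which is crude for the inner disk but shows the trend):

```python
import math

# Keplerian expectation for circular orbital speed, v = sqrt(GM/r).
# Illustrative round numbers only; a real galaxy's mass is spread out,
# so the inner values here are overestimates.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e41           # assumed central mass, roughly 1e11 solar masses, kg
KPC = 3.086e19       # one kiloparsec in metres

for r_kpc in [2, 5, 10, 20, 40]:
    r = r_kpc * KPC
    v = math.sqrt(G * M / r)
    print(f"r = {r_kpc:2d} kpc: expected v ~ {v / 1000:5.0f} km/s")
# The expected speed keeps falling with distance; observed disk stars instead
# level off at a roughly constant speed in the outer regions.
```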

The obvious answer to this problem is that the equation for the potential is wrong, but where? There are three possibilities. First, we add a term X to the right hand side, then try to work out what X is. X will include the next two alternatives, plus anything else, but since it is essentially empirical at this stage, I shall ignore it in its own right. The second is to say that the inverse dependency on r is wrong, which is effectively saying we need to modify our law of gravity. The problem with this is that Newton’s gravity works very well right out to the outer extensions of the solar system. The third possibility is that there is more mass there than we expect, and it is distributed as a halo around the galactic centre. None of these are very attractive, but the third option does avoid the problem of why gravity does not deviate from Newton’s law in our solar system (apart from Mercury). We call this additional mass dark matter.

If we consider modified Newtonian dynamics (MOND), this starts with the proposition that below a certain acceleration the force takes a different form: the radial dependency of the potential contains a further term that is proportional to the distance r, until it reaches a maximum. MOND has the advantage that it predicts naturally the form of the velocity distribution and its seeming constancy between galaxies. It also provides a relationship between the observed mass and the rate of rotation of a galaxy, and this appears to hold. Further, MOND predicts that for a star, when its acceleration reaches a certain level, the dynamics revert to Newtonian, and this has been observed. Dark matter has a problem with this. On the other hand, something like MOND has real trouble trying to explain the wiggly structure of velocity distributions in certain galaxies; it does not explain the dynamics of galaxy clusters; it has been claimed it offers a poor fit for velocities in globular clusters; the predicted rotations of galaxies are good, but they require different values of what should be a constant; and it does not apply well to colliding galaxies. Of course we can modify gravity in other ways, but however we do it, it is difficult to fit it with General Relativity without a number of ad hoc additions, and there is no real theoretical reason for the extra terms required to make it work. General Relativity is based on ten equations, and to modify it you need ten new terms to be self-consistent; the advantage of dark matter is you only need one.
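
For reference, the deep low-acceleration limit of MOND is usually summarised as follows (a standard statement of the idea, not a derivation of anything above): when the Newtonian acceleration falls well below a characteristic acceleration a₀, the actual acceleration a satisfies a²/a₀ = a_N, and for a circular orbit this gives

```latex
% Deep-MOND regime (a << a_0):  a^2/a_0 = a_N = GM/r^2
a = \frac{\sqrt{G M a_0}}{r},
\qquad
\frac{v^2}{r} = a
\;\;\Longrightarrow\;\;
v^4 = G M a_0 ,
```

so the orbital speed v becomes independent of r, which is why MOND so naturally produces flat rotation curves and a mass–rotation relationship.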

The theory that the changes are due to dark matter has to assume that each galaxy incorporates dark matter roughly in proportion to its mass, and possibly has to do that by chance. That is probably its biggest weakness, but it has the benefit that it assumes all our physics are more or less right, and what has gone wrong is that there is a whole lot of matter we cannot see. It predicts the way the stars rotate around the galaxy, but that is circular reasoning because it was designed to do that. It naturally accommodates the fact that not all galaxies rotate the same way, and it permits the “squiggles” in the orbital speed distribution, again because in each case you assume the right amount of dark matter is in the right place. However, for a given galaxy, you can use the same dark matter distribution to determine the motion of galaxy clusters, the gas temperatures and densities within clusters, and gravitational lensing, and these are all in accord with the assumed amount of dark matter. The very small anisotropy of the cosmic microwave background also fits in very well with the dark matter hypothesis, and not with modified gravity.

Dark matter has some properties that limit what it could be. We cannot see it, so it cannot interact with electromagnetic radiation, at least to any significant extent. Since it does not radiate energy, it cannot “cool” itself, and therefore it does not collapse to the centre of a galaxy. We can also put constraints on the mass of the dark matter particle (assuming it exists) from other parts of physics, by how it has to behave. There is some danger in this because we are assuming the dark matter actually follows those relationships, and we cannot know that. However, with that kept in mind, the usual conclusions are that it must not collide frequently, and it should have a mass larger than about 1 keV. That is not a huge constraint, as the electron has a mass of a little over 0.5 MeV, but it says the dark matter cannot simply be neutrinos. There is a similar upper limit: because of the way gravitational lensing works, it cannot really be a collection of brown dwarfs. As can be seen, so far there are only very loose constraints on the mass of the dark matter constituent particles.

So what is the explanation? I don’t know. Both propositions have troubles, and strong points. The simplest means of going forward would be to detect and characterize dark matter, but unfortunately our inability to do this does not mean that there is no dark matter; merely that we did not detect it with that technique. The problem in detecting it is that it does not do anything, other than interact gravitationally. In principle we might detect it when it collides with something, as we would see an effect on the something. That is how we detect neutrinos, and in principle you might think dark matter would be easier because it has a considerably higher mass. Unfortunately, that is wrong, because the neutrino usually travels at near light speed; if dark matter were much larger, but much slower, it would be equally difficult to detect, if not more so. So, for now nobody knows.

Just to finish, a long shot guess. In the late 20th century, the German physicist B. Heim came up with a theory of elementary particles. This is largely ignored in favour of the standard model, but Heim’s theory produces a number of equations that are surprisingly good at calculating the masses and lifetimes of elementary particles, both of which are seemingly outside the scope of the standard model. One oddity of his results is that he predicts a “neutral electron” with a mass slightly greater than the electron and with an infinite lifetime. If matter and antimatter originally annihilated and left a slight preponderance of matter, and if this neutral electron is its own antiparticle, then it would survive, and although it is very light, there would be enough of it to explain why its total mass now is so much greater than that of ordinary matter. In short, Heim predicted a particle that is exactly like dark matter. Was he right? Who knows? Maybe this problem will be solved very soon, but for now it is a mystery.

Fuel for Legacy Vehicles in a “Carbon-free” Environment

Electric vehicles will not solve our emissions problem: there are over a billion petroleum-driven vehicles, and they will not go away any time soon. Additionally, people have a current investment, and while billionaires might throw away their vehicles, most ordinary people will not change unless they can sell what they have, which in turn means someone else is using it. This suggests the combustion motor is not yet finished, and the CO2 emissions will continue for a long time yet. That gives us a rather awkward problem, and as noted in the previous posts on global warming, there is no quick fix. One of the more obvious contributions could be biofuels. Yes, you still burn carbon, but the carbon came from the atmosphere. There will also be processing energy, but often that can come from the byproducts of the process. At this point I should add a caveat: I have spent quite a bit of my professional life researching this route, so perhaps I have a degree of bias.

The first point is that it would be wrong to take grain and make alcohol for fuel, other than as a way of getting rid of spare or spoiled grain. The world will also have a food shortage, especially if sea levels start rising, because much of the most productive land is low-lying. If we wanted to grow biomass for fuel on that scale, we would need an area of land roughly equivalent to the area used for food production, and that land is not there. There are wastelands, but they tend to be non-productive. That does not mean we cannot grow biomass for fuel; it merely means there is nowhere near enough land for it to be the whole answer. Again, there is no single fix.

What you get depends critically on how you do it, and on what your biomass is. Of the various processes, I prefer hydrothermal processing, which involves heating the biomass in water up to supercritical temperatures under certain additional conditions. In effect, this greatly accelerates the processes that formed oil naturally. Corresponding pyrolysis will break down plastics, and in general high quality fuel is obtainable. The organic fraction of municipal refuse could also be used to make fuel, and in my ebook “Biofuel” I calculated that refuse could produce roughly seven litres per week per person. Not huge, but still a contribution, and it helps solve the landfill problem. However, the best options that I can think of involve macroalgae and microalgae. Macroalgae would have to be cultivated, but in the 1970s the US Navy carried out an exercise that grew macroalgae on “submerged rafts” in the open Pacific, with nutrients from the sea floor brought up by wind and wave action. Currently there is work being carried out growing microalgae in tanks, etc., in various parts of the world. In principle, microalgae could be grown in the open ocean, if we knew how to harvest them.

I was involved in one project that used microalgae grown in sewage treatment plants. Here there should have been a double benefit – sewage has to be treated so the ponds are already there, and the process cleans up the nitrogen and phosphate that would otherwise be dumped into the sea, thus polluting it. The process could also use sewage sludge, and the phosphate, in principle, was recoverable. A downside was that the system would need more area than the average treatment plant, because the required residence time is somewhat longer than the current one, which seems designed to remove the worst of the oxygen demand and then chuck everything out to sea, or wherever. This process went nowhere; the venture needed to refinance and unfortunately left it too late, namely shortly after the Lehman collapse.

From the technical point of view, this hydrothermal technology is rather immature. What you get can depend critically on exactly how you do it. You end up with a thick brown fluid, from which you can obtain a number of products. The petrol fraction is generally light aromatics, with a research octane number (RON) of about 140, and the diesel fraction can have a cetane number approaching 100 (because the main components are straight-chain C15 or C17 saturated hydrocarbons; cetane is the C16 equivalent). These are superb fuels; current motors would run very well on them, but they are not optimised to take full advantage of them.

We can consider ethanol as an example. It has an RON somewhere in the vicinity of 120 – 130. People say ethanol is not much of a fuel because its energy content is significantly lower than that of hydrocarbons, and that is correct, but energy is not the whole story, because efficiency also counts. The average petrol motor is rather inefficient, and most of the energy comes out as heat. The work you can get out depends on the change of pressure times volume, so the efficiency can be significantly improved by increasing the compression ratio. However, if the compression is too great, you get pre-ignition. The modern motor is designed to run well on an octane number of about 91, with some a bit higher. That is because they are designed to make the most of the distillate from crude oil. Another advantage of ethanol is you can blend some water into it, which absorbs heat and dramatically increases the pressure. So ethanol and other oxygenates can be used.
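
To put a rough number on the compression-ratio argument, the ideal air-standard Otto-cycle efficiency is 1 − r^(1−γ); the sketch below (a theoretical upper bound only, with the usual γ ≈ 1.4 for air) shows why a fuel that tolerates higher compression buys real efficiency:

```python
# Ideal Otto-cycle thermal efficiency, eta = 1 - r**(1 - gamma).
# This is the textbook upper bound; real engines achieve much less,
# but the trend with compression ratio is the point.
gamma = 1.4   # ratio of specific heats for air

def otto_efficiency(r):
    return 1.0 - r ** (1.0 - gamma)

for r in [8, 10, 12, 14]:   # compression ratio
    print(f"compression ratio {r:2d}:1 -> ideal efficiency ~ {otto_efficiency(r) * 100:.0f}%")
```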

So the story with biofuels is very similar to the problems with electric vehicles; the best options badly need more research and development. At present, it looks as if they will not get it in time. Once you have your process, it usually takes at least ten years to get a demonstration plant operating. Not a good thought, is it?

Book Discounts and Video

From November 8 – 15, two ebooks will be 99c or 99p. These are:

Scaevola’s Triumph

The bizarre prophecy has worked, and Scaevola finds himself on an alien planet that is technically so advanced they consider him a primitive, yet it is losing a war. According to Pallas Athene, only he can save this civilization from extermination, and his use of strategy is needed to win this war. But what can he do, when at first he cannot even open the door to his apartment?  Book III of a series. http://www.amazon.com/dp/B00O0GS7LO

The Manganese Dilemma

Charles Burrowes, master hacker, is thrown into a ‘black op’ with the curvaceous Svetlana for company to validate new super stealth technology she has brought to the West. Can Burrowes provide what the CIA needs before Russian counterintelligence or a local criminal conspiracy blow the whole operation out of the water? https://www.amazon.com/dp/B077865V3L

Finally, for those who want to know what I look like: https://youtu.be/2z7lBTQ_nWY

A link to Red Gold:  http://www.amazon.com/dp/B009U0458Y

100th Anniversary: The Last Medieval Siege in the War to End All Wars.

Fake News! Of course, World War 1 did absolutely nothing to end all wars, and while everybody hoped it would, they all then went about carrying out actions that made the next one inevitable. However, that is not what this post will be about. Rather, this week, and more particularly the fourth of November, is also the 100th anniversary of the most successful day the New Zealand Army had on the Western Front, and probably in WW1, namely the liberation of Le Quesnoy. On the basis that most readers will never have heard of it, I thought it was worthwhile interrupting my transport sequence of posts.

Le Quesnoy in France is rather close to the Belgian border and had been overrun by the Germans very early in the war, so the citizens had been under occupation for over four years. However, at this very late point in the war, the Germans suffered a rather acute problem – they were running out of horses. Horses were needed for bringing supplies to the front, and for moving artillery, and since WW1 was fought on an industrial scale, very large amounts of supplies were needed. The horses they had had been overworked and many had simply died of exhaustion, and the rest were not in good condition. Accordingly, when the New Zealand Division made an advance, the Germans were a bit static, and the village of Le Quesnoy was not only encircled, but the Division advanced several miles towards Belgium. They captured 60 artillery pieces and 2000 Germans.

Le Quesnoy remained in the hands of the Germans. It was a walled city, more or less unchanged from the Middle Ages. City walls would offer little resistance to artillery, but the New Zealand army elected not to use artillery. The actual orders were to encircle and hold, as there seemed to be little point in wasting lives when an Armistice was reputed to be being negotiated soon. However, the brass did not inform the troops about that, so the troops decided that they might as well try to take the town. Without artillery, they made what might well be the last medieval siege, a claim that might well be reinforced by the fake news on a local stained glass window in a church, designed, oddly enough, by a padre who was there. What the window is reported to show is the walls of the city being scaled by hordes of soldiers on ladders, straight out of the Hollywood movies.

The truth is actually a bit weirder. There were three walls, and the outer two were scaled, possibly with a number of ladders as per the stained glass window, but they were not really defended that well, and that left a moat and an inner higher wall. Scaling the walls might appear dangerous as defenders on the top should easily repel them, but it did not work out like that. The machine gun was a most effective defensive weapon in WW1; here it became an interesting offensive weapon. Any head appearing over the stonework received a burst of machine gun fire, which effectively made looking over the wall an act of suicide.

For the inner wall, the distance from the bottom of the moat to the top of the wall was greater than the length of their longest ladder. However, there was one place where a narrow ledge ran for a short distance along the side of the wall. The final wall could only be climbed with one ladder. Apparently there was worse; the ladder could only hold one soldier at a time. They managed to very quietly raise the ladder and while two soldiers steadied it, Lieutenant Leslie Averill climbed it. He got to the top, then saw two Germans. He fired two shots at them from his pistol, both of which missed, but the Germans did not realize there was only one of him and they fled. Very soon, the garrison surrendered. One interesting point is the sixth person up this ladder was a Major, dragging a cable and carrying a field telephone. He wanted to make sure he reported the surrender, and that his name was associated with it. That may have done him some good – he became a Major General in WW2, and was a Chief Justice in New Zealand.

The moral here, if you want one, is that if you wish to get ahead, get into a position to take the credit, then take it. Sometimes, like the judge/General, a highly competent person gets into that position, but sometimes the only competence is in taking the credit. But if you don’t take the credit, even if you earned it, you languish.

Non-Battery Powered Electric Vehicles

If vehicles always drive on a given route, power can be provided externally. Trams and trains have done this for a long time, and it is also possible to embed an electric power source into roads and power vehicles by induction. My personal view is commercial interests will make this latter option rather untenable. So while external power can replace quite a bit of fossil fuel consumption, self-contained portable sources are required.

In the previous posts, I have argued that transport cannot be totally converted to battery-powered electric vehicles, because there is insufficient material available to make the batteries, and it will not be very economically viable to own a recharging site for long distance driving. The obvious alternative is the fuel cell. A battery works by using electricity to separate ions and convert them to a form from which they can recombine later, and hence supply electricity. The alternative is to simply provide the materials that will generate the ions and make the electricity. This is the fuel cell; effectively you burn something, but instead of making heat, you generate electric current. The simplest such fuel cell converts hydrogen with air to water. To run this sort of vehicle, you would refill your hydrogen tank in much the same way you refill a CNG-powered car with methane. There are various arguments about how safe that is. If you have ever worked with hydrogen, you will know it leaks faster than any other gas, and it explodes over a wide range of air mixtures, but on the other hand it also diffuses away faster. Since the product is water (also a greenhouse gas, but one that is quickly cycled away, thanks to rain, etc.) this seems to solve everything. Once again, the range would not be very large because cylinders can only hold so much gas. On the other hand, work has been going on to lock the hydrogen into another form. One such form is ammonia. You could actually run a spark ignition motor on ammonia (not what you buy at a store, which is 2 – 5% ammonia in water), but it also has considerable potential for a fuel cell. However, someone would still have to develop that fuel cell. The problem here is that fuel cells need a lot more work before they are satisfactory, and while refilling could be like the current service station, there may be serious compatibility problems, and big changes would be required to suppliers’ stations.
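
As an indication of what the hydrogen fuel cell offers on paper, the reversible cell voltage follows from the Gibbs free energy of forming liquid water, E = −ΔG/(nF). The figures below are standard thermodynamic values and give only the theoretical limit; real cells fall well short of it:

```python
# Theoretical figures for a hydrogen/oxygen fuel cell, H2 + 1/2 O2 -> H2O (liquid).
# Standard-state values; real cells deliver lower voltage and efficiency.
delta_G = -237.1e3   # Gibbs free energy change, J per mol of H2
delta_H = -285.8e3   # enthalpy change (higher heating value), J per mol of H2
F = 96485            # Faraday constant, C per mol of electrons
n = 2                # electrons transferred per H2 molecule

E_rev = -delta_G / (n * F)      # reversible cell voltage, about 1.23 V
eta_max = delta_G / delta_H     # maximum thermodynamic efficiency, about 83%

print(f"reversible cell voltage ~ {E_rev:.2f} V")
print(f"maximum thermodynamic efficiency ~ {eta_max * 100:.0f}%")
```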

Another problem is the fuel still has to be made. Hydrogen can be made by electrolysing water, but you are back to the electricity requirements noted for batteries. The other way we get hydrogen is to steam reform oil (or natural gas) and we are back to the same problem of making CO2. There is, of course, no problem if we have nuclear energy, but otherwise the energy issues of the previous post apply, and we may need even more electricity because with an additional intermediate, we have to allow for inefficiencies.
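
To get a feel for the electricity requirement, here is a minimal estimate (illustrative arithmetic only, ignoring electrolyser losses, which in practice push the real figure to very roughly 50 – 60 kWh per kilogram):

```python
# Lower bound on electricity needed to make hydrogen by electrolysis,
# based on the enthalpy of splitting liquid water (higher heating value basis).
delta_H = 285.8e3        # J per mol of H2
molar_mass_H2 = 2.016    # g per mol

joules_per_kg = delta_H / molar_mass_H2 * 1000.0   # J per kg of H2
kwh_per_kg = joules_per_kg / 3.6e6

print(f"theoretical minimum ~ {kwh_per_kg:.0f} kWh per kg of hydrogen")  # about 39 kWh/kg
```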

As it happens, hydrogen will also run spark ignition engines. As a fuel, it has problems, including a rather high air-to-fuel ratio (a minimum of 34:1, although because it runs well lean, it can be as high as 180:1), and because hydrogen is a gas, it occupies more volume prior to ignition. High-pressure fuel injection can overcome this. However, there is also the danger of pre-ignition or backfires if there are hot spots. Another problem might be hydrogen getting past the rings into the crankcase, where ignition, if it were to occur, could be a real problem. My personal view is that if you are going to use hydrogen you are better off using it in a fuel cell, mainly because the fuel cell is over three times more efficient, and in theory could approach five times more efficient. You should aim to get the most work out of your hydrogen.
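
The 34:1 figure quoted above follows directly from the stoichiometry of 2H2 + O2 → 2H2O; a quick check (my own arithmetic, nothing more):

```python
# Stoichiometric air-to-fuel ratio for hydrogen, 2 H2 + O2 -> 2 H2O.
# Each kilogram of H2 needs about 8 kg of O2, and air is ~23.2% oxygen by mass.
mass_O2_per_kg_H2 = 32.0 / (2 * 2.016)   # about 7.9 kg of O2 per kg of H2
oxygen_mass_fraction_of_air = 0.232

air_fuel_ratio = mass_O2_per_kg_H2 / oxygen_mass_fraction_of_air
print(f"stoichiometric air:fuel ratio ~ {air_fuel_ratio:.0f}:1")   # about 34:1
```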

A range of other fuel cells are potentially available, most of them “burning” metal in air to make the electricity. This has a big advantage because air is available everywhere so you do not need to compress it. In my novel Red Gold, set on Mars, I suggested an aluminium chlorine fuel cell. The reason for this was: there is no significant free oxygen in the thin Martian atmosphere; the method I suggested for refining metals, etc. would make a lot of aluminium and chlorine anyway; chlorine happens to be a liquid at Martian temperatures so no pressure vessels would be required; aluminium/air would not work because aluminium forms an oxide surface that stops it from oxidising, but no such protection is present with chlorine; aluminium gives up three electrons (lithium only 1) so it is theoretically more energy dense; finally, aluminium ions move very sluggishly in oxygenated solutions, but not so if chlorine is the underpinning negative ion. That, of course, would not be appropriate for Earth as the last thing you want would be chlorine escaping.
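
On the three-electrons point, here is a small Faraday's-law sketch (my own illustration, using standard constants) of the theoretical charge each metal anode could deliver per unit volume:

```python
# Theoretical charge per cubic centimetre of metal anode: z * F * density / molar mass.
# Illustrative upper bounds only; practical cells deliver far less, and energy
# density also depends on the cell voltage.
F = 96485   # Faraday constant, C per mol of electrons

metals = {
    # name: (electrons per atom, molar mass g/mol, density g/cm^3)
    "aluminium": (3, 26.98, 2.70),
    "lithium":   (1, 6.94, 0.534),
}

for name, (z, molar_mass, density) in metals.items():
    coulombs_per_cm3 = z * F * density / molar_mass
    mah_per_cm3 = coulombs_per_cm3 / 3.6
    print(f"{name}: ~{mah_per_cm3:.0f} mAh per cubic centimetre")
# Aluminium's three electrons per atom give it roughly four times lithium's
# charge per unit volume, which underlies the energy-density argument above.
```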

This leaves us with a problem. In principle, fuel cells can ease the battery problem, especially for heavy equipment, but a lot of work has to be done before we can be sure they are a solution. Then you have to decide on what sort of fuel cell, which in turn depends on how you are going to make the fuel. We have to balance convenience for the user with convenience for the supplier. We would like to make the fewest changes possible, but that may not be possible. One advantage of the fuel cell is that the materials limitations noted for batteries probably do not apply, but that may be simply because we have not developed the cells properly yet, so we have yet to find the limitations. The simplest approach is to embark on research and development programs to solve this problem. It was quite remarkable how quickly nuclear bombs were developed once we got started. We could solve the technical problems, given urgency, if that urgency were accepted by the politicians. But they do not seem to want to solve this now. There is no easy answer here.