A New Way of Mining?

One of the bigger problems our economies face is obtaining metals. Apparently the price of metals used in lithium-ion batteries is soaring because supply cannot expand sufficiently, and there appears to be no way current methodology can keep up.

Ores are obtained by physically removing them from the subsurface, and this tends to mean that huge volumes of overburden have to be removed. Global mining is estimated to produce 100 billion t of overburden per year, and that usually has to be carted somewhere else and dumped. This often leads to major disasters, such as tailings dams collapsing; Brazil has had at least two such collapses, which set something like 140 million cubic meters of rubble moving and caused at least 256 deaths. The better ores are now worked out and we are resorting to poorer ores, most of which contain less than 1% of what you actually want. The rest, the gangue, is often environmentally toxic and is quite difficult to dispose of safely. The whole process is energy intensive: mining contributes about 10% of energy-related greenhouse gas emissions. Yet if we take copper alone, it is estimated that by 2050 demand will increase by up to 350%. The ores we know about are becoming progressively lower grade and they are found at greater depths.

We have heard of the limits to growth. Well, mining is increasingly looking unsustainable, but there is always the possibility of new technology extracting value from increasingly difficult sources. One such possible technique involves first injecting an acid or other lixiviant into the rock to dissolve the target metal as an ion, then using a targeted electric field to transport the metal-rich solution to the surface. This is a variant of a technique used to obtain metals from fly ash, sludge, etc.

The objective is to place an electrode either within or surrounding the ore, then introduce the acid from an external reservoir. A second reservoir holds another electrode, whose charge is opposite to that of the metal-bearing ion. In the textbooks the metal usually bears a positive charge, so you would have that reservoir electrode negatively charged, but it is important to keep track of your chemistry. For example, if iron were dissolved in hydrochloric acid, the main ion would be FeCl4-, i.e. an anion.

Because transport occurs through electromigration, there is no need for permeability enhancement techniques, such as fracking. About 75% of copper ore reserves are copper sulphides that lie beneath the water table. The proposed technique was demonstrated on a laboratory scale with a mix of chalcopyrite (CuFeS2) and quartz, each powdered. A solution of ferric chloride was added, and a direct voltage of 7 V was applied to electrodes at opposite ends of a 0.57 m path, over which there was a potential drop of about 5 V, giving a maximal voltage gradient of 1.75 V/cm. The ferric chloride liberated copper as the cupric cation. The laboratory test extracted 57 weight per cent of the available copper from a 4 cm-wide sample over 94 days, although 80% was recovered in the first 50 days. The electric current decreased over the first ten days from 110 mA to 10 mA, suggestive of pore blocking. Computer simulations suggest that in the field, about 70% of the metal in a sample accessed by the electrodes could be recovered over a three year period. The process would have the odd hazard: in the simulation, a 5 meter spacing between electrodes employed a 500 V difference. If the ore is several hundred meters down, this could require quite a voltage. Is this practical? I do not know, but it seems to me that at the moment the amount of dissolved material, the large voltages, the small areas and the time taken will count against it. On the other hand, the prices of metals are starting to rise dramatically. I doubt this will be a final solution, but it may be part of one.
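
For what it is worth, the field strengths involved are easy to check from the numbers quoted above. Here is a minimal sketch of the arithmetic (plain Python; only the figures above are used, and the 1.75 V/cm maximum is a local value reported for the experiment, not something this averaging reproduces):

```python
# Back-of-envelope check of the voltage gradients quoted above.

lab_drop_V = 5.0        # potential drop across the laboratory sample (V)
lab_path_m = 0.57       # electrode separation in the laboratory test (m)
avg_gradient_V_per_cm = lab_drop_V / (lab_path_m * 100)
print(f"Average laboratory gradient: {avg_gradient_V_per_cm:.2f} V/cm")   # ~0.09 V/cm

field_spacing_m = 5.0    # electrode spacing used in the simulation
field_voltage_V = 500.0  # potential difference used in the simulation
field_gradient_V_per_cm = field_voltage_V / (field_spacing_m * 100)
print(f"Simulated field gradient: {field_gradient_V_per_cm:.2f} V/cm")    # 1.00 V/cm
```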

Living Near Ceres

Some will have heard of Gerard O’Neill’s book, “The High Frontier”. If not, see https://en.wikipedia.org/wiki/The_High_Frontier:_Human_Colonies_in_Space. The idea was to throw material up from the surface of the Moon to make giant cylinders that would get artificial gravity from rotation, and people could live their lives in the interior, with energy being obtained in part from solar energy. The concept was partly employed in the TV series “Babylon 5”, but the original concept was to have open farmland as well. Looks like science fiction, you say, and in fairness I have included such a proposition in a science fiction novel I am currently writing. However, I have also read a scientific paper on this topic (arXiv:2011.07487v3), which appears to have been posted on 14th January, 2021. The concept is to build such a space settlement from material obtained from the asteroid Ceres, with the settlement orbiting near Ceres.

The proposal is ambitious, if nothing else. The idea is to build a number of habitats; to keep any one habitat from being too big, while ensuring they all stay together, they are tethered to a megasatellite, which in turn grows as new settlements are built. The habitats spin in such a way as to attain a “gravity” of 1 g, and are attached to their tethers by magnetic bearings that have no physical contact between faces, and hence never wear. Travel between habitats proceeds along the tethers. Rockets would be unsustainable because the molecules they throw out to space would be lost forever.

The habitats would have a radius of 1 km, a length of 10 km, and a population of 56,700, with 2,000 square meters per person, just under 45% of which would be urban. Slightly more scary is the fact that each habitat has to rotate once every 1.06 minutes. The total mass per person would be just under 10,000 t, and producing it would require about 1 MJ/kg, or roughly 10 TJ per person.
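
Both of those figures follow from simple arithmetic and are worth checking. A minimal sketch (plain Python; the inputs are just the numbers quoted above):

```python
import math

g = 9.81            # target "gravity" at the rim (m/s^2)
radius_m = 1000.0   # habitat radius (m)

# For centripetal acceleration at the rim to equal g: omega^2 * r = g,
# so the rotation period is T = 2*pi*sqrt(r/g).
period_s = 2 * math.pi * math.sqrt(radius_m / g)
print(f"Rotation period: {period_s:.1f} s = {period_s / 60:.2f} min")   # ~1.06 min

# Construction energy per settler from the quoted figures.
mass_per_person_kg = 10_000 * 1000   # just under 10,000 t per person
energy_per_kg_J = 1e6                # 1 MJ/kg
print(f"Energy per person: {mass_per_person_kg * energy_per_kg_J:.1e} J (about 10 TJ)")
```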

The design aims to produce an environment for the settlers that has Earth-like radiation shielding, gravity, and atmosphere. It will have day/night on a 24 hr cycle with 130 W/m^2 insolation, similar to southern Germany, and a population density of 500/km^2, similar to the Netherlands. There would be fields, parks, and forests, no adverse weather, no natural disasters, and ultimately it could have a greater living area than Earth. It is intended to be long-term sustainable, and to achieve that, animals, birds and insects will be present, i.e. a proper ecosystem. As can be seen, that is ambitious. The radiation shielding involves 7,600 kg/m^2, of which 20% is water and the rest silicate regolith. The rural spaces have a 1.5 m depth of soil, which is illuminated by sunlight collected by mirrors and delivered into light guides. Ceres is 2.77 times as far from the sun as Earth, which means the sunlight is only about 13% as strong as at Earth, so nearly eight times the mirror collecting area is required for every unit area to be illuminated at equivalent intensity.
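
The inverse-square arithmetic behind that last figure is a one-liner:

```python
ceres_distance_au = 2.77                        # Ceres' mean distance from the sun, in AU
insolation_fraction = 1 / ceres_distance_au**2  # inverse-square law
print(f"Sunlight at Ceres: {insolation_fraction:.0%} of that at Earth")          # ~13%
print(f"Mirror area per unit illuminated area: {1 / insolation_fraction:.1f}x")  # ~7.7x
```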

The reason cited for proposing this to be at Ceres is that Ceres has nitrogen. Actually, there are other carbonaceous asteroids, and one that is at least 100 km in size could be suitable. Because Ceres’ gravity is 0.029 times that of Earth, a space elevator could be feasible to bring material cheaply from the dwarf planet, while a settlement 100,000 km from the surface would be expected to have a stable orbit.

In principle, there could be any number of these habitats, all linked together. You could have more people living there than on Earth. Of course there are some issues with the calculation. Tethering the habitats, and giving them sufficient strength, requires about 5% of the total mass to be steel. Where does the iron come from? The asteroids have plenty of iron, but the form is important. How will it be refined? If it is in the form of olivine or pyroxene, then with difficulty. Vesta apparently has an iron core, but Vesta is not close, and most of the time, because it has a different orbital period, it is very far away.

But the real question is, would you want to live in such a place? How much would you pay for the privilege? The cost of all this was not estimated, but it would be enormous, so most people could not afford it. In my opinion, cost alone is sufficient that this idea will not see the light of day.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was this robot that could turn into a liquid then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal and that, with sufficient force, can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, the temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures, such as a honeycomb, a spider’s web-like structure and a hand; these were all crushed, and when warmed they sprang back to life in their original form. At first sight this might seem designed to put panel beaters out of business. You have a minor prang but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but those are single use: a spacecraft with such landing cushions can be used once, whereas landing cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

Molten Salt Nuclear Reactors

In the previous post, I outlined two reasons why nuclear power is overlooked, if not shunned, despite the fact it will clearly reduce greenhouse gas emissions. I discussed wastes as a problem, and while they are a problem, as I tried to show they are in principle reasonably easily dealt with. There is a need for more work and there are difficulties, but there is no reason this problem cannot be overcome. The other reason is the danger of the Chernobyl/Fukushima type explosion. In the case of Chernobyl, it needed a frightening number of totally stupid decisions to be made, and you might expect that since it was a training exercise there would be people there who knew what they were doing to supervise. But no, and worse, the operating instructions were unintelligible, having been amended with strike-outs and hand-written “corrections” that nobody could understand. You might have thought the supervisor would check to see everything was available and correct before starting, but as I noted, there has never been a shortage of stupidity.

The nuclear reaction, which generates the heat, is initiated by a fissile nucleus absorbing a neutron and splitting, and it keeps going because the fission provides more neutrons. These neutrons either split further fissile nuclei, such as 235U, or they get absorbed by something else, such as 238U, which converts that nucleus to something else, in this case eventually 239Pu. The splitting of nuclei produces the heat, and to run at constant temperature, it is necessary to have a means of removing that heat continuously. The rate of neutron absorption is determined by the “concentration” of fissile material and the amount of neutrons absorbed by something else, such as water, graphite and a number of other materials. The disaster happens when the reaction goes too quickly, and there is too much heat generated for the cooling medium. The metal fuel rods melt and drip to the bottom of the reactor, where the material flows together to form a large blob that is out of the cooling circuit. As the amount builds up it gets hotter and hotter, and we have a disaster.

The idea of the molten salt reactor is that there are no metal fuel rods. The fissile material can be put in as a salt in solution, so the concentration automatically determines the operating temperature. The reactor can be moderated with graphite, beryllium oxide, or a number of other materials, or it can be run unmoderated. Temperatures can get up to 1400 degrees C, which, from basic thermodynamics, gives exceptional power efficiency, and finally, reactors can be relatively small. The initial design was apparently for aircraft propulsion, and you guessed it: bombers. The salts are usually fluorides because low-valence fluorides boil at very high temperatures, they are poor neutron absorbers, and their chemical bonds are exceptionally strong, which limits corrosion and makes them exceptionally inert chemically. In one sense they are extremely safe, although since beryllium fluoride is often used, its extreme toxicity requires careful handling. But the main advantage of this sort of reactor, besides avoiding the meltdown, is that it burns actinides, so if it makes plutonium, that is added to the fuel. More energy! It also burns some of the fission wastes, and such burning of wastes also releases energy. It can be powered by thorium (with some uranium to get the starting neutrons), which does not make anything suitable for making bombs. Further, the fission products in the thorium cycle have far shorter half-lives. Research on this started in the 1960s and essentially stopped. Guess why! There are other fourth generation reactors being designed, and some nuclear engineers may well disagree with my preference, but it is imperative, in my opinion, that we adopt some. We badly need some means of generating large amounts of electricity without burning fossil fuels. Whatever we decide to do, while the physics is well understood, the engineering may not be, and this must be solved if we are to avoid a planet-wide overheating. The politicians have to ensure this job gets done.
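
To see why the high operating temperature matters, recall that the Carnot limit caps the fraction of heat any engine can turn into work. A minimal illustration (the ~565 degree steam temperature and the 27 degree heat sink are my own illustrative assumptions, not figures from any reactor design):

```python
def carnot_limit(t_hot_c, t_cold_c=27.0):
    """Maximum possible efficiency for heat supplied at t_hot_c and rejected at t_cold_c (deg C)."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(f"Molten salt reactor at 1400 C: Carnot limit {carnot_limit(1400):.0%}")      # ~82%
print(f"Conventional steam cycle at ~565 C: Carnot limit {carnot_limit(565):.0%}")  # ~64%
# Real plants achieve well below these limits, but the ordering is the point.
```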

Forests versus Fossil Fuels – a Debate on Effectiveness

The use of biomass for fuel has been advocated as a means of reducing carbon dioxide emissions, but some have argued it does nothing of the sort. There was a recent article in Physics World that discusses this issue, and here is a summary. First, the logic behind the case is simple. The carbon in trees all comes from the air. When the plant dies, it rots, releasing energy to the rotting agents, and much of the carbon is released back into the air. Burning it merely intercepts that cycle and gives the use of the energy to us as opposed to the microbes. A thermal power station in North Yorkshire is now burning enough biomass to generate 12% of the UK’s renewable energy. The power station claims it has changed from being one of the largest CO2 emitters in Europe to supporting the largest decarbonization project in Europe. So what could be wrong? 

My first response is that, other than as a short-term fix, burning biomass in a thermal power station is wrong because the biomass is more valuable for generating liquid fuels, for which there is no alternative. There are many alternative ways of generating electricity, and the electricity demand is so high that alternatives are going to be needed. There is no obvious replacement for liquid fuels in air transport, although the technology to make such fuels is yet to be developed properly.

So, what can the critics carp on about? There were two criticisms, namely that the calculated savings rest on the assumptions that: (a) the CO2 released is immediately captured by growing plants, and (b) the biomass would have rotted and put its carbon back into the atmosphere anyway. The first is patently wrong, but so what? The critics claim it takes time for the CO2 to be reabsorbed, and that depends on fresh forest, or regrowth of the current forest. So replanting is obviously important, but equally quite some time is used up in carbon reabsorption. According to the critics, this takes between 40 and 100 years, and because biomass is less energy-dense in combustion than coal, in the short term you actually increase the CO2 emissions. The reabsorption requires new forest to replace the old.

The next counter-argument was that the block should not be counted, but rather the landscape – if you only harvest 1% of the forest, the remaining 99% is busily absorbing carbon dioxide. The counter to that is that it would have been doing that anyway. The next objection is that older forests absorb carbon over a much longer period, and sequester more carbon than younger stands. Further, the wood that rots in the soil feeds microbes that otherwise will be eating their way through stored carbon in the soil. The problem is not so much that regrowth does not absorb carbon dioxide, but rather it does not reabsorb it fast enough to be meaningful for climate change.

Let us consider the options where we either do it or we do not. If we do, assume we replant the same area, and fresh vegetation is sufficient to maintain the soil carbon. In year 1 we release x t CO2. After year 40, say, it has all been absorbed, but we burn again and release x t CO2. By year 80, it is all reabsorbed, so we burn again. There is a net x t CO2 in the air. Had we not done this, then in each of years 1, 40 and 80 we would have emitted kx t CO2 from coal, giving us now 3kx t CO2, where k is some number <1 to allow for the greater energy density of coal. Within this scenario the biofuel must eventually save CO2. That we could burn coal and plant fresh forests is irrelevant because in the above scenario we only replace what was there. We can always plant fresh forest.
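
A toy version of that bookkeeping, under the same assumptions (complete reabsorption of each biomass release before the next burn, the same energy delivered each cycle, and k chosen here as 0.8 purely for illustration):

```python
def net_co2_in_air(cycles, x=1.0, k=0.8, fuel="biomass"):
    """Net CO2 outstanding after `cycles` 40-year burn cycles, in units of x tonnes.

    biomass: each release is reabsorbed by regrowth before the next burn,
             so only the most recent release is still in the air.
    coal:    each cycle emits k*x (k < 1 reflects coal's higher energy density)
             and nothing is reabsorbed, so emissions accumulate.
    """
    return x if fuel == "biomass" else cycles * k * x

for n in (1, 2, 3, 10):
    print(f"after {n} cycle(s): biomass {net_co2_in_air(n):.1f}x, "
          f"coal {net_co2_in_air(n, fuel='coal'):.1f}x")
# With k = 0.8 the coal option overtakes biomass from the second cycle onward.
```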

Planting more works in both options. This is a bit oversimplified, but it is aimed to show that you have to integrate what happens over sufficient time to eliminate the effect of non-smoothness in the functions, and count everything. In my example above it could be argued I do not know whether there will be a reduction in soil carbon, but if that is troublesome, at least we have focused attention on what we need to know. It is putting numbers on a closed system, even if idealized, that shows the key facts in their proper light. 

Transport System Fuel: Some Passing Comments

In the previous series of posts, I have discussed the question of how we should power our transport systems that currently rely on fossil fuels, and since this will be a brief post, because I have been at a conference for most of this week, I thought it would be useful to have a summary. There are two basic objectives: ensure that there are economic transport options, and reduce the damage we have caused to the environment. The latter one is important in that we must not simply move the problem.

At this stage we can envisage two types of power: heat/combustion and electrical. The combustion source of power is what we have developed from oil, and many of the motors, especially the spark ignition motors, have been designed to optimise the amount of the oil that can be so used. The compression ratio of most spark ignition engines is considerably lower than it could be if the octane rating were higher. These motors will be with us for some time; a car bought now will probably still be on the road in twenty years, so what do we do? We shall probably continue with oil, but biofuels do offer an alternative. Some people say biofuels themselves have a net CO2 output in their manufacture. Maybe, but it is not necessary; the main reason is that the emphasis is put onto producing the appropriate liquids because they are worth more than process heat, and process heating can be provided from a number of other sources. The advantages of biofuels are that they power existing vehicles; they can be CO2 neutral, or fairly close to it; we can design the system so it produces aircraft fuel, for which there is really no alternative; and there are no recycling problems following usage. The major disadvantages are that the necessary technology has not really been scaled up, so a lot of work is required; it will always be more expensive than oil until oil supplies run down, so there is a poor economic reason to do this unless emissions are taxed; and the use of land for biofuels will put pressure on food production. The answers are straightforward: do the development work, use the tax system to change the economic bias, and use biomass from the oceans.

There are alternatives, mainly gases, but again, most of them involve carbon. These could be made by reducing CO2, presumably through photolysis of water (thus a sort of synthetic photosynthesis) or through electricity, and to get the scale we really need a very significant source of electricity. Nuclear power, or better still, fusion energy would work, but nuclear power has a rather tarnished reputation, and fusion power is still a dream. Hydrazine would make a truly interesting fuel, although its toxicity would not endear it to many. Hydrogen can work well for buses, etc., that have direct city routes.

Electricity can be delivered by direct lines (the preferred option for trains, trams, etc.), but otherwise it must be by batteries or fuel cells. The two are conceptually very similar. Both depend on a chemical reaction that can be very loosely described as “burning” something but generating electricity instead of heat. In the fuel cell, the material being “burnt” is added from somewhere else, and the oxidising agent, which may be air, must also be added. In the battery, nothing is added, and when what is there is used, it is regenerated by charging.

Something like lithium is almost certainly restricted to batteries because it is highly reactive; lithium fires are very difficult to put out. The lithium ion battery is the only one that has been developed to a reasonable level, and part of the reason for that is that the original market was for mobile phones and laptops. There are potential shortages of materials for lithium ion batteries, but they would never cut in for those original uses. However, as shown in my previous post, recycling of lithium ion batteries will be very difficult, so recycling will not solve the supply problem for motor vehicle batteries. One alternative for batteries is sodium, obtainable from salt, with no chance of a shortage.

The fuel cell offers some different options. A lot has been made of hydrogen as the fuel of the future, and some buses use it in California. It can be used in a combustion motor, but the efficiencies are much better for fuel cells. The technology is here, and hydrogen-powered fuel cell cars can be purchased; these can manage 500 km on a single charge and can totally refuel in about 5 minutes. The problem, again, is that hydrogen refuelling stations are harder to find. Methanol would be easier to distribute, but methanol fuel cells cannot as yet sustain a high power output. Ammonia fuel cells are claimed to work almost as well as hydrogen and would be the cheapest to operate. Another possibility I advocated in one of my SF novels is the aluminium/chlorine cell, as aluminium is cheap, although chlorine is a little more dangerous.

My conclusions:

(a)  We need a lot more research because most options are not sufficiently well developed,

(b)  None will out-compete oil for price. For domestic transport, taxes on oil are already there, so the competitors need to be exempted from this tax,

(c)  We need biofuels, if for no other reason than to keep existing vehicles and air transport running,

(d)  Such biofuel must come at least partly from the ocean,

(e)  We need an alternative to the lithium ion battery,

(f)  We badly need more research on different fuel cells, especially something like the ammonia cell.

Yes, I agree that is a little superficial, but I have been at a conference, and gave two presentations. I need to come back down a little 🙂

The Apollo Program – More Memories from Fifty Years Ago.

As most will know, it is fifty years since the first Moon landing. I was doing a post-doc in Australia at the time, and instead of doing any work that morning, when the word got around on that fateful day we all downed tools and headed to anyone with a TV set. The Parkes radio telescope allowed what it received to be relayed live to Australian TV stations. This was genuine reality TV. Leaving aside the set picture resolution, we were seeing what Houston was seeing, at exactly the same time. There was the Moon, in brilliant grey, and we could watch the terrain get better defined as the lander approached, then at some point it seemed as if the on-board computer crashed. (As computers go, it was primitive. A few years later I purchased a handheld calculator that would leave that computer for dead in processing power.) Anyway, Armstrong took control, and there was real tension amongst the viewers in that room because we all knew that if anything else went wrong, those guys would be dead. There was no possible rescue. The ground got closer, Armstrong could not fix on a landing site, the fuel supply was getting lower, then, with little choice because of the fuel, the ground got closer faster, the velocity dropped, and to everyone’s relief the Eagle landed and stayed upright. Armstrong was clearly an excellent pilot with excellent nerves. Fortunately, the lander’s legs did not drop into a hole, and as far as we could tell, Armstrong chose a good site. There was light relief somewhat later in the day, watching them bounce around on the lunar surface. (I think they were ordered to take a 4-hour rest. Why they hadn’t rested before trying to land I don’t know. I don’t know about you, but if I had just successfully landed on the Moon, and would be there for not very long, a four-hour rest would not seem desirable.)

In some ways that was one of America’s finest moments. The average person probably has no idea how much difficult engineering went into that, and how everything had to go right. This was followed up by six further successful landings, and the ill-fated Apollo 13, which nevertheless was a triumph in a different way in that despite a near-catastrophic situation, the astronauts returned to Earth.

According to the NASA website, the objectives of the Apollo program were:

  • Establishing the technology to meet other national interests in space.
  • Achieving preeminence in space for the United States.
  • Carrying out a program of scientific exploration of the Moon.
  • Developing human capability to work in the lunar environment.

The first two appear to have been met, but obviously there is an element of opinion there. It is debatable whether the last one achieved much because there has been no effort to return to the Moon or to use it in any way, although that may well change now. Charles Duke turns 84 this year and he still claims the title of “youngest person to walk on the Moon”.

So how successful was the scientific program? In some ways, remarkably so, yet in others there is a surprising reluctance to notice the significance of what was found. The astronauts brought back a large amount of lunar rock, but there were some difficulties here in that until Apollo 17, the samples were collected by astronauts with no particular geological training. Apollo 17 changed that, but it was still one site, albeit one with remarkable geological variety. Of course, they did their best and selected for variety, but we do not know what was overlooked.

Perhaps the most fundamental discovery was that the isotope ratios in lunar rocks are essentially the same as in Earth rocks, and that means they came from the same place. To put this in context, the ratio of the isotopes of oxygen, 16O/17O/18O, varies between bodies seemingly according to distance from the star, although this cannot easily be represented as a function. The usual interpretation is that the Moon was formed when a small planet, maybe up to the size of Mars, called Theia crashed into Earth and sent a deluge of matter into space at a temperature well over ten thousand degrees Centigrade, and some of this eventually aggregated into the Moon. Mathematical modelling has some success at showing how this happened, but I for one am far from convinced. One of the big advantages of this scenario is that it shows why the Moon has no significant water, no atmosphere, and never had any, apart from some water and other volatiles frozen in deep craters at the South Pole that almost certainly arrived from comets and condensed there thanks to the cold. As an aside, you will often read that the lunar gravity is too weak to hold air. That is not exactly true; it cannot hold it indefinitely, but if it started with carbon dioxide proportional in mass, or even better in cross-sectional area, to what Earth has, it would still have an atmosphere.

One of the biggest disadvantages of this scenario is: where did Theia come from? The models show that if the collision, which happened about 60 million years after the Earth formed, occurred with Theia having a velocity much above the escape velocity from Earth, the Moon cannot form. Theia gets the escape velocity simply from falling down the Earth’s gravitational field, but if it started far enough out to have lasted 60 million years before colliding, then its velocity would be increased by falling down the solar gravitational field, and that would be enhanced by the eccentricity of its trajectory (needed to collide). Then there is the question of why the isotopes are the same as on Earth when the models show that most of the Moon came from Theia. There has been one neat alternative: Theia accreted at the Earth-Sun fourth or fifth Lagrange point, which gives it indefinite stability as long as it is small. That Theia might have grown just too big to stay there explains why the collision took so long to happen, and starting at the same radial distance from the sun as Earth explains why the isotope ratios are the same.

So why did the missions stop? In part, the cost, but that is not a primary reason because most of the costs were already paid: the rockets had already been manufactured, the infrastructure was there and the astronauts had been trained. In my opinion, it was two-fold. First, the public no longer cared, and second, as far as science was concerned, all the easy stuff had been done. They had brought back rocks, and they had done some other experiments. There was nothing further to do that was original. This program had been a politically inspired race, the race was run, let’s find something more exciting. That eventually led to the shuttle program, which was supposed to be cheap but ended up being hideously expensive. There were also the deep space probes, and they were remarkably successful.

So overall? In my opinion, the Apollo program was an incredible technological program, bearing in mind from where it started. It established the US as firmly the leading scientific and engineering centre on Earth, at least at the time. Also, it got where it did because of a huge budget dedicated to one task. As for the science, more on that later.

The Electric Vehicle as a Solution to the Greenhouse Problem

Further to the discussion on climate change, in New Zealand the argument is now that we must reduce our greenhouse emissions by converting our vehicle fleet to electric vehicles. So, what about the world? Let us look at the details. Currently, there are estimated to be 1.2 billion vehicles on the roads, and by 2035 there will be two billion, assuming current trends continue. However, let us forget about such trends, and look at what it would take to switch 1.2 billion vehicles to electric. Obviously, at the price of them, that is not going to happen overnight, but how feasible is this in the long run?

For a scoping analysis, we need numbers, and the following is a “back of the envelope” type analysis. This is designed not to give answers, but at least to visualise the size of the problem. To start, we have to assume a battery size per vehicle: I am going to assume each vehicle has an 85 kWh battery assembly. A number of vehicles now have more than this, but equally many have less. For initial “back of the envelope” scoping, such details are ignored, and I shall focus on the batteries.

First, we need a graphite anode, which, from web-provided data, will require approximately 40 million t of graphite. Since Turkey alone has reserves of about 90 million t, strictly speaking, graphite is not a problem, although from a chemical point of view, what might be called graphite is not necessarily suitable. However, if there are impurities, they can be cleaned up. So far, not a limiting factor.

Next, each battery assembly will use about 6 kg of lithium and, using the best figures from Tesla, at least 17 kg of cobalt. This does not look too serious until we multiply by 1.2 billion, which gets us to 7.2 million tonnes of lithium and 20.4 million tonnes of cobalt. World production of lithium is 43,000 t/a, while that of cobalt is 110,000 t/a, and most of the cobalt already goes to other uses. So overnight conversion is not possible. The world reserves of lithium are about 16 million t, so there is enough lithium, although since most of the reserves are not actually in production, presumably due to the difficulty in purifying the materials, we can assume a significant price increase would be required. Worse, the known reserves of cobalt are 7,100,000 t, so it is not possible to power these vehicles with our current “best battery technology”. There are alternatives, such as manganese-based cathodes, but with current technology they only have about 2/3 the power density and they only last for about half the number of charging cycles, so maybe this is not an answer.
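
The multiplications are trivial, but it is worth laying them out. A back-of-the-envelope sketch using just the figures above (the per-vehicle graphite figure is implied by the 40 million t total):

```python
vehicles = 1.2e9

# Per-vehicle requirements quoted above (kg)
lithium_kg = 6
cobalt_kg = 17
graphite_kg = 40e9 / vehicles   # ~33 kg each, from the 40 million t (= 40e9 kg) total above

def total_mt(per_vehicle_kg):
    """Total fleet demand in million tonnes (1 Mt = 1e9 kg)."""
    return vehicles * per_vehicle_kg / 1e9

print(f"Lithium:  {total_mt(lithium_kg):.1f} Mt vs ~16 Mt reserves, 43,000 t/a production")
print(f"Cobalt:   {total_mt(cobalt_kg):.1f} Mt vs ~7.1 Mt known reserves, 110,000 t/a production")
print(f"Graphite: {total_mt(graphite_kg):.0f} Mt vs ~90 Mt reserves in Turkey alone")
```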

Then comes the problem of how to power these vehicles. Let us suppose they use about ¼ of their battery capacity on high-use days and recharge for the next day. That requires about 24 billion kWh of electricity generated that day for this purpose. World electricity production is currently a little over 21,000 TWh per year. Up to a point, that indicates “no problem”, except that over 1/3 of that came from coal, while gas and oil burning added to coal brought the fossil fuel contribution up to 2/3 of world electricity production, and coal burning was the fastest growing contribution to energy demand. Also, of course, this is additional electricity we need; global electricity demand rose by 900 TWh in 2018. (Electricity statistics from the International Energy Agency.) So switching to electric vehicles will increase coal burning, which increases the emission of greenhouse gases, counter to the very problem you are trying to solve. Obviously, generating the extra electricity for transport is not in itself impossible, but electricity generation already overwhelms transport as a contributor to the greenhouse gas problem. Germany closing its nuclear power stations is not a useful contribution to the problem.
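
The arithmetic behind that daily figure, under the stated assumptions (85 kWh packs, a quarter of the capacity used and recharged on a high-use day); it comes out slightly above the rounded number in the text:

```python
vehicles = 1.2e9
battery_kwh = 85
fraction_used = 0.25   # share of capacity used, then recharged, on a high-use day

daily_demand_kwh = vehicles * battery_kwh * fraction_used
print(f"Daily charging demand: {daily_demand_kwh / 1e9:.1f} billion kWh "
      f"= {daily_demand_kwh / 1e9:.1f} TWh")   # ~25 billion kWh
# For comparison, world electricity production is a little over 21,000 TWh per year.
```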

It is frequently argued that solar power is the way to collect the necessary transport electricity. According to Wikipedia, the most productive solar power plant is in China’s Tengger desert, which produces 1.547 GW from 43 square kilometers. If we assume it can operate like this for 6 hrs per day, we have 9.3 GWh/day. The Earth has plenty of area; however, the roughly 110,000 square km required is still a significant area, and most places do not have such a friendly desert close by. Many have proposed that solar panels on the roofs of houses could store power through the day and charge the vehicle at night, but to do that we have just doubled the battery requirements, and these are strained already. The solar panels could feed the grid through the day and the vehicles could charge through the night when peak power demand has fallen away, so that would solve part of the problem, but now the solar panels have to make sense in terms of generating electricity for general purposes. Note that if we develop fusion power, which would solve a lot of energy requirements, it is most unlikely a fusion power plant could have its energy output varied much, which would mean they would have to run continuously through the night. At this point, charging electric cars would greatly assist the use of fusion power.
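
The scaling from one Tengger-sized plant to that area figure goes like this (same assumptions as above; the result lands close to the ~110,000 square km quoted, the difference being rounding):

```python
daily_demand_kwh = 1.2e9 * 85 * 0.25   # ~25 billion kWh/day, as estimated above

tengger_power_gw = 1.547               # output of the Tengger plant
tengger_area_km2 = 43
full_output_hours = 6                  # assumed hours per day at that output

plant_daily_kwh = tengger_power_gw * 1e6 * full_output_hours   # 1 GWh = 1e6 kWh
plants_needed = daily_demand_kwh / plant_daily_kwh
print(f"One Tengger-scale plant: {plant_daily_kwh / 1e6:.1f} GWh/day")
print(f"Plants needed: ~{plants_needed:,.0f}, "
      f"covering ~{plants_needed * tengger_area_km2:,.0f} km^2")
```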

To summarise the use of electricity to power road transport using independent vehicles: there would need to be a significant increase in electricity production, but it is still a modest fraction of what we already generate. The reason this is so significant to New Zealand is that much of New Zealand's electricity is renewable anyway, thanks to the heavy investment in hydropower. Unfortunately, that does not count under the emissions-reduction protocols because it was all installed prior to 1990. Those who turned off coal plants to switch to the gas that suddenly became available around 1990 did well out of these protocols, while those who had to resort to thermal generation because their hydro was fully utilised did not. However, in general the real greenhouse problem lies with the much bigger thermal power station emissions, especially the coal-fired stations. The limits to the growth of electric vehicles currently lie with battery technology, and for electric vehicles to make more than a modest contribution to the transport problem, we need a fundamentally different form of battery or fuel cell. However, to power them, we also need to develop far more productive electricity generation that does not emit greenhouse gases.

Finally, I have yet to mention the contribution of biofuels. I shall do that later, but if you want a deeper perspective than in my blogs, my ebook “Biofuels” is 99c this week at Smashwords, in all formats. (https://www.smashwords.com/books/view/454344.)  Three other fictional ebooks are also on discount. (Go to https://www.smashwords.com/profile/view/IanMiller)

Climate Change: the Potential for Electric Vehicles

In my last post, I discussed the need for action over climate change. Suppose we decide to be more responsible, what can we do? There are several issues, but the main ones are: is a solution fit for purpose, which includes whether the general population will see it as such and whether it achieves a useful goal, and is it actually possible? To illustrate what I mean, consider the “easy option”: scrap motor cars and replace them with electric vehicles. At first sight that is easy, and you will probably think there is no technological advance needed. Well, think again on both of those. Let’s put numbers on the problem: according to Wikipedia, the number of motor vehicles in the world is 1.015 billion.

Now, consider the issue of “fit for purpose”. In New Zealand, anyway, and I suspect North America will be worse, people drive fairly long distances at least some of the time. One solution to that problem is to make people stop doing that. This is from the “sacrifices have to be made” school. As it happens, energy consumption probably will have to be reduced, but that does not mean we need some politicians to say which form of energy consumption is forbidden to you. If people must use less, they should have a choice in what form they give it up.

There are two “niches” of electric vehicle, and as examples I shall pick on the Tesla and the Nissan Leaf. The Tesla currently claims a 400 km range (and intends to provide a 500 km range) per charge, while what you get from the Leaf is highly dependent on driving conditions; it reaches a little over 100 km with average city driving. Basically, the Leaf would be great for someone wishing to commute daily, but not for distance driving. As an aside, the dependency on conditions will affect all such cars; we know about this aspect of the Leaf because there is more information available, as more Leafs have been sold. The difference in range is simply because the Leaf’s battery is much smaller (198 cells compared with Tesla’s 7,104).

So why doesn’t the Leaf put in more cells? That is partly because of the problem of charging, and partly because of price and suitability for a chosen niche. A review of electric vehicles in our local paper brought up these facts. There are statements that the 400 km type of car can be charged at home overnight, “just like your mobile phone”. Well, not quite. While that sounds easy enough, where are you going to do it? Your home may have a garage, so maybe there. The mobile connector comes with adaptors that permit charging at 40 amps. Um, does your house have a 40 amp rating to your garage, or maybe 50 amp to be on the safe side, because you don’t want to accidentally throw the fuse and be walking to wherever next morning? Our reviewer found that to fully charge such a vehicle with 400 km range using his garage power rating took the best part of two days. Using a fast charger, as available here, it took 75 minutes. Yes, you can charge these batteries relatively quickly if you can deliver the required current. The reason the Leaf has such a small battery capacity is so that it can be charged overnight with the average domestic power supply, and it can also be recharged while at work if the owner can “graze” on some power supply. Needless to say, once someone published figures like that, someone else challenged them, and pointed out that a steady 7 kW overnight would do it and “nearly two days” was wrong. Unfortunately, power itself is not the whole story, because the current has to be rectified and the voltage has to be kept within a specific range. Apply an over-voltage, and different chemistry starts up in the battery that is not reversible, which means you greatly shorten your battery life.
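
The disagreement over charging time mostly comes down to what power is actually available at the wall. A minimal sketch (the 85 kWh pack, the 230 V supply and the particular current ratings are my own illustrative assumptions; charging losses and the slow-down near full charge are ignored):

```python
battery_kwh = 85.0   # assumed pack size for a 400 km-class car

def hours_to_full(power_kw):
    """Idealised time to charge from empty, ignoring losses and charge tapering."""
    return battery_kwh / power_kw

scenarios = {
    "8 A at 230 V (ordinary socket)": 8 * 230 / 1000,
    "steady 7 kW home charger": 7.0,
    "40 A at 230 V (dedicated circuit)": 40 * 230 / 1000,
    "50 kW public fast charger": 50.0,
}
for label, kw in scenarios.items():
    print(f"{label}: about {hours_to_full(kw):.0f} h")
# An ordinary socket really does take the best part of two days;
# a steady 7 kW really does do it overnight.
```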

There is some good news on batteries, though. The batteries do decay with time, and while details are not available, one estimate is that Tesla batteries should still be 90% effective after 8 years, which is quite respectable, while the Leaf claims its batteries should last ten years in a workable condition. Thus we have two types of vehicle: an expensive vehicle that can do anything a current vehicle can do on the open highway, provided there are adequate rapid charging sites, and a cheaper commuter vehicle with limited range. Here “adequate” takes on significance; refilling with petrol takes a few minutes and even so there is sometimes overcrowding. Will there be enough cables if it takes 75 minutes? How much will “site time” cost?

Then there is the question of how you use it. Do you carry big loads? Ferry lots of children? Go off road, or go camping? If so, the current electric vehicle is not for you. So the question then is, for those who see the electric vehicle as all you have to do to solve the transport problem: are they advocating no off-road activity, no camping, no serious loads? The answer is, probably, yes. So, do we want to give up our lifestyle? If the answer is no, are there options? Of course not everyone wants to do those sorts of things, so there will most certainly be quite sizeable niches that can be filled with electric vehicles. Finally, there will be one further problem: poorer people cannot afford new Teslas, or even new Leafs. They own second hand cars and cannot afford to simply throw that investment away. The liquid fuel transport economy will be with us for a lot longer yet.

The next question is, is it feasible to replace all cars with electric vehicles? For the purpose of analysis, I shall assume everyone wants a Tesla-type driving capacity, as the next step is to put numbers on the problem. The battery weight is listed as 540 kg, which means to do the replacement, we would need something over half a billion tonnes of batteries. That is not all lithium, but it includes “a small amount of cobalt and nickel”. If we interpret that as about 2% each of the battery weight, we need about ten million tonnes each of cobalt and nickel. World production of cobalt in 2017 was about 110,000 tonnes, while nickel was over ten times this. Both metals, however, are fully used now, and the annual cobalt supply is deficient by about two orders of magnitude if all cobalt were devoted to electric vehicles. Unlikely. Oops! That is more than a small problem. It is not a problem right now, because electric vehicles comprise only a very small fraction of the market, but it is insoluble. There is a strict limit on the possible supply of cobalt because, as far as I know, there are no dedicated cobalt ores being mined; most cobalt comes from the Democratic Republic of Congo, as a by-product of copper mining. There would also be a significant demand for copper. The Tesla has two motors, one of which is 300 kW, so a considerable amount of copper would be used, but world production of copper is about 24 million tonnes annually, so that is not an immediate problem, though it may be in the long term. The annual supply of graphite is 126,000 t. Given that there will be more graphite used than lithium, this is a serious problem; however, there is no shortage of carbon, and the problem is converting carbon to graphite. That is quite a subtle problem; as it happens I know how to get close to the required fraction of graphite, but as yet, not economically.
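
The same sort of tally as before, now based on the Tesla-class pack (540 kg per vehicle, with cobalt and nickel each taken as roughly 2% of pack mass, as assumed above):

```python
vehicles = 1.015e9        # world vehicle fleet, per the Wikipedia figure quoted above
pack_mass_kg = 540        # Tesla-class battery pack
metal_fraction = 0.02     # assumed share of pack mass for cobalt, and likewise for nickel

total_pack_mass_t = vehicles * pack_mass_kg / 1000
cobalt_needed_t = total_pack_mass_t * metal_fraction   # nickel comes out the same

print(f"Total battery mass: {total_pack_mass_t / 1e9:.2f} billion t")
print(f"Cobalt (and likewise nickel) needed: {cobalt_needed_t / 1e6:.0f} Mt "
      f"vs ~0.11 Mt/a of cobalt production")
# Roughly a century of current cobalt output: the two-orders-of-magnitude
# annual shortfall mentioned above.
```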

So there are technological problems. Maybe they are soluble, but solving them introduces another problem, as exemplified by finding an alternative to cobalt. Cobalt is needed to give the non-graphitic electrode enough strength that the battery will have an adequate lifetime with good charging rates, so that is probably non-negotiable. There are alternatives, but so far none match the current battery type used by Tesla. Further, to develop a new battery and test its lifetime over ten years takes, you guessed it, at least ten years for the testing alone, assuming your first pick works. Therein lies the overall problem; politicians have wasted nearly 30 years on the basis that this was not urgent. However, technical development does take a long time. For that reason it is wrong to lazily say that electric vehicles, or some other single solution, will solve the problem. They will most certainly help, but we have to back many more options.

Science and Climate Change

In the previous post, I questioned whether science is being carried out properly. You may well wonder, then, about this week’s rather depressing report from the Intergovernmental Panel on Climate Change, and the rather awkward challenge it poses: according to the report, the world needs to limit the temperature rise to 1.5 degrees C between now and 2050, and to do that, it needs to cut carbon emissions by 45% by 2030 and reach net zero by 2050. Even then, significant amounts of carbon have to be removed from the atmosphere. The first question is, then, is this real, and if so, why has the IPCC suddenly reduced the tolerable emissions? If their scientists previously predicted much less demanding requirements, why should the new ones be considered better? There are two simple answers. The first is that the lesser requirements were based on the assumption that nations would promptly reduce emissions; most actually increased them. The second is more complicated.

The physics has been verified many times. However, predicting the effects is another matter. The qualitative effects are easily predicted, but to put numbers on them requires very complicated modelling. The planet is not an ideal object, and the calculation is best thought of as an estimate. What has probably happened is that their modelling made a projection of what would happen, and they did this long enough ago that they can now compare prediction with where we are. That tells them how good the various constants they put into the model were. Such a comparison is somewhat difficult, but there are clear signs in our observations, and things are worse than we might have hoped for.

So, what are we going to do? Nothing dramatic is going to happen in 2040, or 2050. Change will be gradual, but its progress will be unstoppable unless very dramatic changes in our behaviour are made. The technical challenges here are immense. However, there are a number of important decisions to be taken, because we are running short of time due to previous inaction. Do we want to defend what we have? Do we want to attempt to do it through sacrificing our lifestyle, or do we want to attempt a more aggressive approach? Can we get sufficient agreement that anything we try will be properly implemented? Worst of all, do we know what our options are? Of these questions, I am convinced that, through inaction and in part the structural defects of academic science, the answer to the last question is no.

The original requirement for emissions reductions used 1990 as the reference point. What eventuated was that very few countries actually reduced any emissions, and most increased them. The few that did reduce them did so by closing coal-fired electricity generation and opting for burning natural gas. This really achieves little, and would have happened anyway. Europe did that, although France is a notable exception in that it has had significant nuclear power for a long time. Nuclear power has its problems, but carbon emissions are not one of them. The countries of the former Soviet Union have also had emission reductions, although this is as much as anything due to the collapse of their economies as they made the rather stupid attempt to convert to “free market economics”, which permitted a small number of oligarchs to cream the economy, sell off what they could, use what was usable, pay negligible wages and export their profits so they could purchase foreign football clubs. That reduced carbon emissions, but it is hardly a model to follow.

There is worse news. Most people by now have recognized that Donald Trump and the Republican party do not believe in global warming, while a number of other countries that are only beginning to industrialize want the right to emit their share of CO2 and are on a path to burn coal. Some equatorial countries are hell-bent on tearing down their rain forest, while warming in Siberia will release huge amounts of methane, which is about thirty times more potent than CO2. Further, if we are to totally change our way of life, we shall have to dismantle the energy-related infrastructure from the last fifty years or so (earlier material has probably already been retired) and replace it, which, at the very least will require billions of tonnes of carbon to make the required metals.

There will be some fairly predictable cries. Vegetarians will tell everyone to give up meat. Cyclists will tell everyone they should stop driving cars. In short, everyone will have ideas where someone else gives up whatever. One problem is that people tend to want to go for “the magic bullet”, the one fix to fix them all. Thus everyone should switch to driving electric vehicles. In the long term, yes, but you cannot take all those current vehicles off the road, and despite what some say, heavy trucks, major farm and construction equipment, and aircraft are going to run on hydrocarbons for the foreseeable future. People talk about hydrogen, but hydrogen currently requires massive steel bottles (unless you are NASA, or unless you can get hydrides to act reversibly). And, of course, there is a shortage of material to make enough batteries. Yes, electric vehicles, cycling, public transport and being a vegetarian are all noble contributions, but they are just that. Wind and solar power, together with some other sources, are highly desirable, but I suspect that something else, such as nuclear power must be adopted more aggressively. In this context, Germany closing down such reactors is not helpful either.

Removing CO2 from the atmosphere is not that easy either. There have been proposals to absorb it from the effluent gases of coal-fired power stations. Such scrubbing is not 100% efficient, but even if it were, it does not deal with what is already there. My guess is that this can only be managed by plants, on a sufficient scale. While not extremely efficient, once going they look after themselves. Eventually you have to do something with the biomass, but restoring all the tropical rain forests would achieve something in the short term. My personal view is the best chance is to grow algae. The sea has a huge area, and while we still have to learn how to do it, it is plausible, and the resultant biomass could be used to make biofuel.

No, it is not going to be easy. The real question is, can we be bothered trying to save what we have?