Banana-skin Science

Every now and again we find something that looks weird, but just maybe there is something in it. And while reading it, one wonders, how on Earth did they come up with this? The paper in question was Silva et al. 2022. Chemical Science 13: 1774. What they did was to take dried biomass powder and expose it to a flash of 14.5 ms duration from a high-power xenon flash lamp. That type of chemistry was first developed to study the very short-lived intermediates generated in photochemistry, when light excites the molecule to a high energy state, where it can decay through unusual rearrangements. This type of study has been going on since the 1960s and the equipment has steadily become more powerful. However, it is most unusual to find it used for something that ordinary heat would do far more cheaply. Anyway, 1 kg of such dried powder generated about 100 litres of hydrogen and 330 g of biochar. So, what else was weird? The biomass was dried banana skin! Ecuador, sit up and take notice. But before you do, note that xenon flash lamps are not going to be an exceptionally economical way of providing heat. That is the point; this very expensive source of light was actually merely providing heat.

There are three ways of doing pyrolysis. In the previous post I pointed out that if you took cellulose and eliminated all the oxygen in the form of water, you were left with carbon. If you eliminate the oxygen as carbon monoxide you are left with hydrogen. If you eliminate it as carbon dioxide you get hydrogen and hydrocarbon. In practice what you get depends on how you do it. Slow pyrolysis at moderate heat mainly makes charcoal and water, with some gas. It may come as a surprise to some but ordinary charcoal is not carbon; it is about 1/3 oxygen, some minor bits and pieces such as nitrogen, phosphorus, potassium, and sulphur, and the rest carbon.
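
To make those three limiting cases concrete, here is a small Python sketch of the idealised stoichiometry, treating cellulose as repeating C6H10O5 units. This is my own bookkeeping illustration, not anything from the cited paper; real pyrolysis gives mixtures of all three.

# Idealised limiting cases for pyrolysis of a cellulose unit, C6H10O5.
# Case 1: oxygen leaves as water -> carbon remains:            C6H10O5 -> 6 C + 5 H2O
# Case 2: oxygen leaves as carbon monoxide -> mostly hydrogen: C6H10O5 -> 5 CO + 5 H2 + C
# Case 3: oxygen leaves as carbon dioxide -> hydrogen plus a hydrocarbon-forming residue:
#                                                              C6H10O5 -> 2.5 CO2 + 3.5 C + 5 H2
cellulose = {"C": 6, "H": 10, "O": 5}
cases = {
    "water route": {"C": 6, "H2O": 5},
    "CO route":    {"CO": 5, "H2": 5, "C": 1},
    "CO2 route":   {"CO2": 2.5, "C": 3.5, "H2": 5},
}
species = {"C": {"C": 1}, "H2O": {"H": 2, "O": 1}, "CO": {"C": 1, "O": 1},
           "H2": {"H": 2}, "CO2": {"C": 1, "O": 2}}

def balance(products):
    totals = {"C": 0, "H": 0, "O": 0}
    for sp, n in products.items():
        for element, count in species[sp].items():
            totals[element] += n * count
    return totals

for name, products in cases.items():
    print(name, "atoms balance:", balance(products) == cellulose)   # all True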

If you do very fast pyrolysis, called ablative pyrolysis, you can get almost all liquids and gas. I once saw this done in a lab in Colorado where a tautly held (like a hacksaw blade) electrically heated hot wire cut through wood like butter, the wire continually moving so the uncondensed liquids (which most would call smoke) and gas were swept out. There was essentially no sign of “burnt wood”, and no black. The basic idea of ablative pyrolysis is that you fire wood dust or small chips at a plate at an appropriate angle to their path so the wood sweeps across it, and the products are carried away by the gas stream (which can be recycled gas) propelling the wood. Now the paper I referenced above claimed much faster pyrolysis, but got much more charcoal. The question is, why? The simple answer, in my opinion, is that nothing was sweeping the products away, so they hung around and got charred.

The products varied depending on the power from the lamp, which depended on the applied voltage. At what I assume was maximum voltage the major products were (apart from carbon) hydrogen and carbon monoxide. 100 litres of hydrogen, and a bit more carbon monoxide, were formed, which is a good synthesis gas mix. There were also 10 litres of methane, and about 40 litres of carbon dioxide that would have to be scrubbed out. The biomass had to be reduced to 20 μm size and placed on a surface as a layer 50 μm thick. My personal view is that it is near impossible to scale this up to useful sizes. It uses light as an energy source, which is expensive to generate, so almost certainly the process is a net energy consumer. In short, this so-called “breakthrough” could have been carried out to give better yields of whatever was required far more cheaply by people a hundred years ago.
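
As a rough sanity check on those yields, here is a back-of-envelope Python sketch of my own. It assumes ideal gas behaviour at about 25 degrees (24.5 L/mol), takes the carbon monoxide as roughly 100 litres since the paper only says “a bit more” than the hydrogen, and uses approximate combustion enthalpies; none of these figures come from the paper itself.

# Back-of-envelope check on the reported yields per kg of dried biomass.
molar_volume = 24.5          # L/mol at ~25 C, 1 atm (assumed ideal gas)
yields_L = {"H2": 100, "CO": 100, "CH4": 10, "CO2": 40}             # CO taken as ~100 L
molar_mass = {"H2": 2.0, "CO": 28.0, "CH4": 16.0, "CO2": 44.0}      # g/mol
heat_kJ_per_mol = {"H2": 286, "CO": 283, "CH4": 890, "CO2": 0}      # approximate combustion enthalpies

total_MJ = 0.0
for gas, litres in yields_L.items():
    mol = litres / molar_volume
    print(f"{gas}: {mol:.1f} mol, {mol * molar_mass[gas]:.0f} g")
    total_MJ += mol * heat_kJ_per_mol[gas] / 1000

print(f"Chemical energy in the gases: roughly {total_MJ:.1f} MJ per kg of biomass")
# ~100 L of hydrogen is only about 4 mol (~8 g), so very little of the 1 kg of feed
# ends up as hydrogen by mass; most of the mass is char, CO and CO2.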

Perhaps the idea of using light, however, is not so retrograde. The trick would be to devise apparatus that will pyrolyse wood ablatively (or not, if you want charcoal) using light focused by large mirrors. The source, the sun, is free until it hits the mirrors. Most of us will have ignited paper with a magnifying glass. Keep the oxygen out and just maybe you have something that will make chemical intermediates that you can call “green”.

The Case for Hydrogen in Transport

In the last post I looked at the problem of generating electricity, and found that one of the problems is demand smoothing. One approach to this is to look at the transport problem, the other major energy demand system. Currently we fill our tanks with petroleum-derived products, and everything is set up for that. However, battery-powered cars would remove the need for petrol, and if they were charged overnight, they would help this smoothing problem. The biggest single problem is that this cannot be done at the required scale because there is not enough of some of the necessary elements to make it work. Poorer quality batteries could be made, but there is another possibility: the fuel cell.

The idea is simple. When electricity is not in high demand, the surplus is used to electrolyse water to hydrogen and oxygen. The hydrogen is stored, and when introduced to a fuel cell it burns to make water while generating electricity. Superficially, this is ideal, but there are problems. One is similar to the battery – the electrodes tend to be made of platinum, and platinum is neither cheap nor common. However, new electrodes may solve this problem. Platinum has the advantage that it is very unreactive, periodic servicing of the cell and replacement of electrodes is realistic, and recycling is simpler than for the battery because only the electrodes need to be reprocessed. (We could also use pressurised hydrogen in an internal combustion engine, with serious redesign, but the efficiency is simply too low.)
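
To get a sense of scale for the electricity-to-hydrogen-to-electricity route, here is a small Python sketch. The efficiency figures are typical ballpark values I have assumed purely for illustration; they are not from this post and real systems vary considerably.

# Illustrative round trip: surplus electricity -> electrolysis -> storage -> fuel cell.
surplus_kWh = 100.0
electrolyser_eff = 0.70   # assumed fraction of electrical energy stored as hydrogen
compression_loss = 0.10   # assumed fraction lost compressing the gas for storage
fuel_cell_eff = 0.55      # assumed fraction of the hydrogen's energy returned as electricity

stored_kWh = surplus_kWh * electrolyser_eff * (1 - compression_loss)
returned_kWh = stored_kWh * fuel_cell_eff
print(f"{surplus_kWh:.0f} kWh of surplus -> about {returned_kWh:.0f} kWh back")   # ~35 kWh
# Roughly a third of the surplus comes back, which is still useful if that surplus
# would otherwise have been wasted, but it is not free energy.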

One major problem is storing the hydrogen. If we store it as a gas, very high pressures are needed to get a realistic mass to volume ratio, and hydrogen embrittles metals, so the tanks, etc., may need servicing as well. We could store it as a liquid, but the boiling point is -253 °C. Carting this stuff around would be a challenge, and to make matters worse, hydrogen occurs in two forms, ortho and para, which arise because the nuclear spins can be either aligned or not. Because the molecule is so small there is an energy difference between these, and the equilibrium ratio at liquid-hydrogen temperatures is different from that at room temperature. The mix will slowly re-equilibrate at the low temperature, give off heat, boil off some hydrogen, and increase the pressure. This is less of a problem if you have a major user, because surplus pressure is relieved when hydrogen is drawn off for use, and if there is a good flow-through, no problem. It may be a problem if hydrogen is being shipped around.

The obvious alternative is not to ship it around, but to ship the electricity instead. In such a scenario, for smaller users such as cars, the hydrogen is generated at the service station, stored under pressure, and more is generated to maintain the pressure. That would require a rather large tank, but it is doable. Toyota apparently think the problem can be overcome because they are now marketing the Mirai, a car powered by hydrogen fuel cells. Again, the take-up may be limited to fleet operators, who send the vehicles out of central sites. Apparently, the range is 500 km and it uses 4.6 kg of hydrogen. Hydrogen is the smallest atom so low weight is easy, except the vehicle will have a lot of weight and volume tied up with the pressurised gas storage. The question then is, how many fuel stations will have this very large hydrogen storage? If you are running a vehicle fleet or buses around the city, then your staff can refill as well, which gets them to and from work, but the vehicle will not be much use for holidays unless there are a lot of such stations.
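
Taking the quoted Mirai figures at face value, a quick Python calculation of my own (assuming a lower heating value of about 120 MJ/kg for hydrogen, and a typical petrol-car figure for comparison) puts the energy use in perspective.

# Rough energy arithmetic for the quoted figures: 500 km on 4.6 kg of hydrogen.
range_km = 500
h2_kg = 4.6
lhv_MJ_per_kg = 120          # assumed lower heating value of hydrogen

energy_MJ = h2_kg * lhv_MJ_per_kg
print(f"Chemical energy carried: {energy_MJ:.0f} MJ ({energy_MJ / 3.6:.0f} kWh)")
print(f"Energy per km: {energy_MJ / range_km:.2f} MJ/km")
# ~550 MJ (about 150 kWh) of hydrogen for 500 km, i.e. roughly 1.1 MJ/km of fuel.
# Petrol cars typically burn 2-3 MJ/km of fuel, so the fuel cell's higher efficiency
# shows up here; the awkward part is the bulky pressurised tankage, not the mass of gas.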

Another possible use is in aircraft, but I don’t see that, except maybe small short-haul flights driven by electric motors with propellers. Hydrogen would burn well enough, but the secret of hydrocarbons for aircraft is they have a good energy density and the liquids are stored in the wings. The tanks required to hold hydrogen would add so much weight to the wings they might fall off. If the main hull is used, where do the passengers and freight go? Another possibility is to power ships. Now you would have to use liquid hydrogen, which would require extremely powerful refrigeration. That is unlikely to be economic compared with the nuclear propulsion we have now.

The real problem is not so much how do you power a ship, or anything else for that matter, but rather what do you do with the current fleet? There are approximately 1.4 billion motor vehicles in the world and they run on oil. Let us say that in a hundred years everyone will use fuel cell-driven cars. What do we do in the meantime? Here, the cheapest new electric car costs about three times as much as the cheapest petrol-driven car. Trade vans and larger vehicles can come down to about 1.5 times the price, in part due to tax differences. But you may have noticed that government debt has become somewhat large of late, due to the printing of large amounts of money that governments have promptly spent. That sort of encouragement will probably be limited in the future, particularly as a consequence of shortages arising from sanctions. In terms of cost, I rather think that many people will be hanging on to their petrol-powered vehicles, even if the price of fuel increases, because the difference in the price of fuel is still a few tens of dollars a week at most, whereas discarding the vehicle and buying a new electric one involves tens of thousands of dollars, and with the current general price increases, most people will not have those spare dollars to throw away. Accordingly, in my opinion we should focus some attention on finding an alternative to fossil fuels to power our heritage fleet.

Solar Energy in India

There is currently a big urge to move to solar energy, and apparently India has decided that solar energy would greatly assist its plans to deal with climate change. However, according to a paper by Ghosh et al. in Environmental Research Letters, there is a minor problem: air pollution. It appears that while India is ranked fifth in the world for solar energy capacity, parts of it, and these tend to be the parts where you need the power, suffer from growing levels of particulate air pollution. There are two problems. First, the particles in the air block sunlight, thus reducing the power that strikes the panels. Second, the particles land on the panels and block the light until someone cleans the detritus off.

I am not sure I understand why, but while the impact on horizontal panels ranged from 10% to 16%, it was much greater on panels that track the position of the sun (which is desirable to get the most power): they suffered a 52% loss of power from pollution. Apparently, if it were not for such pollution, India was calculated (I am not sure on what basis – existing panels or proposed panels) to be able to generate somewhere between an additional six and sixteen TWh of solar electricity per year. That is a lot of power.

But if you are reducing the output of your panels by fifty percent, that also means you are doubling the real cost of the electricity from those panels prior to entering the grid, because you are getting half the power from the same fixed-cost installation. The loss of capacity translates into hundreds of millions of dollars annually. China has the same problem, with some regions twice as badly off as the Indian regions, although care must be taken with that comparison because the two are not necessarily measured the same way. In all cases the figures are averaged over an area, and different people may select different types of area.
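
The cost effect is simply the fixed cost being spread over fewer kilowatt-hours. Here is a minimal Python sketch; the capital cost and clean-air output are invented purely for illustration, since only the ratio matters.

# How pollution losses feed into the effective cost of solar electricity.
annualised_cost = 1_000_000      # $ per year for a hypothetical installation (fixed)
clean_output_GWh = 20.0          # hypothetical output with clean air

for loss in (0.0, 0.16, 0.52):   # no loss, fixed-panel loss, tracking-panel loss quoted above
    output = clean_output_GWh * (1 - loss)
    cost_per_MWh = annualised_cost / (output * 1000)
    print(f"loss {loss:.0%}: {output:.1f} GWh/yr, ${cost_per_MWh:.0f}/MWh")
# A 52% loss slightly more than doubles the cost per MWh, as stated above.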

So, what can be done about this? The most obvious approach is to remove the sources of the pollution, but this could be a problem. In India, the sources tend to be the use of kerosene to provide lighting and the use of dirty fuel for cooking and heating in rural villages.

The answer is to electrify them, but now the problem is there are 600,000 such villages. Problems in a country like India or China tend to be very large, although the good news is the number of people available to work on them is also very large. Unfortunately, these villages are not very wealthy. If you want to replace home cooking and domestic heating with electricity, someone has to pay for electric ranges. One estimate is that 80 million of them would be needed. Big business for the makers of electric cookers, but who pays for them when the rural people are fairly close to the poverty line? They cook with fuels like biomass, which are smoky, because they are cheap or free. Their cookers may even be home-made, but even if not, they would have to be discarded as they could not be used for electric cooking.

There are claimed to be other benefits from reducing such pollution. Reducing air pollution would reduce cloudiness, which means even better solar energy production. It is also claimed that polluted clouds are less likely to precipitate, so cleaner air would mean more precipitation, which in turn would wash more pollution from the air. I am not sure I follow that reasoning, because they have already concluded that they will have fewer clouds.

If they removed these sources of air pollution, they calculated that an extra three TWh per year could be generated from flat surface panels, or eight TWh per year could be generated from tracking panels. The immediate goal is apparently to have 100 GW solar installed. It will be interesting to see if this can be achieved. One problem is that while the economics look good in terms of money saved from increased solar energy, the infrastructure costs associated with it were neglected. My guess is the current air pollution will be around for a while. It also shows the weaknesses of many solar energy projects, such as setting up huge farms in the Sahara. How do you stop fine sand coating panels? An army of panel polishers?

Plastics and Rubbish

In the current “atmosphere” of climate change, politicians are taking more notice of the environment, although as a sceptic I notice they are not prepared to do a lot about it. Part of the problem is that, following the “swing to the right” in the 1980s, politicians have taken notice of Reagan’s assertion that government is the problem, so they have all settled down to not doing very much, and they have shown some skill at doing very little. “Leave it to the market” has a complication: the market is there to facilitate trade, in which all the participants wish to offer something that customers want and to make a profit while doing it. The environment is not a customer in the usual sense and it does not pay, so “the market” has no direct interest in it.

There is no one answer to any of these problems. There is no silver bullet. What we have to do is chip away at these problems, and one that indicates the nature of the problem is plastics. In New Zealand the government has decided that plastic bags are bad for the environment, so the single use bags are no longer used in supermarkets. One can argue whether that is good for the environment, but it is clear that the wilful throwing away of plastics and their subsequent degradation is bad for it. And while the disposable bag has been banned here, rubbish still has a lot of plastics in it, and that will continue to degrade. If it were buried deep in some mine it probably would not matter, but it is not. So why don’t we recycle them?

The first reason is there are so many variations of them and they do not dissolve in each other. You can emulsify a mix, but the material has poor strength because there is very little binding at the interface of the tiny droplets. That is because they have smooth surfaces, like the interface between oil and water. If the object is big enough this does not matter so much, thus you can make reasonable fence posts out of recycled plastics, but there really is a limit to the market for fence posts.

The reason they do not dissolve in each other comes from thermodynamics. For something to happen, such as polymer A dissolving in polymer B, the change (indicated by the symbol Δ) in what is called the free energy ΔG has to be negative. (The reason it is negative is convention; the reason it is called “free” has nothing to do with price – it is not free in that sense.) To account for the process, we use an equation

            ΔG = ΔH − TΔS

ΔH reflects the change in energy for each molecule between sitting in its own material and being in solution in the other material. As a general rule, molecules favour having their own kind nearby, and the effect is stronger for longer molecules: the interactions per atom with molecules of the same material stay much the same however long the chain, but molecules of a different material do not pack as well. Thinking of oil and water, the big problem for solution is that water, the solvent, sticks to itself through hydrogen bonds, and the longer the polymer, the more solvent molecules each polymer molecule has to dislodge. ΔS is the entropy, which increases as the degree of randomness increases. Solution is more random, so whether something dissolves is a battle between whether the gain in randomness per molecule can overcome the attraction of like for like. The longer the polymer, the less randomness is introduced per unit of material, while any unfavourable energy difference keeps adding up, so the longer the polymers, the less likely they are to dissolve in each other. As an aside, this is also why you get so much variety in minerals: long-chain silicates that can vary their associated cations like to phase separate.
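
For those who like to see numbers, here is a schematic Flory–Huggins-style estimate in Python, my own illustration rather than anything from the post: the entropy of mixing per monomer shrinks like 1/N as the chains get longer, while the unfavourable energy term stays put, so the sign of ΔG flips.

# Schematic Flory-Huggins free energy of mixing per lattice site for a 50:50 blend
# of two polymers of chain length N.  chi is a small unfavourable interaction parameter.
# Purely illustrative numbers; the point is the 1/N scaling of the entropy term.
import math

R = 8.314       # J/mol/K
T = 400.0       # K, roughly a melt-processing temperature
chi = 0.01      # mild dislike between the two polymers
phi = 0.5       # volume fraction of each polymer

for N in (1, 10, 100, 1000):
    entropy_term = (phi / N) * math.log(phi) + ((1 - phi) / N) * math.log(1 - phi)
    dG_per_site = R * T * (entropy_term + chi * phi * (1 - phi))
    print(f"N = {N:>4}: dG_mix per site = {dG_per_site:+.1f} J/mol")
# For small molecules (N = 1) the free energy of mixing is strongly negative and they mix;
# for long chains the entropy gain is diluted by 1/N and even this tiny chi makes dG
# positive (here the sign flips between N = 100 and N = 1000), so the polymers phase separate.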

So we cannot recycle them, and they are useless? Well, no. At the very least we can use them for energy. My preference is to turn them, and all the organic material in municipal refuse, into hydrocarbons. During the 1970s oil crises the engineering was completed to build a demonstration plant for the city of Worcester in Massachusetts. It never went ahead because once the cartel broke ranks and oil prices dropped, converting wastes to hydrocarbon fuels made no economic sense. However, if we want to reduce the use of fossil fuels, it makes a lot of sense for the environment, IF we are prepared to pay the extra price. Every litre of fuel from waste we make is a litre of refined crude we do not have to use, and we will have to keep our vehicle fleet going for quite some time. The basic problem is we have to develop the technology because the engineering data for that previous attempt is presumably lost, and in any case, that was for a demonstration plant, which is always built on the basis that more engineering questions remain. As an aside, water at about 360 degrees Centigrade has lost much of its hydrogen-bonding preference, and at that temperature oil dissolves in water.

The alternative is to burn it and make electricity. I am less keen on this, even though we can purchase plants to do that right now. The reason is simple. The combustion will release more gases into the atmosphere. The CO2 is irrelevant as both options produce it, but the liquefaction approach sends the nitrogen-containing material out as water-soluble compounds which could, if the liquids were treated appropriately, be used as a fertilizer, whereas in combustion they go out the chimney as nitric oxide or, even worse, as cyanides. But it is still better to do something with it than simply fill up local valleys.

One final point. I saw an item where some environmentalist was condemning a UK thermal plant that used biomass, arguing it put out MORE CO2 per MWh of power than coal. That may be the case, because you can make coal burn hotter and the second law of thermodynamics means you can extract more energy in the form of work. (Mind you, I have my doubts since the electricity is generated from steam.) However, the criticism shows an inability to understand calculus. What is important is not the emissions right now, but those integrated over time. The biomass got its carbon from the atmosphere, say, forty years ago, and if you wish to sustain this exercise you plant trees that recover that CO2 over the next forty years. Burn coal and you are burning carbon that has been locked away for the last few million years.
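
The point about integrating over time can be shown with a toy Python calculation. The per-year emission figures here are invented purely to show the shape of the curves, and I have granted the critic's claim that biomass emits a little more per unit of power.

# Toy comparison of cumulative net CO2 from a coal plant and from a biomass plant whose
# feedstock is replanted, each year's carbon being re-absorbed over the following 40 years.
coal_per_year = 1.0       # arbitrary units of CO2 per year of operation
biomass_per_year = 1.2    # grant the claim of higher emissions per unit of power
regrow_years = 40         # time for replanted trees to recover one year's emissions

for t in (10, 40, 80):
    coal_cum = coal_per_year * t
    biomass_cum = 0.0
    for s in range(1, t + 1):                               # emission in year s ...
        recovered = min(t - s, regrow_years) / regrow_years
        biomass_cum += biomass_per_year * (1 - recovered)   # ... partly re-absorbed by year t
    print(f"after {t:2d} years: coal {coal_cum:5.1f}, biomass (net) {biomass_cum:5.1f}")
# Coal's cumulative emissions grow without limit; the biomass plant's net emissions
# level off at about 24 units once regrowth keeps pace with burning.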

Thorium as a Nuclear Fuel

Apparently, China is constructing a molten salt nuclear reactor to be powered by thorium, and it should be undergoing trials about now. Being the first of its kind, it is, naturally, a small reactor that will produce 2 megawatts of thermal power. This is not much, but it is important when scaling up technology not to make too great a leap, because if something in the engineering has to be corrected it is a lot easier if the unit is smaller. Further, while smaller is cheaper, a small unit is also more likely to show fluctuations, especially in temperature, but when it is smaller these are far easier to control. The problem with a very large reactor is that if something is going wrong it takes a long time to find out, and then it also becomes increasingly difficult to do anything about it.

Thorium is a weakly radioactive metal that has little current use. It occurs naturally as thorium-232, which is not fissile. However, in a reactor it absorbs neutrons and forms thorium-233, which has a half-life of 22 minutes and β-decays to protactinium-233. That has a half-life of 27 days, and then β-decays to uranium-233, which can undergo fission. Uranium-233 has a half-life of 160,000 years, so weapons could be made and stored.
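
The timescales in that chain are worth seeing numerically; a small Python sketch of my own, using the standard exponential-decay law, shows why the 27-day protactinium step is the one that matters for any would-be bomb-maker.

# Simple decay arithmetic for the chain: Th-233 (22 min) -> Pa-233 (27 d) -> U-233.
# Uses N(t)/N0 = exp(-ln2 * t / half_life); purely illustrative.
import math

def remaining(t, half_life):
    return math.exp(-math.log(2) * t / half_life)

# Th-233 is gone within hours (half-life 22 minutes):
print(f"Th-233 left after 12 h: {remaining(12 * 60, 22):.1e}")

# Pa-233 takes months to turn into U-233 (half-life 27 days):
for days in (27, 90, 270):
    print(f"Pa-233 left after {days:3d} d: {remaining(days, 27):.2f}")
# Roughly half remains after a month, about 10% after three months, and essentially none
# after nine, so collecting U-233 means chemically separating the protactinium and then
# waiting, not something that happens incidentally inside a continuously running reactor.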

Unfortunately, 1.6 tonnes of thorium exposed to neutrons is, if appropriate chemical processing is available, sufficient to make 8 kg of uranium-233, and that is enough to produce a weapon. So thorium itself is not necessarily a form of fuel that is free of weapons production. However, to separate uranium-233 in a form suitable for a bomb, a major chemical plant is needed, and the separation needs to be done remotely because contamination with uranium-232 is apparently possible, and its decay products include a powerful gamma emitter. Further, to make bomb material, the process has to be aimed directly at that. The reason is that the first step is to separate the protactinium-233 from the thorium, and because of the short half-lives involved, only a small amount of the thorium has been converted at any one time. Because a power station will be operating more or less continuously, it should not be practical to use it to make fissile material for bombs.

The idea of a molten salt reactor is that the fissile material is dissolved in a liquid salt in the reactor core. The liquid salt also takes away the heat which, when the salt is cycled through heat exchangers, converts water to steam, and electricity is obtained in the same way as in any other thermal station. Indeed, China says it intends to continue using its coal-fired generators by taking away the furnaces and replacing them with a molten salt reactor. Much of the infrastructure would remain. Further, compared with the usual nuclear power stations, molten salt reactors operate at a higher temperature, which means electricity can be generated more efficiently.

One advantage of a molten salt reactor is that it operates at lower pressures, which greatly reduces the potential for explosions. Further, because the fuel is dissolved in the salt you cannot get a meltdown. That does not mean there cannot be problems, but they should be much easier to manage. The great advantage of the molten salt reactor is that it burns its reaction products, and an advantage of a thorium reactor is that most of the fission products have shorter half-lives. Since each fission produces about 2.5 neutrons, a molten salt reactor also burns the heavier isotopes that might otherwise be a problem, such as those of neptunium or plutonium formed from further neutron capture. Accordingly, the waste products do not pose such a problem.

The reason we don’t charge ahead and make lots of such reactors is that there is a lot of development work required. A typical molten salt mix might include lithium fluoride, beryllium fluoride, thorium tetrafluoride and some uranium tetrafluoride to act as a starter. Now, suppose the thorium or uranium splits and produces, say, a strontium atom and a xenon atom. At this point there are two fluorine atoms left over, and fluorine is an extraordinarily corrosive gas. As it happens, xenon is not totally unreactive and it will react with fluorine, but so will the interior of the reactor. Whatever happens in there, it is critical that the pumps, etc., keep working. Such problems can be solved, but it does take operating time to be sure they are solved. Let’s hope they are successful.

The Fusion Energy Dream

One of the most attractive options for our energy future is nuclear fusion, where we can turn hydrogen into helium. Nuclear fusion works, even on Earth, as we can see when a hydrogen bomb goes off. The available energy is huge. Nuclear fusion will solve our energy crisis, we have been told, and it will be available in forty years. That is what we were told about 60 years ago, and you will usually hear the same forty year prediction now!

Nuclear fusion, you will be told, is what powers the sun; however, we won’t be doing what the sun does any time soon. You may guess there is a problem in that the sun is not a spectacular hydrogen bomb. What the sun does is to squeeze hydrogen atoms together to make the lightest isotope of helium, i.e. 2He. This is extremely unstable, and the electric forces push the protons apart in an extremely short time; a billionth of a billionth of a second is perhaps the longest it can last, and probably not even that. However, if it can acquire an electron, or eject a positron, before it decays, it turns into deuterium, which is a proton and a neutron. (The sun also uses the carbon-nitrogen-oxygen cycle to convert hydrogen to helium.) The difficult thing that a star does, and what we will not do anytime soon, is to make neutrons (as opposed to freeing them).

The deuterium can then fuse to make helium, usually first with another proton to make 3He, and then maybe with another to make 4He. Each fusion makes a huge amount of energy, and the star works because the immense pressure at the centre allows the occasional making of deuterium in any small volume. You may be surprised by the use of the word “occasional”; the reason the sun gives off so much energy is simply that it is so big. Occasional is good. The huge amount of energy released relieves some of the pressure caused by the gravity, and this allows the star to live a very long time. At the end of a sufficiently large star’s life, the gravity allows the material to compress sufficiently that carbon and oxygen atoms fuse, and this gives off so much energy that the increase in pressure causes the reaction to go out of control and you have a supernova. A bang is not good.

The Lawrence Livermore National Laboratory has been working on fusion, and has claimed a breakthrough. Their process involves firing 192 laser beams onto a hollow target about 1 cm high and a few millimeters in diameter, which is apparently called a hohlraum. This has an inner lining of gold, and contains helium gas, while at the centre is a tiny capsule filled with deuterium/tritium, the hydrogen isotopes with one or two neutrons in addition to the required proton. The lasers heat the hohlraum so that the gold coating gives off a flux of X-rays. The X-rays heat the capsule, causing material on the outside to fly off at speeds of hundreds of kilometers per second. Conservation of momentum leads to the implosion of the capsule, which gives, hopefully, high enough temperatures and pressures to fuse the hydrogen isotopes.

So what could go wrong? The problem is the symmetry of the pressure. Suppose you had a spherical bag of gel that was mainly water, say the size of a football, and you wanted to squeeze all the water out to get a sphere that contained only the gelling solid. The difficulty is that the pressure of a fluid inside a container is equal in all directions (leaving aside the effects of gravity). If you squeeze harder in one place than another, the pressure relays the extra force per unit area to a place where the external pressure is weaker, and your ball expands in that direction. You are fighting jelly! Obviously, the physics of such fluids gets very complicated. Everyone knows what is required, but nobody knows how to meet the requirement. When the pressure is unequal in different places, the effects are predictably undesirable, but stopping it from being unequal is not so easy.

The first progress was apparently to make the laser pulses more energetic at the beginning. The net result was to get up to 17 kJ of fusion energy per pulse, an improvement on their original 10 kJ. The latest success produced 1.3 MJ, which was equivalent to 10 quadrillion watts of fusion power for 100 trillionths of a second. An energy generation of 1.3 MJ from such a small vessel may seem a genuine achievement, and it is, but there is further to go. The problem is that the energy input to the lasers was 1.9 MJ per pulse. It should be realised that that energy is not lost. It is still there, so the actual output of a pulse would be 3.2 MJ of energy. The problem is that the output includes the kinetic energy of the neutrons, etc., produced, and it all ends up as heat, whereas the input energy was electricity, and we have not included the losses when converting electricity to laser output. Converting that heat to electricity will lose quite a bit, depending on how it is done. If you use the heat to boil water the losses are usually around 65%. In my novels I suggest using the magnetohydrodynamic effect, which gets electricity out of the high velocity of the particles in the plasma. This has been made to work on plasmas made by burning fossil fuels, where it doubles the efficiency of the usual approach, but controlling plasmas from nuclear fusion would be far more difficult. Again, very easy to do in theory; very much less so in practice. However, the challenge is there. If we can get sustained ignition, as opposed to such a short pulse, the amount of energy available is huge.
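
It is worth checking how those headline numbers fit together; here is a quick Python back-of-envelope of my own, in which the laser wall-plug efficiency is an assumed figure rather than anything quoted above.

# Back-of-envelope check on the quoted numbers.
fusion_out_MJ = 1.3
laser_in_MJ = 1.9
pulse_s = 1e-10                     # "100 trillionths of a second"

power_W = fusion_out_MJ * 1e6 / pulse_s
print(f"Instantaneous fusion power: {power_W:.1e} W")      # ~1.3e16 W, i.e. ~10 quadrillion watts
print(f"Target gain (fusion out / laser in): {fusion_out_MJ / laser_in_MJ:.2f}")   # ~0.68

# The laser energy itself comes from electricity (assume ~1% wall-plug efficiency, an
# assumed figure), and the fusion output comes back as heat, which a steam cycle might
# convert at ~35% (consistent with the ~65% loss mentioned above), so the electricity-out
# to electricity-in ratio is far worse than the 0.68 target gain suggests.
wall_plug_eff = 0.01    # assumed
steam_eff = 0.35
print(f"Illustrative electricity out/in: {(fusion_out_MJ * steam_eff) / (laser_in_MJ / wall_plug_eff):.4f}")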

Sustained fusion means the energy emitted from the reaction is sufficient to keep it going with fresh material injected, as opposed to having to set up containers within containers at the dead centre of a multiple laser pulse. Now, the plasma at over 100,000,000 degrees Centigrade should be sufficient to keep the fusion going. Of course that will involve even more problems: how to contain a plasma at that temperature; how to get the fuel into the reaction without melting the feed tubes or dissipating the hydrogen; how to get the energy out in a usable form; how to cool the plasma sufficiently? Many questions; few answers.

A New Way of Mining?

One of the bigger problems our economies face is obtaining metals. Apparently the price of metals used in lithium-ion batteries is soaring because supply cannot expand sufficiently, and there appears to be no way current methodology can keep up.

Ores are obtained by physically removing them from the subsurface, and this tends to mean that huge volumes of overburden have to be removed. Global mining is estimated to produce 100 billion t of overburden per year, and that usually has to be carted somewhere else and dumped. This often leads to major disasters, such as tailings dams collapsing; Brazil has had at least two such collapses, which set something like 140 million cubic meters of rubble moving and caused at least 256 deaths. The better ores are now worked out and we are resorting to poorer ores, most of which contain less than 1% of what you actually want. The rest, gangue, is often environmentally toxic and is quite difficult to dispose of safely. The whole process is energy intensive. Mining contributes about 10% of the energy-related greenhouse gas emissions. Yet if we take copper alone, it is estimated that by 2050 demand will increase by up to 350%. The ores we know about are becoming progressively lower grade and they are found at greater depths.

We have heard of the limits to growth. Well, mining is increasingly looking unsustainable, but there is always the possibility of new technology to get the benefit from increasingly difficult sources. One such possible technique involves first injecting acid or another lixiviant into the rock to dissolve the target metal in the form of an ion, then using a targeted electric field to transport the metal-rich solution to the surface. This is a variant of a technique used to obtain metals from fly ash, sludge, etc.

The objective is to place an electrode either within or surrounding the ore, and the acid is then introduced from an external reservoir. There is a second reservoir with an electrode of charge opposite to that of the metal-bearing ion. The metal usually bears a positive charge in the textbooks, so you would have your reservoir electrode negatively charged, but it is important to keep track of your chemistry. For example, if iron were dissolved in hydrochloric acid, the main ion would be FeCl4-, i.e. an anion.

Because transport occurs through electromigration, there is no need for permeability enhancement techniques, such as fracking. About 75% of copper ore reserves are copper sulphides that lie beneath the water table. The proposed technique was demonstrated on a laboratory scale with a mix of chalcopyrite (CuFeS2) and quartz, each powdered. A solution of ferric chloride was added, and a direct voltage of 7 V was applied to electrodes at opposite ends of a 0.57 m path, over which there was a potential drop of about 5 V, giving a maximal voltage gradient of 1.75 V/cm. The ferric chloride liberated copper as the cupric cation. The laboratory test extracted 57 weight per cent of the available copper from a 4 cm-wide sample over 94 days, although 80% of that was recovered in the first 50 days. The electric current decreased over the first ten days from 110 mA to 10 mA, suggestive of pore blocking. Computer simulations suggest that in the field, about 70% of the metal in a sample accessed by the electrodes could be recovered over a three-year period. The process would have the odd hazard: a 5 meter spacing between electrodes employed, in the simulation, a 500 V difference. If the ore is several hundred meters down, this could require quite a voltage. Is this practical? I do not know, but it seems to me that at the moment the amount of dissolved material, the large voltages, the small areas and the time taken will count against it. On the other hand, the prices of metals are starting to rise dramatically. I doubt this will be a final solution, but it may be part of one.
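
The voltages quoted are easier to compare once turned into field strengths; here is a quick Python sketch of my own arithmetic on the figures quoted above.

# Field strengths implied by the quoted figures for electrokinetic in-situ leaching.
lab_drop_V, lab_path_m = 5.0, 0.57
print(f"Lab average field: {lab_drop_V / lab_path_m:.1f} V/m")        # ~8.8 V/m on average
# (the quoted maximum of 1.75 V/cm = 175 V/m presumably occurs locally, near an electrode)

sim_drop_V, sim_spacing_m = 500.0, 5.0
field_sim_V_per_m = sim_drop_V / sim_spacing_m
print(f"Simulated field: {field_sim_V_per_m:.0f} V/m")                # 100 V/m between electrodes
print(f"Same field over 300 m: {field_sim_V_per_m * 300:.0f} V")      # 30,000 V
# Holding that field between electrodes hundreds of metres apart is where the
# "quite a voltage" remark comes from.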

Living Near Ceres

Some will have heard of Gerard O’Neill’s book, “The High Frontier”. If not, see https://en.wikipedia.org/wiki/The_High_Frontier:_Human_Colonies_in_Space. The idea was to throw material up from the surface of the Moon to make giant cylinders that would get artificial gravity from rotation, and people could live their lives in the interior, with energy being obtained in part from solar energy. The concept was partly employed in the TV series “Babylon 5”, but the original concept was to have open farmland as well. Looks like science fiction, you say, and in fairness I have included such a proposition in a science fiction novel I am currently writing. However, I have also read a scientific paper on this topic (arXiv:2011.07487v3), which appears to have been posted on the 14th January, 2021. The concept is to build such a space settlement from material obtained from the asteroid Ceres, and to have it orbit near Ceres.

The proposal is ambitious, if nothing else. The idea is to build a number of habitats and, to ensure no single habitat is too big while they all stay together, tether them to a megasatellite, which in turn grows as new settlements are built. The habitats spin in such a way as to attain a “gravity” of 1 g, and are attached to their tethers by magnetic bearings that have no physical contact between faces, and hence never wear. Travel between habitats proceeds along the tethers. Rockets would be unsustainable because the molecules they throw out to space would be lost forever.

The habitats would have a radius of 1 km, a length of 10 km, and a population of 56,700, with 2,000 square meters per person, just under 45% of which would be urban. Slightly more scary is the fact that each habitat has to rotate every 1.06 minutes. The total mass per person would be just under 10,000 t, requiring about 1 MJ/kg to produce, or roughly 10 TJ per person.
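
The 1.06-minute figure follows directly from requiring 1 g of centripetal acceleration at a 1 km radius; a quick check in Python (my arithmetic, not the paper's):

# Spin rate needed for 1 g of artificial gravity at the habitat rim (r = 1 km).
import math

g = 9.81          # m/s^2
r = 1000.0        # m
omega = math.sqrt(g / r)              # rad/s, from a = omega^2 * r
period_s = 2 * math.pi / omega
print(f"Rotation period: {period_s:.0f} s = {period_s / 60:.2f} min")   # ~63 s, i.e. ~1.06 min
print(f"Rim speed: {omega * r:.0f} m/s")                                # ~99 m/s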

The design aims to produce an environment for the settlers that has Earth-like radiation shielding, gravity, and atmosphere. It will have day/night on a 24 hr cycle with 130 W/m^2 insolation, similar to southern Germany, and a population density of 500/km^2, similar to the Netherlands. There would be fields, parks, and forests, no adverse weather, no natural disasters, and it is intended to be long-term sustainable. To achieve that, animals, birds and insects will be present, i.e. a proper ecosystem. Ultimately it could provide more living area than Earth. As can be seen, that is ambitious. The radiation shielding involves 7600 kg/m^2, of which 20% is water and the rest silicate regolith. The rural spaces have a 1.5 m depth of soil, which is illuminated by sunlight collected by mirrors and delivered into light guides. Ceres is 2.77 times as far from the sun as Earth, which means the sunlight is only about 13% as strong as at Earth, so nearly eight times the mirror collecting area is required for every unit area to be illuminated with equivalent energy.
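
The mirror factor is just the inverse-square law; a short Python check (my arithmetic):

# Sunlight at Ceres compared with Earth, from the inverse-square law.
distance_ratio = 2.77                  # Ceres is 2.77 times as far from the sun as Earth
relative_intensity = 1 / distance_ratio ** 2
print(f"Relative sunlight at Ceres: {relative_intensity:.2f}")                         # ~0.13, i.e. about 13%
print(f"Mirror area needed per unit illuminated area: {1 / relative_intensity:.1f}")   # ~7.7x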

The reason cited for proposing this to be at Ceres is that Ceres has nitrogen. Actually, there are other carbonaceous asteroids, and one that is at least 100 km in size could be suitable. Because Ceres’ gravity is 0.029 times that of Earth, a space elevator could be feasible to bring material cheaply from the dwarf planet, while a settlement 100,000 km from the surface would be expected to have a stable orbit.

In principle, there could be any number of these habitats, all linked together. You could have more people living there than on Earth. Of course there are some issues with the calculation. Tethering the habitats and giving them sufficient strength requires about 5% of the total mass in the form of steel. Where does the iron come from? The asteroids have plenty of iron, but the form is important. How will it be refined? If it is in the form of olivine or pyroxene, then only with difficulty. Vesta apparently has an iron core, but Vesta is not close, and most of the time, because it has a different orbital period, it is very far away. But the real question is, would you want to live in such a place? How much would you pay for the privilege? The cost of all this was not estimated, but it would be enormous, so most people could not afford it. In my opinion, cost alone is sufficient to ensure this idea will not see the light of day.

Materials that Remember their Original Design

Recall that in the movie Terminator 2 there was a robot that could turn into a liquid, then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal that, with sufficient force, can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, which is the temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed, the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures, such as a honeycomb, a spider’s web-like structure and a hand. These were all crushed, and when warmed they sprang back to life in their original form. At first sight this might seem to be designed to put panel beaters out of business. You have a minor prang but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but those are single use, so a spacecraft with such landing cushions can be used once, whereas landing cushions made of this material could be restored simply by heating them. Zhang seems to favour such uses in space engineering.

He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

Molten Salt Nuclear Reactors

In the previous post, I outlined two reasons why nuclear power is overlooked, if not shunned, despite the fact it will clearly reduce greenhouse gas emissions. I discussed wastes as a problem, and while they are a problem, as I tried to show they are in principle reasonably easily dealt with. There is a need for more work and there are difficulties, but there is no reason this problem cannot be overcome. The other reason is the danger of the Chernobyl/Fukushima type explosion. In the case of Chernobyl, it needed a frightening number of totally stupid decisions to be made, and you might expect that since it was a training exercise there would be people there who knew what they were doing to supervise. But no, and worse, the operating instructions were unintelligible, having been amended with strike-outs and hand-written “corrections” that nobody could understand. You might have thought the supervisor would check to see everything was available and correct before starting, but as I noted, there has never been a shortage of stupidity.

The nuclear reaction, which generates the heat, is initiated by a fissile nucleus absorbing a neutron and splitting, and it keeps going because the split provides more neutrons. These neutrons either split further fissile nuclei, such as 235U, or they get absorbed by something else, such as 238U, which converts that nucleus to something else, in this case eventually 239Pu. The splitting of nuclei produces the heat, and to run at constant temperature, it is necessary to have a means of removing that amount of heat continuously. The rate of neutron absorption is determined by the “concentration” of fissile material and the amount of neutrons absorbed by something else, such as water, graphite and a number of other materials. The disaster happens when the reaction goes too quickly and there is too much heat generated for the cooling medium. The metal melts and drips to the bottom of the reactor, where it flows together to form a large blob that is out of the cooling circuit. As the amount builds up it gets hotter and hotter, and we have a disaster.

The idea of the molten salt reactor is that there are no metal rods. The fissile material can be put in as a salt in solution, so the concentration automatically determines the operating temperature. The reactor can be moderated with graphite, beryllium oxide, or a number of other materials, or it can be run unmoderated. Temperatures can get up to 1400 degrees C, which, from basic thermodynamics, gives exceptional power efficiency, and finally, reactors can be relatively small. The initial design was apparently for aircraft propulsion, and you guessed it: bombers. The salts are usually fluorides because low-valence fluorides boil at very high temperatures, are poor neutron absorbers, and have exceptionally strong chemical bonds, which limits corrosion and makes them exceptionally inert chemically. In one sense they are extremely safe, although since beryllium fluoride is often used, its extreme toxicity requires careful handling. But the main advantage of this sort of reactor, besides avoiding the meltdown, is that it burns actinides, so if it makes plutonium, that is added to the fuel. More energy! It also burns some of the fission wastes, and such burning of wastes also releases energy. It can be powered by thorium (with some uranium to get the starting neutrons), which does not make anything suitable for making bombs. Further, the fission products in the thorium cycle have far shorter half-lives. Research on this started in the 1960s and essentially stopped. Guess why!

There are other fourth generation reactors being designed, and some nuclear engineers may well disagree with my preference, but it is imperative, in my opinion, that we adopt some. We badly need some means of generating large amounts of electricity without burning fossil fuels. Whatever we decide to do, while the physics is well understood, the engineering may not be, and this must be solved if we are to avoid planet-wide overheating. The politicians have to ensure this job gets done.