About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords and a number of other sites. The intention is to publish a sequence of novels, each of which stands alone, but which, taken together, tell a further story through their combined backgrounds. This blog will be largely about my views on science in fiction, and about the future, including what we should be doing about it but, in my opinion, are not. In the science area, I have been working on products from marine algae, and on biofuels. I also have an interest in scientific theory, where my views are usually alternatives to the generally accepted ones. This work is also being published as ebooks under the series “Elements of Theory”.

Thorium as a Nuclear Fuel

Apparently, China is constructing a molten salt nuclear reactor to be powered by thorium, and it should be undergoing trials about now. Being the first of its kind, it is, naturally, a small reactor that will produce 2 megawatts of thermal power. This is not much, but when scaling up technology it is important not to take too great a leap, because if something in the engineering has to be corrected it is a lot easier to do on a small unit. Further, while smaller is cheaper, a small unit is also more likely to show fluctuations, especially in temperature, but those fluctuations are far easier to control. The problem with a very large reactor is that if something is going wrong it takes a long time to find out, and by then it is much more difficult to do anything about it.

Thorium is a weakly radioactive metal that has little current use. It occurs naturally as thorium-232 and that cannot undergo fission. However, in a reactor it absorbs neutrons and forms thorium-233, which has a half-life of 22 minutes and β-decays to protactinium-233. That has a half-life of 27 days, and then β-decays to uranium-233, which can undergo fission. Uranium-233 has a half-life of 160,000 years so weapons could be made and stored.  

Unfortunately, 1.6 tonnes of thorium exposed to neutrons, given appropriate chemical processing, is sufficient to make 8 kg of uranium-233, and that is enough to produce a weapon. So thorium itself is not necessarily a fuel that is free of weapons potential. However, to separate uranium-233 in a form suitable for a bomb, a major chemical plant is needed, and the separation has to be done remotely because contamination with uranium-232 is apparently possible, and its decay products include a powerful gamma emitter. Further, to make bomb material, the process has to be aimed directly at that. The reason is that the first step is to separate the protactinium-233 from the thorium, and because of the short half-life only a small amount of the thorium has been converted at any one time. Because a power station will be operating more or less continuously, it should not be practical to use it to make fissile material for bombs.
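To put rough numbers on that, here is a minimal sketch (Python, with an arbitrary assumed neutron-capture rate R, simple Euler integration, and no allowance for neutron capture on the intermediates, so it is illustrative only) of the Th-233 → Pa-233 → U-233 chain. The point it illustrates is that the protactinium-233 inventory levels off at only about 39 days’ worth of production, so only a small amount of converted thorium is ever sitting there as separable protactinium.

```python
import math

# Sketch: build-up of Pa-233 and U-233 from a constant (assumed, arbitrary)
# rate R of neutron capture on Th-232.
# Th-232 + n -> Th-233 (t_half ~22 min) -> Pa-233 (t_half ~27 d) -> U-233
LN2 = math.log(2)
lam_th = LN2 / (22.0 / (60.0 * 24.0))   # Th-233 decay constant, per day
lam_pa = LN2 / 27.0                     # Pa-233 decay constant, per day
R = 1.0                                 # assumed capture rate (atoms per day, arbitrary units)

dt, t_end = 0.001, 365.0                # time step and run length, days
n_th = n_pa = n_u = 0.0
for _ in range(int(t_end / dt)):
    d_th = R - lam_th * n_th            # production minus decay of Th-233
    d_pa = lam_th * n_th - lam_pa * n_pa
    d_u = lam_pa * n_pa                 # U-233 simply accumulates in this sketch
    n_th += d_th * dt
    n_pa += d_pa * dt
    n_u += d_u * dt

print(f"Pa-233 inventory after a year: {n_pa:.1f} (levels off near 1/lam_pa = {1/lam_pa:.1f})")
print(f"U-233 accumulated: {n_u:.1f} (in units of days of capture at rate R)")
```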

The idea of a molten salt reactor is that the fissile material is dissolved in a liquid salt in the reactor core. The liquid salt also takes away the heat which, when the salt is cycled through heat exchangers, converts water to steam, and electricity is obtained in the same way as in any other thermal station. Indeed, China says it intends to keep using its coal-fired generators by taking away the furnaces and replacing them with molten salt reactors. Much of the infrastructure would remain. Further, compared with the usual nuclear power stations, molten salt reactors operate at a higher temperature, which means electricity can be generated more efficiently.

One advantage of a molten salt reactor is that it operates at lower pressures, which greatly reduces the potential for explosions. Further, because the fuel is dissolved in the salt you cannot get a meltdown. That does not mean there cannot be problems, but they should be much easier to manage. The great advantage of the molten salt reactor is that it burns its reaction products, and an advantage of a thorium reactor is that most of the fission products have shorter half-lives. Since each fission produces about 2.5 neutrons, a molten salt reactor also burns the heavier isotopes that might otherwise be a problem, such as those of neptunium or plutonium formed from further neutron capture. Accordingly, the waste products do not pose such a potential problem.

The reason we don’t immediately go ahead and build lots of such reactors is that there is a lot of development work required. A typical molten salt mix might include lithium fluoride, beryllium fluoride, thorium tetrafluoride and some uranium tetrafluoride to act as a starter. Now, suppose the thorium or uranium splits and produces, say, a strontium atom and a xenon atom. At this point there are two fluorine atoms left over, and fluorine is an extraordinarily corrosive gas. As it happens, xenon is not totally unreactive and it will react with fluorine, but so will the interior of the reactor. Whatever happens in there, it is critical that pumps, etc., keep working. Such problems can be solved, but it takes operating time to be sure they really are solved. Let’s hope they are successful.

The Universe is Shrinking

Dark energy is one of the mysteries of modern science. It is supposed to amount to about 68% of the Universe, yet we have no idea what it is. Its discovery led to Nobel prizes, yet it is now considered possible that it does not even exist. To add or subtract 68% of the Universe seems a little excessive.

One of the early papers (Astrophys. J. 517, pp. 565–586) supported the concept. The authors assumed that type Ia supernovae always give out the same amount of light, so by measuring the intensity of that light and comparing it with the red shift, which indicates how fast the source is going away, they could assess whether the rate of expansion of the Universe had been uniform over time. The standard theory at the time was that it was, expanding at a rate given by the Hubble constant (named after Edwin Hubble, who first proposed it). They examined 42 type Ia supernovae with red shifts between 0.18 and 0.83, and compared their results on a graph with the line drawn using the Hubble constant, which is what you expect with zero acceleration, i.e. uniform expansion. Their results at a distance were uniformly above the line, and while there were significant error bars, because instruments were being operated at their extremes, the result looked unambiguous. The far distant supernovae were further away than expected from the nearer ones, and that could only arise if the rate of expansion were accelerating.

For me, there was one fly in the ointment, so to speak. The value of the Hubble constant they used was 63 km/s/Mpc. The modern value is more like 68 or 72; there are two values, depending on how you measure it, but both are somewhat larger than this. Now, if you have the expansion rate wrong when you predict how far away something should be, then the further away it is, the bigger the error, which can make it look as though the expansion has speeded up.
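To get a feel for the size of that baseline shift, here is a minimal sketch assuming the simple low-redshift Hubble law d ≈ cz/H0, which is only the first-order approximation behind the comparison line, so the numbers are indicative only.

```python
# Sketch: first-order Hubble-law distance d = c*z/H0 over the survey's redshift
# range, for the H0 used in the paper (63 km/s/Mpc) and a more modern value (70).
# Relativistic corrections are ignored, so this is indicative only.
C_KM_S = 299_792.458   # speed of light, km/s

def hubble_distance_mpc(z, h0):
    """First-order Hubble-law distance in megaparsecs."""
    return C_KM_S * z / h0

for z in (0.18, 0.5, 0.83):
    d63 = hubble_distance_mpc(z, 63.0)
    d70 = hubble_distance_mpc(z, 70.0)
    print(f"z = {z:4.2f}: d(63) = {d63:6.0f} Mpc, d(70) = {d70:6.0f} Mpc, "
          f"difference = {d63 - d70:5.0f} Mpc")
```

The absolute gap between the two baselines grows with redshift, which is the sense in which the error is bigger for the more distant objects.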

Over the last few years there have been questions as to exactly how accurate this determination of acceleration really is. There has been a suggestion (arXiv:1912.04903) that the luminosity of these supernovae has evolved as the Universe ages, which has the effect that measuring distances this way leads to their overestimation. Different work (Milne et al. 2015, Astrophys. J. 803: 20) showed that there are at least two classes of type Ia supernovae, blue and red, with different ejecta velocities, and if the usual techniques are used the light intensity of the red ones will be underestimated, which makes them seem further away than they are.

My personal view is there could be a further problem. A type Ia supernova occurs when a white dwarf strips mass from a companion star until it gets big enough to ignite the explosion. That is why they are believed to have the same brightness: they ignite at the same mass, so the conditions, and hence the brightness, should be the same. However, this is not necessarily the case, because the outer layer, which generates the light we see, comes from the non-exploding star, and will absorb and re-emit energy from the explosion. Hydrogen and helium are poor radiators, but they will absorb energy. Nevertheless, the brightest light might be expected to come from the heavier elements, and the amount of them increases as the Universe ages and atoms are recycled. That too might make the more distant supernovae appear further away than they really are, which in turn suggests the Universe is accelerating its expansion when it isn’t.

Now, to throw a further spanner into the works, Subir Sarkar has added his voice. He is unusual in that he is both an experimentalist and a theoretician, and he has noted that the type Ia supernovae, while taken to be “standard candles”, do not all emit the same amount of light; according to Sarkar, they vary by up to a factor of ten. Further, the underlying data were previously not available, but in 2015 they became public. He did a statistical analysis and found that the data supported a cosmic acceleration, but only with a statistical significance of three standard deviations, which, according to him, “is not worth getting out of bed for”.

There is a further problem. Apparently the Milky Way is heading off in some direction at 600 km/s, and this rather peculiar flow extends out to about a billion light years; unfortunately, most of the supernovae studied so far are in this region. This drops the statistical significance for cosmic acceleration to two standard deviations. He then accuses the previous supporters of cosmic acceleration of confirmation bias: the initial workers chose an unfortunate direction to examine, and the subsequent ones “looked under the same lamppost”.

So, a little under 70% of what some claim is out there might not be. That is ugly. Worse, about 27% is supposed to be dark matter. Suppose that did not exist either, and the only reason we think it is there is that our understanding of gravity is wrong on a large scale? The Universe then shrinks to about 5% of what it was. That must be something of a record for the size of a loss.

Asteroid (16) Psyche – Again! Or Riches Evaporate, Again

Thanks to my latest novel “Spoliation”, I have had to take an interest in asteroid mining. I discussed this in a previous post (https://ianmillerblog.wordpress.com/2020/10/28/asteroid-mining/) in which I mentioned the asteroid (16) Psyche. As I wrote, there were statements saying the asteroid had almost unlimited mineral resources. Initially, it was estimated to have a density (g/cc) of about 7, which would make it more or less solid iron. It should be noted this might well be a consequence of extreme confirmation bias. The standard theory has it that certain asteroids differentiated and had iron cores, then collided and the rock was shattered off, leaving the iron cores. Iron meteorites are allegedly the result of collisions between such cores. If so, it has been estimated there have to be about 75 iron cores floating around out there, and since Psyche had a density so close to that of iron (about 7.87) it must be essentially solid iron. As I wrote in that post, “other papers have published values as low as 1.4 g/cm cubed, and the average value is about 3.5 g/cm cubed”. The latest value is 3.78 ± 0.34.

These varied numbers show how difficult it is to make these observations. Density is mass per volume. We determine the volume from the size, and we can measure the “diameter”, but the target is a very long way away and it is small, so it is difficult to get an accurate figure. The next point is that it is not a true sphere, so there are extra “bits” of volume from hills, or “bits missing” from craters. Further, the volume depends on the diameter cubed, so if you make a ten per cent error in the “diameter” you have roughly a 30% error in the volume. The mass has to be estimated from the asteroid’s gravitational effects on something else. That means you have to measure the distance to the asteroid, the distance to the other asteroid, and determine the deviation from the expected paths as they pass each other. This deviation may be quite tiny. Astronomers are working at the very limit of their equipment.
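As a quick check on that error claim, here is a small sketch of how a diameter error propagates into the volume and hence the density: to first order the fractional volume error is three times the fractional diameter error, and exactly, a 10% diameter error gives about 33%.

```python
# Sketch: propagation of a diameter error into volume (and hence density).
# Volume of a sphere scales as the diameter cubed, so the fractional volume
# error is roughly three times the fractional diameter error.
for frac_err in (0.05, 0.10, 0.20):
    vol_factor = (1.0 + frac_err) ** 3      # how much the volume is overestimated
    dens_factor = 1.0 / vol_factor          # density = mass / volume
    print(f"diameter off by {frac_err:>4.0%}: volume off by {vol_factor - 1:>5.1%}, "
          f"density off by {dens_factor - 1:>6.1%}")
```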

A quick pause for some silicate chemistry. Apart from granitic/felsic rocks, which are aluminosilicates, most silicates come in two classes of general formula: A – olivines, X2SiO4, or B – pyroxenes, XSiO3, where X is some mix of divalent metals, usually mainly magnesium or iron (hence their name, mafic, the iron being ferrous), although calcium is often present. These elements are the most common metals in the output of a supernova, with magnesium the most common. If X is only magnesium, the density of A (forsterite) is 3.27 and of B (enstatite) 3.2. If X is only iron, the density of A (fayalite) is 4.39 and of B (ferrosilite) 4.00. Now we come to further confirmation bias: to maintain the iron content of Psyche, the density is compared to that of enstatite chondrites, and the difference is made up with iron. Another way to maintain the concept of “free iron” is the proposition that the asteroid is made of “porous metal”. How do you make that? A porous rock, like pumice, is made by a volcano spitting out magma with water dissolved in it, and as the pressure drops the water turns to steam. You do not, however, get volatiles dissolving in molten iron.
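For what it is worth, here is a hedged sketch of the sort of back-calculation implied: if you insist the measured bulk density is a simple volume-weighted mix of enstatite-like rock and solid iron, and ignore porosity entirely (a big assumption), the implied iron fraction is modest.

```python
# Sketch: naive two-component mixing model (volume fractions, no porosity).
# rho_mix = x * rho_iron + (1 - x) * rho_rock  =>  solve for x.
RHO_IRON = 7.87      # g/cc, solid iron
RHO_ROCK = 3.2       # g/cc, enstatite (the magnesium pyroxene quoted above)

def iron_volume_fraction(rho_mix):
    """Volume fraction of iron needed to reach rho_mix in this simple mix."""
    return (rho_mix - RHO_ROCK) / (RHO_IRON - RHO_ROCK)

for rho in (3.5, 3.78, 4.12):                 # spanning the latest 3.78 +/- 0.34
    x = iron_volume_fraction(rho)
    print(f"bulk density {rho:.2f} g/cc -> iron volume fraction ~ {x:.0%}")
```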

Another reason to support the iron concept was that the reflectance spectrum was “essentially featureless”. The required features come from specific vibrations, and a metal does not have any. Neither does a rough surface that scatters light. The radar albedo (how bright it is in reflected radar) is 0.34, which implies a surface density of 3.5, argued to indicate either metal with 50% porosity or solid silicates (rock). It also means no core is predicted. The “featureless spectrum” was claimed to have an absorption at 3 μm, indicating hydroxyl, which indicates silicate. There is also a signal corresponding to an orthopyroxene. The emissivity indicates a metal content greater than 20% at the surface, but if this were metal there should be a polarised emission, and that is completely absent. At this point, we should look more closely at what “metal” means. In many cases, while the word is used to convey what we would consider a metal, the actual usage includes chemical compounds of a metallic element. The iron may be present as the sulphide, the oxide, or, as I believe, the silicate. I think we are looking at the iron content of average rock. Fortune does not await us there.

In short, the evidence is somewhat contradictory, in part because we are using spectroscopy at the limits of its usefulness. NASA intends to send a mission to evaluate the asteroid and we should wait for that data.

But what about iron cored asteroids? We know there are metallic iron meteorites so where did they come from? In my ebook “Planetary Formation and Biogenesis”, I note that the iron meteorites, from isotope dating, are amongst the oldest objects in the solar system, so I argue they were made before the planets, and there were a large number of them, most of which ended up in planetary cores. The meteorites we see, if that is correct, never got accreted, and finally struck a major body for the first time.

Food on Mars

Settlers on Mars will have many needs, but the most obvious ones are breathing and eating, and both of these are likely to involve plants. Anyone thinking of going to Mars should think about these, and if you look at science fiction the answers vary. Most simply assume everything is taken care of, which is fair enough for a story. Then there is the occasional story with slightly more detail. Andy Weir’s “The Martian” is simple. He grows potatoes. Living on such a diet would be a little spartan, but his hero had no option, being essentially a Robinson Crusoe without a Man Friday. The oxygen seemed to be a given. The potatoes were grown in what seemed to be a pressurised plastic tent, and to get water he catalytically decomposed hydrazine to make hydrogen and then burnt that. A plastic tent would not work. The UV radiation would first make the tent opaque, so the necessary light would not get in very well, and then the plastic would degrade. As for making water, burning the hydrazine directly would have been sufficient, but better still, would they not put their base where there was ice?

I also have a novel (“Red Gold”) where a settlement tries to get started. Its premise is that there is a main settlement with fusion reactors, and hence the energy to make anything, but the main hero is “off on his own” and has to make do with less, although he can bring things from the main settlement. He builds giant “glass houses” made with layers of zinc-rich glass that shield the inside from UV radiation. Stellar plasma ejections are diverted by a superconducting magnet at the L1 position between Mars and the sun (proposed years before NASA suggested it) and the hero lives in a cave. That would work well for everything except cosmic radiation, but is that going to be that bad? Initially everyone lives on hydroponically grown microalgae, but the domes permit ordinary crops. The plants grow in treated soil, but as another option a roof is put over a minor crater and water is provided (with solar heating from space) in which macroalgae and marine microalgae grow, as well as fish and other species, like prawns. The atmosphere is nitrogen, separated from the Martian atmosphere, plus some carbon dioxide, and the plants make oxygen. (There would have to be some oxygen to get started, but plants on Earth grew without oxygen initially.)

Since then there have been other quite dramatic proposals from more official sources that assume a lot of automation to begin with. One of the proposals involves constructing huge greenhouses by covering a crater or valley. (Hey, I suggested that!) In this case, though, the roof is flat and made of plastic, the plastic being poly(ethylene 2,5-furandicarboxylate), a polyester made from carbohydrates grown by the plants. This is also used as a bonding agent to make a concrete from Martian rock. (In my novel, I explained why a cement is very necessary, but there are limited uses.) The big greenhouse model has some limitations. In this, the roof is flat and in essentially two layers, and in between are vertical stacks of algae growing in water. The extra value here is that water filters out the effect of cosmic rays, although you need several meters of it. Now we have a problem. The idea is that underneath this there is a huge habitat, and for every cubic meter of water we have one tonne of mass, which on Mars puts about 0.4 tonne of force on the lower flat deck. If this bottom deck is the opaque concrete, then something bound by plastic adhesion will slip. (Our concrete on bridges is purely inorganic, the binding is chemical, not physical, and further there is steel reinforcing.) Below this there would need to be many weight-bearing pillars. And there would need to be light generation between the decks (to get the algae to grow) and down below. Nuclear power would make this easy. Food can be grown as algae between the decks, or in the ground down below.
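To put numbers on that loading, here is a small sketch, taking Mars surface gravity as about 3.7 m/s² (a standard value, not one quoted above), of what a few meters of water would put on the lower deck.

```python
# Sketch: load from a column of water on the lower deck under Mars gravity.
RHO_WATER = 1000.0   # kg per cubic meter
G_MARS = 3.71        # m/s^2, Mars surface gravity (standard value)

for depth_m in (1.0, 3.0, 5.0):
    pressure_pa = RHO_WATER * G_MARS * depth_m        # P = rho * g * h
    load_t_per_m2 = RHO_WATER * depth_m / 1000.0      # tonnes of mass per square meter
    print(f"{depth_m:.0f} m of water: {load_t_per_m2:.0f} t/m^2 of mass, "
          f"pressure ~ {pressure_pa / 1000:.1f} kPa on the deck")
```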

As I see it, construction of this would take quite an effort and a huge amount of materials. The concept is the plants could be grown to make the cement to make the habitat, but hold on, where are the initial plants going to grow, and who/what does all the chemical processing? The plan is to have that in place from robots before anyone gets there but I think that is greatly overambitious. In “Red Gold” I had the glass made from regolith processed with the fusion energy. The advantage of glass over this new suggestion is weight; even on Mars with its lower gravity millions of tonnes remains a serious weight. The first people there will have to live somewhat more simply.

Another plan that I have seen involves finding a frozen lake in a crater and excavating an “under-ice” habitat. There is no shortage of water, or of screening from cosmic rays, but a problem as I see it is that said ice will melt from the habitat’s heat, eroding the bottom of the sheet until eventually it collapses. Undesirable, that is.

All of these “official” options use artificial lighting. Assuming a nuclear reactor, that is not a problem in itself, although it would be for the settlement under the ice, where heat control would be difficult. However, there is more to getting light than generating energy. What gives off the light, and what happens when its lifetime expires? Do you have to have a huge number of spares? Can they be made on Mars?

There is also the problem of heat. In my novel I solved this with mirrors in space focussing more sunlight on selected spots, which of course also provides light to help plants grow, but if you are going to heat with fission power a whole lot more electrical equipment is needed. That means many more things to go wrong, and when it could take two years to get a replacement delivered, complicated is what you do not want. It is not going to be that easy.

The Fusion Energy Dream

One of the most attractive options for our energy future is nuclear fusion, where we can turn hydrogen into helium. Nuclear fusion works, even on Earth, as we can see when a hydrogen bomb goes off. The available energy is huge. Nuclear fusion will solve our energy crisis, we have been told, and it will be available in forty years. That is what we were told about 60 years ago, and you will usually hear the same forty-year prediction now!

Nuclear fusion, you will be told, is what powers the sun; however, we won’t be doing what the sun does any time soon. You may guess there is a problem, in that the sun is not a spectacular hydrogen bomb. What the sun does is squeeze hydrogen atoms together to make the lightest isotope of helium, i.e. 2He. This is extremely unstable, and the electric forces push the protons apart in an extremely short time; a billionth of a billionth of a second might be the longest it can last, and probably not even that long. However, if it can acquire an electron, or eject a positron, before it decays, it turns into deuterium, which is a proton and a neutron. (The sun also uses a carbon-nitrogen-oxygen cycle to convert hydrogen to helium.) The difficult thing that a star does, and that we will not do anytime soon, is to make neutrons (as opposed to freeing them).

The deuterium can then fuse to make helium, usually first with another proton to make 3He, and then two 3He can fuse to make 4He, spitting out two protons. Each fusion releases a huge amount of energy, and the star works because the immense pressure at the centre allows the occasional making of deuterium in any small volume. You may be surprised by the use of the word “occasional”; the reason the sun gives off so much energy is simply that it is so big. Occasional is good. The huge amount of energy released relieves some of the pressure caused by gravity, and this allows the star to live a very long time. At the end of a sufficiently large star’s life, gravity compresses the material sufficiently that carbon and oxygen atoms fuse, and this gives off so much energy that the increase in pressure causes the reaction to go out of control and you have a supernova. A bang is not good.

The Lawrence Livermore National Laboratory has been working on fusion, and has claimed a breakthrough. Their process involves firing 192 laser beams onto a hollow target about 1 cm high and a few millimeters in diameter, which is apparently called a hohlraum. This has an inner lining of gold and contains helium gas, while at the centre is a tiny capsule filled with deuterium and tritium, the hydrogen isotopes with one or two neutrons in addition to the required proton. The lasers heat the hohlraum so that the gold coating gives off a flux of X-rays. The X-rays heat the capsule, causing material on the outside to fly off at speeds of hundreds of kilometers per second. Conservation of momentum leads to the implosion of the capsule, which gives, hopefully, high enough temperatures and pressures to fuse the hydrogen isotopes.
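For scale, the energy released in each deuterium-tritium fusion can be worked out from the mass defect; here is a minimal sketch using standard atomic masses (the ~17.6 MeV answer is a well-known figure, not one quoted above).

```python
# Sketch: energy released in D + T -> He-4 + n, from the mass defect.
# Atomic masses are standard values; the result should come out near 17.6 MeV.
U_TO_MEV = 931.494                  # MeV per atomic mass unit
M_D, M_T = 2.014102, 3.016049       # deuterium, tritium (u)
M_HE4, M_N = 4.002602, 1.008665     # helium-4, neutron (u)

delta_m = (M_D + M_T) - (M_HE4 + M_N)   # mass defect in u
e_mev = delta_m * U_TO_MEV
e_joule = e_mev * 1.602e-13             # 1 MeV = 1.602e-13 J
print(f"Mass defect: {delta_m:.6f} u -> {e_mev:.1f} MeV ({e_joule:.2e} J) per fusion")
```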

So what could go wrong? The problem is the symmetry of the pressure. Suppose you had a spherical bag of gel that was mainly water, say the size of a football, and you wanted to squeeze all the water out to get a sphere that only contained the gelling solid. The difficulty is that the pressure of a fluid inside a container is equal in all directions (leaving aside the effects of gravity). If you squeeze harder in one place than another, the pressure relays the extra force per unit area to a place where the external pressure is weaker, and your ball expands in that direction. You are fighting jelly! Obviously, the physics of such fluids gets very complicated. Everyone knows what is required, but nobody knows how to meet the requirement. When something is unequal in different places, the effects are predictably undesirable, but stopping them from being unequal is not so easy.

The first progress was apparently to make the laser pulses more energetic at the beginning. The net result was to get up to 17 kJ of fusion energy per pulse, an improvement on the original 10 kJ. The latest success produced 1.3 MJ, equivalent to 10 quadrillion watts of fusion power for 100 trillionths of a second. An energy output of 1.3 MJ from such a small vessel may seem a genuine achievement, and it is, but there is further to go. The problem is that the energy input from the lasers was 1.9 MJ per pulse. That energy is not lost; it is still there, so the actual output of a pulse would be 3.2 MJ. The problem is that the output, which includes the kinetic energy of the neutrons etc. produced, all ends up as heat, whereas the input energy was electricity, and we have not included the losses in converting electricity to laser output. Converting that heat back to electricity will lose quite a bit, depending on how it is done. If you use the heat to boil water the losses are usually around 65%. In my novels I suggest using the magnetohydrodynamic effect, which gets electricity out of the high velocity of the particles in the plasma. This has been made to work on plasmas made by burning fossil fuels, roughly doubling the efficiency of the usual approach, but controlling plasmas from nuclear fusion would be far more difficult. Again, very easy to do in theory; very much less so in practice. However, the challenge is there. If we can get sustained ignition, as opposed to such a short pulse, the amount of energy available is huge.
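A rough per-pulse energy balance along the lines described might look like the following sketch. The 35% thermal-to-electric figure is simply the complement of the “around 65%” steam-cycle losses mentioned above, and the laser wall-plug efficiency is left out because no figure is given here.

```python
# Rough energy accounting for one pulse, using the figures quoted above.
laser_in_mj = 1.9          # laser energy delivered to the hohlraum
fusion_out_mj = 1.3        # fusion yield
total_heat_mj = laser_in_mj + fusion_out_mj    # the input energy ends up as heat too

thermal_to_electric = 0.35      # assumed steam-cycle efficiency (~65% losses)
electric_out_mj = total_heat_mj * thermal_to_electric

print(f"Heat per pulse: {total_heat_mj:.1f} MJ")
print(f"Electricity recovered at {thermal_to_electric:.0%}: {electric_out_mj:.2f} MJ")
print(f"Net against the 1.9 MJ laser input (ignoring laser efficiency): "
      f"{electric_out_mj - laser_in_mj:+.2f} MJ")
```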

Sustained fusion means the energy emitted from the reaction is sufficient to keep it going with fresh material injected, as opposed to having to set up containers within containers at the dead centre of a multiple laser pulse. Now, a plasma at over 100,000,000 degrees Centigrade should be sufficient to keep the fusion going. Of course that will involve even more problems: how to contain a plasma at that temperature; how to get the fuel into the reaction without melting the feed tubes or dissipating the hydrogen; how to get the energy out in a usable form; how to cool the plasma sufficiently? Many questions; few answers.

What to do about Climate Change

As noted in my previous post, the IPCC report on climate change is out. If you look at the technical report, it starts with pages of corrections. I would have thought that these days the use of a word processor would permit the changes to be made immediately, but what do I know? Anyway, what are the conclusions? As far as I can make out, they have spent an enormous effort measuring greenhouse gas emissions and modelling, and have concluded that greenhouse gases are the cause of our problem, and that if we stopped emitting right now, totally, things would not get appreciably worse than they are now over the next century. That, as far as I can tell, is it. They argue that CO2 emissions give a linear effect: for every trillion tonnes emitted, temperatures will rise by 0.45 Centigrade degrees, with a fairly high error margin. So we have to stop emitting.
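Taking that linear figure at face value, here is a back-of-the-envelope sketch. The 40 billion tonnes of CO2 per year is an assumed round number for current global emissions, not a figure from the report as quoted here.

```python
# Sketch: warming implied by the linear 0.45 C per trillion tonnes CO2 figure.
DEG_PER_TRILLION_T = 0.45          # degrees Centigrade per 10^12 tonnes CO2 (as quoted)
annual_emissions_gt = 40.0         # assumed ~40 billion tonnes CO2 per year

for years in (10, 25, 50):
    emitted_trillion_t = annual_emissions_gt * years / 1000.0
    warming = emitted_trillion_t * DEG_PER_TRILLION_T
    print(f"{years:>2} more years at current rates: ~{emitted_trillion_t:.1f} Tt CO2 "
          f"-> ~{warming:.2f} C additional warming")
```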

The problem is, can we? In NZ we get a very high fraction of our electricity from renewable sources, and we recently had a night of brown-outs in one region. It was the coldest night of the year, there was a storm over most of the country, but oddly enough there was hardly any wind at a wind farm. A large hydro station went out as well because the storm blew weeds into an intake and the station had to shut down to clean it out. The point is that when electricity generation is a commercial venture, it is not in the generating companies’ interests to have a whole lot of spare capacity, and it makes no sense to tear down what is working well and making money in order to spend a lot replacing it. So the policy of using what we have means we are stuck where we are. China has announced, according to our news, that its coal-fired power stations will maximise and plateau their output of CO2 in about ten years. We have no chance of zero emissions in the foreseeable future. Politicians and environmentalists can dream on, but there is too much inertia in an economy. Like a battleship steering straight for the wharf, the inevitable will happen.

Is there a solution? My opinion is that if we are going to persist in reducing the heat being radiated to space, the best option is to stop letting so much energy from the sun into the system. The simplest experiment I can think of is to put huge amounts of finely dispersed white material, like the silica a volcano puts up, over the North Polar regions each summer to reflect sunlight back to space. If we can stop so much winter ice from melting, we would be on the way to stopping the potential shutdown of the Gulf Stream and stopping the Northern Siberian methane emissions. Just maybe this would also encourage more snow in the winter as the dust falls out.

The obvious question is, how permanent would such a dispersion be? The short answer is, I don’t know, and it may be difficult to predict because of what is called the Arctic oscillation. When that is in a positive phase it appears that winds tend to circulate over the poles, so it may be possible to maintain dust over the summer. It is less clear what happens in the negative phase. However, either way someone needs to calculate how much light has to be blocked to stop the Arctic (and Antarctic) warming. Maybe such a scheme would not be practical, but unless we at least make an effort to find out, we are in trouble.

This raises the question of who pays. In my opinion, every country with a port benefits if we can stop major sea level rise, so all should. Of course, we shall find that not all are cooperative. A further problem is that the outcome is somewhat unpredictable. The dust only has to last through the late spring and summer, because the objective is to reflect sunlight; for the period when the sun is absent it is irrelevant. We would also have to be sure the dust was not hazardous to health, but we have lived through volcanic eruptions that caused major lowering of the temperature world-wide, so there will be suitable material.

There will always be some who lose on the deal. Putting the dust over the Arctic would make the weather less pleasant in Murmansk, Fairbanks, Yukon, etc., but it would only return it to what it used to be. It is less clear what it would do elsewhere. If the Arctic became colder, presumably there would be colder winter storms in more temperate regions. However, it might be better that we manage the climate than that the planet does it for us: if the Gulf Stream went, Europe would suffer rising sea levels, and temperatures and weather more like those of Kerguelen. In my opinion, it is worth trying.

But what is the betting that any proposal for geoengineering has no show of getting off the ground? The politically correct want to solve the problem by everyone giving up something, yet they have not done the sums to estimate the consequences, and worse, some will give things up but enough won’t, so such sacrifices will be totally ineffective. We have the tragedy of the commons: if some are not going to cooperate and the scheme hence must fail, why should you even try? We need to find ways of reducing emissions other than by simply stopping the activity that produces them.

Climate Change: Are We Baked?

The official position from the IPCC’s latest report is that the problem of climate change is getting worse. The fires and record high temperatures in the western United States and British Columbia, Greece and Turkey may be portents of what is coming. There have been terrible floods in Germany, and New Zealand has had its bad time with floods as well. While Germany was getting flooded, the town of Westport was inundated, and the Buller river had flows about 30% higher than its previous record flood. There is the usual hand-wringing from politicians. Unfortunately, there are at least two serious threats that have been ignored.

The first is the release of additional methane. Methane is a greenhouse gas that is about 35 times more efficient at retaining heat than carbon dioxide. The reason is that the absorption of infrared depends on the change of dipole moment during the vibration. CO2 is a linear molecule and has three vibrational modes. One involves the two oxygen atoms moving out and in together, in opposite directions, so there is no net change of dipole moment, the two changes cancelling each other. Another is as if the oxygen atoms are stationary and the carbon atom wobbles between them. The two dipoles now do not cancel, so that mode absorbs, but the partial cancellation reduces the strength. The third involves bending the molecule, but the very strong bonds mean the bend does not move that much, so again the absorption is weak. That is badly oversimplified, but I hope you get the picture.

Methane has four vibrations, and rather than describe them, try this link: http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html

Worse, its vibrations are in regions of the spectrum totally different from those of carbon dioxide, which means it traps radiation that would otherwise escape directly to space.

This summer, average temperatures in parts of Siberia were 6 degrees Centigrade above the 1980 – 2000 average, and methane is starting to be released from the permafrost. Methane forms a clathrate with ice, that is, under pressure it rearranges the ice structure and inserts itself, but the clathrate decomposes on warming to near the melting point of ice. This methane was formed from the anaerobic digestion of plant material and has been trapped by the cold, so if it is released we suddenly get all the methane that would otherwise have been released and destroyed gradually over several million years. There are about eleven billion tonnes of methane estimated to be in clathrates that could be subject to decomposition, about the effect of over 35 years of all our carbon dioxide emissions, except that, as I noted, this works in a totally fresh part of the spectrum. So methane is a problem; we all knew that.

What we did not know about is a new source, identified in a paper published recently in the Proceedings of the National Academy of Sciences. Apparently significantly increased methane concentrations were found in two areas of northern Siberia: the Taymyr fold belt and the rim of the Siberian Platform. These are limestone formations from the Paleozoic era. In both cases the methane increased significantly during heat waves. The soil there is very thin, so there is very little vegetation to decay, and it was claimed the methane was stored in, and is now being emitted from, fractures in the limestone.

The second major problem concerns the Atlantic Meridional Overturning Circulation (AMOC), also known as the Atlantic conveyor. What it does is take warm water up the east coast of the US, then via the Gulf Stream across to warm Europe (and provide moisture for the floods). As it loses water it gets increasingly salty, and with its increased density it dives to the ocean floor and flows back towards the equator. Why this is a problem is that the melting northern polar and Greenland ice is providing a flood of fresh water that dilutes the salty water. When the density of the water is insufficient to cause it to sink, this conveyor will simply stop. At that point the whole Atlantic circulation as it is now stops. Europe chills, but the ice continues to melt. Because this is a “stopped” circulation, it cannot simply be restarted, because the ocean will go and do something else. So, what to do? The first thing is that simply stopping burning a little coal won’t be enough. If we stopped emitting CO2 now, the northern ice would keep melting at its current rate. All we would do is stop it melting faster.

Scientists Behaving Badly

You may think that science is a noble activity carried out by dedicated souls thinking only of the search for understanding and of improving the lot of society. Wrong! According to an item published in Nature (https://doi.org/10.1038/d41586-021-02035-2) there is rot in the core. A survey was sent to about 64,000 researchers at 22 universities in the Netherlands; 6,813 actually filled out and returned the form, and about 8% of those who did confessed in the anonymous survey to falsifying or fabricating data at least once between 2017 and 2020. Given that a fraudster is less likely to confess, that figure is probably a clear underestimate.

There is worse. More than half of respondents also reported frequently engaging in “questionable research practices”. These include using inadequate research designs, which can be due to poor funding and hence is more understandable, and frankly could be a matter of opinion. On the other hand, if you confess to doing it you are at best slothful. Much worse, in my opinion, was deliberately judging manuscripts or fund applications unfairly while peer reviewing. Questionable research practices are “considered lesser evils” than outright research misconduct, which includes plagiarism and data fabrication. I am not so sure of that. Dismissing someone else’s work or fund application unfairly hurts their career.

There was then the question of “sloppy work”, which included failing to “preregister experimental protocols (43%), make underlying data available (47%) or keep comprehensive research records (56%)”. I might be in danger here. I had never heard about “preregistering protocols”; I suspect that is more for medical research than for the physical sciences. My research has always been of the sort where you plan the next step based on the last step you have taken. As for “comprehensive records”, I must admit my lab books have always been cryptic. My plan was to write things down, and as long as I could understand them, that was fine. Of course, I have worked independently, and the records were there so I could report more fully, and to some extent for legal reasons.

If you think that is bad, there is worse in medicine. On July 5 an item appeared in the British Medical Journal with the title “Time to assume that health research is fraudulent until proven otherwise?” One example: a professor of epidemiology apparently published a review paper that included a paper showing that mannitol halved the death rate from comparable injuries. It was pointed out to him that the paper he reviewed was based on clinical trials that never happened! All the trials came from a lead author who “came from an institution” that never existed! There were a number of co-authors, but none had ever contributed patients, and many did not even know they were co-authors. Interestingly, none of the trials has been retracted, so the fake stuff is still out there.

Another person who carried out systematic reviews eventually realized that all too many related to “zombie trials”. This is serious because it is only by reviewing a lot of different work that the more important over-arching conclusions can be drawn, and if a reasonable percentage of the data is just plain rubbish everyone can jump to the wrong conclusions. Another medical expert, attached to the journal Anaesthesia, found that of 526 trials, 14% had false data and 8% were categorised as zombie trials. Remember, if you are ever operated on, anaesthetics are your first hurdle! One expert has guessed that 20% of clinical trials as reported are false.

So why doesn’t peer review catch this? The problem for a reviewer such as myself is that when someone reports numbers representing measurements, you naturally assume they were the results of measurement. I look to see that they “make sense” and if they do, there is no reason to suspect them. Further, to reject a paper because you accuse it of fraud is very serious to the other person’s career, so who will do this without some sort of evidence?

And why do they do it? That is easier to understand: money and reputation. You need papers to get research funding and to keep your position as a scientist. It is very hard to detect, unless someone repeats your work, and even then there is the question, did they truly repeat it? We tend to trust each other, as we should be able to. Published results get rewards, publishers make money, and universities get glamour (unless they get caught out). Proving fraud (as opposed to suspecting it) is a skilled, complicated and time-consuming process, and since it reflects badly on institutions and publishers, they are hardly enthusiastic. Evil peer review, i.e. dumping someone’s work to promote your own, is simply strategic, and nobody will do anything about it.

It is, apparently, not a case of “bad apples”, but as the BMJ article states, a case of rotten forests and orchards. As usual, as to why, follow the money.

Neanderthals: skilled or unskilled?

Recently, you may have seen images of a rather odd-looking bone carving, made 50,000 years ago by Neanderthals. One of the curious things about Neanderthals is that they have been portrayed as brutes, a sort of dead-end in the line of human evolution, probably wiped out by our ancestors. However, this is somewhat unfair for several reasons, one of which is this bone carving. It involved technology: apparently the bone was scraped and then boiled, or given some equivalent heat processing. Then two sets of three parallel lines, the sets normal to each other, were carved on it. What does this tell us? First, it appears they had abstract art, but a more interesting question is, did it mean anything more? We shall probably never know.

One thing that has led to the “brute” concept is that they did not leave many artifacts, and those they did leave were stone tools that, compared with those of our “later ancestors”, appeared rather crude. But is that assessment fair? The refinement of a stone tool probably depends on the type of stone available. The Neanderthals lived more or less during an ice age, and while everything was not covered with glaciers, the glaciers would have inhibited trade. People had to use what was available. How many of you live in a place where high-quality flint for knapping is available? Where I live, the most common rocks available are greywacke, basalt, and maybe some diorite, granodiorite or gabbro. You try making fine stone tools with these raw materials.

Another point, of course, is that while they lived in the “stone age”, most of their tools would actually have been made of wood, with limited use of bone, antler and ivory. Stone tools were made because stone was the toughest material they could find, and it could hold a sharp edge, which would make a useful cutting tool. Most of the wooden items will have long since rotted, which is unfortunate, but some isolated items remain, including roughly 40 pieces of modified boxwood, interpreted as digging sticks, which were preserved in mudstone in Central Italy. These were 170,000 years old. Even older are nine well-preserved wooden spears found in a coal mine at Schöningen, from 300,000 years ago. Making these would involve selecting and cutting a useful piece of spruce, shaping a handle, removing the bark (assumed to be done through fire), smoothing the handle with an abrasive stone, and sharpening the point, again with an abrasive stone.

Even more technically advanced, stone objects were apparently attached to wooden handles with a binding agent. The wooden parts have long rotted, but the production can be inferred from the traces of hafting wear and of adhesive material on the stones. Thus Neanderthals made stone-tipped wooden spears and hafted cutting and scraping tools, and they employed a variety of adhesives. They made two different classes of artifacts, each comprising at least three components, which is more complex than the tools of some recent hunter-gatherers. There is a further point. The items require a number of steps to make, and they require quite different skills. The better tools would be made quicker if different people made the various components, but that would require organization, and ensuring that each knew what the others were doing. That involves language. We have also found a pit that contains many bones and tools for cutting meat from them, presumably a butchery where the results of a successful hunt were processed. That involves sharing the work, and presumably the yield.

We have found graves. The skeletons invariably show the signs of at least one fracture that healed, so they must have endured pain, and to survive such injuries they must have had others care for them. Also found have been sharpened pieces of manganese dioxide, which is soft but very black. Presumably these were crayons, which implies decorating something, the somethings long rotted away. There are Neanderthal cave paintings in Spain. There was also jewellery, which largely involved shells and animals’ teeth with holes cut into them. Some shells were pigmented, which means decoration. Which raises the question, could you cut a hole in a tooth if the only available tools were what you could make from stone, bone, or whatever else is locally available naturally? Finally, there are the “what were they?” artifacts. One is the so-called Neanderthal flute – a 43,000 to 60,000-year-old bear femur with four holes drilled in it. The spacings do not match any carnivore’s tooth spacing, but they do match that of a musical scale, which, as an aside, indicates the use of a minor scale. There is also one carving of a pregnant woman attributed to them. These guys were cleverer than we give them credit for.

Could Aliens Know We Are Here?

While an alien could not see us without coming here, why pick here as opposed to all the other stars? We see exoplanets and speculate on whether they could hold life, but how many exoplanets could see our planet, if they held life with technology like ours or a little better? When I wrote the first edition of my ebook “Planetary Formation and Biogenesis” I listed a few techniques for finding planets. At that time, most had been found by detecting the wobble of stars through the Doppler shifts added to the frequencies of their line spectra. The wobble is caused by the gravity of the planets. Earth would be very difficult to see that way because it is too small. This method works best with very large planets very close to stars.
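For a sense of why Earth is so hard to see by the wobble method, here is a sketch using the usual rule-of-thumb radial-velocity formula for a circular, edge-on orbit, K ≈ 28.4 m/s × (Mp/MJup) × (M*/Msun)^(-2/3) × (P/yr)^(-1/3); the specific cases chosen are illustrative.

```python
# Sketch: stellar wobble (radial-velocity semi-amplitude) for a circular,
# edge-on orbit: K ~ 28.4 m/s * (Mp/MJup) * (Mstar/Msun)**(-2/3) * (P/yr)**(-1/3)
def rv_semi_amplitude(mp_jup, p_years, mstar_sun=1.0):
    """Approximate stellar reflex velocity in m/s."""
    return 28.4 * mp_jup * mstar_sun ** (-2.0 / 3.0) * p_years ** (-1.0 / 3.0)

M_EARTH_IN_JUP = 1.0 / 317.8
print(f"Jupiter (12-year orbit):   ~{rv_semi_amplitude(1.0, 11.86):.1f} m/s")
print(f"Earth (1-year orbit):      ~{rv_semi_amplitude(M_EARTH_IN_JUP, 1.0):.2f} m/s")
print(f"Hot Jupiter (4-day orbit): ~{rv_semi_amplitude(1.0, 4 / 365.25):.0f} m/s")
```

A wobble of a few centimetres per second for Earth, against tens of metres per second for a close-in giant, is why the method favours big planets near their stars.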

While there are several methods for discovering planets that work occasionally, one is particularly productive, and that is to measure the light intensity coming from the star. If a planet crosses our line of sight, the light dims. Maybe not by a lot, but it dims. If you have seen an eclipse of the sun you will get the idea, but if you have seen a transit of Venus or of Mercury you will know the effect is not strong. This is very geometry-specific, because you have to be able to draw a straight line between your eye, the planet and part of the star, and the further the planet is from the star, the smaller the necessary angle. To give an idea of the problem, our planetary system was created more or less on the equatorial plane of the accretion disk that formed the sun, so we should at least see transits of our inner planets, right? Well, not exactly, because the various orbits do not lie on one plane. My phrase “more or less” indicates the problem – we have to be almost exactly edge-on to the planet’s orbital plane, unless the planet is really close to the star, when geometry lends a hand because the star is so big that something small crossing in front can be seen from wider angles.
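Here is a sketch of how punishing that geometry is, using the usual rule of thumb that the probability a randomly oriented orbit shows a transit is roughly the star’s radius divided by the orbital radius.

```python
# Sketch: geometric transit probability ~ R_star / a for a randomly oriented orbit.
R_SUN_KM = 696_000.0
AU_KM = 149_600_000.0

for name, a_au in (("Mercury", 0.39), ("Venus", 0.72), ("Earth", 1.0), ("Jupiter", 5.2)):
    prob = R_SUN_KM / (a_au * AU_KM)
    print(f"{name:8s}: transit visible from ~{prob:.2%} of random directions")
```

Even for Mercury the odds are only around one per cent, and for an Earth-analogue around half a per cent, which is why you need to watch a very large number of stars.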

Nevertheless, the Kepler telescope has seen many such exoplanets. Interestingly, besides finding a number of stars with multiple planets close to the star, it has also found a number of stars with only one planet, at a good distance from the star. That does not mean there are no other planets; it may mean nothing more than that this one happens to be the only one whose orbital plane lies on our line of sight. The others may, like Venus, be on slightly different planes. When I wrote that ebook, it was obvious that suitable stars were not that common, and since we were looking at stars one at a time over an extended period, not many planets would be discovered. The Kepler telescope changed that because, when it came into operation, it could view hundreds of thousands of stars simultaneously.

All of which raises the interesting question, how many aliens, if they had good astronomical techniques, could see us by this method, assuming they looked at our sun? Should we try to remain hidden or not? Can we, if we so desired?

In a recent paper (Nature 594, pp. 505–507, 2021) it appears that 1,715 stars within 100 parsecs of the sun (i.e. our “nearest neighbours”) would have been in a position to spot us over the last 5,000 years, while an additional 319 stars will have the opportunity over the next 5,000 years. Stars might look as if they are fixed in position, but actually they are moving quickly, and not all in the same direction.

Among this set of stars are seven known to have exoplanets, including Ross 128, which could have seen us in the past but no longer can, and Teegarden’s star and Trappist-1, which will start to have the opportunity in 29 years and 1,642 years respectively. Most of these are red dwarfs, and if you accept the analysis in my ebook, they will not have technological life. The reason is that the planets with a composition suitable to generate biogenesis will be too close to the star, so will be far too hot, and yet will probably receive insufficient higher-frequency light to drive anything like photosynthesis.

Currently, an Earth transit could be seen from 1402 stars, and this includes 128 G-type stars, like our sun. There are 73 K stars, which may also be suitable to house life. There are also 63 F-type stars. These stars are larger than the sun, from 1.07 to 1.4 times the size, and are much hotter than the sun. Accordingly, they turn out more UV, which might be problematical for life, although the smaller ones may be suitable and the Earth-equivalent planet will be a lot further from the star. However, they are also shorter-lived, so the bigger ones may not have had time. About 2/3 of these stars are in a more restricted transit zone, and could, from geometry, observe an Earth transit for ten hours. So there are a number of stars from which we cannot hide. Ten hours would give a dedicated astronomer with the correct equipment plenty of time to work out we have oxygen and an ozone layer, and that means life must be here.
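The ten hours quoted is consistent with simple geometry; here is a sketch for a central-chord Earth transit of the sun as seen from far away, using duration ≈ orbital period × stellar diameter / orbital circumference (standard solar and orbital values, not figures from the paper).

```python
import math

# Sketch: maximum (central-chord) duration of an Earth transit seen from far away.
# duration = P * (2 * R_sun) / (2 * pi * a): Earth's orbital motion carries it
# across the solar disc at its orbital speed.
P_HOURS = 365.25 * 24.0
R_SUN_KM = 696_000.0
A_KM = 149_600_000.0

duration_h = P_HOURS * (2.0 * R_SUN_KM) / (2.0 * math.pi * A_KM)
print(f"Central Earth transit lasts about {duration_h:.0f} hours; "
      "off-centre chords (outside the restricted zone) are shorter.")
```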

Another option is to record our radio waves. We have been sending them out for about 100 years, and about 75 of the 1,402 stars identified above are close enough that our broadcasts have already reached them, as well as being able to get visual confirmation by observing a transit. We cannot hide. However, that does not mean any of those stars could do anything about it. Even if planets around them hold life, that does not mean it is technological, and even if it were, that does not mean they can travel through interstellar space. After all, we cannot. Nevertheless, it is an interesting matter to speculate about.