Ossified Science

There was an interesting paper in the Proceedings of the National Academy of Sciences (118, e2021636118, https://doi.org/10.1073/pnas.2021636118) which argued that science is becoming ossified and new ideas simply cannot emerge. My question is, why has it taken them this long to recognize it? That may seem a strange thing to say, but over the course of my career I have seen no radically new ideas gain acknowledgement.

The argument in the paper comes down to one simple fact: over this period there has been a huge expansion in the number of scientists, the amount of research funding, and the number of publications. Progress in a scientist's career depends on the number of papers produced. However, the more papers produced, the more likely the science is to stagnate, because nobody has the time to read everything. People pick and choose what to read, the selection biased by the need not to omit people who may read your funding application. Reading is thus focused on established thinking. As the number of papers increases, citations flow increasingly towards the already well-cited papers. Lesser-known authors are unlikely ever to become highly cited, and if they do it is not through a cumulative process of analysis. New material is extremely unlikely to disrupt existing work, with the result that progress in large established scientific fields may be trapped in the existing canon. That is fairly stern stuff.

It is important to note there are at least three major objectives relating to science. The first is developing methods to gain information, or, if you prefer, developing new experimental or observational techniques. The second is using those techniques to record more facts. The more scientists there are, the better these objectives are met, and over this period we have most certainly been successful at both. The rapid provision of new vaccines for SARS-CoV-2 shows that when pushed, we find ways to do it. When I started my career, a computer was a very large, clunky machine that occupied a room, was incredibly slow, and had internal memory measured in bytes. Now we have memory that stores terabytes in something you can hold in your hand. So yes, we have learned how to do it, and we have acquired a huge amount of knowledge. There is a glut of facts available.

The third objective is to analyse those facts and derive theories so we can understand nature, and do not have to examine that mountain of data for any reason other than to verify that we are on the right track. That is where little has happened.

As the PNAS paper points out, policy reflects the “more is better” approach. Rewards are for the number of articles, with citations taken to reflect their quality. The number of publications is easily counted, but citations are more problematic. To get the numbers up, people carry out the work most likely to reach a fast result. The citations that accumulate are the ones most easily found, which means papers that get a good start gather citations like crazy. There are also “citation games”: you cite mine, I’ll cite yours. Such citations may add nothing in terms of science or logic, but they do add to career prospects.

What happens when a paper is published? As the PNAS paper says, “cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars”. If a new paper does not fit the existing dynamic, it will be ignored. If the young researcher wants to advance, he or she must avoid trying to rock the boat. You may feel that the authors of this are overplaying a non-problem. Not so. One example shows how the scientific hierarchy thinks. One of the two major advances in theoretical physics in the twentieth century was quantum mechanics. Basically, all our advanced electronic technology depends on that theory, and in turn the theory is based on one equation published by Erwin Schrödinger. This equation is effectively a statement that energy is conserved, and that the energy is determined by a wave function ψ. It is too much to go into here, but the immediate consequence was the problem: what exactly does ψ represent?
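For reference, the equation in question, in its standard time-dependent textbook form, with Ĥ the energy (Hamiltonian) operator acting on the wave function ψ:

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi,
\qquad
\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})
```

The Hamiltonian is kinetic plus potential energy, which is the sense in which the equation is a statement of energy conservation.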

Louis de Broglie was the first to propose that quantum motion was represented by a wave, and he came up with a different equation, which stated that the product of the momentum and the wavelength was Planck’s constant, the quantum of action. De Broglie then proposed that ψ was a physical wave, which he called the pilot wave. This was promptly ignored in favour of a far more complicated mathematical procedure that we can ignore for the present. Then, in the early 1950s, David Bohm more or less came up with the same idea as de Broglie, which was quite different from the standard paradigm. So how was that received? I found a 1953 quote from J. R. Oppenheimer: “We consider it juvenile deviationism … we don’t waste our time … [by] actually read[ing] the paper. If we cannot disprove Bohm, then we must agree to ignore him.” So much for rational analysis.
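De Broglie's relation mentioned above, stating that the product of momentum and wavelength is Planck's constant:

```latex
p\lambda = h, \qquad \text{equivalently} \qquad \lambda = \frac{h}{p}
```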

The standard theory states that if an electron is fired at two slits it goes through BOTH of them and then gives an interference pattern. The pilot wave says the electron has a trajectory and goes through one slit only, and while it forms the same interference pattern, an electron going through the left slit never ends up in the right-hand side of the pattern. Observations have shown this to be correct (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 332: 1170–1173). Does that change anyone’s mind? Actually, no. The pilot wave is totally ignored, except for the odd character like me, although my version is a little different (called a guidance wave) and is ignored even more.

Unexpected Astronomical Discoveries

This week, three unexpected astronomical discoveries. The first relates to white dwarfs. A star like our sun is argued to eventually run out of hydrogen, at which point its core collapses somewhat and it starts to burn helium, which it converts to carbon and oxygen. This is a much more energetic process than burning hydrogen to helium, so although the core contracts, the star itself expands and becomes a red giant. When it runs out of that, it has two choices. If it is big enough, the core contracts further and it burns the carbon and oxygen, rather rapidly, and we get a supernova. If it does not have enough mass, it tends to shed its outer matter and the rest collapses to a white dwarf, which glows mainly due to residual heat. It is extremely dense: if it had the mass of the sun, it would have a volume roughly that of Earth.
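A quick back-of-the-envelope check on that density claim, using round figures for the Sun's mass and the Earth's radius:

```python
# Rough check: the Sun's mass packed into the Earth's volume.
# Round figures assumed: M_sun ~ 1.99e30 kg, R_earth ~ 6.371e6 m.
import math

M_SUN = 1.989e30          # kg
R_EARTH = 6.371e6         # m

v_earth = (4.0 / 3.0) * math.pi * R_EARTH**3   # m^3
density = M_SUN / v_earth                       # kg/m^3

print(f"Mean density: {density:.2e} kg/m^3")    # ~1.8e9 kg/m^3
# For comparison, water is 1e3 kg/m^3: roughly two million times less dense.
```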

Because it does not run fusion reactions, it cannot generate heat, so it will gradually cool, getting dimmer and dimmer, until eventually it becomes a black dwarf. It gets old and it dies. Or at least that was the theory up until very recently. Notice anything wrong with what I have written above?

The key is “runs out”. The problem is that all these fusion reactions occur in the core, but what is going on outside it? It takes light formed in the core about 100,000 years to get to the surface. Strictly speaking, that is a calculation, because nobody has gone to the core of a star to measure it, but the point stands. It takes that long because the light keeps running into atoms on the way out, getting absorbed and re-emitted. But if light runs into that many obstacles getting out, why would all the hydrogen work its way down to the core? Hydrogen is light, and it would prefer to stay right where it is. So even when a star goes supernova, there is still hydrogen in it. Similarly, when a red giant sheds its outer matter and collapses, it does not necessarily shed all its hydrogen.
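The escape-time figure comes from a random-walk argument, which can be sketched in a few lines. The mean free path used here (1 mm) is an illustrative assumption; real values vary hugely with depth, which is why quoted escape times range from thousands to hundreds of thousands of years.

```python
# Order-of-magnitude sketch of the photon "random walk" out of the Sun.
# A photon takes N ~ (R/l)^2 steps of mean free path l to diffuse a radius R,
# so the escape time is roughly t ~ R^2 / (l * c).
R_SUN = 6.96e8      # m, solar radius
C = 3.0e8           # m/s, speed of light
MFP = 1.0e-3        # m, ASSUMED average mean free path (illustrative)

t_seconds = R_SUN**2 / (MFP * C)
t_years = t_seconds / 3.15e7
print(f"Diffusion time: ~{t_years:.0f} years")   # ~50,000 years
```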

The relevance? The Hubble Space Telescope has made another discovery: it has found white dwarfs burning hydrogen on their surfaces. A slightly different version of “forever young”. They need not run out at all, because interstellar space, and even intergalactic space, still holds vast masses of hydrogen that, while thinly dispersed, can still be gravitationally acquired. The surface of the dwarf, having such mass and so little size, has an intense gravity to make up for the lack of exterior pressure. It would be interesting to know the mechanism of the fusion. I suspect it mainly involves the CNO cycle. What happens here is that protons (hydrogen nuclei) enter, one at a time, a nucleus that starts out as ordinary carbon-12. Each capture makes the element with one additional proton and gives off a gamma photon, and two of the intermediates decay, emitting a positron and a neutrino (which is how oxygen-15 becomes nitrogen-15). When nitrogen-15 absorbs a further proton it spits out helium-4 and returns to carbon-12. The gamma spectrum (if it is there) should give us a clue.
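The cycle just described can be written out step by step and checked for consistency. This is only a bookkeeping sketch of the standard CNO-I chain, tracking each nucleus as (protons Z, mass number A):

```python
# A sketch of the CNO cycle described above: four protons go in, one
# helium-4 (plus two positrons and two neutrinos) comes out, and the
# carbon-12 is restored to start again.
steps = [
    ("C-12 + p -> N-13 + gamma", (6, 12), (7, 13)),
    ("N-13 -> C-13 + e+ + nu",   (7, 13), (6, 13)),
    ("C-13 + p -> N-14 + gamma", (6, 13), (7, 14)),
    ("N-14 + p -> O-15 + gamma", (7, 14), (8, 15)),
    ("O-15 -> N-15 + e+ + nu",   (8, 15), (7, 15)),
    ("N-15 + p -> C-12 + He-4",  (7, 15), (6, 12)),
]

protons_in = 0
nucleus = (6, 12)                       # start as ordinary carbon-12
for label, before, after in steps:
    assert nucleus == before, label     # chain must be consistent
    if "+ p ->" in label:
        protons_in += 1
    nucleus = after

print(protons_in, nucleus)              # 4 protons consumed; back to C-12
```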

The second is the discovery of a new Atira asteroid, which orbits the sun every 115 days and has a semi-major axis of 0.46 AU. The only known object in the solar system with a smaller semi-major axis is Mercury, which orbits the sun in 89 days. Another peculiarity of its orbit is that it can only be seen when it is away from the line of the sun, and as it happens, at those times it is very difficult to see from the Northern Hemisphere. It would be interesting to know its composition. Standard theory has it that all the asteroids we see have been dislodged from the asteroid belt, because the planets would have cleaned out any such bodies left over from the time of the accretion disk. And, of course, we can show that many asteroids were so dislodged, but many does not mean all. The question then is, how reliable is that proposed cleanout? I suspect, not very. The idea is that numerous collisions would give the asteroids an eccentricity that would lead them eventually to collide with a planet, so the fact that they are there means they have to be resupplied, and the asteroid belt is the only source. However, I see no reason why some could not have avoided this fate. In my ebook “Planetary Formation and Biogenesis” I argue that the two possibilities would have clear compositional differences, hence my interest. Of course, getting compositional information is easier said than done.
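The quoted periods and semi-major axes can be cross-checked with Kepler's third law, which in solar units (period in years, semi-major axis in AU) reads P² = a³:

```python
# Kepler's third law in solar units: P^2 = a^3 (P in years, a in AU).
# A quick consistency check on the orbits quoted above.
def period_days(a_au: float) -> float:
    return (a_au ** 1.5) * 365.25

print(f"a = 0.46 AU  -> P ~ {period_days(0.46):.0f} days")   # ~114 days
print(f"a = 0.387 AU -> P ~ {period_days(0.387):.0f} days")  # Mercury: ~88 days
```

Both come out within a day or two of the figures in the text, as they should.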

The third “discovery” is awkward. Two posts ago I wrote that the question of the nature of dark energy might not be a question at all, because dark energy may not exist. Well, no sooner had I posted than someone came up with a claim for a second type of dark energy. The problem is that, if the standard model is correct, the Universe should be expanding 5 – 10% faster than it appears to be doing. (Some would say that indicates the standard model is not quite right, but that is apparently not an option when we can add in a new type of “dark energy”.) This new dark energy only applies to the first 300 million years or so, and if the claim is true, the Universe has suddenly got younger. While it is usually thought to be 13.8 billion years old, this model has it at 12.4 billion years. So while the model has “invented” a new dark energy, it has also lost 1.4 billion years of age. I tend to be suspicious of this, especially when even the proposers are not confident of their findings. I shall try to keep you posted.

Thorium as a Nuclear Fuel

Apparently, China is constructing a molten salt nuclear reactor to be powered by thorium, and it should be undergoing trials about now. Being the first of its kind, it is, naturally, a small reactor that will produce 2 megawatts of thermal energy. This is not much, but when scaling up technology it is important not to make too great a leap, because if something in the engineering has to be corrected it is a lot easier if the unit is smaller. Further, while smaller is cheaper, it is also more likely to create fluctuations, especially in temperature, but when the unit is smaller these are far easier to control. The problem with a very large reactor is that if something goes wrong it takes a long time to find out, and it then becomes increasingly difficult to do anything about it.

Thorium is a weakly radioactive metal with little current use. It occurs naturally as thorium-232, which cannot undergo fission. However, in a reactor it absorbs neutrons to form thorium-233, which has a half-life of 22 minutes and β-decays to protactinium-233. That has a half-life of 27 days, and in turn β-decays to uranium-233, which can undergo fission. Uranium-233 has a half-life of 160,000 years, so weapons could be made and stored.
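The half-lives quoted above set the timescales of the chain, and the standard decay law N(t) = N₀ · 2^(−t/t½) puts rough numbers on them (the 6-hour and 99% figures are just illustrative choices):

```python
# Half-life arithmetic for the decay chain described above:
# remaining fraction after time t is 2^(-t / half_life).
import math

def remaining_fraction(t: float, half_life: float) -> float:
    return 2.0 ** (-t / half_life)

# Thorium-233 (22-minute half-life, in minutes): essentially gone within hours.
print(f"Th-233 left after 6 h: {remaining_fraction(6 * 60, 22):.1e}")  # ~1e-5

# Protactinium-233 (27-day half-life): time until 99% has become U-233.
t99 = 27 * math.log2(100)
print(f"99% converted after ~{t99:.0f} days")                          # ~179 days
```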

Unfortunately, 1.6 tonnes of thorium exposed to neutrons, given appropriate chemical processing, is sufficient to make 8 kg of uranium-233, and that is enough to produce a weapon. So thorium itself is not necessarily a fuel free of weapons potential. However, to separate uranium-233 in a form suitable for a bomb, a major chemical plant is needed, and the separation has to be done remotely, because contamination with uranium-232 is apparently possible, and its decay products include a powerful gamma emitter. Further, to make bomb material, the process has to be aimed directly at that goal. The reason is that the first step is to separate the protactinium-233 from the thorium, and because of the short half-life, only a small amount of the thorium has been converted at any given time. Because a power station will be operating more or less continuously, it should not be practical to use it to make fissile material for bombs.

The idea of a molten salt reactor is that the fissile material is dissolved in a liquid salt in the reactor core. The liquid salt also takes away the heat and, when the salt is cycled through heat exchangers, converts water to steam, and electricity is obtained in the same way as in any other thermal station. Indeed, China says it intends to keep using its coal-fired generators by taking away the furnaces and replacing them with molten salt reactors. Much of the infrastructure would remain. Further, compared with the usual nuclear power stations, molten salt reactors operate at a higher temperature, which means electricity can be generated more efficiently.

One advantage of a molten salt reactor is that it operates at lower pressures, which greatly reduces the potential for explosions. Further, because the fuel is dissolved in the salt, you cannot get a meltdown. That does not mean there cannot be problems, but they should be much easier to manage. The great advantage of the molten salt reactor is that it burns its own reaction products, and an advantage of a thorium reactor is that most of the fission products have shorter half-lives. Since each fission produces about 2.5 neutrons, a molten salt reactor also burns the larger isotopes that might otherwise be a problem, such as those of neptunium or plutonium formed from further neutron capture. Accordingly, the waste products do not pose such a problem.

The reason we don’t go straight ahead and make lots of such reactors is that there is a lot of development work required. A typical molten salt mix might include lithium fluoride, beryllium fluoride, thorium tetrafluoride, and some uranium tetrafluoride to act as a starter. Now, suppose the thorium or uranium splits and produces, say, a strontium atom and a xenon atom. At this point two fluorine atoms are left over, and fluorine is an extraordinarily corrosive gas. As it happens, xenon is not totally unreactive and will react with fluorine, but so will the interior of the reactor. Whatever happens in there, it is critical that pumps, etc., keep working. Such problems can be solved, but it takes operating time to be sure they are solved. Let’s hope they are successful.

The Universe is Shrinking

Dark energy is one of the mysteries of modern science. It is supposed to amount to about 68% of the Universe, yet we have no idea what it is. Its discovery led to Nobel prizes, yet it is now considered possible that it does not even exist. To add or subtract 68% of the Universe seems a little excessive.

One of the early papers (Astrophys. J. 517: 565–586) supported the concept. The authors assumed that type 1A supernovae always give out the same amount of light, so by measuring the intensity of that light and comparing it with the red shift, which indicates how fast the source is receding, they could assess whether the rate of expansion of the universe had been even over time. The standard theory at the time was that it had been, expanding at a rate given by the Hubble constant (named after Edwin Hubble, who first proposed it). They examined 42 type 1A supernovae with red shifts between 0.18 and 0.83, and compared their results on a graph with the line drawn using the Hubble constant, which is what you expect with zero acceleration, i.e. uniform expansion. Their results at a distance were uniformly above the line, and while there were significant error bars, because instruments were being operated at their extremes, the result looked unambiguous. The far distant supernovae were receding faster than expected from the nearer ones, and that could only arise if the rate of expansion were accelerating.

For me, there was one fly in the ointment, so to speak. The value of the Hubble constant they used was 63 km/s/Mpc. The modern value is more like 68 or 72 (there are two values, depending on how you measure), but both are somewhat larger. If you have the speed wrong when you predict how far light has travelled, then the further away the object is, the bigger the error, which makes it look as if the expansion has speeded up.
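That growing error can be illustrated with the low-redshift approximation d ≈ cz/H₀. This is only an illustration of the point, not how supernova cosmology fits are actually done, and the redshift values are arbitrary examples:

```python
# Illustration: with d ~ c*z / H0, using H0 = 63 km/s/Mpc when the true
# value is ~70 overestimates every distance by a fixed ~11%, so the
# absolute discrepancy grows with distance.
C_KM_S = 299_792.458   # speed of light, km/s

def distance_mpc(z: float, h0: float) -> float:
    # valid only for small z; illustration only
    return C_KM_S * z / h0

for z in (0.05, 0.2, 0.5):
    d63 = distance_mpc(z, 63.0)
    d70 = distance_mpc(z, 70.0)
    print(f"z={z}: {d63:.0f} Mpc (H0=63) vs {d70:.0f} Mpc (H0=70), "
          f"difference {d63 - d70:.0f} Mpc")
```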

Over the last few years there have been questions as to exactly how accurate this determination of acceleration really is. There has been a suggestion (arXiv:1912.04903) that the luminosity of these supernovae has evolved as the Universe ages, which would mean that measuring distance this way leads to overestimation. Different work (Milne et al. 2015. Astrophys. J. 803: 20) showed that there are at least two classes of 1A supernovae, blue and red, with different ejecta velocities, and if the usual techniques are used the light intensity of the red ones will be underestimated, which makes them seem further away than they are.

My personal view is that there could be a further problem. A type 1A occurs when a large star comes close to another star and begins stripping it of its mass until it gets big enough to ignite the supernova. That is why they are believed to have the same brightness: they ignite at the same mass, so the conditions, and hence the brightness, should be the same. However, this is not necessarily the case, because the outer layer, which generates the light we see, comes from the non-exploding star, and will absorb and re-emit energy from the explosion. Hydrogen and helium are poor radiators, but they will absorb energy. The brightest light might be expected to come from the heavier elements, and the amount of those increases as the Universe ages and atoms are recycled. That too might make the more distant supernovae appear further away than they are, which in turn suggests the Universe is accelerating its expansion when it isn’t.

Now, to throw the spanner further into the works, Subir Sarkar has added his voice. He is unusual in that he is both an experimentalist and a theoretician, and he has noted that the 1A supernovae, while taken to be “standard candles”, do not all emit the same amount of light; according to Sarkar, they vary by up to a factor of ten. Further, the fundamental data was previously unavailable, but in 2015 it became public. He did a statistical analysis and found that the data supported a cosmic acceleration, but only with a statistical significance of three standard deviations, which, according to him, “is not worth getting out of bed for”.
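To see why three standard deviations is unimpressive, compare its probability with the five-sigma level usually demanded for a discovery in physics. The two-sided p-value for an n-sigma result is erfc(n/√2):

```python
# What "three standard deviations" means as a probability, versus the
# five-sigma convention usually demanded for a discovery in physics.
import math

def p_value(n_sigma: float) -> float:
    # two-sided tail probability of a normal distribution
    return math.erfc(n_sigma / math.sqrt(2.0))

print(f"3 sigma: p ~ {p_value(3):.4f}")   # ~0.0027, about 1 in 370
print(f"5 sigma: p ~ {p_value(5):.1e}")   # ~5.7e-7, about 1 in 1.7 million
```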

There is a further problem. Apparently the Milky Way is heading off in some direction at 600 km/s, this rather peculiar flow extends out to about a billion light years, and unfortunately most of the supernovae studied so far are in this region. This drops the statistical significance for cosmic acceleration to two standard deviations. He then accuses the previous supporters of cosmic acceleration of confirmation bias: the initial workers chose an unfortunate direction to examine, and the subsequent ones “looked under the same lamppost”.

So, a little under 70% of what some claim is out there might not be. That is ugly. Worse, about 27% is supposed to be dark matter. Suppose that did not exist either, and the only reason we think it is there is that our understanding of gravity is wrong on a large scale? The Universe then shrinks to about 5% of what it was. That must be something of a record for the size of a loss.

The Fusion Energy Dream

One of the most attractive options for our energy future is nuclear fusion, where we turn hydrogen into helium. Nuclear fusion works, even on Earth, as we can see when a hydrogen bomb goes off. The available energy is huge. Nuclear fusion will solve our energy crisis, we have been told, and it will be available in forty years. That is what we were told about sixty years ago, and you will usually hear the same forty-year prediction now!

Nuclear fusion, you will be told, is what powers the sun; however, we won’t be doing what the sun does any time soon. You may guess there is a problem in that the sun is not a spectacular hydrogen bomb. What the sun does is squeeze hydrogen atoms together to make the lightest isotope of helium, i.e. 2He. This is extremely unstable, and the electric forces push the protons apart in an extremely short time; a billionth of a billionth of a second might be the longest it can last, and probably not even that long. However, if it can capture an electron, or eject a positron, before it decays, it turns into deuterium, which is a proton and a neutron. (The sun also uses the carbon-nitrogen-oxygen cycle to convert hydrogen to helium.) The difficult thing that a star does, and that we will not do any time soon, is to make neutrons (as opposed to freeing them).

The deuterium can then fuse to make helium, usually first with another proton to make 3He, and then maybe with another 3He to make 4He. Each fusion releases a huge amount of energy, and the star works because the immense pressure at the centre allows the occasional making of deuterium in any small volume. You may be surprised by the word “occasional”; the reason the sun gives off so much energy is simply that it is so big. Occasional is good. The huge amount of energy released relieves some of the pressure caused by gravity, and this allows the star to live a very long time. At the end of a sufficiently large star’s life, gravity compresses the material enough that carbon and oxygen atoms fuse, and this gives off so much energy that the increase in pressure causes the reaction to go out of control, and you have a supernova. A bang is not good.

The Lawrence Livermore National Laboratory has been working on fusion, and has claimed a breakthrough. Their process involves firing 192 laser beams onto a hollow target about 1 cm high and a few millimetres in diameter, called a hohlraum. This has an inner lining of gold and contains helium gas, while at the centre is a tiny capsule filled with deuterium and tritium, the hydrogen isotopes with one or two neutrons in addition to the required proton. The lasers heat the hohlraum so that the gold coating gives off a flux of X-rays. The X-rays heat the capsule, causing material on the outside to fly off at speeds of hundreds of kilometres per second. Conservation of momentum leads to the implosion of the capsule, which gives, hopefully, high enough temperatures and pressures to fuse the hydrogen isotopes.

So what could go wrong? The problem is the symmetry of the pressure. Suppose you had a spherical bag of gel that was mainly water, say the size of a football, and you wanted to squeeze all the water out to get a sphere containing only the gelling solid. The difficulty is that the pressure of a fluid inside a container is equal in all directions (leaving aside the effects of gravity). If you squeeze harder in one place than another, the pressure relays the extra force per unit area to a place where the external pressure is weaker, and your ball expands in that direction. You are fighting jelly! Obviously, the physics of such fluids gets very complicated. Everyone knows what is required, but nobody knows how to meet the requirement. When the pressure is unequal in different places, the effects are predictably undesirable, but stopping it from being unequal is not so easy.

The first progress was apparently to make the laser pulses more energetic at the beginning. The net result was up to 17 kJ of fusion energy per pulse, an improvement on the original 10 kJ. The latest success produced 1.3 MJ, which was equivalent to 10 quadrillion watts of fusion power for about 100 trillionths of a second. An energy output of 1.3 MJ from such a small vessel may seem a genuine achievement, and it is, but there is further to go. The problem is that the energy input to the lasers was 1.9 MJ per pulse. That energy is not lost: it is still there, so the actual output of a pulse would be 3.2 MJ. But the output, which includes the kinetic energy of the neutrons etc. produced, is all in the form of heat, whereas the input energy was electricity, and we have not included the losses when converting electricity to laser output. Converting the heat back to electricity loses quite a bit, depending on how it is done; if you use the heat to boil water, the losses are usually around 65%. In my novels I suggest using the magnetohydrodynamic effect, which gets electricity from the high velocity of the particles in the plasma. This has been made to work on plasmas made by burning fossil fuels, doubling the efficiency of the usual approach, but controlling plasmas from nuclear fusion would be far more difficult. Again, very easy to do in theory; very much less so in practice. However, the challenge is there. If we can get sustained ignition, as opposed to such a short pulse, the amount of energy available is huge.
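The energy bookkeeping in that paragraph is worth laying out explicitly, taking the 65% steam-conversion loss mentioned above (and ignoring, as the text notes, the electricity consumed to drive the lasers in the first place):

```python
# Energy bookkeeping for the pulse described above.
laser_in = 1.9            # MJ of laser energy delivered per pulse
fusion_out = 1.3          # MJ of fusion energy produced

gain = fusion_out / laser_in             # "target gain", still below 1
total_heat = laser_in + fusion_out       # the input energy is not lost
electricity = total_heat * 0.35          # ~65% lost converting heat to power

print(f"Target gain: {gain:.2f}")                 # ~0.68
print(f"Heat out: {total_heat:.1f} MJ")           # 3.2 MJ
print(f"Electricity back: {electricity:.2f} MJ")  # ~1.1 MJ vs 1.9 MJ spent
```

Even counting the recycled input energy, less electricity comes back than the 1.9 MJ of laser light put in, which is why "further to go" is an understatement.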

Sustained fusion means the energy emitted from the reaction is sufficient to keep it going, with fresh material injected, as opposed to having to set up containers within containers at the dead centre of a multiple laser pulse. The plasma, at over 100,000,000 degrees Centigrade, should be sufficient to keep the fusion going. Of course that involves even more problems: how to contain a plasma at that temperature; how to get the fuel into the reaction without melting the feed tubes or dissipating the hydrogen; how to get the energy out in a usable form; how to cool the plasma sufficiently? Many questions; few answers.

Climate Change: Are We Baked?

The official position from the IPCC’s latest report is that the problem of climate change is getting worse. The fires and record high temperatures in the western United States, British Columbia, Greece and Turkey may be portents of what is coming. There have been terrible floods in Germany, and New Zealand has had its bad time with floods as well. While Germany was being flooded, the town of Westport was inundated, and the Buller river had flows about 30% higher than its previous record flood. There is the usual hand-wringing from politicians. Unfortunately, at least two serious threats have been ignored.

The first is the release of additional methane. Methane is a greenhouse gas that is about 35 times more efficient at retaining heat than carbon dioxide. The reason is that absorption of infrared depends on the change of dipole moment during the vibration. CO2 is a linear molecule with three vibrational modes. In one, the two oxygen atoms move so that the two bond dipole changes cancel each other, giving no net change of dipole moment. In another, it is as if the oxygen atoms are stationary and the carbon atom wobbles between them; the two dipoles now do not cancel, so that mode absorbs, though the partial cancellation reduces the strength. The third involves the molecule bending, but the very strong bonds mean the bend does not move that much, so again the absorption is weak. That is badly oversimplified, but I hope you get the picture.

Methane has four vibrations, and rather than describe them, try this link: http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html

Worse, its vibrations are in regions of the spectrum totally different from those of carbon dioxide, which means it traps a different band of radiation that otherwise could escape directly to space.

This summer, average temperatures in parts of Siberia were 6 degrees Centigrade above the 1980 – 2000 average, and methane is starting to be released from the permafrost. Methane forms a clathrate with ice: under pressure it rearranges the ice structure and inserts itself, but the clathrate decomposes on warming to near the melting point of ice. This methane formed from the anaerobic digestion of plant material and has been trapped by the cold, so if released, methane that would otherwise have been released and destroyed gradually over several million years gets delivered all at once. There are estimated to be about eleven billion tonnes of methane in clathrates that could be subject to decomposition, about the effect of over 35 years of all our carbon dioxide emissions, except that, as I noted, this works in a totally fresh part of the spectrum. So methane is a problem; we all knew that.

What we did not know is that there is a new source, identified in a paper published recently in the Proceedings of the National Academy of Sciences. Significantly increased methane concentrations were found in two areas of northern Siberia: the Taymyr fold belt and the rim of the Siberian Platform. These are limestone formations from the Paleozoic era. In both cases the methane increased significantly during heat waves. The soil there is very thin, so there is very little vegetation to decay, and it was claimed the methane was stored in, and is now being emitted from, fractures in the limestone.

The second major problem concerns the Atlantic Meridional Overturning Circulation (AMOC), also known as the Atlantic conveyor. What it does is take warm water up the east coast of the US, then switch to the Gulf Stream and warm Europe (and provide moisture for the floods). As it loses water it gets increasingly salty, and with its increased density it dives to the ocean floor and flows back towards the equator. Why this is a problem is that the melting northern polar and Greenland ice is providing a flood of fresh water that dilutes the salty water. When the density of the water is insufficient to make it sink, this conveyor will simply stop, and at that point the whole Atlantic circulation as it is now stops. Europe chills, but the ice continues to melt. Because this is a “stopped” circulation, it cannot simply be restarted, because the ocean will go and do something else. So, what to do? The first thing to realise is that simply stopping burning a little coal won’t be enough. If we stopped emitting CO2 now, the northern ice would keep melting at its current rate. All we would do is stop it melting faster.

Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment, which Aristotle performed, was to throw a stone high into the air so that it reached its maximum height directly above; when it dropped, it landed directly underneath, and its path was vertical. Aristotle reasoned that if the Earth were rotating, the stone's motion at that height should carry it eastwards as it fell, but it did not. Aristotle was a clever reasoner, but he was a poor experimenter. He also failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he knew the size of it; thanks to Eratosthenes this was a fairly accurate value. He had reasoned correctly why that was so: matter falls towards the centre. Accordingly, he should also have realised his stone should fall slightly to the south. (He lived in Greece; if he lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was put numbers onto his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don’t believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
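Putting numbers on the experiment shows why Aristotle's null result proved nothing. For the simpler case of a stone dropped from rest from height h, the eastward deflection works out to roughly d = (1/3)·ω·g·t³·cos(latitude); the 20 m drop height and Athens' latitude are illustrative assumptions:

```python
# Putting numbers on Aristotle's experiment (simplified to a straight drop).
# Eastward Coriolis deflection for a drop from rest: d = (1/3) w g t^3 cos(lat).
import math

OMEGA = 7.292e-5            # Earth's rotation rate, rad/s
G = 9.81                    # m/s^2
LAT = math.radians(38.0)    # roughly Athens (assumed)

h = 20.0                                    # m, assumed drop height
t = math.sqrt(2.0 * h / G)                  # fall time, ~2 s
d_east = OMEGA * G * t**3 * math.cos(LAT) / 3.0

print(f"Eastward deflection: {d_east * 1000:.1f} mm")   # about 1.5 mm
```

A deflection of a millimetre or two over a 20 m fall is hopelessly beyond what anyone could notice by eye, which is exactly why the experiment could not settle the question.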

He also reasoned that the Earth could not orbit the sun, and there seemed to be plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it: what you see is the background appearing to move, and the apparent movement is greater the bigger the circle you walk and smaller the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the conclusion is only as good as the truth of the assumption. A further example: Aristotle reasoned that if the Earth was rotating or orbiting the sun, then, because air rises, the Universe must be full of air, and therefore we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.
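Aristarchus’s “bleat” can be put in numbers. Even for the nearest star, the annual parallax (the angle the Earth–Sun distance subtends as seen from the star) is under an arcsecond, far beyond naked-eye astronomy. A quick sketch, taking Alpha Centauri’s distance as about 1.34 parsecs:

```python
import math

# Annual parallax of the nearest star: the angle subtended by the
# Earth-Sun distance as seen from the star.  Alpha Centauri's distance
# (about 1.34 parsecs) is the assumed figure here.
AU = 1.496e11        # Earth-Sun distance, metres
PARSEC = 3.086e16    # metres per parsec
distance = 1.34 * PARSEC

parallax_rad = AU / distance                         # small-angle approximation
parallax_arcsec = math.degrees(parallax_rad) * 3600  # radians -> arcseconds
print(f"parallax of the nearest star: {parallax_arcsec:.2f} arcsec")
```

Well under one arcsecond, which is why stellar parallax was not actually measured until the nineteenth century.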

But had he done so, he would have had a further problem, because he had shown that the Earth could not orbit the sun through another line of reasoning. As was “well known”, heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn’t, therefore the Earth does not move. Further, you could see the tails of comets; the comets were moving, and the tails seemingly streamed behind them, which appeared to prove the reasoning. Of course it does not, because a comet’s tail always points away from the sun, and hence not behind the motion at least half the time. This was a simple thing to check, and far easier to check than the other failed assumptions, but who bothers to check things that are “well known”? This shows a further aspect: for a true proposition, everything relevant to it must be in accord with it. This is the basis of Popper’s falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going more slowly. That it is slower is obvious: while the Earth takes a year to complete an orbit, Mars takes over two years to complete a cycle. So we had two theories that both give the correct answer, but one has two assignable constants to explain each observation, while the other relies on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook “Athene’s Prophecy”, where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, “Legatus Legionis”) is to prove that the Earth moves, and we can do this with the tides. The part of the sea closest to the external source of gravity has its water fall sideways a little towards it; the part furthest away experiences more centrifugal force, which tries to throw the water outwards. The ancients may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook I had my Roman accompany the British invasion, and hence he had to study the tides to know when to sail; there you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it while the Earth stays stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
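The two-tides argument can also be put in numbers. A minimal sketch, using rough textbook values for the Moon’s mass and the Earth–Moon distance, compares the Moon’s pull at the near side, the centre, and the far side of the Earth; relative to the centre, the near side is pulled ahead and the far side left behind by almost equal amounts, giving the two bulges:

```python
# Why a "Moon pulls the water, Earth stays put" picture fails: the
# differential (tidal) pull gives TWO nearly equal bulges, hence two
# tides per day.  Figures below are rough textbook values.
G = 6.674e-11       # gravitational constant, SI units
M_MOON = 7.35e22    # mass of the Moon, kg
D = 3.84e8          # Earth-Moon distance, metres
R = 6.371e6         # Earth's radius, metres

a_near = G * M_MOON / (D - R) ** 2    # lunar pull at the near side
a_centre = G * M_MOON / D ** 2        # lunar pull at Earth's centre
a_far = G * M_MOON / (D + R) ** 2     # lunar pull at the far side

near_excess = a_near - a_centre       # bulge toward the Moon
far_deficit = a_centre - a_far        # bulge away from the Moon
print(f"near-side excess: {near_excess:.2e} m/s^2")
print(f"far-side deficit: {far_deficit:.2e} m/s^2")
```

Two bulges of almost exactly the same size, so two tides per day, just as observed.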

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity says they should. The outer parts of galaxies orbit the centre faster than they ought, and galaxy clusters do not have the expected dynamics. Worse, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we simply cannot see. There are two possible explanations. One is that there is additional matter we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each, although modified gravity does have the advantage that one explanation of its difficulties is that we might simply not understand how gravity should be modified.
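As a rough illustration of the anomaly, here is what Newtonian dynamics predicts for the orbital speed around a central mass: v = sqrt(G·M/r), so speed should fall as one over the square root of the radius, whereas observed galactic rotation curves stay roughly flat out to large radii. The visible-mass figure below is an assumed order of magnitude, not a measured value:

```python
import math

# Newtonian prediction for circular orbital speed around a central mass:
# v = sqrt(G * M / r), so v should fall as 1/sqrt(r).  Observed galactic
# rotation curves instead stay roughly flat at large radii.
G = 6.674e-11        # gravitational constant, SI units
M_VISIBLE = 2e41     # assumed luminous mass of a galaxy, kg (order of magnitude)
KPC = 3.086e19       # one kiloparsec in metres

speeds = {}
for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M_VISIBLE / (r_kpc * KPC))
    speeds[r_kpc] = v
    print(f"r = {r_kpc:>2} kpc: predicted v = {v / 1000:.0f} km/s")
```

Quadrupling the radius should halve the speed; real galaxies refuse to slow down like that, which is the whole puzzle.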

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. So far, however, all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies, and it is rather hard to do an experiment on a galaxy, so that is not exactly helpful. What physicists have done is to make a guess as to what it will be and, not surprisingly, make the guess in a form they can do something about if they are correct. What we do know is that it must have mass, because it exerts a gravitational effect, and that it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will not be dark matter stars. That is not much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, the target can get rid of that energy by giving off a photon. So one such detector had a huge tank containing 370 kg of liquid xenon. It was buried deep underground, and in theory massive particles of dark matter could be separated from occasional neutron events because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet they deposit energy as they scatter, and eventually annihilate within the planet. This additional energy will be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 Jupiter masses; temperatures above what is otherwise expected can then be attributed to dark matter. Further, since dark matter density should be higher near the galactic centre, and collisional velocities higher, a difference in surface temperature between comparable planets may signal the detection of dark matter.

Can you see problems? To me, the flaw lies in “what is expected?” One problem is getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat, and once a planet gets to about 13 Jupiter masses it starts fusing deuterium. Another problem lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if it accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), that suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. No doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, since so much has failed by assuming large particles, I would suggest one plausible approach is to look for small ones. Are there any unexplained momenta in collisions at the Large Hadron Collider? What most people overlook is that about 99% of the data generated there is discarded (because there is so much of it), but would it hurt to spend a little effort examining the fine detail where you do not expect to see much?

How Can We Exist?

One of the more annoying questions in physics is: why are we here? Bear with me for a minute, because this is a real question. The Universe is supposed to have started with what Fred Hoyle called “The Big Bang”. Fred was being derisory, but the name stuck. What happened is that an extremely intense burst of energy began expanding and, as it did, the energy perforce became less dense. As that happened, elementary particles condensed out. On an extremely small scale, the same thing happens in high-energy collisions, such as in the Large Hadron Collider. So we are reasonably convinced we know what happened up to this point, but there is a very big fly in the ointment. When such particles condense out, we get equal amounts of matter and what we call antimatter. (In principle, we should get dark matter too, but since we do not know what that is, I shall leave it.)

Antimatter is, as you might guess, the opposite of matter. The most obvious example is the positron, which is exactly the same as the electron except that it has positive electric charge, so a positron and an electron attract each other. In principle, if they were to hit each other they would release an infinite amount of energy, but nature hates the infinities that come out of our equations, so when they get close enough they annihilate each other and you get two gamma-ray photons that leave in opposite directions to conserve momentum. That is more or less what happens whenever antimatter meets matter: they annihilate each other, which is why, when we make antimatter in colliders, if we want to collect it we have to do so very carefully, with magnetic traps and in a vacuum.
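The energy bookkeeping of that annihilation is simple enough to check. Assuming the electron and positron are essentially at rest, each photon carries away one electron rest mass, E = m_e c², which is the familiar 511 keV gamma line:

```python
# Energy carried by each gamma photon when an electron and positron
# annihilate essentially at rest: E = m_e * c^2 per photon.
M_E = 9.109e-31     # electron mass, kg
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electron volt

photon_energy_keV = M_E * C ** 2 / EV / 1000
print(f"each photon carries about {photon_energy_keV:.0f} keV")
```

That 511 keV line is exactly what astronomers look for as a signature of positron annihilation.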

So now we get to the problem of why we are here: with all that antimatter made in equal proportions to matter, why do we have so much matter? As it happens, the symmetry is violated very slightly in kaon decay, but this is probably not particularly helpful because the effect is too slight. In the previous post on muon decay I mentioned that that could be a clue that there might be physics beyond the Standard Model to be unraveled. Right now, the fact that there is so much matter in the Universe should be a far stronger clue that something is wrong with the Standard Model. 

Or is it? One observation that throws that into doubt was published in Physical Review D 103, 083016 in April this year. But before coming to that, some background. A little over ten years ago, colliding heavy ions made a small amount of antihelium-3, and a little later, antihelium-4. The antihelium has two antiprotons and one or two antineutrons. The problem in making it is to get enough antiprotons and antineutrons close enough together. To give some idea of the difficulty, a billion collisions of gold ions with energies of two hundred billion and sixty-two billion electron volts produced 18 nuclei of antihelium-4, with masses of 3.73 billion electron volts. In such a collision, the energy corresponds to a temperature of over 250,000 times that of the sun’s core.

Such antihelium can be detected through gamma-ray frequencies when the nuclei annihilate on striking matter, and apparently also through the Alpha Magnetic Spectrometer on the International Space Station, which tracks cosmic rays. The important point is that antihelium-4 behaves exactly like an alpha particle, except that, because the antiprotons have negative charge, its trajectory bends in the opposite direction to that of ordinary nuclei. Such antinuclei can be made by the energies of cosmic rays hitting something; however, it has been calculated that the amount of antihelium-3 detected so far is 50 times too great to be explained by cosmic rays, and the amount of antihelium-4 is 100,000 times too much.

How can this be? The simple answer is that the antihelium is being made by antistars. Gamma-ray observations indicate 5787 sources, and it has been proposed that at least fourteen of these are antistars. If we look at the oldest stars near the centre of the galaxy, estimates suggest up to a fifth of the stars there could be antistars, possibly with antiplanets. If there were people on these, giving them a hug would be outright disastrous for each of you.

Of course, caution is required here. It is always possible that this antihelium was made in some more mundane way that as yet we do not understand. On the other hand, if there are antistars, one huge problem is solved automatically, even if a bigger one is created: how did the matter and antimatter separate? As is often the case in science, solving one problem creates even bigger ones. However, real antistars would alter our view of the universe, and as long as the antimatter keeps a good distance, we can accept them.

Much Ado About Muons

You may or may not have heard that the Standard Model, which explains “all of particle physics”, is in trouble and “new physics” may be around the corner. All of this arises from a troublesome result from the muon, a particle that is very similar to an electron except that it is about 207 times more massive and has a mean lifetime of 2.2 microseconds. If you think that is not very important for your current lifestyle (and it isn’t), wait, there’s more. Like the electron, it has a charge of -1 and a spin of ½, which means it acts like a small magnet. If the particle is placed in a strong magnetic field, the direction of the spin wobbles (technically, precesses), and the strength of this interaction is described by a number called the g factor, which for the classical situation is g = 2. Needless to say, in the quantum world that is wrong. For the electron it is roughly 2.002 319 304 362, the numbers stopping where uncertainty starts. If nothing else, this shows the remarkable precision achieved by experimental physicists. Why is it not 2? The basic reason is that the particle interacts with the vacuum, which is not quite “nothing”. Quantum electrodynamics, which is part of the Standard Model, pins this number down precisely, and the result is considered the most accurate theoretical calculation ever, or the greatest agreement between calculation and observation. All was well, until this wretched muon misbehaved.

Now, the standard model predicts the vacuum comprises a “quantum foam” of virtual particles popping in and out of existence, and these short-lived particles affect the g-factor, causing the muon’s wobble to speed up or slow down very slightly, which in turn leads to what is called an “anomalous magnetic moment”. The standard model should calculate these to the same agreement as with the electron, and the calculations give:

  • g-factor: 2.00233183620
  • anomalous magnetic moment: 0.00116591810

The experimental values announced by Fermilab and Brookhaven are:

  • g-factor: 2.00233184122(82)
  • anomalous magnetic moment: 0.00116592061(41)

The brackets indicate the uncertainty. Notice a difference? Would you say it is striking? Apparently there is only a one in 40,000 chance that it is a statistical fluke. Nevertheless, they will keep the experiment running at Fermilab for another two years to firm it up. That is persistence, if nothing else.
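For the curious, the “one in 40,000” can be roughly reconstructed from the two anomalous-moment values. The experimental uncertainty (41 in the last two digits, i.e. 41 × 10⁻¹¹) is quoted above; the theoretical uncertainty used here (43 × 10⁻¹¹) is my assumption, not a figure from the announcement:

```python
import math

# Tension between the calculated and measured anomalous magnetic moments,
# combining experimental and theoretical uncertainties in quadrature.
# sigma_theory below is an assumed value, not quoted in the text.
a_theory = 0.00116591810
a_exp = 0.00116592061
sigma_exp = 41e-11
sigma_theory = 43e-11    # assumed theoretical uncertainty

diff = a_exp - a_theory
tension = diff / math.hypot(sigma_exp, sigma_theory)   # in standard deviations
p = math.erfc(tension / math.sqrt(2))                  # two-sided p-value
print(f"tension: {tension:.1f} sigma, roughly 1 chance in {1 / p:,.0f}")
```

With those uncertainties the discrepancy comes out at a little over four standard deviations, which is where a figure of roughly one in 40,000 comes from.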

This result is what has excited a lot of physicists, because it means the calculation of how this particle interacts with the vacuum has underestimated the actual effect for the muon. That suggests physics beyond the standard model and, in particular, that a new particle may be the cause of the additional effect. Of course, there has to be a fly in the ointment. One rather fearsome calculation claims to be a lot closer to the observed value. To me the real problem is how the same theory can come up with two different answers when there is no arithmetical mistake.

Anyway, if the second calculation is right, is the problem gone? Not necessarily. At the Large Hadron Collider they have looked at B meson decay, which can produce electrons and positrons, or muons and antimuons. According to the standard model these two particles are identical other than in mass, which means the rates of production should be identical, but they are not quite. Again, it appears we are looking at small deviations. The problem then is that hypothetical particles that might explain one experiment fail for the other. Worse, the calculations are fearsome, and can take years. The standard model has 19 parameters that have to be obtained from experiment, so the errors can mount up, and if you wish to give the three neutrinos mass, in come another eight parameters. If we introduce yet another particle, in comes at least one more parameter, and probably more. Which raises the question: since adding a new assignable parameter will always answer one problem, how do we know we are even on the right track?

All of which raises the question: is the standard model, which is part of quantum field theory, itself too complicated, and perhaps not going along the right path? You might say, how could I possibly question quantum field theory, which gives such agreeable results for the electron’s magnetic moment, admittedly after including a series of interactions? The answer is that it also gives the world’s worst disagreement, over the cosmological constant. When you sum the effects of all those virtual particles over the cosmos, the expansion of the Universe comes out wrong by a factor of about 10^120, that is, 1 followed by 120 zeros. Not exceptionally good agreement. To get the agreement it gets, something must be right; but as I see it, to get such a howling error, something must also be wrong. The problem is, what?