Some Scientific Curiosities

This week I thought I would try to be entertaining, to distract myself and others from what has happened in Ukraine. So to start with, how big is a bacterium? As you might guess, it depends on which one, but I bet you didn’t guess the biggest. According to a recent article in Science Magazine (doi: 10.1126/science.ada1620) a bacterium has been discovered living in Caribbean mangroves that, although it is a single cell, is 2 cm long. You can see it (proposed name, Thiomargarita magnifica) with the naked eye.

More than that, think of the difference between prokaryotes (most bacteria and single-cell microbes) and eukaryotes (most everything else that is bigger). Prokaryotes have free-floating DNA, while eukaryotes package their DNA in a nucleus, put various cell functions into separate vesicles, and can move molecules between the vesicles. But this bacterium includes two membrane sacs, only one of which contains DNA. The other sac accounts for 73% of the total volume and seems to be filled with water. The genome is nearly three times bigger than those of most bacteria.

Now, from Chemistry World. You go to the Moon or Mars, and you need oxygen to breathe. Where do you get it from? One answer is electrolysis, so do you see any problems, assuming you have water and you have electricity? The answer is that it will be up to 11% less efficient. The reason is the lower gravity. Electrolysing water at zero g, such as on the space station, was already known to be less efficient because the gas bubbles have no net force on them. The force arises through different densities generating a weight difference, and the lighter gas rises, but in zero g there is no lighter gas – the bubbles may have different masses, but they all have no weight. So how do they know this effect will apply on Mars or the Moon? They carried out such experiments on board free-fall flights with the help of the European Space Agency. Of course, these free-fall experiments are somewhat brief as the pilot of the aircraft will have this desire not to fly into the Earth.
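To put a number on the bubble argument, the buoyant force scales directly with the local gravity, so for a given bubble size the driving force for detachment is proportionally weaker on Mars or the Moon. Here is a rough sketch of my own (the densities and bubble size are illustrative assumptions; the 11% figure comes from the experiments, not from this formula):

```python
import math

def net_lift(radius_m, rho_liquid, rho_gas, g):
    """Net upward force (buoyancy minus weight) on a spherical bubble, in newtons."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (rho_liquid - rho_gas) * volume * g

RHO_WATER = 1000.0   # kg/m^3, electrolyte approximated as water
RHO_O2 = 1.3         # kg/m^3, rough density of oxygen gas near 1 atm

for body, g in [("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62), ("Free fall", 0.0)]:
    f = net_lift(0.5e-3, RHO_WATER, RHO_O2, g)   # a 0.5 mm bubble
    print(f"{body:9s} g = {g:4.2f} m/s^2   net lift on bubble = {f:.2e} N")
```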

The reason the electrolysis is slower is that gas bubble desorption is hindered. Getting the gas off the electrodes relies on density differences, and hence a force, but in zero gravity there is no such force. One possible solution being considered is a shaking electrolyser. The next thing we shall see is requests for funding to build different sorts of electrolysers. They have considered running such experiments in centrifuges to construct models of what the lower gravity would do, but an alternative might be to have the process itself operating within a centrifuge. It does not need to be a fast-spinning centrifuge, as all you are trying to do is generate the equivalent of 1 g. Also, one suggestion is that people on Mars or the Moon might want to spend a reasonable fraction of their time inside one such large centrifuge, to help keep their bone density up.
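To give a feel for how slow such a centrifuge could be, the spin rate needed to produce 1 g of centripetal acceleration at radius r follows from a = ω²r. A quick sketch (the radii are my own illustrative numbers):

```python
import math

def rpm_for_1g(radius_m, g=9.81):
    """Rotation rate (rpm) that gives centripetal acceleration g at the given radius."""
    omega = math.sqrt(g / radius_m)       # rad/s, from a = omega^2 * r
    return omega * 60.0 / (2.0 * math.pi)

for r in [2.0, 10.0, 50.0]:  # metres
    print(f"radius {r:5.1f} m -> about {rpm_for_1g(r):4.1f} rpm for 1 g")
```

Even at a two-metre radius the required rate is only about 21 rpm, and it falls with the square root of the radius.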

The final oddity comes from Physics World. As you may be aware, according to Einstein’s relativity, time, or more specifically, clocks, run slower as the gravity increases. Apparently this was once tested by taking a clock up a mountain and comparing it with one kept at the base, and General Relativity was shown to predict the correct result. However, now we have improved clocks. Apparently the best atomic clocks are so stable they would be out by less than a second after running for the age of the universe. This precision is astonishing. In 2018 researchers at the US National Institute of Standards and Technology compared two such clocks and found their precision was about 1 part in ten to the power of eighteen. It permits a rather astonishing outcome: it is possible to detect the tiny frequency difference between two such clocks if one is a centimetre higher than the other. This will permit “relativistic geodesy”, which could be used to measure the earth’s shape more accurately, and the nature of the interior, as variations in density would cause minute changes in gravitational potential. Needless to say, there is a catch: the clocks may be very precise but they are not very robust. Taking them outside the lab leads to difficulties, such as the clock simply stopping.
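For those who want the arithmetic, the fractional frequency shift between clocks separated by a height Δh near Earth’s surface is approximately gΔh/c², which is why a part-in-10^18 clock can resolve a centimetre:

```python
g = 9.81          # m/s^2
c = 2.998e8       # m/s
delta_h = 0.01    # 1 cm height difference, in metres

fractional_shift = g * delta_h / c ** 2
print(f"Fractional frequency shift over 1 cm: {fractional_shift:.2e}")
# ~1.1e-18, right at the precision quoted for the 2018 comparison
```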

Now they have done better – using strontium atoms, uncertainty to less than 1 part in ten to the power of twenty! They now claim they can test for quantum gravity. We shall see more in the not too distant future.

Did a Galactic-Scale Collision Lead to Us?

Why do we have a planet that we can walk around on, and generally mess up? As most of us know, the atoms we use, apart from hydrogen, will have originated in a nova or supernova, and some of the planet possibly even from collisions of neutron stars. These powerful events send clouds of dust into gas clouds, but then what? We call it dust, but the particle size is mainly like smoke. Telescopes like the Hubble Space Telescope have photographed huge clouds of gas and dust in space. These can be quite large; thus the Orion molecular cloud complex is hundreds of light years across. These giant clouds can sit there and do very little, or start forming stars. The question then is, what starts it? The hydrogen and helium, which are by far the predominant components (hydrogen has about ten thousand times the mass of anything else except helium), are always colliding with each other, and with dust particles, but they always bounce back because there is no way to lose their kinetic energy. The gas has been around for 13.6 billion years, so why does it collapse suddenly?

To make things slightly more complicated, the cloud does not collapse on itself. Rather, sections collapse to form stars. The section that formed our solar system would probably have been a few thousand astronomical units across (an astronomical unit, AU, is the distance between Earth and the Sun), and this is a trivial fraction of such giant clouds. So what happens is sections collapse, leaving the cloud with “holes”, a little like a Swiss cheese.

For us, about 4.6 billion years ago such a piece of a gigantic gas cloud started to collapse upon itself, which eventually led to the formation of the solar system, and us. Perhaps we should thank whatever caused that collapse. A common explanation is that a nearby supernova sent a shockwave through the gas, and that may well be correct for a specific situation, but there is another source of disruption: galactic collisions. We have observed these elsewhere, and invariably such collisions lead to a burst of star formation. Major galaxies do not collide that often because they are so far away from each other. As an example, in about five billion years, Andromeda will collide with the Milky Way. That may well initiate a lot of formation of new stars, as long as there are plenty of gas and dust clouds left.

However, there are some galactic collisions that are a bit more frequent. There is something called the Sagittarius Dwarf Spheroidal Galaxy, which is approximately a tenth the diameter of the Milky Way. It comprises four main globular clusters and is spiralling around our galaxy on a polar orbit about 50,000 light years from the galactic core, passing through the plane of the Milky Way periodically. It apparently did this about five to six billion years ago, then about two billion years ago, and one billion years ago. Coupled with that, a team of astronomers have argued that star formation in the Milky Way peaked at around 5.7, 1.9 and 1 billion years ago. The argument appears to be that such star formation arose about the same time that the dwarf galaxy passed through the Milky Way. In this context, some of our nearest stars fit this hypothesis. Thus Tau Ceti, EZ Aquarii, and Alpha Centauri A and B are about 5.8 billion years old, Procyon is about 1.7 billion years old, while Epsilon Eridani is about 900 million years old.

However, if we look at other local stars, we find the Sun (and Earth), Lacaille 9352 and Proxima Centauri are about 4.5 billion years old, Epsilon Indi is about 1.3 billion years old, Alpha Ophiuchi A is about 750 million years old, Sirius is about 230 million years old, and Wolf 359 is between 100 and 300 million years old. Of course, a galaxy passing through another galaxy will consume a lot of time, so it is not clear what to make of this. There is always a temptation to correlate and assume causation, and that is unsound. On the other hand, the more massive Milky Way may have stripped some gas from the smaller galaxy, and a wave of gas and dust on a different orbit could have long term effects.

In case you think the stars in a galaxy are on well-behaved orbits around the centre, that is wrong. Because the galaxy formed from the collision and absorption of smaller galaxies, the motion is actually quite chaotic, but because stars are so far apart they by and large ignore each other. Thus Kapteyn’s Star orbits the galactic centre and is quite close to our Sun, except it is going in the opposite direction. We “meet again” on the other side of the galaxy in about 120 million years. So to summarize, we still don’t know what caused this solar system to form, but we should be thankful that we got what we did. Our system happens to be just about right for our life to form, but as you will see when it comes out, the second edition of my ebook “Planetary Formation and Biogenesis” discusses a lot of things that could have gone wrong. Let’s not help more things to go wrong.

Warp Drives

“Warp drives” originated in the science fiction series “Star Trek” in the 1960s, but in 1994 the Mexican physicist Miguel Alcubierre published a paper arguing that under certain conditions exceeding light speed was not forbidden by Einstein’s General Relativity. Alcubierre reached his solution by assuming it was possible, then working backwards to see what was required while rejecting those awkward points that arose. The concept is that the ship sits in a bubble, and spacetime in front of the ship is contracted, while that behind the ship is expanded. In terms of geometry, that means the distance to your destination has got smaller, while the distance from where you started gets longer, i.e. you moved relative to the starting point and the destination. One of the oddities of being in such a bubble is you would not sense you are moving. There would be no accelerating forces because technically you are not moving; it is the space around you that is moving. Captain Kirk on the Enterprise is not squashed to a film by the acceleration! Since then there have been a number of proposals. General relativity is a gold mine for academics wanting to publish papers because it is so difficult mathematically.
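For the record, Alcubierre’s solution is usually written in the following form (quoted from memory, in units where c = 1, so consult the original paper for the exact definitions), where v_s is the speed of the bubble centre x_s(t) and f(r_s) is a shaping function equal to 1 inside the bubble and falling to 0 far outside it:

\[ ds^{2} = -dt^{2} + \bigl(dx - v_{s}(t)\, f(r_{s})\, dt\bigr)^{2} + dy^{2} + dz^{2}, \qquad v_{s}(t) = \frac{dx_{s}(t)}{dt} \]

The contraction in front of the bubble and the expansion behind it come from the way f changes across the bubble wall.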

There is one small drawback to these proposals: you need negative energy. Now we run into definitions. Before you point out that the gravitational field has negative energy, note that it is generated by positive mass, and it contracts the distance between you and the target, i.e. you fall towards it. If you like, that can be at the front of your drive. The real problem is at the other end – you need the repulsive field that sends you further from where you started, and if you think gravitationally, that is the opposite field, presumably generated from negative mass.

One objection often heard to negative energy is that if quantum field theory were correct, the vacuum would collapse to negative energy, which would lead to the Universe collapsing on itself. My view is, not necessarily. The negative potential energy of the gravitational field causes mass to collapse onto itself, and while we do get black holes in accord with this, the Universe is actually expanding. Since quantum field theory assumes a vacuum energy density, calculations of the relativistic gravitational field arising from it are in error by a factor of ten multiplied by itself 120 times, so just maybe it is not a good guideline here. It predicts the Universe should have long since collapsed, yet here we are.

The only repulsive stuff we think might be there is dark energy, but we have no idea how to lay hands on it, let alone package it, or even whether it exists. However, all may not be lost. I recently saw an article in Physics World stating that a physicist, Erik Lentz, had claimed there was no need for negative energy. The concept is that energy could be capable of arranging the structure of space-time as a soliton. (A soliton is a wave packet that travels more like a bubble: it does not disperse or spread out, but otherwise behaves like a wave.) There is a minor problem. You may have heard that the biggest problem with rockets is the mass of fuel they have to carry before you get started. Well, don’t book a space flight yet. As Lentz has calculated it, a 100 m radius spacecraft would require energy equivalent to hundreds of times the mass of Jupiter.

There will be other problems. It is one thing to have opposite energy densities on different sides of your bubble. You still have to convert those to motion and go exactly in the direction you wish. If you cannot steer as you go, or worse, you don’t even know for sure exactly where you are and the target is, is there a point? Finally, in my science fiction novels I have steered away from warp drives. The only times my characters went interstellar distances I limited myself to a little under light speed. Some say that lacks imagination, but stop and think. You set out to do something, but suppose where you are going will have aged 300 years before you get there. Come back, and your then associates have been dead for 600 years. That raises some very awkward problems that make a story different from the usual “space westerns”.

What Happens Inside Ice Giants?

Uranus and Neptune are a bit weird, although in fairness that may be because we don’t really know much about them. Our information is restricted to what we can see in telescopes (not a lot) and the Voyager fly-bys, which, of course, also devoted a lot of attention to the moons in the images they took. The planets are rather large featureless balls of gas and cloud, and you can only do so much on a “zoom-past”. One of the odd things is the magnetic fields. On Earth, the magnetic field axis corresponds with the axis of rotation, more or less, but not so much there. Earth’s magnetic field is believed to be due to a molten iron core, but that could not occur there. That probably needs explaining. The iron in the dust that is accreted to form planets is a fine powder; the particles are of micron size. The Earth’s core arose because the iron formed lumps, melted, and flowed to the core because it is denser. In my ebook “Planetary Formation and Biogenesis” I argue that the iron actually formed lumps in the accretion disk. While the star was accreting, the region around where Earth now is reached something like 1600 degrees C, above the melting point of iron, so it formed globs. We see the residues of that in the iron-cored meteorites that sometimes fall to Earth. However, Mars does not appear to have an iron core. Within that model, the explanation is simple. While on Earth the large lumps of iron flowed towards the centre, on Mars, since the disk temperature falls off with distance from the star, at 1.5 AU the large lumps did not form. As a consequence, the fine iron particles could not move through the highly viscous silicates, and instead reacted with water and oxidised, or, if you prefer, rusted.

If the lumps that formed for Earth could not form at Mars because it was too far away from the star, the situation was worse for Uranus. As with Mars, the iron would be accreted as a fine dust, and as the ice giants started to warm up from gravitational collapse the iron, once it got to about 500 degrees Centigrade, would rapidly react with the water and oxidise to form iron oxides and hydrogen. Why did that not happen in the accretion disk? Maybe it did, and maybe at Mars it was always accreted as iron oxides, but by the time you get to where Earth is, there would be at least ten thousand times more hydrogen than iron, and hot hydrogen reduces iron oxide to iron. Anyway, Uranus and Neptune will not have an iron core, so what could generate the magnetic fields? Basically, you need moving electric charge. The planets are moving (rotating), so where does the charge come from?
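The oxidation step I have in mind is essentially the familiar steam-iron reaction (my shorthand; the real product mix of iron oxides would depend on temperature and how much water is present):

\[ 3\,\mathrm{Fe} + 4\,\mathrm{H_{2}O} \rightarrow \mathrm{Fe_{3}O_{4}} + 4\,\mathrm{H_{2}} \]

Run it in reverse with a large excess of hot hydrogen and you recover the metal, which is what I am invoking for the inner disk.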

The answer recently proposed is superionic ice. You might think that ice melts at 0 degrees Centigrade, and yes, it does, but only at atmospheric pressure. Increase the pressure and it melts at a lower temperature, which is how you make snowballs. But ice is weird. You may think ice is ice, but that is not exactly correct. There appear to be about twenty ices possible from water, although there are controversial aspects because high pressure work is very difficult, and while you get information, it is not always clear what it refers to. You may think that, irrespective of that, ice will be liquid at the centre of these planets because it will be too hot for a solid. Maybe.

In a recent publication (Nature Physics, 17, 1233-1238, November 2021) the authors studied ice in a diamond anvil cell at pressures up to 150 GPa (which is about 1.5 million times greater than our atmospheric pressure) and about 6,500 K (near enough to Centigrade at this temperature). They interpret their observations as showing there is superionic ice there. The use of “about” is because there will be uncertainty due to the laser heating, and the relatively short times spent up there. (Recall diamond will also melt.)

A superionic ice is proposed wherein because of the pressure, the hydrogen nuclei can move about the lattice of oxygen atoms, and they are the cause of the electrical conduction. These conditions are what are expected deep in the interior but not at the centre of these two planets. There will presumably be zones where there is an equilibrium between the ice and liquid, and convection of the liquid coupled with the rotation will generate the movement of charge necessary to make the magnetism. At least, that is one theory. It may or may not be correct.

Your Water Came from Where?

One interesting question when considering why Earth has life is from where did we get our water? This is important because essentially it is the difference between Earth and Venus. Both are rocky planets of about the same size. They each have similar amounts of carbon dioxide, with Venus having about 50% more than Earth, and four times the amount of nitrogen, but Venus is extremely short of water. If we are interested in knowing whether there is life on other planets elsewhere in the cosmos, we need to know about this water issue. The reason Venus is hell and Earth is not is not that Venus is closer to the Sun (although that would make Venus warmer than Earth) but rather that it has no water. What happened on Earth is that the water dissolved the CO2 to make carbonic acid, which in turn weathered rocks to make the huge deposits of limestone, dolomite, etc. that we have on the planet, and to make the bicarbonates in the sea.
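The weathering chemistry is essentially the Urey reaction; taking a simple calcium silicate as a stand-in for “rock”:

\[ \mathrm{CO_{2}} + \mathrm{H_{2}O} \rightarrow \mathrm{H_{2}CO_{3}}, \qquad \mathrm{CaSiO_{3}} + \mathrm{H_{2}CO_{3}} \rightarrow \mathrm{CaCO_{3}} + \mathrm{SiO_{2}} + \mathrm{H_{2}O} \]

No liquid water means no carbonic acid, so the carbon dioxide stays in the atmosphere, which is Venus.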

One of the more interesting scientific papers has just appeared in Nature Astronomy (https://doi.org/10.1038/s41550-021-01487-w), although the reason I find it interesting may not meet with the approval of the authors. What the authors did was examine a grain of the dust retrieved from the asteroid Itokawa by the Japanese Space Agency and “found it had water on its surface”. Note it had not evaporated after millions of years in a vacuum. The water is produced, so they say, by space weathering. What happens is that the sun sends out bursts of solar wind which contain high velocity protons. Space dust is made of silicates, which involve silicon bound to four oxygen atoms in a tetrahedron, with each oxygen atom bound to something else. Suppose, for the sake of argument, the something else is a magnesium atom. A high energy hydrogen nucleus (a proton) strikes it and makes SiOH and, say, Mg+, with the Mg ion and the silicon atom remaining bound to whatever else they were bound to. It is fairly standard chemistry that 2SiOH → SiOSi plus H2O, so we have made water. Maybe, because the difference between SiOH on a microscopic sample of dust and dust plus water is rather small, except, of course, that Si-OH is chemically bound to, and is part of, the rock, and rock does not evaporate. However, the alleged “clincher”: the ratio of deuterium to hydrogen on this dust grain was the same as that of Earth’s water.

Earth’s water has about 5 times more deuterium than solar hydrogen, Venus about a hundred times. The enhancement arises because if anything is to break a bond in H-O-D, the hydrogen is slightly more likely to go, because the deuterium has a slightly stronger bond to the oxygen. Also, being slightly heavier, H-O-D is slightly less likely to get to the top of the atmosphere.

So, a light bulb moment: Earth’s water came from space dust. They calculate that this would produce twenty litres of water for every cubic metre of rock. This dust is wet! If that dust rained down on Earth it would deliver a lot of water. The authors suggest about half the water here came that way, while the rest came from carbonaceous chondrites, which have the same D/H ratio.

So, notice anything? There are two problems when forming a theory. First, the theory should account for everything of relevance. In practice this might be a little much, but there should be no obvious problems. Second, the theory should have no obvious inconsistencies. First, let us look at the “everything”. If the dust rained down on the Earth, why did not the same amount rain down on Venus? There is a slight weakness in this argument, because if it did, maybe the water was largely destroyed by the sunlight. If that happened, a high D/H ratio would result, and that is found on Venus. However, if you accept that, why did Earth’s water not also have its D/H ratio increased? The simplest explanation would be that it did, but not to the extent of Venus because Earth had more water to dilute it. Why did the dust not rain down on the Moon? If the answer is that the dust had been blown away by the time the Moon was formed, that makes sense, except now we are asking for the water to be delivered at the time of accretion, and the evidence on Mars is that water was not there until about 500 million years later. If it arrived before the disk dust was lost, then the strongest supply of water would come closest to the star, and by the time we got to Earth, it would be screened by inner dust. Venus would be the wettest, and it isn’t.

Now the inconsistencies. The strongest flux of solar wind at this distance would be what bombards the Moon, and while the dust was only here for a few million years, the Moon has been there for 4.5 billion years. Plenty of time to get wet. Except it has not. The surface of the dust on the Moon shows this reaction, there are signs of water on the Moon, especially in the more polar regions, and the average Moon rock has got some water. But the problem is that the solar wind only hits the surface. Thus the top layer or so of atoms might react, but nothing inside that layer. We can see those SiOH bonds with infrared spectroscopy, but the Moon, while it has some such molecules, cannot be described as wet. My view is this is another one of those publications where people have got carried away, more intent on getting a paper that gets cited for their CV than actually stopping and thinking about a problem.

Quantum weirdness, or not?

How can you test something without touching it with anything, even a single photon? Here is one of the weirder aspects of quantum mechanics. First, we need a tool, and we use the Mach-Zehnder interferometer, which works as follows.

There is a source that sends individual photons to a beam splitter (BS1), which divides the beam into two sub-beams, each of which proceeds to a mirror that redirects it to meet the other at a second beam splitter (BS2). The path lengths of the two sub-beams are exactly the same (in practice a little adjustment may be needed to get this to work). Each sub-beam (say R and T, for reflected and transmitted at BS1) is reflected once by a mirror. When reflected, a beam sustains a phase shift of π, and R sustains such a phase shift at BS1. At BS2, the waves going to D1 have both had two reflections, so both have had a phase shift of 2π, and they interfere constructively; therefore D1 registers the photon arrival. However, it is a little more complicated at D2. The beams T and R that would head towards D2 have a net phase difference of π within the beam splitter, so they destructively interfere, and the light continues in the direction of net constructive interference, hence only detector D1 registers. Now, suppose we send through one photon. At BS1, it seems the wave goes both ways but the photon, which acts as a particle, can only go one way. You get exactly the same result, because it does not matter which way the photon goes; the wave goes both ways and the phase shift means only D1 registers.

Now, suppose we block one of the paths. Then there is no interference at BS2, so both D1 and D2 register equally. That means we can detect an obstruction on path R even if no photon goes along it.
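If you prefer to see the arithmetic, here is a minimal sketch of my own, using the standard convention that each reflection at a 50/50 beam splitter contributes a factor of i to the amplitude (a different bookkeeping from the phase-shift description above, but it gives the same outcome):

```python
import math

# 50/50 beam splitter, standard convention: reflection contributes a factor i.
t = 1 / math.sqrt(2)    # transmission amplitude
r = 1j / math.sqrt(2)   # reflection amplitude

def detector_probabilities(block_R=False):
    """Output probabilities of a balanced Mach-Zehnder interferometer."""
    amp_T, amp_R = t, r          # amplitudes in the two arms after BS1
    if block_R:
        amp_R = 0                # an obstruction absorbs anything in arm R
    # The mirrors add the same phase to both arms, so it cancels and is omitted.
    # Label the outputs of BS2 so that D1 is the port that lights up when both
    # arms are open (the "bright" port).
    amp_D1 = amp_T * r + amp_R * t
    amp_D2 = amp_T * t + amp_R * r
    return abs(amp_D1) ** 2, abs(amp_D2) ** 2

print("Both paths open:", detector_probabilities())              # ~ (1.0, 0.0)
print("Path R blocked: ", detector_probabilities(block_R=True))  # ~ (0.25, 0.25)
# With the path blocked, the missing 0.5 is the photon hitting the obstruction.
```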

Now, here is the weird conclusion proposed by Elitzur and Vaidman [Foundations of Physics 23, 987 (1993)]. Suppose you have a large supply of bombs, but you think some may be duds. You attach a sensor to each bomb such that if one photon hits it, it explodes. (It would be desirable to have a high energy laser as a source, otherwise you will be working in the dark setting this up.) At first sight all you have to do is shine light on said bombs, but at the end all you will have are duds, the good ones having blown up. But suppose we put a bomb in one arm of such an interferometer so that it blocks the photon. Half the time a photon will strike it, and it will explode if it is good, but consider the other half. When the photon gets to the second beam splitter, it has a 50% chance of going to either D1 or D2. If it goes to D1 we know nothing, but if it goes to D2 we know the interference was destroyed, which means the bomb really does block that path – it is live – and yet the photon cannot have touched it, or it would have exploded. (A dud does not block the path, so with a dud every photon ends up at D1.) So if the bomb is good, the probability is ¼ that we learn this without destroying it, ½ that we destroy it, and ¼ that we don’t know. In the last case we send another photon and continue until we get a hit at D2 or an explosion, then stop. The probability that we can identify a good bomb without touching it with anything ends up at 1/3. So we end up keeping 1/3 of our good bombs and locating all the duds.
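The 1/3 comes from repeating the test whenever the photon lands on D1 and we learn nothing; a quick check of the geometric series:

```python
# Per round, for a good bomb sitting in one arm of the interferometer:
p_explode = 0.5    # photon takes the bomb arm
p_detect  = 0.25   # photon reaches D2: bomb identified without being touched
p_unknown = 0.25   # photon reaches D1: no information, so try again

# Repeating on every "unknown" outcome sums a geometric series:
p_detect_total  = p_detect / (1 - p_unknown)    # = 1/3
p_explode_total = p_explode / (1 - p_unknown)   # = 2/3
print(p_detect_total, p_explode_total)          # 0.333..., 0.666...
```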

Of course, this is a theoretical prediction. As far as I know, nobody has ever tested bombs, or anything else for that matter, this way. In standard quantum mechanics this is just plain weird. Of course, if you accept the pilot wave approach of de Broglie or Bohm, or for that matter my guidance wave version, where there is actually a physical wave rather than the wave being merely a calculating aid, it is rather straightforward. Can you separate these versions? Oddly enough, yes, if reports are correct. If you have a version of this with an electron, the end result is that any single electron has a 50% chance of firing each detector. Of course, one electron fires only one detector. What does this mean? The beam splitter (which is a bit different for particles) will send the electron either way with 50% probability, but the wave appears to always follow the particle and is not split. Why would that happen? The mathematics of my guidance wave require the wave to be regenerated continuously. For light, this happens from the wave itself: in Maxwell’s theory light is an oscillation of electromagnetic fields, and the oscillation of the electric field causes the next magnetic oscillation, and vice versa. But an electron does not have this option, and the wave has to be tolerably localised in space around the particle.

Thus if the electron version of this Mach-Zehnder interferometer does do what the reference I saw claims it did (unfortunately, it did not cite a source), then this odd behaviour of electrons shows that the wave function, for particles at least, cannot be non-local (or the beam splitter did not work – there is always an alternative conclusion to any single observation).

Polymerised Water

In my opinion, probably the least distinguished moment in science in the last sixty years occurred in the late 1960s, and not for the seemingly obvious reason. It all started when Nikolai Fedyakin condensed steam in quartz capillaries and found it had unusual properties, including a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1-1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour:

(a) the water had dissolved quartz,

(b) the water had polymerised.

Since recently fused quartz is insoluble in water at atmospheric pressures, he concluded that the water must have polymerised. There was no other option. An infrared spectrum of the material was produced by a leading spectroscopist from which force constants were obtained, and a significant number of papers were published on the chemical theory of polywater. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life. Then there was the inevitable wake-up call: the IR spectrum of the alleged material bore a remarkable resemblance to that of sweat. Oops. (Given what we know now, whatever they were measuring could not have been what everyone called polywater, and probably was sweat, and how that happened from a very respected scientist remains unknown.)

This material brought out some of the worst in logic. A large number of people wanted to work with it, because theory validated its existence. I gather the US Navy even conducted or supported research into it. The mind boggles here: did they want to encase enemy vessels in toffee-like water, or were they concerned someone might do it to them? Or even worse, turn the oceans into toffee, and thus end all life on Earth? The fact that the military got interested, though, shows it was taken very seriously. I recall one paper that argued Venus was like it is because all its water polymerised!

Unfortunately, I think the theory validated the existence because, well, the experimentalists said it did exist, so the theoreticians could not restrain themselves from “proving” why it existed. The key to the existence was that they showed through molecular orbital theory that the electrons in water had to be delocalized. Most readers won’t see the immediate problem because we are getting a little technical here, but to put it in perspective, molecular orbital theory assumes the electrons are delocalized over the whole molecule. If you further assume water molecules come together, the first assumption requires the electrons to be delocalised over the lot, which in turn forces the system to become one molecule. If all you can end up with is what you assumed in the first place, your theoretical work is not exactly competent, let alone inspired.

Unfortunately, these calculations involve quantum mechanics. Quantum mechanics is one of the most predictive theories ever, and almost all your electronic devices have parts that would not have been developed but for knowledge of quantum mechanics. The problem is that for any meaningful problem there is usually no analytical solution from the formal quantum theory generally used, and any actual answer requires some rather complicated mathematics and, in chemistry, because of the number of particles, some approximations. Not everyone agreed. The same computer code in different hands sometimes produced opposite results, with no explanation of why the results differed. If there were no differences in the implied physics between methods that gave opposing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. The average computational paper gives very little insight into what is done, and these papers were actually somewhat worse than usual. It was, “Trust me, I know what I’m doing.” In general, they did not.

So, what was it? Essentially, ordinary water with a lot of dissolved silica, i.e. option (a) above. Deryagin was unfortunate in suffering in logic from the fallacy of the accident. Water at 100 degrees C does not dissolve quartz. If you don’t believe me, try boiling water in a pot with a piece of silica. It does dissolve it at supercritical temperatures, but these were not involved. So what happened? Seemingly, water condensing in quartz capillaries does dissolve it. However, now I come to the worst part. Here we had an effect that was totally unexpected, so what happened? After the debacle, nobody was prepared to touch the area. We still do not know why silica in capillaries is so eroded, yet perhaps there is some important information here; after all, water flows through capillaries in your body.

One of the last papers written on “anomalous water” was in 1973, and one of the authors was John Pople, who went on to win a Nobel Prize for his work in computational chemistry. I doubt that paper is one he is most proud of. The good news is the co-author, who I assume was a post-doc and can remain anonymous because she almost certainly had little control over what was published, had a good career following this.

The bad news was for me. My PhD project involved whether electrons were delocalized from cyclopropane rings. My work showed they were not; however, computations from the same type of computational code said they were. Accordingly, everybody ignored my efforts to show what was really going on. More on this later.

Ossified Science

There was an interesting paper in the Proceedings of the National Academy of Sciences (118, e2021636118, https://doi.org/10.1073/pnas.2021636118) which argued that science is becoming ossified, and new ideas simply cannot emerge. My question is, why has it taken them this long to recognize it? That may seem a strange thing to say, but over the life of my career I have seen no radically new ideas get acknowledgement.

The argument in the paper basically came down to one simple fact: over this period there has been a huge expansion in the number of scientists, the research funding, and the number of publications. Progress in the career of a scientist depends on the number of papers produced. However, the more papers produced, the more likely the science is to stagnate, because nobody has the time to read everything. People pick and choose what to read, the selection biased by the need not to omit people who may read your funding application. Reading is thus focused on established thinking. As the number of papers increases, citations flow increasingly towards the already well-cited papers. Lesser-known authors are unlikely ever to become highly cited; if they do, it is not through a cumulative process of analysis. New material is extremely unlikely to disrupt existing work, with the result that progress in large established scientific fields may be trapped in the existing canon. That is fairly stern stuff.

It is important to note there are at least three major objectives relating to science. The first is developing methods to gain information, or, if you prefer, developing new experimental or observational techniques. The second is using those techniques to record more facts. The more scientists there are, the more successful these efforts are, and over the period we have most certainly been successful in these objectives. The rapid provision of new vaccines for SARS-CoV-2 shows that when pushed, we find ways to do it. When I started my career, a very large clunky computer that was incredibly slow and had internal memory measured in bytes occupied a room. Now we have memory that stores terabytes in something you can hold in your hand. So yes, we have learned how to do it, and we have acquired a huge amount of knowledge. There is a glut of facts available.

The third objective is to analyse those facts and derive theories so we can understand nature, and do not have to examine that mountain of data for any reason other than to verify that we are on the right track. That is where little has happened.

As the PNAS paper points out, policy reflects the “more is better” approach. Rewards are for the number of articles, with citations supposedly reflecting their quality. The number of publications is easily counted, but the citations are more problematical. To get the numbers up, people carry out the work most likely to reach a fast result. The citations go to the papers most easily found, which means those that get a good start gather citations like crazy. There are also “citation games”: you cite mine, I’ll cite yours. These citations may have nothing in particular to add in terms of the science or logic, but they do add to the career prospects.

What happens when a paper is published? As the PNAS paper says, “cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars”. If a new paper does not fit the existing dynamic, it will be ignored. If the young researcher wants to advance, he or she must avoid trying to rock the boat. You may feel that the authors of this are overplaying a non-problem. Not so. One example shows how the scientific hierarchy thinks. One of the two major advances in theoretical physics in the twentieth century was quantum mechanics. Basically, all our advanced electronic technology depends on that theory, and in turn the theory is based on one equation published by Erwin Schrödinger. This equation is effectively a statement that energy is conserved, and that the energy is determined by a wave function ψ. It is too much to go into here, but the immediate consequence was the problem: what exactly does ψ represent?
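For the curious, the equation in its time-independent form is the standard textbook statement that kinetic plus potential energy equals the total energy, applied to ψ:

\[ \hat{H}\,\psi \;=\; \Bigl(-\frac{\hbar^{2}}{2m}\nabla^{2} + V\Bigr)\psi \;=\; E\,\psi \]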

Louis de Broglie was the first to propose that quantum motion was represented by a wave, and he came up with a different equation, which stated that the product of the momentum and the wavelength was Planck’s constant, the quantum of action. De Broglie then proposed that ψ was a physical wave, which he called the pilot wave. This was promptly ignored in favour of a far more complicated mathematical procedure that we can ignore for the present. Then, in the early 1950s, David Bohm more or less came up with the same idea as de Broglie, which was quite different from the standard paradigm. So how was that received? I found a 1953 quote from J. R. Oppenheimer: “We consider it juvenile deviationism … we don’t waste our time … [by] actually read[ing] the paper. If we cannot disprove Bohm, then we must agree to ignore him.” So much for rational analysis.

The standard theory states that if an electron is fired at two slits it goes through BOTH of them and then gives an interference pattern. The pilot wave says the electron has a trajectory, goes through one slit only, and while it forms the same interference pattern, an electron going through the left slit never ends up in the right-hand pattern. Observations have proved this to be correct (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 332: 1170–1173). Does that change anyone’s mind? Actually, no. The pilot wave is totally ignored, except by the odd character like me, although my version is a little different (called a guidance wave) and it is ignored even more.

Unexpected Astronomical Discoveries

This week, three unexpected astronomical discoveries. The first relates to white dwarfs. A star like our sun is argued to eventually run out of hydrogen, at which point its core collapses somewhat and it starts to burn helium, which it converts to carbon and oxygen, giving off a lot more energy than before. Because the star is now producing much more energy than it did when burning hydrogen to helium, although the core contracts, the star itself expands and becomes a red giant. When it runs out of that, it has two choices. If it is big enough, the core contracts further and it burns the carbon and oxygen, rather rapidly, and we get a supernova. If it does not have enough mass, it tends to shed its outer matter and the rest collapses to a white dwarf, which glows mainly due to residual heat. It is extremely dense, and if it had the mass of the sun, it would have a volume roughly that of Earth.

Because it does not run fusion reactions, it cannot generate heat, so it will gradually cool, getting dimmer and dimmer, until eventually it becomes a black dwarf. It gets old and it dies. Or at least that was the theory up until very recently. Notice anything wrong with what I have written above?

The key is “runs out”. The problem is that all these fusion reactions occur in the core, but what is going on outside it? It takes light formed in the core about 100,000 years to get to the surface. Strictly speaking, that is calculated, because nobody has gone to the core of a star to measure it, but the point is made. It takes that long because the light keeps running into atoms on the way out, getting absorbed and re-emitted. But if light runs into that many obstacles getting out, why would you think all the hydrogen would work its way down to the core? Hydrogen is light, and it would prefer to stay right where it is. So even when a star goes supernova, there is still hydrogen in it. Similarly, when a red giant sheds outer matter and collapses, it does not necessarily shed all its hydrogen.
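The 100,000-year figure is a random-walk estimate: if a photon travels a mean free path ℓ between interactions, it needs of order (R/ℓ)² steps to escape a star of radius R, which takes a time of roughly R²/(ℓc). A back-of-the-envelope sketch (the mean free path is a rough assumption, which is why published figures range from tens of thousands to hundreds of thousands of years):

```python
R = 6.96e8      # solar radius, m
c = 3.0e8       # speed of light, m/s
mfp = 1.0e-3    # assumed mean photon free path in the solar interior, m (rough)

escape_time_s = R ** 2 / (mfp * c)        # random-walk estimate
years = escape_time_s / 3.15e7
print(f"Escape time ~ {years:.1e} years")  # of order 10^4 to 10^5 years
```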

The relevance? The Hubble Space Telescope has made another discovery, namely that it has found white dwarfs burning hydrogen on their surfaces. A slightly different version of “forever young”. They need not run out at all, because interstellar space, and even intergalactic space, still has vast masses of hydrogen that, while thinly dispersed, can still be gravitationally acquired. The surface of the dwarf, having such mass and so little size, will have an intense gravity to make up for the lack of exterior pressure. It would be interesting to know if they could determine the mechanism of the fusion. I would suspect it mainly involves the CNO cycle. What happens here is that protons (hydrogen nuclei) enter, in sequence, a nucleus that starts out as ordinary carbon-12, each capture making the element with one additional proton and producing a gamma photon; some of the intermediates then decay, giving off a positron and a neutrino, until the chain reaches nitrogen-15 (having passed through oxygen-15), after which, if it absorbs a proton, it spits out helium-4 and returns to carbon-12. The gamma spectrum (if it is there) should give us a clue.
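Written out, the main branch of the CNO cycle is the standard textbook sequence (included here for clarity):

\[
\begin{aligned}
\mathrm{^{12}C} + p &\rightarrow \mathrm{^{13}N} + \gamma, &
\mathrm{^{13}N} &\rightarrow \mathrm{^{13}C} + e^{+} + \nu, &
\mathrm{^{13}C} + p &\rightarrow \mathrm{^{14}N} + \gamma,\\
\mathrm{^{14}N} + p &\rightarrow \mathrm{^{15}O} + \gamma, &
\mathrm{^{15}O} &\rightarrow \mathrm{^{15}N} + e^{+} + \nu, &
\mathrm{^{15}N} + p &\rightarrow \mathrm{^{12}C} + \mathrm{^{4}He}.
\end{aligned}
\]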

The second is the discovery of a new Atira asteroid, which orbits the sun every 115 days and has a semi-major axis of 0.46 AU. The only known object in the solar system with a smaller semi-major axis is Mercury, which orbits the sun in 88 days. Another peculiarity of its orbit is that it can only be seen when it is away from the line of the sun, and as it happens, at those times it is very difficult to see from the Northern Hemisphere. It would be interesting to know its composition. Standard theory has it that all the asteroids we see have been dislodged from the asteroid belt, because the planets would have cleaned out any such bodies that were there from the time of the accretion disk. And, of course, we can show that many asteroids were so dislodged, but many does not mean all. The question then is, how reliable is that proposed clean-out? I suspect, not very. The idea is that numerous collisions would give the asteroids an eccentricity that would lead them to eventually collide with a planet, so the fact they are there means they have to be resupplied, and the asteroid belt is the only source. However, I see no reason why some could not have avoided this fate. In my ebook “Planetary Formation and Biogenesis” I argue that the two possibilities would have clear compositional differences, hence my interest. Of course, getting compositional information is easier said than done.
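As a sanity check, the quoted period and semi-major axis are consistent with Kepler’s third law, which in solar units says the period in years is the semi-major axis in AU raised to the power 3/2:

```python
a = 0.46                  # semi-major axis in AU
period_years = a ** 1.5   # Kepler's third law, P^2 = a^3 in solar units
print(f"Period ~ {period_years * 365.25:.0f} days")  # ~114 days, close to the quoted 115
```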

The third “discovery” is awkward. Two posts ago I wrote how the question of the nature of dark energy might not be a question because it may not exist. Well, no sooner had I posted than someone came up with a claim for a second type of dark energy. The problem is, if the standard model is correct, the Universe should be expanding 5-10% faster than it appears to be doing. (Now, some would say that indicates the standard model is not quite right, but that is apparently not an option when we can add in a new type of “dark energy”.) This new dark energy only applied for the first 300 million years or so, and if the claim is true, the Universe has suddenly got younger. While it is usually thought to be 13.8 billion years old, this model has it at 12.4 billion years old. So while the model has “invented” a new dark energy, it has also lost 1.4 billion years in age. I tend to be suspicious of this, especially when even the proposers are not confident of their findings. I shall try to keep you posted.

Thorium as a Nuclear Fuel

Apparently, China is constructing a molten salt nuclear reactor to be powered by thorium, and it should be undergoing trials about now. Being the first of its kind, it is, naturally, a small reactor that will produce 2 megawatts of thermal power. This is not much, but when scaling up technology it is important not to make too great a leap, because if something in the engineering has to be corrected it is a lot easier if the unit is smaller. Further, while a smaller unit is cheaper, it is also more prone to fluctuations, especially in temperature, but in a smaller unit those fluctuations are far easier to control. The problem with a very large reactor is that if something is going wrong it takes a long time to find out, and then it also becomes increasingly difficult to do anything about it.

Thorium is a weakly radioactive metal that has little current use. It occurs naturally as thorium-232, which cannot undergo fission. However, in a reactor it absorbs neutrons and forms thorium-233, which has a half-life of 22 minutes and β-decays to protactinium-233. That has a half-life of 27 days, and in turn β-decays to uranium-233, which can undergo fission. Uranium-233 has a half-life of 160,000 years, so weapons could be made and stored.
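For a feel of the timescales, here is a crude numerical sketch of my own of the chain thorium-233 → protactinium-233 → uranium-233, using the half-lives quoted above (the irradiation rate is a made-up illustrative number; only the relative behaviour matters):

```python
import math

# Decay constants from the half-lives quoted above
lam_th = math.log(2) / (22 * 60)          # Th-233, 22 minutes -> per second
lam_pa = math.log(2) / (27 * 24 * 3600)   # Pa-233, 27 days -> per second

production = 1.0e16   # Th-233 nuclei created per second (illustrative only)
th = pa = u = 0.0
dt = 600.0            # 10-minute time step, in seconds

for _ in range(int(90 * 24 * 3600 / dt)):   # simulate roughly 90 days
    th += (production - lam_th * th) * dt
    pa += (lam_th * th - lam_pa * pa) * dt
    u  += lam_pa * pa * dt

print(f"After ~90 days: Th-233 {th:.2e}, Pa-233 {pa:.2e}, U-233 {u:.2e} nuclei")
# Th-233 sits at a tiny steady-state level, Pa-233 builds up over weeks,
# and the stock of U-233 grows steadily behind it.
```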

Unfortunately, 1.6 tonnes of thorium exposed to neutrons is, if appropriate chemical processing is available, sufficient to make 8 kg of uranium-233, and that is enough to produce a weapon. So thorium itself is not necessarily a form of fuel that is free of weapons production. However, to separate uranium-233 in a form suitable for a bomb, a major chemical plant is needed, and the separation has to be done remotely, because contamination with uranium-232 is apparently possible, and its decay products include a powerful gamma emitter. Further, to make bomb material, the process has to be aimed directly at that goal. The reason is that the first step is to separate the protactinium-233 from the thorium, and because of the short half-life, only a small amount of the thorium gets converted at any one time. Because a power station will be operating more or less continuously, it should not be practical to use it to make fissile material for bombs.

The idea of a molten salt reactor is that the fissile material is dissolved in a liquid salt in the reactor core. The liquid salt also takes away the heat which, when the salt is cycled through heat exchangers, converts water to steam, and electricity is obtained in the same way as in any other thermal station. Indeed, China says it intends to continue using its coal-fired generators by taking away the furnaces and replacing them with a molten salt reactor. Much of the infrastructure would remain. Further, compared with the usual nuclear power stations, molten salt reactors operate at a higher temperature, which means electricity can be generated more efficiently.

One advantage of a molten salt reactor is that it operates at lower pressures, which greatly reduces the potential for explosions. Further, because the fuel is dissolved in the salt, you cannot get a meltdown. That does not mean there cannot be problems, but they should be much easier to manage. The great advantage of the molten salt reactor is that it burns its reaction products, and an advantage of a thorium reactor is that most of the fission products have shorter half-lives. Since each fission produces about 2.5 neutrons, a molten salt reactor also burns heavier isotopes that might be a problem, such as those of neptunium or plutonium formed from further neutron capture. Accordingly, the waste products do not pose such a potential problem.

The reason we don’t immediately go ahead and build lots of such reactors is that there is a lot of development work required. A typical molten salt mix might include lithium fluoride, beryllium fluoride, the thorium tetrafluoride, and some uranium tetrafluoride to act as a starter. Now, suppose the thorium or uranium splits and produces, say, a strontium atom and a xenon atom. At this point there are two fluorine atoms left over, and fluorine is an extraordinarily corrosive gas. As it happens, xenon is not totally unreactive and it will react with fluorine, but so will the interior of the reactor. Whatever happens in there, it is critical that pumps, etc. keep working. Such problems can be solved, but it takes operating time to be sure they have been. Let’s hope they are successful.