A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start there are interpretive issues to sort out. The first is whether there is a physical wave or not. While there is no definitive answer to this, I argue there is, because something has to go through both slits in the two-slit experiment, yet the evidence is that the particle always goes through only one slit. That means something else must be going through the other.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means a value includes the square root of minus one, and such values are sometimes called imaginary numbers for obvious reasons.) However, Euler, the mathematician who really developed complex numbers, showed that if the phase evolves, as waves always do by definition of an oscillation and as action does with time, there are two points where the value is real, and these are at the antinodes of the wave. That means the amplitude of the quantal matter wave should have a real value there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm's quantum potential, except that, unlike Bohm's, it has a definite value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.
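Euler's point is easy to see numerically: the phase factor e^{iθ} is purely real only at θ = 0 and θ = π, which correspond to the antinodes of the oscillation. A short Python sketch of just this (the function name is mine, purely for illustration):

```python
import cmath
import math

def wave_amplitude(theta):
    """Complex phase factor e^{i*theta} of an evolving wave phase."""
    return cmath.exp(1j * theta)

# The phase factor is purely real only at theta = 0 and theta = pi
# (the antinodes); at any other phase it has an imaginary part.
for theta in (0.0, math.pi):
    assert abs(wave_amplitude(theta).imag) < 1e-12

# At a quarter period, by contrast, the value is purely imaginary.
print(wave_amplitude(0.0).real, wave_amplitude(math.pi).real)
```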

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, which is a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory would say that A^2 = 1 per particle, that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle's motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular the chemical bond. The usual way of calculating this involves calculating the probabilities of where all the electrons will be, then calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, and so on; and throughout this procedure, getting the positional probabilities requires double integrations because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to being between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero; otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added to the antinode. For a hydrogen molecule, the electrons are indistinguishable, so the energy of the two new interactions is equivalent to the energy of the two equivalent ones from the linear addition, and therefore the bond energy of H2 is 1/3 the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and by introducing electric field renormalisation to comply with Maxwell's equations. Needless to say, this did not get much excitement from the conference audience. You cannot dazzle with obscure mathematics doing this.
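For anyone who wants to check the arithmetic of the 1/3-Rydberg claim, here is a short Python sketch; the 4.52 eV figure I use for the measured H2 bond energy is my assumed comparison value:

```python
RYDBERG_EV = 13.6057   # Rydberg energy in electron volts
H2_BOND_EV = 4.52      # assumed experimental H2 bond energy, eV

# Guidance-wave prediction: the bond energy is one third of the Rydberg.
predicted = RYDBERG_EV / 3.0
error_pct = 100.0 * abs(predicted - H2_BOND_EV) / H2_BOND_EV

print(f"predicted = {predicted:.4f} eV, error = {error_pct:.2f}%")
```

With these numbers the prediction is about 4.535 eV, a few tenths of a percent from the assumed measured value, consistent with the figure quoted above.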

The first shock for the audience came when I introduced my Certainty Principle, which, roughly stated, is that the action of a stationary state must be quantized, and further, because of the force relationship I introduced above, the action must be that of the sum of the participating atoms. Further, the periodic time of the initial electron interactions must be constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extras as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron-electron repulsion compensates. This action relationship also provides, for the first time, a real cause for there being a bond radius that is constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect. The waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.

The next piece of "shock" comes with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation. However, if you do that, you find that the calculated energies are too small. By caesium, the calculated energy is out by an order of magnitude. The standard answer is that the outer electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be, because the energy is determined at the major antinode. (If you accept Maxwell's equations, I argue it cannot be either, but that is too complicated to put here.) So my answer is that the wave is actually a sum of component waves that have different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers. I produced tables that show the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory argues that heavy elements like gold have unusual properties because these "penetrations" lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that elements like francium, gold, thallium, and lead fitted my functions as well as or better than any of the other elements.

What I did not tell them was that the tables of data I showed them had been published in a peer-reviewed journal over thirty years before, but nobody took any notice. As someone said to me when I mentioned this as an aside, "If it isn't published in the Physical Review or J. Chem. Phys., nobody reads it." So much for publishing in academic journals. As to why I skipped those two, I published that while I was starting my company, and $US 1,000 per page did not strike me as a good investment. So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid-January, as this is the summer holiday season here.

What does Quantum Mechanics Mean?

Patrice Ayme gave a long comment on my previous post that effectively asked me to explain in some detail the significance of some of my comments on my conference talk involving quantum mechanics. But before that, I should explain why there is even a problem, and I apologise if the following potted history seems a little turgid. Unfortunately, the background situation is important.

First, we are familiar with classical mechanics, where, given all necessary conditions, exact values of the position and momentum of something can be calculated for any future time, and thanks to Newton and Leibniz, we do this through differential equations involving familiar concepts such as force, time, position, and so on. Thus suppose we shot an arrow into the air, ignored friction, and wanted to know where it was, and when. Acceleration is the second differential of position with respect to time, so we integrate twice to get position. However, to get an answer, because there are two degrees of freedom (assuming we know which direction it was shot), we get two constants from the two integrations. In classical mechanics these are easily assigned: the horizontal constant depends on where the arrow was fired from, and the other comes from the angle of elevation.
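The two constants of integration can be made concrete with a worked projectile sketch in Python (the launch numbers are illustrative, not from anywhere in particular):

```python
import math

g = 9.81                       # gravitational acceleration, m/s^2
v0 = 50.0                      # launch speed, m/s (illustrative)
elevation = math.radians(45)   # angle of elevation
x0 = 0.0                       # launch position: one integration constant

# Integrating acceleration twice gives position; the constants are fixed
# by the launch point and the launch velocity components (elevation angle).
vx = v0 * math.cos(elevation)
vy = v0 * math.sin(elevation)

def position(t):
    """Position (x, y) of the arrow at time t, ignoring friction."""
    return (x0 + vx * t, vy * t - 0.5 * g * t * t)

t_flight = 2 * vy / g               # time to return to launch height
x_range, y_end = position(t_flight)
print(x_range)                      # classical range v0^2 sin(2*theta) / g
```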

Classical mechanics reached a mathematical peak through Lagrange and Hamilton. Lagrange introduced a term that is usually the difference between the kinetic and potential energy, and thus converted the problem from one of forces to one of energy. Hamilton and Jacobi converted it to one involving action, which is the time integral of the Lagrangian. The significance of this is that in one sense action summarises everything involved in our particle going from A to B. All of these variations are equivalent, and merely reflect alternative ways of going about the problem; however, the Hamilton-Jacobi equation is of special significance because it can be transformed into a wave expression. When Hamilton did this, there were undoubtedly a lot of yawns. Only an abstract mathematician would want to represent a cannonball as a wave.

So what is a wave? While energy can be transmitted by moving particles (like a cannonball), waves transmit energy without moving matter, apart from a small local oscillation. Thus if you place a cork on the sea far from land, the cork basically goes around in a circle, but on average stays in the same place. If there is an ocean current, that will be superimposed on the circular motion without affecting it. Two terms are required to describe a wave: an amplitude (how big is the oscillation?) and a phase (where on the circle is it?).
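Those two descriptors can be sketched directly: the cork traces a circle of radius equal to the amplitude, and the phase says where on the circle it is at any moment. A minimal Python illustration (the numbers are made up):

```python
import math

A = 0.5                      # amplitude: radius of the circle, m (illustrative)
period = 8.0                 # seconds per oscillation (illustrative)
omega = 2 * math.pi / period # angular frequency

def cork_displacement(t, phase0=0.0):
    """Displacement of the cork from its mean position at time t.
    It circles, but on average it stays in the same place."""
    phase = omega * t + phase0
    return (A * math.cos(phase), A * math.sin(phase))

# After one full period the cork is back where it started.
x0, y0 = cork_displacement(0.0)
x1, y1 = cork_displacement(period)
print(x0, y0, x1, y1)
```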

Then at the end of the 19th century, classical mechanics suddenly gave wrong answers for what was occurring at the atomic level. As a hot body cools, it should give off radiation from all possible oscillators, and it does not. To explain this, Planck assumed radiation was given off in discrete packets, and introduced the quantum of action h. Einstein, recognizing that the Principle of Microscopic Reversibility should apply, argued that light should be absorbed in discrete packets as well, which solved the problem of the photoelectric effect. A big problem arose with atoms, which have positively charged nuclei with electrons moving around them. To move in orbit, electrons must accelerate, and hence should radiate energy and spiral into the nucleus. They don't. Bohr "solved" this problem with the ad hoc assumption that angular momentum was quantised; nevertheless his circular orbits (like planetary orbits) are wrong. For example, if they occurred, hydrogen would be a powerful magnet, and it isn't. Oops. Undeterred, Sommerfeld recognised that angular momentum is dimensionally equivalent to action, and he explained the theory in terms of action integrals. So near, but so far.

The next step involved the French physicist de Broglie. With a little algebra and a bit more inspiration, he represented the motion in terms of momentum and a wavelength, linked by the quantum of action. At this point, it was noted that if you fired very few electrons through two slits at an appropriate distance apart and let them travel to a screen, each electron was registered as a point, but if you kept going, the points started to form a diffraction pattern, the characteristic of waves. The way to solve this was to take Hamilton's wave approach, do a couple of pages of algebra, and quantise the period by making the phase complex and proportional to the action divided by ħ (to be dimensionally correct, because the phase must be a pure number). You then arrive at the Schrödinger equation, which is a partial differential equation, and thus fiendishly difficult to solve. About the same time, Heisenberg introduced what we call the Uncertainty Principle, which is usually stated as: you cannot know the product of the uncertainties in position and momentum to better than h/2π. Mathematicians then reformulated the Schrödinger equation into what we call the state vector formalism, in part to ensure that there are no cunning tricks to get around the Uncertainty Principle.
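De Broglie's relation, wavelength = h/p, is simple enough to evaluate; for an electron at the sort of energy used in diffraction experiments (100 eV is my assumed figure) the wavelength comes out at roughly an ångström, comparable to atomic spacings, which is why the diffraction shows up:

```python
import math

h = 6.626e-34     # Planck's constant, J s
m_e = 9.109e-31   # electron mass, kg
eV = 1.602e-19    # joules per electron volt

E = 100 * eV                  # assumed kinetic energy: 100 eV
p = math.sqrt(2 * m_e * E)    # non-relativistic momentum
wavelength = h / p            # de Broglie: lambda = h / p

print(wavelength)             # on the order of 1e-10 m, i.e. an angstrom
```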

The Schrödinger equation expresses the energy in terms of a wave function ψ. That immediately raised the question, what does ψ mean? The square of a wave amplitude usually indicates the energy transmitted by the wave. Because ψ is complex, Born interpreted ψ.ψ* as indicating the probability that you would find the particle at the nominated point. The state vector formalism then proposed that ψ.ψ* indicates the probability that a state will have certain properties at that point. There was an immediate problem: no experiment could detect the wave. Either there is a wave or there is not. De Broglie and Bohm assumed there was, and developed what we call the pilot wave theory, but almost all physicists assume that, because you cannot detect it, there is no actual wave.

What do we know happens? First, the particle is always detected as a point, and it is the sum of the points that gives the diffraction pattern characteristic of waves. You never see half a particle. This becomes significant because you can get this diffraction pattern using molecules made from 60 carbon atoms. In the two-slit experiment, what are called weak measurements have shown that the particle always goes through only one slit, and not only that, they do so with exactly the pattern predicted by David Bohm. That triumph appears to be ignored. Another odd feature is that while momentum and energy are part of uncertainty relationships, unlike the random variation in something like Brownian motion, the uncertainty never grows.

Now for the problems. The state vector formalism considers ψ to represent states. Further, because waves add linearly, the state may be a linear superposition of possibilities. If this merely meant that the probabilities represented what you do not know, there would be no problem, but instead there is a near-mystical assertion that all possibilities are present until the subject is observed, at which point the state collapses to what you see. Schrödinger could not tolerate this, not least because the derivation of his equation is incompatible with this interpretation, and he presented his famous cat paradox, in which a cat is neither dead nor alive but in some sort of quantum superposition until observed. The result was the opposite of what he expected: this ridiculous outcome was asserted to be true, and we have the peculiar logic that you cannot prove it is not true (because the state collapses if you try to observe the cat). Equally, you cannot prove it is true, but that does not deter the mystics.

However, there is worse. Recall I noted that when we integrate we have to assign the necessary constants. When all positions are uncertain, and when we are merely dealing with probabilities in superposition, how do you do this? As John Pople stated in his Nobel lecture, for the chemical bonds of hydrocarbons, he assigned values to the constants by validating them against over two hundred reference compounds. But suppose there is something fundamentally wrong? You can always get the right answer if you have enough assignable constants.

The same logic applies to the two-slit experiment. Because the particle could go through either slit and the wave must go through both to get the diffraction pattern, when you assume there is no wave it is argued that the particle goes through both slits as a superposition of the possibilities. This is asserted even though it has clearly been demonstrated that it does not. There is another problem.
The assertion that the wave function collapses on observation, and all other probabilities are lost, actually lies outside the theory. How does that actually happen? That is called the measurement problem, and as far as I am aware, nobody has an answer, although the obvious answer, namely that the probabilities merely reflected possibilities and the system was always in just one state that we did not know, is always rejected. Confused? You should be. Next week I shall get around to some material from my conference talk that caused stunned concern in the audience.

Galactic Collisions

As some may know, the Milky Way galaxy and the Andromeda galaxy are closing on each other and will "collide" in something like 4 to 5 billion years. If you were a good distance away, say in a Magellanic Cloud, this would look really spectacular, but what about if you were on a planet like Earth, right in the middle of it, so to speak? Probably not a lot different from what we see now. There could be a lot more stars in the sky (and there would be if you used a good telescope) and there may be enhanced light from dust clouds, but basically a galaxy is mostly empty space. As an example, light takes eight minutes and twenty seconds to get from the sun to Earth, while light from the nearest star takes 4.23 years to get here. Stars are well spaced.
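Those light-travel figures are easy to verify from the Earth-Sun distance and the speed of light:

```python
c = 2.998e8      # speed of light, m/s
AU = 1.496e11    # mean Earth-Sun distance, m

sun_to_earth = AU / c                      # light travel time, seconds
minutes, seconds = divmod(sun_to_earth, 60)

print(int(minutes), round(seconds))        # within a second or so of the
                                           # quoted eight minutes twenty
```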

As we understand it, stars orbit the galactic centre. The orbital velocity of our sun is about 828,000 km/hr, a velocity that makes our rockets look like snails, but it still takes something like 230,000,000 years to make one orbit, and we are only about half-way out. As I said, galaxies are rather large. So when the galaxies merge, there will be stars going in a lot of different directions until things settle down. There is a NASA simulation in which, over billions of years, the two pass through each other, throwing "stuff" out into intergalactic space, then they turn around and repeat the process, except this time the centres merge, and a lot more "stuff" is thrown out into space. The "stuff" here means clusters of stars. Hundreds of millions of stars get thrown out into space, many of which turn around and come back, eventually to join the new galaxy.
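As a consistency check, the quoted speed and period pin down the radius of a circular orbit, r = vT/2π, using only the figures above:

```python
import math

v = 828_000 / 3600 * 1000   # 828,000 km/hr converted to m/s (230 km/s)
T = 230e6 * 3.156e7         # 230 million years in seconds
LY = 9.461e15               # metres per light year

r = v * T / (2 * math.pi)   # radius implied by a circular orbit

print(r / LY)               # roughly 28,000 light years, consistent
                            # with the sun being about half-way out
```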

Because of the distance between stars, the chance of stars colliding comes pretty close to zero. However, it is possible that a star might pass by close enough to perturb planetary orbits. It would have to come quite close to affect Earth directly; for example, if it came as close as Saturn, it would only make a minor perturbation to Earth's orbit. On the other hand, at that distance it could easily rain down a storm of comets from further out and seriously disrupt the Kuiper Belt, which could lead to extinction-type collisions. As for the giant planets, the effect would depend on where they were in their orbits. A star coming that close could be travelling at such a speed that if Saturn were on the far side of its orbit, it might scarcely notice the passage.

One interesting point is that such a galactic merger has already happened to the Milky Way. In the Milky Way, the sun and the majority of stars are in orderly, near-circular orbits around the centre, but in the outer zones of the galaxy there is what is called a halo, in which many of the stars orbit in the opposite direction. A study was made of the stars in the halo directly out from the sun, and it found that a number of the stars have strong similarities in composition, suggesting they formed in the same environment, which was not expected. (Apparently how active star formation is alters stellar composition slightly. These stars are roughly similar to those in the Large Magellanic Cloud.) This suggests they formed from different gas clouds from the rest, and the ages of these stars run from 13 to 10 billion years. Further, it turned out that the majority of the stars in this part of the halo appeared to have come from a single source, and it was proposed that this part of the halo of our galaxy largely comprises stars from a smaller galaxy, about the size of the Large Magellanic Cloud, that collided with the Milky Way about ten billion years ago. There were no comments on other parts of the halo, presumably because parts on the other side of the galactic centre are difficult to see.

It is likely, in my opinion, that such stars are not restricted to the halo. One example might be Kapteyn's star. This is a red dwarf about eleven light years away and receding. It, too, is going "the wrong way", and is about eleven billion years old. It is reputed to have two planets in the so-called habitable zone (reputed because they have not been confirmed), and it is of interest in that, since the star is going the wrong way, presumably as a consequence of a galactic merger, its retaining planets suggests the probability of running into another system closely enough to disrupt a planetary system is reasonably low.

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, which are objects of about asteroid size, and then mutual gravity brings these together to form planets. A small industry has sprung up in the scientific community doing computerised simulations of this sort of thing, with the output of a very large number of scientific papers, which results in a number of grants to keep the industry going, lots of conferences to attend, and a strong "academic reputation". The mere fact that nobody knows how the planetesimals got to their initial positions appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, and so on have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually easily defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try.

Initially, these simulations predicted solar systems similar to ours, in that there were planets in circular orbits around their stars, although most simulations actually showed a different number of planets, usually more in the rocky planet zone. The outer zone has been strangely ignored, in part because simulations indicate that, because of the greater separation of planetesimals there, everything is extremely slow. The Grand Tack simulations indicate that planets cannot form further than about 10 A.U. from the star. That is demonstrably wrong, because giants larger than Jupiter and very much further out are observed. What some simulations have argued is that planetary formation activity is limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the NICE model, or the Grand Tack model (which is very close to being the same thing), is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets in near-circular motion around the star, because whatever happened was more or less happening equally at all angles to some fixed background. The gas was spiralling into the star, so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when a star came by too close, a rather uncommon occurrence.

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of "star-burners": planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow, because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel each other's effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if it did not, why did others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer was that they moved in and stopped, but that begs the question, why did they stop? If we go back to the Grand Tack model, Jupiter was argued to migrate in towards Mars, and while doing so to throw a whole lot of planetesimals out; then Saturn did much the same; then for some reason Saturn turned around and began throwing planetesimals inwards, whereupon Jupiter continued the act and moved out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why it would. The reason Saturn began throwing planetesimals in was that Uranus and Neptune started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn's and Jupiter's outwards movement. Note that this depends on a particular starting point, and it is not clear to me why, if planetesimals are supposed to collide and form planets, and masses equivalent to Jupiter and Saturn were available, they did not form a planet.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significant elliptical orbits. If you draw a line through the major axis, on one side of the star the planet moves faster and comes closer to it than on the other side. There is a directional preference. How did that come about? The answer appears to be simple. A circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form by collecting a huge number of planetesimals, or a large amount of gas, and these collections occur more or less continuously as the planet orbits the star. An elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that when planets grow, if they get big enough, their gravity alters their orbits, and if two come quite close together, they exchange energy: one goes outwards, usually leaving the system altogether, and the other moves towards the star, or even into the star. If it comes close enough to the star, the star's tidal forces circularise the orbit and the planet remains close to the star, and if it is moving prograde, like our Moon the tidal forces will push the planet out. Equally, if the orbit is highly elliptical, the planet might "flip" and become circularised with a retrograde orbit. If so, it is eventually doomed, because the tidal forces cause it to fall into the star.
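Kepler's laws quantify that asymmetry: conservation of angular momentum means the speed at closest approach exceeds the speed at farthest distance by the factor (1 + e)/(1 - e), where e is the eccentricity. A one-line check (e = 0.5 is an illustrative value):

```python
e = 0.5   # illustrative eccentricity for an eccentric exoplanet orbit

# Angular momentum conservation: v_peri * r_peri = v_apo * r_apo, with
# r_peri = a * (1 - e) and r_apo = a * (1 + e), so the speed ratio is:
speed_ratio = (1 + e) / (1 - e)

print(speed_ratio)   # 3.0: three times faster at closest approach
```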

All of which may seem somewhat speculative, but the more interesting point is that we have now found evidence this happens, namely evidence that the star M67 Y2235 has ingested a "superearth". The technique goes by the name "differential stellar spectroscopy", and the idea is to compare a star's observed composition with a realistic estimate of what that composition should be, which can be done with reasonable confidence if the stars formed in a cluster and can reasonably be assumed to have started from the same gas. M67 is a cluster with over 1200 known members, and it is close enough that reasonable details can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun's. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. The enhancement corresponds to an amount of rocky planet 5.2 times the mass of Earth in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star, such that the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.

For those who wish to go deeper, being biased I recommend my ebook "Planetary Formation and Biogenesis". Besides showing what I think happened, it analyses over 600 scientific papers covering different aspects of the problem.

Gravitational Waves, or Not?

On February 11, 2016, LIGO reported that on September 14, 2015, they had verified the existence of gravitational waves, the "ripples in spacetime" predicted by General Relativity. In 2017, the LIGO/Virgo laboratories announced the detection of a gravitational wave signal from merging neutron stars, which was verified by optical telescopes, and which led to the award of the Nobel Prize to three physicists. This was science in action, and while I suspect most people had no real idea what it meant, the items were big news. The detectors were then shut down for an upgrade to make them more sensitive, and when they started up again it was apparently predicted that dozens of events would be observed by 2020; with automated detection, information could be immediately relayed to optical telescopes. Lots of scientific papers were expected. So, with the program having been running for three months, or essentially half the time of the prediction, what have we found?

Er, despite a number of alerts, nothing has been confirmed by optical telescopes. This has led to some questions as to whether any gravitational waves have actually been detected, and led a group at the Niels Bohr Institute in Copenhagen to review the data so far. The detectors at LIGO comprise two "arms" at right angles to each other, running four kilometres from a central building. Lasers are beamed down each arm and reflected from a mirror, and the use of wave interference effects lets the laboratory measure these distances to within (according to the LIGO website) 1/10,000 the width of a proton! Gravitational waves will change these lengths on this scale. So, of course, will local vibrations, so there are two laboratories 3,002 km apart, such that if both detect the same event, it should not be local. The first sign that something might be wrong was that, besides the desired signals, a lot of additional vibrations, which we shall call noise, are present. That is expected, but what was suspicious was that there seemed to be inexplicable correlations in the noise signals. Two labs that far apart should not have the "same" noise.
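That sensitivity claim translates into a dimensionless strain, dL/L; taking a proton width of roughly 0.8 femtometres (an assumed round figure) over the 4 km arm:

```python
proton_width = 0.8e-15   # approximate proton diameter, m (assumed)
arm_length = 4000.0      # LIGO arm length, m

displacement = proton_width / 10_000   # 1/10,000 of a proton width
strain = displacement / arm_length     # dimensionless strain h = dL / L

print(strain)                          # of order 1e-23
```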

Then came a bit of embarrassment: it turned out that the figure published in Physical Review Letters that claimed the detection (and led to Nobel Prize awards) was not actually the original data; rather, the figure was prepared for "illustrative purposes", with details added "by eye". Another piece of "trickery" claimed by that institute is that the data are analysed by comparison with a large database of theoretically expected signals, called templates. If so, for me there is a problem: if there is a large number of such templates, the chance of fitting any data to one of them starts to get uncomfortably large. I recall the comment attributed to the mathematician John von Neumann: "Give me four constants and I shall map your data to an elephant. Give me five and I shall make it wave its trunk." When they start adjusting their best-fitting template to fit the data better, I have real problems.
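The template worry can be illustrated with a toy calculation: the best match between pure noise and a bank of random templates keeps improving as the bank grows, with no signal present at all. This is only a sketch with random numbers, not anything resembling real LIGO templates:

```python
import math
import random

random.seed(1)
N = 256   # samples per "signal"

def correlate(a, b):
    """Normalised correlation between two equal-length signals."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

# Pure noise: no signal present at all.
noise = [random.gauss(0, 1) for _ in range(N)]

# A bank of random "templates" standing in for theoretical waveforms.
templates = [[random.gauss(0, 1) for _ in range(N)] for _ in range(1000)]
matches = [abs(correlate(noise, t)) for t in templates]

few = max(matches[:10])   # best match against a small template bank
many = max(matches)       # best match against the full bank

# The bigger the bank, the better pure noise appears to "fit" something.
print(few, many)
```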

So apparently those at the Niels Bohr Institute made a statistical analysis of the data allegedly seen by the two laboratories, and found that no signal was verified by both, except the first. However, even the LIGO researchers were reported to be unhappy about that one. The problem: their signal was too perfect. In this context, when the system was set up, there was a procedure to deliver artificially produced dummy signals, just to check that the procedure following signal detection at both sites was working properly. In principle, this perfect signal could have been the accidental delivery of such an artificial signal, or even a deliberate insertion by someone. Now I am not saying that happened, but it is uncomfortable that we have only one signal, and it is in “perfect” agreement with theory.

A further problem lies in the fact that the collision of two neutron stars, required both by that one discovery and as the source of the gamma-ray signals detected along with the gravitational waves, is apparently unlikely in an old galaxy where star formation has long since ceased. One group of researchers claims the gamma-ray signal is more consistent with the merging of white dwarfs, which should not produce gravitational waves of the right strength.

Suppose by the end of the year no further gravitational waves are observed. Now what? There are three possibilities: there are no gravitational waves; there are such waves, but the detectors cannot detect them for some reason; or there are such waves, but they are much less common than models predict. Apparently there have been attempts to find gravitational waves for the last sixty years, and with every failure it has been argued that they are weaker than predicted. The question then is, when do we stop spending increasingly large amounts of money on seeking something that may not be there? One issue that must be addressed, not only in this matter but in any scientific exercise, is how to get rid of confirmation bias, that is, when looking for something we shall call A, and a signal is received that more or less fits the target, it is only too easy to say you have found it. In this case, when a very weak signal is received amidst a lot of noise and there is a very large number of templates to fit the data to, it is only too easy to assume that what is actually just unusually reinforced noise is the signal you seek. Modern science seems to have descended into a situation where exceptional evidence is required to persuade anyone that a standard theory might be wrong, but only quite a low standard of evidence is needed to support an existing theory.

Cold Fusion

My second post-doc was at Southampton University, where one of the leading physical chemists was Martin Fleischmann, who had an excellent record for clever and careful work. There was no doubt that if he measured something, it would be accurate and very well done. In the academic world he was a rising star until he scored a career “own goal”. In 1989, he and Stanley Pons claimed to have observed nuclear fusion through a remarkably simple experiment: they passed electricity through samples of deuterium oxide (heavy water) using palladium electrodes. They reported the generation of net heat significantly in excess of what would be expected from the power loss due to the resistance of the solution. Whatever else happened, I have no doubt that Fleischmann correctly measured and accounted for the heat. From then on, the story gets murky. Pons and Fleischmann claimed the heat had to come from nuclear fusion, but obviously there was not very much of it. According to “Physics World”, they also claimed the production of neutrons and tritium. I do not recall any actual detection of neutrons, and I doubt the equipment they had would have been at all suitable for that. Tritium might seem to imply neutron production, since a neutron hitting deuterium might well make tritium, but tritium (even heavier hydrogen) could well have been a contaminant in their deuterium, or maybe they never detected it at all.

The significance, of course, was that deuterium fusion would be an inexhaustible source of clean energy. The Earth has plenty of water, and while the fraction that is deuterium is small, it is nevertheless a very large amount in total, and the energy released in going to 4-helium is huge. The physicists, quite rightly, did not believe the fusion claim. The problem is that the nuclei strongly repel each other, due to their positive electric charges, until they get about 1,000 – 10,000 times closer than they are in molecules. Nuclear fusion usually works by either extreme pressure squeezing the nuclei together, or extreme temperature giving the nuclei sufficient energy to overcome the repulsion, or both.
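
To put a rough number on “a very large amount in total”, here is a sketch of the deuterium fusion energy in one litre of water, using textbook values I am supplying (a D/H abundance of about 1.6 × 10^-4, and roughly 23.8 MeV released per helium-4 formed from two deuterons):

```python
# Rough estimate of the deuterium fusion energy in one litre of water.
# The D/H abundance (~1.6e-4) and the ~23.8 MeV released per helium-4
# (from two deuterons) are assumed textbook values, not from the text.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

water_mol = 1000.0 / 18.0                 # ~55.5 mol of H2O per litre
hydrogen_atoms = 2 * water_mol * AVOGADRO
deuterium_atoms = hydrogen_atoms * 1.6e-4
fusions = deuterium_atoms / 2             # two deuterons per helium-4
energy_j = fusions * 23.8 * MEV_TO_J

print(f"deuterium fusion energy per litre of water ~ {energy_j:.1e} J")
```

That is roughly 2 × 10^10 J, about the chemical energy of several hundred litres of petrol, from the heavy water in a single litre of ordinary water.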

What happened next was that many people tried to reproduce the experiment and failed, with the result that this became considered an example of pathological science. Of course, the problem always was that if anything happened, it happened only very slightly, and while heat was supposedly obtained and measured by a calorimeter, that could arise from extremely minute amounts of fusion. Equally, if the effect were that minute, it might seem useless; however, experimental science does not work that way either. As a general rule, if you can find an effect that occurs, quite often once you work out why, you can alter conditions and boost the effect. The problem occurs when you cannot get an effect at all.

The criticisms included the fact that there were no signs of neutrons. That in itself is, in my opinion, meaningless. In the usual high-energy, and more importantly high-momentum, reactions, if you react two deuterium nuclei, some of the time the energy is such that the helium isotope 3He is formed, plus a neutron. If you believe the catalyst is squeezing the atoms closer together in a matrix of metal, that neutron might strike another deuterium nucleus before it can get out, and form tritium. Another possibility is that the mechanism in the catalyst was that the metal brought the nuclei together in some form of metal hydride complex, and the fusion occurred through quantum tunnelling, which, being a low-momentum event, might not eject a neutron; 4He is very stable. True, getting the deuterium atoms close enough is highly improbable, but until you know the structure of the hydride complex, you cannot be absolutely sure. As it was, it was claimed that tritium was found, but it might well be that the tritium was always there. As to why the experiment was not reproducible, normally palladium absorbs about 0.7 hydrogen atoms per palladium atom in the metal lattice, while the claim was that fusion required a minimum of 0.875 deuterium atoms per palladium atom. The defensive argument was that the surface of the electrode was not adequate, and the original claim included the warning that not all electrodes worked, and those that did only worked for so long. We now see a problem. If the electrode does not absorb and react with sufficient deuterium, you do not expect an effect. Worse, if a special form of palladium is required, then that form rectifying itself during hydride formation could be the source of the heat, i.e. the heat is real, but it is of chemical origin and not nuclear.

I should add at this point that I am not advocating that this worked, merely that the criticisms aimed at it were not exactly valid. Very soon the debate degenerated into scoffing and personal insults rather than facts. Science was not working at all well then. Further, if we accept that there was heat generated, and I am convinced that Martin Fleischmann, whatever his other faults, was a very careful and honest chemist who would have measured that heat properly, then there is something we do not understand. What it was is another matter, and it is an unfortunate human characteristic that the scientific community, rather than try to work out what had happened, preferred to scoff.

However, the issue is not entirely dead. It appears that Google put $10 million of its money into clearing the issue up. The research team that has been using that money still has not found fusion, but it has discovered that the absorption of hydrogen by palladium works in a way thus far unrecognised. At first that may not seem very exciting; nevertheless, getting hydrogen in and out of metals could be an important part of a hydrogen fuel system, as the hydrogen is stored at more moderate pressures than in a high-pressure vessel. The point here, of course, is that understanding what has happened, even in a failed experiment, can be critically important. Sure, the actual initial objective might never be reached, but sometimes it is the something else that leads to real benefits. Quite frequently in science, success stories actually started out as something else, although, through embarrassment, this is seldom admitted.

Finally, there is another form of cold fusion that really works. If the electrons around deuterium and tritium are replaced with muons, the nuclei in a molecule come very much closer together, nuclear fusion does occur through quantum tunnelling, and the full fusion energy is generated. There are, unfortunately, three problems. The first is maintaining a decent number of muons. These are made through the decay of pions, which in turn are made in colliders, so very considerable amounts of energy are spent getting your muons. The second is that muons have a very short life – about 2 microseconds. The third is that if they lose some energy they fall into the helium atom and stay there, taking themselves out of play. Apparently a muon can catalyse up to 150 fusions, which looks good, but the best so far is that to get 1 MW of net energy, you have to put 4 MW in to make the muons. Thus to get really large amounts of energy, very large generators are required just to drive the muon production. Yes, you get net power, but the cost is far too great. For the moment, this is not a productive source.
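
The energy balance can be sketched with round numbers. The 150 fusions per muon comes from the text; the 17.6 MeV per D–T fusion and the roughly 5 GeV accelerator cost per muon are figures I am assuming for illustration:

```python
# Crude energy balance for muon-catalysed fusion. The 150 fusions per
# muon comes from the text; the 17.6 MeV per D-T fusion and the ~5 GeV
# accelerator cost per muon are round numbers I am assuming.
fusions_per_muon = 150
energy_per_fusion_mev = 17.6     # D-T fusion yield (assumed)
muon_cost_mev = 5_000.0          # cost to make one muon (assumed)

yield_mev = fusions_per_muon * energy_per_fusion_mev
gain = yield_mev / muon_cost_mev

print(f"fusion yield per muon ~ {yield_mev:.0f} MeV")
print(f"energy gain ~ {gain:.2f} (breakeven needs > 1)")
```

On these assumed numbers each muon pays back only about half of what it cost to make, the same unfavourable picture as the 4 MW in for 1 MW out quoted above.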

The Ice Giants’ Magnetism

One interesting set of measurements made from NASA’s sole flybys of Uranus and Neptune is that they have complicated magnetic fields, and seemingly not the simple dipolar fields found on Earth. The puzzle then is, what causes this? One possible answer is ice.

You will probably think of ice as neither particularly magnetic nor particularly good at conducting electric current, and you would be right about the ice you usually see. However, there is more than one form of ice. As far back as 1912, the American physicist Percy Bridgman discovered five solid phases of water, which were obtained by applying pressure to the ice. One of the unusual properties of ordinary ice is that as you add pressure, the ice melts: because liquid water is denser than ice, the melting point falls as the pressure rises. (Room pressure is 0.1 MPa; a pascal is a rather small unit of pressure, the M means million, and G would mean billion.) So add pressure and it melts, which is why ice skates work. Ices II, III and V need 200 to 600 MPa of pressure to form. Interestingly, as you increase the pressure, ice III forms at about 200 MPa and melts at about -22 degrees C, but then the melting point rises with extra pressure, and at about 350 MPa it switches to ice V, which melts at -18 degrees C; if the pressure is increased to 632.4 MPa, the melting point is 0.16 degrees C. At 2,100 MPa, ice VI melts at just under 82 degrees C. Skates do not work on these higher ices. As an aside, ice II does not exist in the presence of liquid, and I have no idea what happened to ice IV, but my guess is it was a mistake.
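
The melting points quoted above, collected in one place (the numbers are taken straight from the paragraph; ice VI is entered as 82 since the text says “just under 82”):

```python
# Melting points of the high-pressure ice phases as quoted in the text:
# phase -> (pressure in MPa, melting point in degrees C).
melting_points = {
    "ordinary ice":      (0.1,    0.0),
    "ice III":           (200.0, -22.0),
    "ice V":             (350.0, -18.0),
    "ice V (632.4 MPa)": (632.4,   0.16),
    "ice VI":            (2100.0, 82.0),  # "just under 82" in the text
}

for phase, (p_mpa, t_c) in melting_points.items():
    print(f"{phase:>18}: melts at {t_c:7.2f} C under {p_mpa:7.1f} MPa")
```

Above about 200 MPa the trend reverses: more pressure means a higher melting point, which is why skates stop working on the higher ices.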

As you increase the pressure on ice VI the melting point increases, and sooner or later you expect another phase, or even more. Well, there are more, so let me jump to the latest: ice XVIII. The Lawrence Livermore National Laboratory produced this by compressing water to 100 to 400 GPa (1 to 4 million times atmospheric pressure) at temperatures of 2,000 to 3,000 K (0 degrees Celsius is about 273 K, and the size of the degree is the same) to produce what they call superionic ice. What happens is that the protons of the water molecules become free and can diffuse through the empty sites of the oxygen lattice, with the result that the ice conducts electricity almost as well as a metal, but instead of moving electrons around, as happens in metals, it is the protons that are assumed to move.
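
The unit conversions quoted here are easy to verify (taking one standard atmosphere as 101,325 Pa):

```python
# Checking the unit conversions quoted for superionic ice:
# 100-400 GPa in atmospheres, and the kelvin/Celsius relationship.
ATM_PA = 101_325.0  # one standard atmosphere in pascals

for gpa in (100, 400):
    atm = gpa * 1e9 / ATM_PA
    print(f"{gpa} GPa ~ {atm / 1e6:.2f} million atmospheres")

# The kelvin and Celsius degrees are the same size; only the zero differs.
print(f"2000 K = {2000 - 273.15:.2f} degrees C")
```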

These temperatures and pressures were reached by placing a very thin layer of water between two diamond disks, following which six very high-power lasers generated a sequence of shock waves that heated and pressurised the water. They deduced what they had got by firing 16 additional high-powered lasers that delivered 8 kJ of energy in a one-nanosecond burst onto a tiny spot on a small piece of iron foil two centimeters from the water, a few billionths of a second after the shock waves. This generated X-rays, and from the way these diffracted off the water sample they could work out what they had made. This in itself is difficult enough, because they would also get a pattern from the diamond, which they would have to subtract.

The important point is that this ice conducts electricity, and so it is a possible source of the magnetic fields of Uranus and Neptune, which are rather odd. For Earth, Jupiter and Saturn, the magnetic poles are reasonably close to the rotational poles, and we think the magnetism arises from electrically conducting liquids rotating with the planet’s rotation. But Uranus and Neptune have quite different fields: that of Uranus is aligned at 60 degrees to the rotational axis, while that of Neptune is aligned at 46 degrees to it. Odder still, the axes of the magnetic fields of each do not go through the centre of the planet, and are displaced quite significantly from it.

The structure of these planets is believed to be, from the outside inwards, first an atmosphere of hydrogen and helium, then a mantle of water, ammonia and methane ices, and interior to that a core of rock. My personal view is that there will also be carbon monoxide and nitrogen ices in the mantle, at least for Neptune. The usual explanation for the magnetism has been that the fields are generated by local events in the icy mantles, and you see comments that they may be due to high concentrations of ammonia, which readily forms charged species. Such charges would produce magnetic fields due to the rapid rotation of the planets. This new ice is an additional possibility, and it is not beyond the realms of possibility that it contributes to the fields of the other giants as well.

Jupiter is found from our spectroscopic analyses to be rather deficient in oxygen, and this is explained as being due to the water condensing out as ice. The fact that these ices form at such high temperatures is a good reason to believe there may be such layers of ice. This superionic ice is stable as a solid at 3,000 K, and that upper figure simply represents the highest temperature the equipment could stand. (Since water reacts with carbon, I am surprised it got that high.) So if there were a layer of such ice around Jupiter’s core, it too might contribute to the magnetism. Whatever else Jupiter lacks down there, pressure is not one of them.