Free Will

You will see many discussions regarding free will. The question is, do we have it, or are we just part of some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands that the Universe works like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The occurrence of those future eclipses is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so does that of snooker or pool balls, even if we cannot in practice calculate all that will happen on a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. You might bring in chaos theory, but this does nothing for you; the calculations may be utterly impossible to carry out, but the system is governed solely by deterministic physics, so ultimately what happens was determined and it is just that we do not know how to calculate it. Electrodynamics and quantum theory are deterministic, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws and the Schrödinger equation is actually deterministic: if you know ψ and know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them and if necessary heat them according to some procedure, then every time I follow exactly the same procedure I shall end up with the same result.

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brains work on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nucleus has angular uncertainty, so it follows that you cannot know in which direction a particle will be emitted, because according to the laws of physics that is not determined until it is emitted. You may say, so what? That is trivial. No, the “so what” is that when you find one exception, you falsify the overall premise that everything was determined at the big bang, which means something else introduced causes. Also, the emitted neutron may now generate new causes that could not be predetermined.

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking some quantum effect, such as molecular vibration, will offer momentarily weaker and stronger spots. The one with the greatest weakness will go, but due to the Uncertainty Principle, which spot that will be is unpredictable.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain the variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the “ideal wire”. Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines which mutation will arise where; it is a random event.

Once you cannot determine everything, even in principle, it follows you must accept that not every cause is determined by previous events. Once you accept that, since we have no idea how the mind works, you cannot insist the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily anything else. I have free will, and so do you. Use it well.

Unravelling Stellar Fusion

Trying to unravel many things in modern science is painstaking, as will be seen from the following example, which makes looking for a needle in a haystack seem relatively easy. Here, the requirement for careful work and analysis can be seen, although less obvious is the need for assumptions during the calculations, and these are not always obviously correct. The example involves how our sun works. The problem is, how do we form the neutrons needed for fusion in the star’s interior?

In the main process, the immense pressures force two protons to form the incredibly unstable 2He (a helium isotope). Besides giving off a lot of heat, there are two options: a proton can absorb an electron and give off a neutrino (to conserve leptons), or a proton can give off a positron and a neutrino. The positron would react with an electron to give two gamma ray photons, which would be absorbed by the star and converted to heat. Either way, energy is conserved and we get the same result, except the neutrinos may have different energies.
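Schematically, the two routes just described can be written as follows (this is only a summary of the picture above, with its transient 2He intermediate, not a full solar reaction network):

```latex
\begin{align*}
p + p &\rightarrow {}^{2}\mathrm{He} \\
{}^{2}\mathrm{He} + e^{-} &\rightarrow {}^{2}\mathrm{H} + \nu && \text{(electron capture)} \\
{}^{2}\mathrm{He} &\rightarrow {}^{2}\mathrm{H} + e^{+} + \nu && \text{(positron route)} \\
e^{+} + e^{-} &\rightarrow 2\gamma && \text{(annihilation, absorbed as heat)}
\end{align*}
```

Either branch leaves deuterium (2H) plus a neutrino; only the neutrino energies differ.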

This hydrogen fusion starts to operate at about 4 million degrees C. Gravitational collapse of a star starts to reach this sort of temperature if the star has a mass at least 80 times that of Jupiter. These are the smaller of the red dwarfs. If the mass is approximately 16 – 20 times that of Jupiter, the star can react deuterium with protons, and this supplies the heat of brown dwarfs. In this case, the deuterium had to come from the Big Bang, and hence is somewhat limited in supply, but again it only reacts in the centre where the pressure is high enough, so the system will continue for a very long time, even if not very strongly.

If the temperatures reach about 17 million degrees C, another reaction is possible, which is called the CNO cycle. What this does is start with 12C (standard carbon, which has to come from accretion dust). It then adds a proton to make 13N, which emits a positron and a neutrino to become 13C. Then comes a sequence of proton additions to make 14N (the most stable nitrogen isotope), then 15O, which emits a positron and a neutrino to become 15N, and when 15N is struck by a proton, it spits out 4He and returns to 12C. We have gone around in a circle, BUT converted four hydrogen nuclei to one 4He nucleus, and produced about 25 MeV of energy. So there are two ways of burning hydrogen; can the sun do both? Is it hot enough at the centre? How do we tell?
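Written out, the cycle described above runs as follows (a schematic summary; the γ photons accompanying the proton captures are standard and are my addition):

```latex
\begin{align*}
{}^{12}\mathrm{C} + p &\rightarrow {}^{13}\mathrm{N} + \gamma \\
{}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu \\
{}^{13}\mathrm{C} + p &\rightarrow {}^{14}\mathrm{N} + \gamma \\
{}^{14}\mathrm{N} + p &\rightarrow {}^{15}\mathrm{O} + \gamma \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu \\
{}^{15}\mathrm{N} + p &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}
\end{align*}
```

The net effect is four protons converted to one 4He nucleus, two positrons and two neutrinos, with the 12C returned unchanged as a catalyst, and about 25 MeV released.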

Obviously we cannot see the centre of the star, but we know from the heat generated that the central temperature will be close to that needed for the second cycle. However, we can, in principle, tell by observing the neutrinos. Neutrinos from the 2He positron route can have any energy up to a little over 0.4 MeV. The electron capture neutrinos have energies up to approximately 1.1 MeV, while the neutrinos from 15O range up to about 0.3 MeV more energetic than that, and those from 13N up to about 0.3 MeV less energetic. Since these should be of similar intensity, the energy differences allow a count. The sun puts out a flux where the last three are about the same intensity, while the 2He neutrino intensity is at least 100 times higher. (The use of “at least” and similar terms is because such determinations are very error prone, and you will see in the literature some rather different values.) So all we have to do is detect the neutrinos. That is easier said than done if they can pass through a star unimpeded. The way it is done is that if a neutrino happens to hit certain substances capable of scintillation, it may give off a momentary flash of light.
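As a toy illustration of how those energy windows allow a count (using only the approximate endpoints quoted above; the real analysis fits full spectra to the data, so this is a sketch only):

```python
# Approximate maximum neutrino energies (MeV) as quoted in the text above.
# Each source produces a continuous spectrum from zero up to its endpoint.
ENDPOINTS_MEV = {
    "2He positron route": 0.42,   # "a little over 0.4 MeV"
    "13N (CNO)":          0.8,    # ~0.3 MeV below the electron capture endpoint
    "electron capture":   1.1,
    "15O (CNO)":          1.4,    # ~0.3 MeV above the electron capture endpoint
}

def possible_sources(energy_mev):
    """Which of the quoted sources could have produced a neutrino of this energy?"""
    return [name for name, endpoint in ENDPOINTS_MEV.items() if energy_mev <= endpoint]

print(possible_sources(0.2))   # all four sources overlap at low energy
print(possible_sources(1.2))   # in this simplified picture, only 15O reaches this high
```

In this simplified picture, events above about 1.1 MeV can only come from 15O, which is what makes a separate count possible.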

The first problem then is, anything hitting those substances with enough energy will do it. Cosmic rays and nuclear decay are particularly annoying. So in Italy they built a neutrino detector under 1400 meters of rock (to block cosmic rays). The detector is a sphere containing 300 t of suitable liquid and the flashes are detected by photomultiplier tubes. While there is a huge flux of neutrinos from the star, very few actually collide. The signals from spurious sources had to be eliminated, and a “neutrino spectrum” was collected for the standard process. Spurious sources included radioactivity from the rocks and the liquid. These signals are rare, but so are the CNO neutrinos. Apparently only a few counts per day were recorded. However, the Italians ran the experiment for 1000 hours, and claimed to show that the sun does use this CNO cycle, which contributes about 1% of its energy. For bigger stars, the CNO cycle becomes more important. This is quite an incredible effort, right at the very edge of detection capability. Just think of the patience required, and the care needed to be sure spurious signals were not counted.

An Example of How Science Works: Where Does Gold Come From?

Most people seem to think that science marches on inexorably, gradually uncovering more and more knowledge, going in a straight line towards “the final truth”. Actually, it is far from that; it is really a lurch from one point to another. It is true science continues to make a lot of measurements, and these fill our banks of data. Thus in organic chemistry, over the odd century or so we have collected an enormous number of melting points. These were obtained so that someone else could check whether a sample they had might be the same material, so the effort was not pointless. However, our attempts to understand what is going on have been littered with arguments, false leads, wrong turns, debates, etc. Up until the mid twentieth century, such debates were common, but now much less so. The system has coalesced into acceptance of the major paradigms, at least until awkward information comes to light that is sufficiently important that it cannot be ignored.

As an example, there is currently a debate going on relating to how elements like gold were formed. All elements heavier than helium, apart from traces of lithium, were formed in stars. The standard theory says we start with hydrogen, and in the centre of a star, where the temperatures and pressures are sufficient, two hydrogen nuclei combine to form, for a very brief instant, helium 2 (two protons). An electron is absorbed, and we get deuterium, which is a proton and a neutron combined. The formation of a neutron from a proton and an electron is difficult because it needs about 1.3 MeV of energy to force it in, which is about a third of a million times bigger than the energy of any chemical bond. The diproton is a bit easier because the doubling of the positive field provides some supplementary energy. Once we get deuterium, we can do more and eventually get to helium 4 (two protons, two neutrons), and then it stops because the energy produced holds the star up and prevents the core from compressing and heating further. The inside of the sun is in equilibrium, and in any given volume, surprisingly few fusion reactions take place. The huge amount of energy output is simply because of size. However, when the centre starts to run out of hydrogen, the star collapses further, and if it is big enough, it can start burning helium to make carbon and oxygen. Once the supply of helium becomes insufficient, if the star is large enough, a greater collapse happens, but this refuses to form an equilibrium. Nuclei fuse at a great rate and produce the enormous amount of energy in a supernova.
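For the record, a quick arithmetic check of that “third of a million” comparison (the ~4 eV figure is my assumption for a typical chemical bond; the 1.3 MeV figure is from the text):

```python
# Compare the energy needed to force an electron into a proton (per the text)
# with a typical chemical bond energy (assumed ~4 eV).
neutron_formation_ev = 1.3e6   # ~1.3 MeV
typical_bond_ev = 4.0          # roughly the strength of an ordinary chemical bond

ratio = neutron_formation_ev / typical_bond_ev
print(f"{ratio:,.0f}")         # ~325,000, i.e. about a third of a million
```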

What has happened in the scientific community is that once the initial theory was made, it was noticed that iron is at an energy minimum, so making elements heavier than iron absorbs energy; nevertheless we know there are elements like uranium, gold, etc., because we use them. So how did they form? The real short answer is, we don’t know, but scientists with computers like to form models and publish lots of papers. The obvious way was that in stars we could add a sequence of helium nuclei, or protons, or even, maybe, neutrons, but these would be rare events. However, in the aftermath of a supernova, huge amounts of energy are released, and, it is calculated, a huge flux of neutrons. That 1.3 MeV is a damp squib compared with what is available in a supernova, and so the flux of neutrons could gradually add to nuclei; when a nucleus contained too many neutrons it would decay by turning a neutron into a proton, becoming the next element up, which would then be available to absorb further neutrons. The problem, though, is there are only so many steps that can be carried out before the rapidly expanding neutron flux itself decays. At first sight, this does not produce enough of elements like gold or uranium, but since we see them, it must have.

Or must it? In 2017, we detected gravitational waves from an event that we could also observe and that had to be attributed to the collision of two neutron stars. The problem for heavy elements from supernovae is, how do you get enough time to add all the protons and neutrons, more or less one at a time? That problem does not arise for a neutron star. Once it starts ejecting material into space, there is no shortage of neutrons, and these are in huge assemblies that simply decay and explode into fragments, which could be a shower of heavy elements. While fusion reactions favour forming lighter elements, this source will favour heavier ones. According to the scientific community, problem solved.

There is a problem: where do enough colliding neutron stars come from? If the elements come from supernovae, all we need is big enough stars. However, neutron stars are a slightly different matter, because to get the elements, the stars have to collide, and space is a rather big place. Let the space density, over all time, of supernovae be x, of neutron stars y, and of stars in general z. All these are very small, but z is very much bigger than x, and x is almost certainly bigger than y. The probability of two neutron stars colliding is proportional to y squared, while the probability of a collision between a neutron star and another star would be correspondingly proportional to yz (see the schematic note below). Given that y is extremely small, and z much bigger, but still small, most neutron stars will not collide with anything in a billion years, some will collide with a different star, while very few will collide with another neutron star. There have not been enough neutron star collisions to make our gold, or so the claims go.

So which is it? I don’t know, but my feeling is that the most likely outcome is that both mechanisms will have occurred, together with possible mechanisms we have yet to consider. In this last respect, we have made elements by smashing nuclei together. Such collisions take a lot of energy and momentum, but anything we can make on Earth is fairly trivial compared with the heart of a supernova. Some supernovae are calculated to produce enormous pressure waves, and these could fuse nuclei together, which would subsequently decay because the heavy ones would be too proton rich. This is a story that is unfolding. In twenty years, it may be quite different again.
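A schematic note on that collision argument, with y and z as defined above (a scaling sketch, not a calculation):

```latex
R_{\text{NS--NS}} \propto y^{2}, \qquad
R_{\text{NS--star}} \propto y\,z, \qquad
\frac{R_{\text{NS--NS}}}{R_{\text{NS--star}}} = \frac{y}{z} \ll 1
```

In other words, neutron star–neutron star collisions are rarer than neutron star–ordinary star collisions by the same factor by which neutron stars are rarer than ordinary stars.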

The Fermi Paradox: Where are the Aliens?

This question, as much as anything, illustrates why people have trouble thinking through problems when they cannot put their own self-importance to one side. Let us look at this problem from a point of view other than our own.

The Fermi paradox is a statement that since there are so many stars, most of which probably have planets, and a reasonable number of those have life, more than half of those are likely to have been around longer than us and so should be more technically advanced, yet we have seen no clue as to their presence. Why not? That question invites the obvious counter: why should we have? First, while the number of planets is huge, most of them are in other galaxies, and of those in the Milky Way, the stars are very well separated. The nearest system, Alpha Centauri, is a three-star system: two rather close stars (a G-type star like our sun and a K1 star) and a more distant red dwarf, and these are 4.37 light years away. The separation of the two close stars varies between 35.6 AU and 11.2 AU, i.e. at closest approach they come a little further apart than Saturn and the sun. That close approach means that planets corresponding to our giants could not exist in stable orbits, and astronomers are fairly confident there are no giants closer to either star. Proxima Centauri has one planet in the habitable zone, but those familiar with my ebook “Planetary Formation and Biogenesis” will know that, in my opinion, the prospect for life originating there, or around most red dwarfs, is extremely low. So, could there be Earth-like planets around the two larger stars? Maybe, but our technology cannot find them. As it happens, if there were aliens there, they could not detect Earth with technology at our level either. Since most stars are immensely further away, rocky planets are difficult to discover. We have found exoplanets, but they are generally giants, planets around M stars, or planets that happen to have their orbital planes aligned so we can see eclipses.

This is relevant because if we are seeking a signal from another civilization, as SETI does, then the signal is either deliberate or accidental. An example of accidental is the electromagnetic radiation we send into space through radio and TV signals. According to tvtechnology.com, “An average large transmitter transmits about 8kW per multiplex.” That will give “acceptable signal strength” over, say, 50 km. The signal strength attenuates according to the square of the distance, so while the signals will get to Alpha Centauri, they will be extremely weak, and because of bandwidth issues, broadcasts from well separated transmitters will interfere with each other. Weak signals can be amplified, but aliens at Alpha Centauri would get only extremely faint noise that might be assignable to technology.
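To get a feel for “extremely weak”, here is the inverse-square arithmetic (the 50 km radius and 4.37 light year distance are from the text; the rest is just unit conversion):

```python
# How much weaker is the broadcast at Alpha Centauri than at the edge of
# "acceptable" reception ~50 km from the transmitter?
LIGHT_YEAR_M = 9.4607e15            # metres in one light year
acceptable_radius_m = 50e3          # ~50 km
alpha_centauri_m = 4.37 * LIGHT_YEAR_M

# Inverse-square law: received power density falls as 1/r^2.
dilution = (alpha_centauri_m / acceptable_radius_m) ** 2
print(f"~{dilution:.0e} times weaker")   # roughly 7e23 times weaker
```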

Suppose you want to send a deliberate signal. Now you want to boost the power, and the easiest way to get over the inverse square attenuation is to focus the signal. Now, however, you need to know exactly where the intended recipient will be. You might do this for one of your space ships, in which case you would send a slightly broader signal at a very high power level, at an agreed frequency, but as a short burst. To detect this accidentally, because you have a huge range of frequencies to monitor, you have to happen to be on that frequency at the time of the burst. There is some chance of SETI detecting such a signal if the space ship were heading to Earth, but then why listen for such a signal, as opposed to waiting for the ship?

The next possible deliberate signal would be one aimed at us. To do that, they would need to know we had potential, but let us suppose they did. Suppose it takes something like 4.5 billion years to get technological life, and at that nice round number, they peppered Earth with signals. Oops! We were still in the Cretaceous. Such a move would require a huge power output so as to flood whatever we were using, a guess as to what frequencies we would find of interest, and big costs. Why would they do that, when it may take hundreds or thousands of years for a response? It makes little sense for any “person” to go to all that trouble knowing they could never find out whether it worked. We take the cheap option of listening with telescopes, but if everyone is listening, nobody is sending.

How do they choose a planet? My “Planetary Formation and Biogenesis” concludes you need a rocky planet with major felsic deposits, which is most probable around a G-type star (but still applies to much less than 50% of them). So you would need some composition data, and in principle you can get that from spectroscopy (but with much better technology than we have). What could you possibly see? Oxygen is obvious, except it gives poor signals. In the infrared spectrum you might detect ozone, and that would be definitive. You often see statements that methane should be detectable. Yes, but Titan has methane and no life. A very low level of carbon dioxide is a strong indication, as it suggests large amounts of water to fix it, and plate tectonics to renew it. Obviously, signals from chlorophyll would be proof, but they are not exactly strong. So unless they are at the very closest stars they would not know whether we are here, so why go to that expense? The government accountants would never fund such a project with such a low probability of getting a return on investment. Finally, suppose you decided a planet might have technology, why would you send a signal? As Hawking remarked, an alien species might decide this would be a good planet on which to eradicate all life and transform it into something suitable for the aliens to settle. You may say that is unlikely, but with all those planets, it only needs one such race. So simple game theory suggests “Don’t do it!” If we assume they are more intelligent than us, they won’t transmit, because there is no benefit for those transmitting.

Energy from the Sea. A Difficult Environmental Choice.

If you have many problems and you are forced to do something, it makes sense to choose any option that solves more than one problem. So now, thanks to a certain virus, changes to our economic system will be forced on us, so why not do something about carbon emissions at the same time? The enthusiast will tell us science offers us a number of options, so let’s get on with it. The enthusiast trots out what supports his view, but what about what he does not say? Look at the following.

An assessment from the US Energy Information Administration states the world will use 21,000 TWh of electricity in 2020. According to the International Energy Agency, the waves in the world’s oceans store about 80,000 TWh. Of course much of that is, well, out at sea, but they estimate about 4,000 TWh could be harvested. While that is less than 20% of what is needed, it is still a huge amount. They are a little coy on how this could be done, though. Wave power depends on the wave height (twice the amplitude of the wave) and how fast the waves are moving (the phase velocity). One point in its favour is that waves usually move towards the coast, and there are many parts of the world where there are usually waves of reasonable amplitude, so an energy source is there.
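For reference, the standard deep-water result for an ideal sinusoidal wave (my addition, not from the agencies cited) gives the power per metre of wave crest in terms of wave height H and period T:

```latex
P = \frac{\rho g^{2} H^{2} T}{32\pi}
  \approx 1\ \mathrm{kW/m} \times \left(\frac{H}{1\ \mathrm{m}}\right)^{2} \frac{T}{1\ \mathrm{s}}
```

so a 2 m wave with an 8 s period carries very roughly 30 kW per metre of crest, which shows both why the totals are large and why a lot of coastline is needed to collect them.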

Ocean currents also have power, and the oceans are really one giant heat engine. One estimate claimed that 0.1% of the power of the Gulf Stream running along the East Coast of the US would be equivalent to 150 nuclear power stations. Yes, but the obvious problem is the cross-sectional area of the Gulf Stream. Enormous amounts of energy may be present, but the water is moving fairly slowly, so a huge area has to be tapped to get that energy.

It is simpler to extract energy from tides, if you can find appropriate places. If a partial dam can be put across a narrow river mouth that has broad low-lying ground behind it, quite significant flows can be generated for most of the day. Further, unlike solar and wind power, tides are very predictable. Tides vary in amplitude, with a record apparently going to the Bay of Fundy in Canada: 15 meters in height.

So why don’t we use these forms of energy? Waves and tides are guaranteed renewable and we do not have to do anything to generate them. A surprising fraction of the population lives close to the sea, so transmission to them would be straightforward. Similarly, tidal power works well even at low water speeds because, compared with wind, water is much denser, and the equipment lasts longer. La Rance, in France, has been operational since 1966. These installations also do not take up valuable agricultural land. On the other hand, they disturb sea life. A number of fish appear to use the Earth’s magnetic field to navigate, and nobody knows whether EMF emissions have an effect on marine life. Turbine blades most certainly will. Such installations also tend to be needed near cities, which means they disturb fishing boats and commercial ships.

There are basically two problems. One is engineering. The sea is not a very forgiving place, and when storms come, the water has serious power. The history of wave power is littered with washed-up structures, smashed to pieces in storms. Apparently an underwater turbine was put in the Bay of Fundy, but it lasted less than a month. There is a second technical problem: how to make electricity? The usual way would be to move a wire through a magnetic field, which is the standard form of a generator or dynamo. The issue here is that salt water must be kept completely out, which is less than easy. Since waves go up and down, an alternative is to have some sort of float that mechanically transmits the energy to a generator on shore. That can be made to work on a small scale, but it is less practical on a larger scale.

The second problem is financial. Since history is littered with failed attempts, investors get wary, and perhaps rightly so. There may be huge energies present, but they are dispersed over huge areas, which means power densities are low, and the economics usually become unattractive. Further, while the environmentalists plead for something like this, inevitably it will be, “Somewhere else, please. Not in my line of sight.” So, my guess is this is not a practical solution now or anytime in the foreseeable future, other than for small specialized efforts.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was a robot that could turn into a liquid, then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal which, with sufficient force, can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, a temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when a force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed, the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures, such as a honeycomb, a spider’s-web-like structure and a hand. These were all crushed, and when warmed they sprang back to life in their original form. At first sight this might seem designed to put panel beaters out of business. You have a minor prang, but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then restored. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but these are single use. A spacecraft with such landing cushions can be used once, but landing cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact that there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

A Spanner in the Cosmological Works

One of the basic assumptions in Einstein’s relativity is that the laws of physics are constant throughout the Universe. One of those laws is gravity, and an odd thing about gravity is that matter always attracts other matter, so why doesn’t everything simply collapse and fall into one gigantic mass? Einstein “explained” that with a “cosmological constant”, which was really an ad hoc term put there to stop that happening. Then in 1927 Georges Lemaître, a Belgian priest, proposed that the Universe started off as a tiny, incredibly condensed state that expanded, and is still expanding – the so-called “Big Bang”. Hubble then found that on a sufficiently large scale, everything is moving away from everything else, and it was possible to extrapolate back in time to see when it all started. This was not universally agreed until the cosmic microwave background, which is required by this theory, was detected, and detected more or less in the required form. All was well, until eventually three “problems” were perceived to arise: the “Horizon problem”, the “Flatness problem”, and the “Magnetic monopole problem”.

The Horizon problem is that on a sufficiently large scale, everything looks the same. The problem is, regions now separated by such great distances are moving away from each other, so how did they come into thermal equilibrium when there was no contact between them? I must confess I do not understand this. If the initial mass is a ball of uniform but incredibly dense energy, then if it is uniform, and if the expansion is uniform, everything that happens follows common rate equations, so to get the large-scale uniformity all you need is the uniform expansion of the energy and a common clock. If particle A wants to decay, surely it does not have to get permission from the other side of the Universe. The flatness problem is that the Universe seems to behave as if it followed Euclidean geometry. In the cosmological models, this requires a certain specific particle density. The problem is, out of all the possible densities, why is it more or less exactly right? Is this reasoning circular, bearing in mind the models were constructed to give what we see? The cosmic microwave background is a strong indication that Euclidean geometry is correct, but maybe there are other models that might give this result with fewer difficulties. Finally, the magnetic monopole problem is that we cannot find magnetic monopoles. In this context, so far all electromagnetism is in accord with Maxwell’s electromagnetic theory, and its equations exclude magnetic monopoles. Maybe we can’t find them because the enthusiasts who argue they should be there are wrong.

Anyway, around 1980, Alan Guth introduced a theory called inflation that would “solve” these problems. In this, within 10^-36 seconds after the big bang (that is, 1 second of time divided by 10 multiplied by itself 36 times) the Universe made a crazy expansion from almost no volume to something approaching what we see now by 10^-32 seconds after the big bang, then everything slowed down and we got what we have now – a tolerably slowly expanding Universe, but with quantum fluctuations that led to the galaxies, etc., that we see today. This theory “does away” with these problems. Mind you, not everyone agrees. The mathematician Roger Penrose has pointed out that this inflation requires extremely specific initial conditions, so not only has this moved the problem, it has made it worse. Further, getting a flat universe with these mathematics is extremely improbable. Oops.

So, to the spanner. Scientists from UNSW Sydney reported that measurements on light from a quasar 13 billion light years away found that the fine structure constant was, er, not exactly constant. The fine structure constant α is

α = e²/(2ε₀ch)

The terms are: e, the elementary electric charge; ε₀, the permittivity of free space; c, the speed of light; and h, Planck’s constant, or the quantum of action. If you don’t understand the details, don’t worry. The key point is that α is a pure number (its reciprocal is a shade over 137) and is a ratio of the most important constants in electromagnetic theory. If it is not constant, it means all of fundamental physics is not constant. Not only that, but in one direction the strength of the electric force appeared to increase, while in the opposite direction it appeared to decrease. Further, a team in the US made observations of X-rays from distant galaxies and found directionality as well, and, even more interesting, their directional axis was essentially the same as that of the Australian findings. That appears to mean the Universe is dipolar, which means the basic assumption underpinning relativity is not exactly correct, while all those mathematical gymnastics to explain some difficult “problems” such as the horizon problem are irrelevant, because they have explained how something occurred that actually didn’t. Given that enthusiasts do not give up easily, I expect soon there will be a deluge of papers explaining why it had to be dipolar.
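As a quick check of the formula (the numerical values are the standard SI/CODATA figures, my addition):

```python
# Compute the fine structure constant from the defining constants.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # permittivity of free space, F/m
c    = 299792458.0        # speed of light, m/s
h    = 6.62607015e-34     # Planck's constant, J s

alpha = e**2 / (2 * eps0 * c * h)
print(alpha, 1 / alpha)   # ~0.0072974, with reciprocal ~137.036
```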

Molten Salt Nuclear Reactors

In the previous post, I outlined two reasons why nuclear power is overlooked, if not shunned, despite the fact it will clearly reduce greenhouse gas emissions. I discussed wastes as a problem, and while they are a problem, as I tried to show they are in principle reasonably easily dealt with. There is a need for more work and there are difficulties, but there is no reason this problem cannot be overcome. The other reason is the danger of the Chernobyl/Fukushima type explosion. In the case of Chernobyl, it needed a frightening number of totally stupid decisions to be made, and you might expect that since it was a training exercise there would be people there who knew what they were doing to supervise. But no, and worse, the operating instructions were unintelligible, having been amended with strike-outs and hand-written “corrections” that nobody could understand. You might have thought the supervisor would check to see everything was available and correct before starting, but as I noted, there has never been a shortage of stupidity.

The nuclear reaction, which generates the heat, is initiated by a fissile nucleus absorbing a neutron and splitting, and it keeps going because the splitting provides more neutrons. These neutrons either split further fissile nuclei, such as 235U, or they get absorbed by something else, such as 238U, which converts that nucleus to something else, in this case eventually 239Pu. The splitting of nuclei produces the heat, and to run at constant temperature, it is necessary to have a means of removing that amount of heat continuously. The rate of neutron absorption is determined by the “concentration” of fissile material and the number of neutrons absorbed by something else, such as water, graphite and a number of other materials. The disaster happens when the reaction goes too quickly and more heat is generated than the cooling medium can remove. The metal melts and drips to the bottom of the reactor, where it flows together to form a large blob that is out of the cooling circuit. As the amount builds up it gets hotter and hotter, and we have a meltdown.

The idea of the molten salt reactor is that there are no metal fuel rods. The fissile material can be put in as a salt in solution, so the concentration automatically determines the operating temperature. The reactor can be moderated with graphite, beryllium oxide, or a number of other materials, or it can be run unmoderated. Temperatures can get up to 1400 degrees C, which, from basic thermodynamics, gives exceptional power efficiency, and finally, reactors can be relatively small. The initial design was apparently for aircraft propulsion, and you guessed it: bombers. The salts are usually fluorides because low-valence fluorides boil at very high temperatures, they are poor neutron absorbers, their chemical bonds are exceptionally strong, which limits corrosion, and they are exceptionally inert chemically. In one sense they are extremely safe, although since beryllium fluoride is often used, its extreme toxicity requires careful handling. But the main advantage of this sort of reactor, besides avoiding the meltdown, is that it burns actinides, so if it makes plutonium, that is added to the fuel. More energy! It also burns some of the fission wastes, and such burning of wastes also releases energy. It can be powered by thorium (with some uranium to get the starting neutrons), which does not make anything suitable for making bombs. Further, the fission products in the thorium cycle have far shorter half-lives. Research on this started in the 1960s and essentially stopped. Guess why! There are other fourth generation reactors being designed, and some nuclear engineers may well disagree with my preference, but it is imperative, in my opinion, that we adopt some of them. We badly need some means of generating large amounts of electricity without burning fossil fuels. Whatever we decide to do, while the physics is well understood, the engineering may not be, and this must be solved if we are to avoid planet-wide overheating. The politicians have to ensure this job gets done.

A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start, there are interpretive issues to sort out. The first issue is whether there is a physical wave or not. While there is no definitive answer to this, I argue there is, because something has to go through both slits in the two-slit experiment, and the evidence is that the particle always goes through only one slit. That means something else should be there.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means a value that includes the square root of minus 1, sometimes called an imaginary number for obvious reasons.) However, Euler, the mathematician who really invented complex numbers, showed that if the phase evolves, as the phase of a wave always does by the definition of an oscillation, and as action does with time, there are two points per cycle at which the wave is real, and these are at the antinodes of the wave. That means the amplitude of the quantal matter wave should have a real value there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm’s quantum potential, except there is a clear difference in that it has a definite value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, which is a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory would say that A^2 = 1 per particle, that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle’s motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular, the chemical bond. The usual way of calculating this involves calculating the probabilities of where all the electrons will be, then further calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, etc., and throughout this procedure, getting the positional probabilities requires double integrations because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to being between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero, otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added to the antinode. For a hydrogen molecule, the electrons are indistinguishable, so the energies of the two new interactions are equivalent to the energies of the two equivalent ones from the linear addition; therefore the bond energy of H2 is 1/3 of the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and by introducing electric field renormalisation to comply with Maxwell’s equations. Needless to say this did not generate much excitement in the conference audience. You cannot dazzle with obscure mathematics doing it this way.
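The 1/3-Rydberg arithmetic is easy to check (I am assuming the comparison is with the commonly quoted H2 bond energy of about 4.52 eV, i.e. 436 kJ/mol):

```python
# Compare 1/3 of the Rydberg energy with the commonly quoted H2 bond energy.
rydberg_ev = 13.6057        # Rydberg energy, eV
predicted_ev = rydberg_ev / 3
measured_ev = 4.52          # ~436 kJ/mol, the usual quoted H2 bond energy

print(f"predicted {predicted_ev:.3f} eV")                                       # ~4.535 eV
print(f"discrepancy {100 * (predicted_ev - measured_ev) / measured_ev:.2f} %")  # ~0.3 %
```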

The first shock for the audience was when I introduced my Certainty Principle, which, roughly stated, is that the action of a stationary state must be quantized, and further, because of the force relationship I introduced above, the action must be that of the sum of the participating atoms. Further, the periodic time of the initial electron interactions must be constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extra interactions as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron–electron repulsion compensates. This action relationship is also the first time a real cause has been given for there being a bond radius that is constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect. The waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.

The next piece of “shock” comes with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation. However, if you do that, you find that the calculated values are too small. By caesium, the calculated energy is out by an order of magnitude. The standard answer is that the outer electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be so, because the energy is determined at the major antinode. (If you accept Maxwell’s equations I argue it cannot be so either, but that is too complicated to set out here.) So my answer is that the wave is actually a sum of component waves that have different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers, and I produced tables that show the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory has argued that heavy elements like gold have unusual properties because these “penetrations” lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that the properties of elements like francium, gold, thallium, and lead fit my functions as well as or better than those of any of the other elements.

What I did not tell them was that the tables of data I showed them had been published in a peer reviewed journal over thirty years before, but nobody took any notice. As someone said to me, when I mentioned this as an aside, “If it isn’t published in the Physical Review or J. Chem. Phys., nobody reads it.” So much for publishing in academic journals. As to why I skipped those two, I published that work while I was starting my company, and $US 1,000 per page did not strike me as a good investment.

So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid January, as this is the summer holiday season here.

What does Quantum Mechanics Mean?

Patrice Ayme gave a long comment on my previous post that effectively asked me to explain in some detail the significance of some of my comments on my conference talk involving quantum mechanics. But before that, I should explain why there is even a problem, and I apologise if the following potted history seems a little turgid. Unfortunately, the background is important.

First, we are familiar with classical mechanics, where, given all necessary conditions, exact values of the position and momentum of something can be calculated for any future time, and thanks to Newton and Leibniz, we do this through differential equations involving familiar concepts such as force, time, position, etc. Thus suppose we shoot an arrow into the air, ignore friction, and want to know where it is at any given time. Velocity is the differential of position with respect to time, so we take the velocity and integrate it. However, to get an answer, because there are two degrees of freedom (assuming we know which direction it was shot in), we get two constants from the two integrations. In classical mechanics these are easily assigned: the horizontal constant depends on where the arrow was fired from, and the other comes from the angle of elevation.
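As a concrete sketch of where those assignable constants come from (standard projectile equations, ignoring friction; the symbols are my choice):

```latex
x(t) = x_0 + (v_0\cos\theta)\,t, \qquad
y(t) = y_0 + (v_0\sin\theta)\,t - \tfrac{1}{2}g t^{2}
```

The equations of motion alone do not fix x0, y0 (where it was fired from) or v0, θ (how hard and at what angle of elevation it was shot); those have to be supplied separately as initial conditions.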

Classical mechanics reached a mathematical peak through Lagrange and Hamilton. Lagrange introduced a term that is usually the difference between the kinetic and potential energy, and thus converted the problem from one of forces to one of energy. Hamilton and Jacobi converted the problem to one involving action, which is the time integral of the Lagrangian. The significance of this is that in one sense action summarises all that is involved in our particle going from A to B. All of these variations are equivalent, and merely reflect alternative ways of going about the problem; however, the Hamilton-Jacobi equation is of special significance because it can be transformed into a mathematical wave expression. When Hamilton did this, there were undoubtedly a lot of yawns. Only an abstract mathematician would want to represent a cannonball as a wave.
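In symbols (the standard definitions, added here for reference):

```latex
L = T - V, \qquad S = \int_{t_1}^{t_2} L\,\mathrm{d}t
```

where T is the kinetic energy, V the potential energy, L the Lagrangian and S the action.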

So what is a wave? While energy can be transmitted by particles moving (like a cannon ball) waves transmit energy without moving matter, apart from a small local oscillation. Thus if you place a cork on the sea far from land, the cork basically goes around in a circle, but on average stays in the same place. If there is an ocean current, that will be superimposed on the circular motion without affecting it. The wave has two terms required to describe it: an amplitude (how big is the oscillation?) and a phase (where on the circle is it?).

Then at the end of the 19th century, suddenly classical mechanics gave wrong answers for what was occurring at the atomic level. As a hot body cools, it should give off radiation from all possible oscillators, and it does not. To explain this, Planck assumed radiation was given off in discrete packets, and introduced the quantum of action h. Einstein, recognizing that the Principle of Microscopic Reversibility should apply, argued that light should be absorbed in discrete packets as well, which solved the problem of the photoelectric effect. A big problem arose with atoms, which have positively charged nuclei with electrons moving around them. To move, electrons must accelerate, and hence should radiate energy and spiral into the nucleus. They don’t. Bohr “solved” this problem with the ad hoc assumption that angular momentum was quantised; nevertheless his circular orbits (like planetary orbits) are wrong. For example, if they occurred, hydrogen would be a powerful magnet, and it isn’t. Oops. Undeterred, Sommerfeld recognised that angular momentum is dimensionally equivalent to action, and he explained the theory in terms of action integrals. So near, but so far.

The next step involved the French physicist de Broglie. With a little algebra and a bit more inspiration, he represented the motion in terms of momentum and a wavelength, linked by the quantum of action. At this point, it was noted that if you fired very few electrons through two slits an appropriate distance apart and let them travel to a screen, each electron was registered as a point, but if you kept going, the points started to form a diffraction pattern, the characteristic of waves. The way to solve this was that if you take Hamilton’s wave approach, do a couple of pages of algebra and quantise the period by making the phase complex and proportional to the action divided by ℏ (to be dimensionally correct, because the phase must be a pure number), you arrive at the Schrödinger equation, which is a partial differential equation, and thus is fiendishly difficult to solve. About the same time, Heisenberg introduced what we call the Uncertainty Principle, which usually states that you cannot know the product of the uncertainties in position and momentum to better than h/2π. Mathematicians then formulated the Schrödinger equation into what we call the state vector formalism, in part to ensure that there are no cunning tricks to get around the Uncertainty Principle.
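In symbols, the relations just mentioned are (standard results, added for reference; S is the action and ℏ = h/2π):

```latex
\lambda = \frac{h}{p}, \qquad
\psi \sim A\,e^{\,iS/\hbar}, \qquad
\Delta x\,\Delta p \gtrsim \frac{h}{2\pi}
```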

The Schrödinger equation expresses the energy in terms of a wave function ψ. That immediately raised the question, what does ψ mean? The square of a wave amplitude usually indicates the energy transmitted by the wave. Because ψ is complex, Born interpreted ψ.ψ* as indicating the probability that you would find the particle at the nominated point. The state vector formalism then proposed that ψ.ψ* gives the probabilities of a state having certain properties at that point. There was an immediate problem: no experiment could detect the wave. Either there is a wave or there is not. De Broglie and Bohm assumed there was, and developed what we call the pilot wave theory, but almost all physicists assume that, because you cannot detect it, there is no actual wave.

What do we know happens? First, the particle is always detected as a point, and it is the sum of the points that gives the diffraction pattern characteristic of waves. You never see half a particle. This becomes significant because you can get this diffraction pattern using molecules made from 60 carbon atoms. In the two-slit experiment, what are called weak measurements have shown that the particle always goes through only one slit, and not only that, the particles do so with exactly the pattern predicted by David Bohm. That triumph appears to be ignored. Another odd feature is that while momentum and energy are part of uncertainty relationships, unlike the random variation in something like Brownian motion, the uncertainty never grows.

Now for the problems. The state vector formalism considers ψ to represent states. Further, because waves add linearly, the state may be a linear superposition of possibilities. If this merely meant that the probabilities represented what you do not know, then there would be no problem, but instead there is a near-mystical assertion that all possibilities are present until the subject is observed, at which point the state collapses to what you see. Schrödinger could not tolerate this, not least because the derivation of his equation is incompatible with this interpretation, and he presented his famous cat paradox, in which a cat is neither dead nor alive but in some sort of quantum superposition until observed. The result was the opposite of what he expected: this ridiculous outcome was asserted to be true, and we have the peculiar logic applied that you cannot prove it is not true (because the state collapses if you try to observe the cat). Equally, you cannot prove it is true, but that does not deter the mystics. However, there is worse. Recall I noted that when we integrate we have to assign the necessary constants. When all positions are uncertain, and when we are merely dealing with probabilities in superposition, how do you do this? As John Pople stated in his Nobel lecture, for the chemical bonds of hydrocarbons he assigned values to the constants by validating them against over two hundred reference compounds. But suppose there is something fundamentally wrong? You can always get the right answer if you have enough assignable constants.

The same logic applies to the two-slit experiment. Because the particle could go through either slit and the wave must go through both to get the diffraction pattern, when you assume there is no wave it is argued that the particle goes through both slits as a superposition of the possibilities. This is asserted even though it has clearly been demonstrated that it does not. There is another problem. The assertion that the wave function collapses on observation, and all other probabilities are lost, actually lies outside the theory. How does that actually happen? That is called the measurement problem, and as far as I am aware, nobody has an answer, although the obvious answer, that the probabilities merely reflected possibilities and the system was always in just one state that we did not know, is always rejected. Confused? You should be. Next week I shall get around to some of the material from my conference talk that caused stunned concern in the audience.