The Fermi Paradox: Where are the Aliens?

This question, as much as anything, illustrates why people have trouble thinking through problems when they cannot put their own self-importance to one side. Let us look at this problem not from our point of view.

The Fermi paradox is a statement that since there are so many stars, most of which probably have planets, and a reasonable number of them have life, more than half of those are likely to have been around longer than us and so should be more technically advanced, but we have seen no clue as to their presence. Why not? That question invites the obvious counter: why should we? First, while the number of planets is huge, most of them are in other galaxies, and of those in the Milky Way, the stars are very well separated. The nearest, Alpha Centauri, is a three-star system: two rather close stars (a G-type star like our sun and a K1 star) and a more distant red dwarf, and these are 4.37 light years away. The separation of the two larger stars varies between 11.2 AU and 35.6 AU, i.e. at closest approach they are a little further apart than Saturn and the sun. That close approach means that planets corresponding to our giants could not exist in stable orbits, and astronomers are fairly confident there are no giants closer in to either star. Proxima Centauri has one planet in the habitable zone, but those familiar with my ebook “Planetary Formation and Biogenesis” will know that, in my opinion, the prospect for life originating there, or around most red dwarfs, is extremely low. So, could there be Earth-like planets around the two larger stars? Maybe, but our technology cannot find them. As it happens, if there were aliens there, they could not detect Earth with technology at our level either. Since most stars are immensely further away, rocky planets are difficult to discover. We have found exoplanets, but they are generally giants, planets around M stars, or planets that happen to have their orbital planes aligned so we can see eclipses.

This is relevant, because if we are seeking a signal from another civilization, as SETI does, then the signal is either deliberate or accidental. An example of accidental is the electromagnetic radiation we send into space through radio and TV signals. According to tvtechnology.com, “An average large transmitter transmits about 8kW per multiplex.” That will give “acceptable signal strength” over, say, 50 km. The signal strength attenuates with the square of the distance, so while the signals will reach Alpha Centauri, they will be extremely weak, and because of bandwidth issues, broadcasts from well-separated transmitters will interfere with each other. Weak signals can be amplified, but aliens at Alpha Centauri would receive only extremely faint noise that might, at best, be assignable to technology.
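To put a rough number on that attenuation, here is a minimal sketch (my own illustration, not from the original post) that treats the quoted 8 kW transmitter as an isotropic radiator and applies the inverse-square law:

import math

P_tx = 8e3                      # transmitter power in watts (the ~8 kW figure quoted above)
ly = 9.4607e15                  # metres in one light year
for label, d in [("50 km (acceptable local reception)", 50e3),
                 ("Alpha Centauri (4.37 light years)", 4.37 * ly)]:
    flux = P_tx / (4 * math.pi * d**2)   # inverse-square law, W per square metre
    print(f"{label}: {flux:.2e} W/m^2")

Even before allowing for bandwidth spreading and background noise, the flux at Alpha Centauri comes out some twenty-four orders of magnitude below what counts as acceptable local reception.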

Suppose you want to send a deliberate signal? Now you want to boost the power, and the easiest way to get around the inverse-square attenuation is to focus the signal. Now, however, you need to know exactly where the intended recipient will be. You might do this for one of your space ships, in which case you would send a slightly broader beam at very high power, at an agreed frequency, but as a short burst. To detect this accidentally, given the huge range of frequencies there is to monitor, you would have to happen to be on that frequency at the time of the burst. There is some chance of SETI detecting such a signal if the space ship were heading to Earth, but then why listen for such a signal, as opposed to waiting for the ship?

The next possible deliberate signal would be one aimed at us. To do that, they would need to know we had potential, but let us suppose they did. Suppose it takes something like 4.5 billion years to get technological life, and at that nice round number they peppered Earth with signals. Oops! We are still in the Cretaceous. Such a move would require a huge power output so as to flood whatever we were using, a guess as to what frequencies we would find of interest, and big costs. Why would they do that, when it may take hundreds or thousands of years for a response? It makes little sense for any “person” to go to all that trouble knowing they could never find out whether it worked. We take the cheap option of listening with telescopes, but if everyone is listening, nobody is sending.

How do they choose a planet? My “Planetary Formation and Biogenesis” concludes you need a rocky planet with major felsic deposits, which is most probable around G-type stars (but still applies to much less than 50% of them). So you would need some composition data, and in principle you can get that from spectroscopy (but with much better technology than we have). What could you possibly see? Oxygen is obvious, except it gives poor signals. In the infrared, you might detect ozone, and that would be definitive. You often see statements that methane should be detectable. Yes, but Titan has methane and no life. Very low levels of carbon dioxide are a strong indication, as they suggest large amounts of water to fix it, and plate tectonics to renew it. Obviously, signals from chlorophyll would be proof, but they are not exactly strong. So unless they are at one of the very closest stars they would not know whether we are here, so why incur the expense? The government accountants would never fund such a project with such a low probability of getting a return on investment. Finally, suppose you decided a planet might have technology: why would you send a signal? As Hawking remarked, an alien species might decide this would be a good planet on which to eradicate all life and transform it into something suitable for themselves to settle. You say that is unlikely, but with all those planets, it only needs one such race. So simple game theory suggests “Don’t do it!” If we assume they are more intelligent than us, they won’t transmit, because there is no benefit for those transmitting.

Energy from the Sea. A Difficult Environmental Choice.

If you have many problems and you are forced to do something, it makes sense to choose any option that solves more than one problem. So now, thanks to a certain virus, changes to our economic system will be forced on us, so why not do something about carbon emissions at the same time? The enthusiast will tell us science offers us a number of options, so let’s get on with it. The enthusiast trots out what supports his view, but what about what he does not say? Look at the following.

An assessment from the US Energy Information Administration states the world will use 21,000 TWh of electricity in 2020. According to the International Energy Agency, the waves in the world’s oceans carry about 80,000 TWh. Of course much of that is, well, out at sea, but they estimate about 4,000 TWh could be harvested. While that is less than 20% of what is needed, it is still a huge amount. They are a little coy on how this could be done, though. Wave power depends on the wave height (related to the amplitude of the wave) and on how fast the waves are moving (the phase velocity). One point in its favour is that waves usually move towards the coast, and there are many parts of the world where waves of reasonable amplitude are usually present, so an energy source is there.
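For a sense of scale, the standard deep-water estimate of the wave energy flux per metre of wave crest is roughly ρg²H²T/64π. The sketch below is my own illustration; the wave height and period are assumed typical values, not figures from the post:

import math

rho, g = 1025.0, 9.81        # seawater density (kg/m^3) and gravity (m/s^2)
Hs, Te = 2.0, 8.0            # assumed significant wave height (m) and energy period (s)
P = rho * g**2 * Hs**2 * Te / (64 * math.pi)   # deep-water energy flux, W per metre of crest
print(f"~{P/1e3:.0f} kW per metre of wave crest")

At roughly 16 kW per metre for that swell, a gigawatt-scale wave farm would need tens of kilometres of absorbing structure even at perfect efficiency, which is the dispersed-energy problem raised later in this piece.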

Ocean currents also have power, and the oceans are really one giant heat engine. One estimate claimed that 0.1% of the power of the Gulf Stream running along the East Coast of the US would be equivalent to 150 nuclear power stations. Yes, but the obvious problem is the cross-sectional area of the Gulf Stream. Enormous amounts of energy may be present, but the water is moving fairly slowly, so a huge area has to be tapped to get that energy.
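A back-of-the-envelope sketch shows why. The kinetic power flowing through a swept area A is ½ρAv³, so at typical current speeds the power density is only a couple of kW per square metre. The current speed and the 150-station target below are my own illustrative assumptions, not figures from the post:

rho = 1025.0          # seawater density, kg/m^3
v = 1.5               # assumed current speed, m/s
target = 150e9        # ~150 large power stations at ~1 GW each, in watts
power_density = 0.5 * rho * v**3          # kinetic power per m^2 of swept area
area_needed = target / power_density      # m^2, ignoring turbine efficiency limits
print(f"power density: {power_density:.0f} W/m^2")
print(f"swept area for 150 GW: {area_needed/1e6:.0f} km^2")

Getting on for a hundred square kilometres of submerged rotor area gives some idea of the engineering scale involved.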

It is simpler to extract energy from tides, if you can find appropriate places. If a partial dam can be put across a narrow river mouth that has broad low-lying ground behind it, quite significant flows can be generated for most of the day. Further, unlike solar and wind power, tides are very predictable. Tides vary in amplitude, with the record apparently held by the Bay of Fundy in Canada: about 15 meters in height.
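The idealized yield of such a barrage follows from the potential energy of the trapped water, roughly ½ρgAh² per tidal cycle. Here is a minimal sketch with an assumed basin area and tidal range (illustrative numbers of my own, not from the post):

rho, g = 1025.0, 9.81
A = 10e6              # assumed basin area, m^2 (10 square kilometres)
h = 8.0               # assumed tidal range, m
E = 0.5 * rho * g * A * h**2            # joules per tidal cycle, ideal and lossless
P_avg = E / (12.42 * 3600)              # average power over one ~12.4 hour tidal cycle
print(f"~{E/1e12:.1f} TJ per tide, ~{P_avg/1e6:.0f} MW average before losses")

That is the same order of magnitude as the output of La Rance, mentioned below, which suggests the estimate is in the right ballpark.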

So why don’t we use these forms of energy? Waves and tides are guaranteed renewable and we do not have to do anything to generate them. A surprising fraction of the population lives close to the sea, so transmission distances, and hence costs, would be modest. Similarly, tidal power works well even at low water speeds because water is much denser than air, and the equipment lasts longer. La Rance, in France, has been operational since 1966. These schemes also do not take up valuable agricultural land. On the other hand, they disturb sea life. A number of fish appear to use the Earth’s magnetic field to navigate, and nobody knows whether EMF emissions have an effect on marine life. Turbine blades most certainly will. They also tend to be needed near cities, which means they disturb fishing boats and commercial ships.

There are basically two problems. One is engineering. The sea is not a very forgiving place, and when storms come, the water has serious power. The history of wave power is littered with washed-up structures, smashed to pieces in storms. Apparently an underwater turbine was put in the Bay of Fundy, but it lasted less than a month. There is a further technical problem: how to make the electricity? The usual way is to move wire through a magnetic field, which is the basis of a generator or dynamo. The issue here is that salt water must be kept completely out, which is less than easy. Since waves go up and down, an alternative is to have some sort of float that mechanically transmits the energy to a generator on shore. That can be made to work on a small scale, but it is less practical on a larger scale.

The second problem is financial. Since history is littered with failed attempts, investors get wary, and perhaps rightly so. There may be huge energies present, but they are dispersed over huge areas, which means power densities are low, and the economics usually become unattractive. Further, while the environmentalists plead for something like this, inevitably it will be, “Somewhere else, please. Not in my line of sight.” So my guess is this is not a practical solution now, or anytime in the foreseeable future, other than for small specialized efforts.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was this robot that could turn into a liquid then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal that, with sufficient force, can be crushed or variously deformed, then brought back to its original shape simply by warming it.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, a temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when a force is applied that would crush or otherwise deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures such as a honeycomb, a spider’s-web-like structure and a hand; these were all crushed, and when warmed they sprang back to their original form. At first sight this might seem to be designed to put panel beaters out of business. You have a minor prang but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but these are single use: a spacecraft with such landing cushions can be used once, whereas landing cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact that there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

A Spanner in the Cosmological Works

One of the basic assumptions in Einstein’s relativity is that the laws of physics are constant throughout the Universe. One of those laws is gravity, and an odd thing about gravity is that matter always attracts other matter, so why doesn’t everything simply collapse and fall into one gigantic mass? Einstein “explained” that with a “cosmological constant”, which was really an ad hoc term put there to stop that happening. Then in 1927 Georges Lemaître, a Belgian priest, proposed that the Universe started off as a tiny, incredibly condensed state that expanded, and is still expanding – the so-called “Big Bang”. Hubble then found that on a sufficiently large scale, everything is moving away from everything else, and it was possible to extrapolate back in time to see when it all started. This was not universally agreed until the cosmic microwave background, which is required by this theory, was detected, and detected more or less in the required form. All was well, until eventually three “problems” were perceived to arise: the “Horizon problem”, the “Flatness problem”, and the “Magnetic monopole problem”.

The Horizon problem is that on a sufficiently large scale, everything looks the same. The problem is that regions are now separated by such great distances, and are moving apart so fast, that there can have been no contact between them, so how did they come into thermal equilibrium? I must confess I do not understand this. If the initial mass is a ball of uniform but incredibly dense energy, then if it is uniform, and if the expansion is uniform, everything that happens follows common rate equations, so to get the large-scale uniformity all you need is the uniform expansion of the energy and a common clock. If particle A wants to decay, surely it does not have to get permission from the other side of the Universe. The flatness problem is that the Universe seems to behave as if it followed Euclidean geometry. In the cosmological models, this requires a certain specific particle density. The problem is, out of all the possible densities, why is it more or less exactly right? Is this reasoning circular, bearing in mind the models were constructed to give what we see? The cosmic microwave background is a strong indication that Euclidean geometry is correct, but maybe there are other models that might give this result with fewer difficulties. Finally, the magnetic monopole problem is that we cannot find magnetic monopoles. In this context, so far all electromagnetism is in accord with Maxwell’s electromagnetic theory, and its equations exclude magnetic monopoles. Maybe we can’t find them because the enthusiasts who argue they should be there are wrong.

Anyway, around 1980, Alan Guth introduced a theory called inflation that would “solve” these problems. In this, within 10^-36 seconds after the big bang (that is, 1 second of time divided by 10 multiplied by itself 36 times) the Universe made a crazy expansion from almost no volume to something approaching what we see now by 10^-32 seconds after the big bang, then everything slowed down and we get what we have now – a tolerably slowly expanding Universe, but with quantum fluctuations that led to the galaxies, etc., that we see today. This theory “does away” with these problems. Mind you, not everyone agrees. The mathematician Roger Penrose has pointed out that this inflation requires extremely specific initial conditions, so not only has this moved the problem, it has made it worse. Further, getting a flat universe out of this mathematics is extremely improbable. Oops.

So, to the spanner. Scientists from UNSW Sydney reported that measurements on light from a quasar 13 billion light years away found that the fine structure constant was, er, not exactly constant. The fine structure constant α is

α = e²/(2ε₀ch)

The terms are: e, the elementary electric charge; ε₀, the permittivity of free space; c, the speed of light; and h, Planck’s constant, or the quantum of action. If you don’t understand the details, don’t worry. The key point is that α is a dimensionless number (its reciprocal is a shade over 137) built from the most important constants in electromagnetic theory. If that is not constant, it means all of fundamental physics is not constant. Not only that, but in one direction the strength of the electric force appeared to increase, while in the opposite direction it decreased. Not only that, but a team in the US made observations of X-rays from distant galaxies and found directionality as well, and, even more interestingly, their directional axis was essentially the same as that of the Australian findings. That appears to mean the Universe is dipolar, which means the basic assumption underpinning relativity is not exactly correct, while all those mathematical gymnastics to explain some difficult “problems” such as the horizon problem are irrelevant, because they explain how something occurred that actually didn’t. Given that enthusiasts do not give up easily, I expect soon there will be a deluge of papers explaining why it had to be dipolar.
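As a quick sanity check of the expression above, the value can be computed directly from the tabulated constants. A minimal Python sketch of my own, using the scipy.constants module:

from scipy.constants import e, epsilon_0, c, h, fine_structure

alpha = e**2 / (2 * epsilon_0 * c * h)     # the expression quoted above
print(f"alpha = {alpha:.9f}, 1/alpha = {1/alpha:.3f}")   # 1/alpha is a shade over 137
print(f"tabulated value: {fine_structure:.9f}")          # agrees to the digits shown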

Molten Salt Nuclear Reactors

In the previous post, I outlined two reasons why nuclear power is overlooked, if not shunned, despite the fact it will clearly reduce greenhouse gas emissions. I discussed wastes as a problem, and while they are a problem, as I tried to show they are in principle reasonably easily dealt with. There is a need for more work and there are difficulties, but there is no reason this problem cannot be overcome. The other reason is the danger of the Chernobyl/Fukushima type explosion. In the case of Chernobyl, it needed a frightening number of totally stupid decisions to be made, and you might expect that since it was a training exercise there would be people there who knew what they were doing to supervise. But no, and worse, the operating instructions were unintelligible, having been amended with strike-outs and hand-written “corrections” that nobody could understand. You might have thought the supervisor would check to see everything was available and correct before starting, but as I noted, there has never been a shortage of stupidity.

The nuclear reaction, which generates the heat, is initiated by a fissile nucleus absorbing a neutron and splitting, and it keeps going because the splitting provides more neutrons. These neutrons either split further fissile nuclei, such as 235U, or they get absorbed by something else, such as 238U, which converts that nucleus to something else, in this case eventually 239Pu. The splitting of nuclei produces the heat, and to run at constant temperature it is necessary to have a means of removing that amount of heat continuously. The rate of neutron absorption is determined by the “concentration” of fissile material and the number of neutrons absorbed by something else, such as water, graphite and a number of other materials. The disaster happens when the reaction goes too quickly and more heat is generated than the cooling medium can remove. The metal melts and drips to the bottom of the reactor, where it flows together to form a large blob that is out of the cooling circuit. As the amount builds up it gets hotter and hotter, and we have a meltdown.
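To get a feel for what “removing that amount of heat continuously” means, a simple heat balance gives the required coolant flow as roughly P/(cp·ΔT). The sketch below is my own illustration; the power level, coolant, and temperature rise are assumed typical-looking values rather than figures from the post:

P_thermal = 3e9        # assumed reactor thermal power, W (~3 GW, typical of a large plant)
cp = 4200.0            # heat capacity of water coolant, J/(kg K)
delta_T = 30.0         # assumed coolant temperature rise across the core, K
m_dot = P_thermal / (cp * delta_T)     # required coolant mass flow, kg/s
print(f"required coolant flow ~ {m_dot:.0f} kg/s")   # tens of tonnes of water every second

Lose that flow for even a short time and the heat has nowhere to go, which is the meltdown scenario just described.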

The idea of the molten salt reactor is that there are no metal fuel rods. The fissile material can be put in as a salt in solution, so the concentration automatically determines the operating temperature. The reactor can be moderated with graphite, beryllium oxide, or a number of other materials, or it can be run unmoderated. Temperatures can get up to 1400 degrees C, which, from basic thermodynamics, gives exceptional power efficiency, and finally, the reactors can be relatively small. The initial design was apparently for aircraft propulsion, and you guessed it: bombers. The salts are usually fluorides because low-valence fluorides boil at very high temperatures, they are poor neutron absorbers, and their chemical bonds are exceptionally strong, which limits corrosion and makes them chemically very inert. In one sense they are extremely safe, although since beryllium fluoride is often used, its extreme toxicity requires careful handling. But the main advantage of this sort of reactor, besides avoiding the meltdown, is that it burns actinides, so if it makes plutonium, that is added to the fuel. More energy! It also burns some of the fission wastes, and such burning of wastes also releases energy. It can be powered by thorium (with some uranium to get the starting neutrons), which does not make anything suitable for making bombs. Further, the fission products in the thorium cycle have far shorter half-lives. Research on this started in the 1960s and essentially stopped. Guess why! There are other fourth-generation reactors being designed, and some nuclear engineers may well disagree with my preference, but it is imperative, in my opinion, that we adopt some. We badly need some means of generating large amounts of electricity without burning fossil fuels. Whatever we decide to do, while the physics is well understood, the engineering may not be, and this must be solved if we are to avoid planet-wide overheating. The politicians have to ensure this job gets done.
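The efficiency point follows directly from the Carnot limit, 1 − Tc/Th. A minimal sketch of my own; the ~300 degree comparison figure for a conventional water-cooled reactor is an assumed typical value, not from the post:

def carnot_limit(T_hot_C, T_cold_C=25.0):
    # Carnot efficiency limit, 1 - Tc/Th, with temperatures converted to kelvin
    return 1.0 - (T_cold_C + 273.15) / (T_hot_C + 273.15)

print(f"molten salt at 1400 C: limit ~ {carnot_limit(1400):.0%}")   # roughly 82%
print(f"water-cooled at ~300 C: limit ~ {carnot_limit(300):.0%}")   # roughly 48%

Real plants achieve only a fraction of these limits, but the gap between the two illustrates why the high operating temperature matters.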

A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start, there are interpretive issues to sort out. The first issue is whether there is a physical wave or not. While there is no definitive answer to this, I argue there is, because something has to go through both slits in the two-slit experiment, and the evidence is that the particle always goes through only one slit. That means something else should be there.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means a value includes the square root of minus one, and such values are sometimes called imaginary numbers for obvious reasons.) However, Euler, the mathematician who did much to develop complex numbers, showed that if the phase evolves, as waves always do by definition of an oscillation and as action does with time, there are two points per cycle at which the wave is real, and these are at the antinodes of the wave. That means the amplitude of the quantal matter wave should have real values there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm’s quantum potential, except that, unlike Bohm’s, it has a definite value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, which is a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory would say that A^2 = 1 per particle, that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle’s motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular the chemical bond. The usual way of calculating this involves calculating the probabilities of where all the electrons will be, then further calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, etc., and throughout this procedure, getting the positional probabilities requires double integrations because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to being between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero, otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added to the antinode. For a hydrogen molecule, the electrons are indistinguishable, so the energy of the two new interactions is equivalent to the energy of the two equivalent ones from the linear addition; therefore the bond energy of H2 is 1/3 of the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and introducing electric field renormalisation to comply with Maxwell’s equations. Needless to say, this did not generate much excitement in the conference audience. You cannot dazzle with obscure mathematics doing this.

The first shock for the audience came when I introduced my Certainty Principle, which, roughly stated, is that the action of a stationary state must be quantized and, further, because of the force relationship I introduced above, the action must be that of the sum of the participating atoms. Further, the periodic time of the initial electron interactions must be constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extras as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron–electron repulsion compensates. This action relationship is also the first time a real cause has been offered for there being a bond radius that is constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect. The waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.

The next piece of “shock” comes with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation. However, if you do that, you find that the calculated values are too small. By caesium, the calculated energy is out by an order of magnitude. The standard answer: the electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be, because the energy is determined at the major antinode. (If you accept Maxwell’s equations, I argue it cannot be either, but that is too complicated to be put here.) So my answer is that the wave is actually a sum of component waves that have different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers, and I produced tables that show the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory argues that heavy elements like gold have unusual properties because these “penetrations” lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that the properties of elements like francium, gold, thallium, and lead fitted my functions as well as or better than those of any of the other elements.

What I did not tell them was that the tables of data I showed them had been published in a peer-reviewed journal over thirty years before, but nobody took any notice. As someone said to me, when I mentioned this as an aside, “If it isn’t published in the Physical Review or J. Chem. Phys., nobody reads it.” So much for publishing in academic journals. As to why I skipped those two, I published that work while I was starting my company, and $US 1,000 per page did not strike me as a good investment.

So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid-January, as this is the summer holiday season here.

What does Quantum Mechanics Mean?

Patrice Ayme gave a long comment on my previous post that effectively asked me to explain in some detail the significance of some of my comments on my conference talk involving quantum mechanics. But before that, I should explain why there is even a problem, and I apologise if the following potted history seems a little turgid. Unfortunately, the background situation is important.

First, we are familiar with classical mechanics, where, given all necessary conditions, exact values of the position and momentum of something can be calculated for any future time, and thanks to Newton and Leibniz, we do this through differential equations involving familiar concepts such as force, time, position, etc. Thus suppose we shot an arrow into the air, ignored friction, and wanted to know where it was, and when. Velocity is the differential of position with respect to time, so we take the velocity and integrate it. However, to get an answer, because there are two degrees of freedom (assuming we know which direction it was shot), we get two constants from the integrations. In classical mechanics these are easily assigned: the horizontal constant depends on where it was fired from, and the other constant comes from the angle of elevation.
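As a concrete illustration of how those constants fix everything, here is a minimal sketch of the arrow example (the launch numbers are my own illustrative values; the launch point and the launch speed and elevation play the role of the assigned constants):

import math

g = 9.81                                      # gravitational acceleration, m/s^2
x0, y0 = 0.0, 0.0                             # constants fixed by where it was fired from
speed, elevation = 50.0, math.radians(40.0)   # launch conditions fix the remaining constants
vx, vy = speed * math.cos(elevation), speed * math.sin(elevation)

def position(t):
    # integrating the constant accelerations gives these expressions, constants included
    return x0 + vx * t, y0 + vy * t - 0.5 * g * t**2

for t in (0.0, 2.0, 4.0, 6.0):
    x, y = position(t)
    print(f"t = {t:.0f} s: x = {x:6.1f} m, y = {y:6.1f} m")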

Classical mechanics reached a mathematical peak through Lagrange and Hamilton. Lagrange introduced a term that is usually the difference between the kinetic and potential energy, and thus converted the problem from one of forces to one of energy. Hamilton and Jacobi converted the problem to one involving action, which is the time integral of the Lagrangian. The significance of this is that in one sense action summarises all that is involved in our particle going from A to B. All of these variations are equivalent, and merely reflect alternative ways of going about the problem; however, the Hamilton-Jacobi equation is of special significance because it can be transformed into a wave expression. When Hamilton did this, there were undoubtedly a lot of yawns. Only an abstract mathematician would want to represent a cannonball as a wave.

So what is a wave? While energy can be transmitted by particles moving (like a cannon ball) waves transmit energy without moving matter, apart from a small local oscillation. Thus if you place a cork on the sea far from land, the cork basically goes around in a circle, but on average stays in the same place. If there is an ocean current, that will be superimposed on the circular motion without affecting it. The wave has two terms required to describe it: an amplitude (how big is the oscillation?) and a phase (where on the circle is it?).

Then at the end of the 19th century, suddenly classical mechanics gave wrong answers for what was occurring at the atomic level. As a hot body cools, it should give radiation from all possible oscillators, and it does not. To explain this, Planck assumed radiation was given off in discrete packets, and introduced the quantum of action h. Einstein, recognizing that the Principle of Microscopic Reversibility should apply, argued that light should be absorbed in discrete packets as well, which solved the problem of the photoelectric effect. A big problem arose with atoms, which have positively charged nuclei with electrons moving around them. To move, the electrons must accelerate, and hence should radiate energy and spiral into the nucleus. They don’t. Bohr “solved” this problem with the ad hoc assumption that angular momentum was quantised; nevertheless his circular orbits (like planetary orbits) are wrong. For example, if they occurred, hydrogen would be a powerful magnet, and it isn’t. Oops. Undeterred, Sommerfeld recognised that angular momentum is dimensionally equivalent to action, and he explained the theory in terms of action integrals. So near, but so far.

The next step involved the French physicist de Broglie. With a little algebra and a bit more inspiration, he represented the motion in terms of momentum and a wavelength, linked by the quantum of action. At this point, it was noted that if you fired very few electrons through two slits an appropriate distance apart and let them travel to a screen, each electron was registered as a point, but if you kept going, the points started to form a diffraction pattern, the characteristic of waves. The way to resolve this was that if you take Hamilton’s wave approach, do a couple of pages of algebra and quantise the period by making the phase complex and proportional to the action divided by ħ (to be dimensionally correct, because the phase must be a pure number), you arrive at the Schrödinger equation, which is a partial differential equation, and thus fiendishly difficult to solve. About the same time, Heisenberg introduced what we call the Uncertainty Principle, which usually states that you cannot know the product of the position and the momentum to better than h/2π. Mathematicians then formulated the Schrödinger equation into what we call the state vector formalism, in part to ensure that there are no cunning tricks to get around the Uncertainty Principle.
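De Broglie's relation is simply λ = h/p. As a small numerical illustration of my own (the 100 V accelerating voltage is an assumed, typical electron-diffraction value, not a figure from the post):

import math
from scipy.constants import h, m_e, e

V = 100.0                              # assumed accelerating voltage, volts
p = math.sqrt(2 * m_e * e * V)         # non-relativistic electron momentum, kg m/s
wavelength = h / p                     # de Broglie wavelength
print(f"de Broglie wavelength: {wavelength * 1e9:.3f} nm")   # about 0.12 nm

A wavelength on the scale of atomic spacings is why electron diffraction is observable at all.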

The Schrödinger equation expresses the energy in terms of a wave function ψ. That immediately raised the question, what does ψ mean? The square of a wave amplitude usually indicates the energy transmitted by the wave. Because ψ is complex, Born interpreted ψ.ψ* as indicating the probability that you would find the particle at the nominated point. The state vector formalism then proposed that ψ.ψ* indicates the probability that a state will have probabilities of certain properties at that point. There was an immediate problem: no experiment could detect the wave. Either there is a wave or there is not. De Broglie and Bohm assumed there was, and developed what we call the pilot wave theory, but almost all physicists assume that, because you cannot detect it, there is no actual wave.

What do we know happens? First, the particle is always detected as a point, and it is the sum of the points that gives the diffraction pattern characteristic of waves. You never see half a particle. This becomes significant because you can get this diffraction pattern using molecules made from 60 carbon atoms. In the two-slit experiment, what are called weak measurements have shown that the particle always goes through only one slit, and not only that, they do so with exactly the pattern predicted by David Bohm. That triumph appears to be ignored. Another odd feature is that while momentum and energy are part of uncertainty relationships, unlike random variation in something like Brownian motion, the uncertainty never grows.

Now for the problems. The state vector formalism considers ψ to represent states. Further, because waves add linearly, the state may be a linear superposition of possibilities. If this merely meant that the probabilities represented what you do not know, then there would be no problem, but instead there is a near mystical assertion that all probabilities are present until the subject is observed, at which point the state collapses to what you see. Schrödinger could not tolerate this, not least because the derivation of his equation is incompatible with this interpretation, and he presented his famous cat paradox, in which a cat is neither dead nor alive but in some sort of quantum superposition until observed. The result was the opposite of what he expected: this ridiculous outcome was asserted to be true, and we have the peculiar logic applied that you cannot prove it is not true (because the state collapses if you try to observe the cat). Equally, you cannot prove it is true, but that does not deter the mystics.

However, there is worse. Recall I noted that when we integrate we have to assign necessary constants. When all positions are uncertain, and when we are merely dealing with probabilities in superposition, how do you do this? As John Pople stated in his Nobel lecture, for the chemical bonds of hydrocarbons, he assigned values to the constants by validating them with over two hundred reference compounds. But suppose there is something fundamentally wrong? You can always get the right answer if you have enough assignable constants.

The same logic applies to the two-slit experiment. Because the particle could go through either slit and the wave must go through both to get the diffraction pattern, when you assume there is no wave it is argued that the particle goes through both slits as a superposition of the possibilities. This is asserted even though it has clearly been demonstrated that it does not. There is another problem. The assertion that the wave function collapses on observation, and all other probabilities are lost, actually lies outside the theory. How does that actually happen? That is called the measurement problem, and as far as I am aware, nobody has an answer, although the obvious answer – that the probabilities merely reflected possibilities and the system was always in just one of them, we simply did not know which – is always rejected. Confused? You should be. Next week I shall get around to some points from my conference talk that caused stunned concern in the audience.

Galactic Collisions

As some may know, the Milky Way galaxy and the Andromeda galaxy are closing on each other and will “collide” in something like 4 – 5 billion years. If you are a good distance away, say in a Magellanic Cloud, this would look really spectacular, but what about if you were on a planet like Earth, right in the middle of it, so to speak? Probably not a lot of difference from what we see. There could be a lot more stars in the sky (and there should be if you use a good telescope) and there may be enhanced light from a dust cloud, but basically a galaxy is a lot of empty space. As an example, light takes 8 minutes and 20 seconds to get from the sun to Earth. Light from the nearest star takes a little over four years to get here. Stars are well spaced.

As we understand it, stars orbit the galactic centre. The orbital velocity of our sun is about 828,000 km/hr, a velocity that makes our rockets look like snails, but it takes something like 230,000,000 years to make an orbit, and we are only about half-way out. As I said, galaxies are rather large. So when the galaxies merge, there will be stars going in a lot of different directions until things settle down. There is a NASA simulation in which, over billions of years, the two pass through each other, throwing “stuff” out into interstellar space, then they turn around and repeat the process, except this time the centres merge, and a lot more “stuff” is thrown out into space. The meaning of “stuff” here is clusters of stars. Hundreds of millions of stars get thrown out into space, many of which turn around and come back, eventually to join the new galaxy. 
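Those two figures are roughly self-consistent: for a circular orbit the period is just the circumference divided by the speed. A quick sketch of my own (the ~26,000 light year galactocentric radius is a commonly quoted value, not taken from the post):

import math

ly_km = 9.4607e12                 # kilometres in one light year
R = 26_000 * ly_km                # assumed orbital radius of the sun, km
v = 828_000.0                     # orbital speed quoted above, km/h
period_hours = 2 * math.pi * R / v
period_years = period_hours / (24 * 365.25)
print(f"orbital period ~ {period_years / 1e6:.0f} million years")   # a bit over 200 million years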

Because of the distance between stars, the chances of stars colliding come pretty close to zero; however, it is possible that a star might pass by close enough to perturb planetary orbits. It would have to come quite close to affect Earth: for example, if it came as close as Saturn, it would only make a minor perturbation to Earth’s orbit. On the other hand, at that distance it could easily rain down a storm of comets, etc., from further out, and seriously disrupt the Kuiper Belt, which could lead to extinction-type collisions. As for the giant planets, it would depend on where they were in their orbits. If a star came that close, it could be travelling at such a speed that if Saturn were on the other side of the sun at the time, it might know little of the passage.

One interesting point is that such a galactic merger has already happened for the Milky Way. In the Milky Way, the sun and the majority of stars are all in orderly, near-circular orbits around the centre, but in the outer zones of the galaxy there is what is called a halo, in which many of the stars are orbiting in the opposite direction. A study was made of the stars in the halo directly out from the sun, where it was found that a number of the stars have strong similarities in composition, suggesting they formed in the same environment, and this was not expected. (Apparently how active star formation is alters composition slightly. These stars are roughly similar to those in the Large Magellanic Cloud.) This suggests they formed from gas clouds different from those that formed the rest of the Milky Way, and the ages of these stars run from 13 to 10 billion years. Further, it turned out that the majority of the stars in this part of the halo appeared to have come from a single source, and it was proposed that this part of the halo of our galaxy largely comprises stars from a smaller galaxy, about the size of the Large Magellanic Cloud, that collided with the Milky Way about ten billion years ago. There were no comments on other parts of the halo, presumably because parts on the other side of the galactic centre are difficult to see.

It is likely, in my opinion, that such stars are not restricted to the halo. One example might be Kapteyn’s star. This is a red dwarf about thirteen light years away and receding. It, too, is going “the wrong way”, and is about eleven billion years old. It is reputed to have two planets in the so-called habitable zone (reputed because they have not been confirmed) and is of interest in that, since the star is going the wrong way, presumably as a consequence of a galactic merger, this suggests the probability of running into another system closely enough to disrupt a planetary system is reasonably low.

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, which are objects of about asteroid size, and then mutual gravity brings these together to form planets. A small industry has sprung up in the scientific community to do computerised simulations of this sort of thing, with the output of a very large number of scientific papers, which results in a number of grants to keep the industry going, lots of conferences to attend, and a strong “academic reputation”. The mere fact that nobody knows how to arrive at that initial position appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, etc. have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually easily defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try to.

Initially, these simulations predicted solar systems similar to ours in that there were planets in circular orbits around their stars, although most simulations actually showed a different number of planets, usually more in the rocky planet zone. The outer zone has been strangely ignored, in part because simulations indicate that, because of the greater separation of planetesimals, everything is extremely slow. The Grand Tack simulations indicate that planets cannot form further than about 10 A.U. from the star. That is actually demonstrably wrong, because giants larger than Jupiter and very much further out are observed. What some simulations have argued for is that planetary formation activity is limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the Nice model, or the Grand Tack model (which is very close to being the same thing), is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets in near-circular motion around the star, because whatever happened was more or less happening equally at all angles to some fixed background. The gas was spiralling into the star, so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when a star came by too close – a rather uncommon occurrence.

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of “star-burners”: planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, and bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel out each other’s effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if it did not, why did others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer to that was that they moved in and stopped, but that raises the question, why did they stop? If we go back to the Grand Tack model, Jupiter was argued to migrate in towards Mars, and while doing so, throw a whole lot of planetesimals out; then Saturn did much the same; then for some reason Saturn turned around and began throwing planetesimals inwards, at which point Jupiter continued the act and moved out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why. The reason Saturn began throwing planetesimals in was that Uranus and Neptune started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn’s and Jupiter’s outwards movement. Note that this does depend on a particular starting point, and it is not clear to me why, if planetesimals are supposed to collide and form planets, and there was the equivalent of the masses of Jupiter and Saturn in them, they did not form a planet.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significantly elliptical orbits. If you draw a line through the major axis, the planet moves faster and comes closer to the star on one side than on the other. There is a directional preference. How did that come about? The answer appears to be simple. The circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form from collecting a huge number of planetesimals, or a large amount of gas, and these accretions occur more or less continuously as the planet orbits the star. The elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that when planets grow, if they get big enough, their gravity alters their orbits, and if they come quite close to another planet, they exchange energy and one goes outwards, usually leaving the system altogether, while the other moves towards the star, or even into the star. If it comes close enough to the star, the star’s tidal forces circularise the orbit and the planet remains close to the star, and if it is moving prograde then, like our moon, the tidal forces will push the planet out. Equally, if the orbit is highly elliptical, the planet might “flip” and become circularised with a retrograde orbit. If so, eventually it is doomed, because the tidal forces cause it to fall into the star.

All of which may seem somewhat speculative, but the more interesting point is that we have now found evidence this happens, namely evidence that the star M67 Y2235 has ingested a “superearth”. The technique goes by the name “differential stellar spectroscopy”, and it works provided you can realistically estimate what the composition should be, which can be done with reasonable confidence if the stars formed in a cluster and can reasonably be assumed to have started from the same gas. M67 is a cluster with over 1200 known members and it is close enough that reasonable details can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. This enhancement corresponds to an amount of rocky planet 5.2 times the mass of the earth in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star, such that the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.

For those who wish to go deeper, being biased I recommend my ebook “Planetary Formation and Biogenesis.” Besides showing what I think happened, it analyses over 600 scientific papers, most of which are about different aspects.

Gravitational Waves, or Not??

On February 11, 2016 LIGO reported that on September 14, 2015, they had verified the existence of gravitational waves, the “ripples in spacetime” predicted by General Relativity. In 2017, LIGO/Virgo laboratories announced the detection of a gravitational wave signal from merging neutron stars, which was verified by optical telescopes, and which led to the award of the Nobel Prize to three physicists. This was science in action and while I suspect most people had no real idea what this means, the items were big news. The detectors were then shut down for an upgrade to make them more sensitive and when they started up again it was apparently predicted that dozens of events would be observed by 2020, and with automated detection, information could be immediately relayed to optical telescopes. Lots of scientific papers were expected. So, with the program having been running for three months, or essentially half the time of the prediction, what have we found?

Er, despite a number of alerts, nothing has been confirmed by optical telescopes. This has led to some questions as to whether any gravitational waves have actually been detected, and led a group at the Niels Bohr Institute in Copenhagen to review the data so far. The detectors at LIGO comprise two “arms” at right angles to each other running four kilometers from a central building. Lasers are beamed down each arm and reflected from a mirror, and the use of wave interference effects lets the laboratory measure these distances to within (according to the LIGO website) 1/10,000 the width of a proton! Gravitational waves will change these lengths on this scale. So, of course, will local vibrations, so there are two laboratories 3,002 km apart, such that if both detect the same event, it should not be local. The first sign that something might be wrong was that, besides the desired signals, a lot of additional vibrations are present, which we shall call noise. That is expected, but what was suspicious was that there seemed to be inexplicable correlations in the noise signals. Two labs that far apart should not have the “same” noise.
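To see why such precision is needed, the length change produced by a passing wave is just the strain times the arm length. A rough order-of-magnitude sketch of my own; the strain value is an assumption roughly the size of the first reported event, not a figure from the post:

arm_length = 4_000.0          # metres, as described above
strain = 1e-21                # assumed peak strain of a strong event
proton_diameter = 1.7e-15     # metres, approximate
delta_L = strain * arm_length                 # resulting change in arm length
print(f"delta_L ~ {delta_L:.1e} m, about {delta_L / proton_diameter:.4f} proton diameters")

Even a strong event changes the arm length by only a few thousandths of a proton diameter, which is why the quoted sensitivity has to be finer still, and why separating signal from noise is so fraught.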

Then came a bit of embarrassment: it turned out that the figure published in Physical Review Letters that claimed the detection (and led to Nobel prize awards) was not actually the original data; rather, the figure was prepared for “illustrative purposes”, with details added “by eye”. Another piece of “trickery” claimed by that institute is that the data are analysed by comparison with a large database of theoretically expected signals, called templates. If so, for me there is a problem. If there is a large number of such templates, then the chance of fitting any data to one of them starts to get uncomfortably large. I recall the comment attributed to the mathematician John von Neumann: “Give me four constants and I shall map your data to an elephant. Give me five and I shall make it wave its trunk.” When they start adjusting their best-fitting template to fit the data better, I have real problems.

So apparently those at the Niels Bohr Institute made a statistical analysis of data allegedly seen by the two laboratories, and found no signal was verified by both, except the first. However, even the LIGO researchers were reported to be unhappy about that one. The problem: their signal was too perfect. In this context, when the system was set up, there was a procedure to deliver artificially produced dummy signals, just to check that the procedure following signal detection at both sites was working properly. In principle, this perfect signal could have been the accidental delivery of such an artificial signal, or even a deliberate insertion by someone. Now, I am not saying that did happen, but it is uncomfortable that we have only one signal, and it is in “perfect” agreement with theory.

A further problem lies in the fact that the collision of two neutron stars, as required by that one discovery and as a source of the gamma-ray signals detected along with the gravitational waves, is apparently unlikely in an old galaxy where star formation has long since ceased. One group of researchers claims the gamma-ray signal is more consistent with the merging of white dwarfs, and these should not produce gravitational waves of the right strength.

Suppose by the end of the year no further gravitational waves are observed. Now what? There are three possibilities: there are no gravitational waves; there are such waves, but the detectors cannot detect them for some reason; there are such waves, but they are much less common than models predict. Apparently there have been attempts to find gravitational waves for the last sixty years, and with every failure it has been argued that they are weaker than predicted. The question then is, when do we stop spending increasingly large amounts of money on seeking something that may not be there? One issue that must be addressed, not only in this matter but in any scientific exercise, is how to get rid of confirmation bias, that is, when looking for something we shall call A, and a signal is received that more or less fits the target, it is only too easy to say you have found it. In this case, when a very weak signal is received amidst a lot of noise and there is a very large number of templates to fit the data to, it is only too easy to assume that what is actually just unusually reinforced noise is the signal you seek. Modern science seems to have descended into a situation where exceptional evidence is required to persuade anyone that a standard theory might be wrong, but only quite a low standard of evidence is required to support an existing theory.