How can we exist?

One of the more annoying questions in physics is: why are we here? Bear with me for a minute, as this is a real question. The Universe is supposed to have started with what Fred Hoyle called “The Big Bang”. Fred was being derisory, but the name stuck. Anyway, what happened is that a very highly intense burst of energy began expanding, and as it did, perforce the energy became less dense. As that happened, out condensed elementary particles. On an extremely small scale, that happens in high-energy collisions, such as in the Large Hadron Collider. So we are reasonably convinced we know what happened up to this point, but there is a very big fly in the ointment. When such particles condense out we get an equal amount of matter and what we call antimatter. (In principle, we should get dark matter too, but since we do not know what that is, I shall leave that.)

Antimatter is, as you might guess, the opposite of matter. The most obvious example is the positron, which is exactly the same as the electron except it has positive electric charge, so when a positron is around an electron they attract. In principle, if they were to hit each other they would release an infinite amount of energy, but nature hates the infinities that come out of our equations, so when they get close enough they annihilate each other and you get two gamma ray photons that leave in opposite directions to conserve momentum. That is more or less what happens whenever antimatter meets matter – they annihilate each other, which is why, when we make antimatter in colliders, if we want to collect it we have to do it very carefully, with magnetic traps and in a vacuum.

So now we get to the problem of why we are here: with all that antimatter made in equal proportions to matter, why do we have so much matter? As it happens, the symmetry is violated very slightly in kaon decay, but this is probably not particularly helpful because the effect is too slight. In the previous post on muon decay I mentioned that that could be a clue that there might be physics beyond the Standard Model to be unraveled. Right now, the fact that there is so much matter in the Universe should be a far stronger clue that something is wrong with the Standard Model. 

Or is it? One observation that throws that into doubt was published in Physical Review D, 103, 083016 in April this year. But before coming to that, some background. A little over ten years ago, colliding heavy ions made a small amount of antihelium-3, and a little later, antihelium-4. The antihelium has two antiprotons, and one or two antineutrons. The problem in making this is to get enough antiprotons and antineutrons close enough together. To give some idea of the trouble, a billion collisions of gold ions with energies of two hundred billion and sixty-two billion electron volts produced 18 atoms of antihelium-4, with masses of 3.73 billion electron volts. In such a collision, the energy corresponds to a temperature of over 250,000 times that of the sun’s core.
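
To get a feel for the scale of that temperature, a quick conversion with the Boltzmann constant is enough (a sketch in Python; the solar core temperature of roughly 15 million K is an assumed standard value, not taken from the paper):

    # Rough scale check (a sketch, not from the paper): what energy corresponds
    # to a temperature 250,000 times that of the sun's core?
    k_B = 8.617e-5                   # Boltzmann constant, eV per kelvin
    T_core = 1.5e7                   # assumed solar core temperature, K
    T_collision = 250_000 * T_core   # the factor quoted above
    E_eV = k_B * T_collision         # characteristic thermal energy per particle
    print(f"T ~ {T_collision:.1e} K, kT ~ {E_eV / 1e9:.2f} GeV")
    # ~3.8e12 K, i.e. kT of roughly 0.3 GeV per particle, the sort of energy
    # scale at which antiprotons and antineutrons can condense out.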

Such antihelium can be detected through the gamma ray frequencies emitted when the atoms decay on striking matter, and apparently also through the Alpha Magnetic Spectrometer on the International Space Station, which tracks cosmic rays. The important point is that antihelium-4 behaves exactly the same as an alpha particle, except that, because the antiprotons have negative charge, their trajectories bend in the opposite direction to those of ordinary nuclei. These antinuclei can be made through the energies of cosmic rays hitting something; however, it has been calculated that the amount of antihelium-3 detected so far is 50 times too great to be explained by cosmic rays, and the amount of antihelium-4 detected is 100,000 times too much.

How can this be? The simple answer is that the antihelium is being made by antistars. If you accept that, gamma ray detection indicates 5787 sources, and it has been proposed that at least fourteen of these are antistars, and if we look at the oldest stars near the centre of the galaxy, estimates suggest up to a fifth of the stars there could be antistars, possibly with antiplanets. If there were people on these, giving them a hug would be outright disastrous for each of you.

Of course, caution here is required. It is always possible that this antihelium was made in a more mundane way that as yet we do not understand. On the other hand, if there are antistars, it automatically solves a huge problem, even if it creates a bigger one: how did the matter and antimatter separate? As is often the case in science, solving one problem creates even bigger problems. However, real antistars would alter our view of the universe, and as long as the antimatter is at a good distance, we can accept them.

Much Ado About Muons

You may or may not have heard that the Standard Model, which explains “all of particle physics”, is in trouble and “new physics” may be around the corner. All of this arises from a troublesome result from the muon, a particle that is very similar to an electron except it is about 207 times more massive and has a mean lifetime of 2.2 microseconds. If you think that is not very important for your current lifestyle (and it isn’t) wait, there’s more. Like the electron it has a charge of -1, and a spin of ½, which means it acts like a small magnet. Now, if the particle is placed in a strong magnetic field, the direction of the spin wobbles (technically, precesses), and the strength of this interaction is described by a number called the g factor, which in the classical situation is g = 2. Needless to say, in the quantum world that is wrong. For the electron, it is roughly 2.002 319 304 362; the numbers here stop where uncertainty starts. If nothing else, this shows the remarkable precision achieved by experimental physicists. Why is it not 2? The basic reason is that the particle interacts with the vacuum, which is not quite “nothing”. You will see quantum electrodynamics has got this number down fairly precisely, and quantum electrodynamics, which is part of the standard model, is considered to give the most accurate theoretical calculation ever, or the greatest agreement between calculation and observation. All was well, until this wretched muon misbehaved.

Now, the standard model predicts the vacuum comprises a “quantum foam” of virtual particles popping in and out of existence, and these short-lived particles affect the g-factor, causing the muon’s wobble to speed up or slow down very slightly, which in turn leads to what is called an “anomalous magnetic moment”. The standard model should calculate these to the same agreement as with the electron, and the calculations give:

  • g-factor: 2.00233183620
  • anomalous magnetic moment: 0.00116591810

The experimental values announced by Fermilab and Brookhaven are:

  • g-factor: 2.00233184122(82)
  • anomalous magnetic moment: 0.00116592061(41)

The brackets indicate uncertainty. Notice a difference? Would you say it is striking? Apparently there is only a one in 40,000 chance that it is a statistical error. Nevertheless, they will apparently keep this experiment running at Fermilab for another two years to firm it up. That is persistence, if nothing else.
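
For anyone who wants to see where the second number in each pair comes from, the anomalous magnetic moment is simply (g − 2)/2, and the arithmetic can be checked in a couple of lines (a sketch only, using the values quoted above):

    # The anomalous magnetic moment is defined as a = (g - 2)/2; check that the
    # quoted pairs of numbers are consistent and see how big the gap really is.
    g_theory = 2.00233183620
    g_exp    = 2.00233184122          # quoted uncertainty (82) in the last digits

    a_theory = (g_theory - 2) / 2
    a_exp    = (g_exp - 2) / 2
    print(f"a (theory)     = {a_theory:.11f}")         # 0.00116591810
    print(f"a (experiment) = {a_exp:.11f}")            # 0.00116592061
    print(f"difference     = {a_exp - a_theory:.2e}")  # about 2.5e-9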

This result is what has excited a lot of physicists, because it means the calculation of how this particle interacts with the vacuum has underestimated the actual effect for the muon. That suggests more physics beyond the standard model, and in particular, a new particle may be the cause of the additional effect. Of course, there has to be a fly in the ointment. One rather fearsome calculation claims to be a lot closer to the observational value. To me the real problem is: how can the same theory come up with two different answers when there is no arithmetical mistake?

Anyway, if the second one is right, problem gone? Again, not necessarily. At the Large Hadron Collider they have looked at B meson decay. This can produce electrons and positrons, or muons and antimuons. According to the standard model, these two particles are identical other than for mass, which means the rate of production of each should be identical, but it isn’t quite. Again, it appears we are looking at small deviations. The problem then is, hypothetical particles that might explain one experiment fail for the other. Worse, the calculations are fearsome, and can take years. The standard model has 19 parameters that have to be obtained from experiment, so the errors can mount up, and if you wish to give the three neutrinos mass, in come another eight parameters. If we introduce yet another particle, in comes at least one more parameter, and probably more. Which raises the question: since adding a new assignable parameter will always answer one problem, how do we know we are even on the right track?

All of which raises the question: is the standard model, which is a part of quantum field theory, itself too complicated, and maybe not going along the right path? You might say, how could I possibly question quantum field theory, which gives such agreeable results for the electron magnetic moment, admittedly after including a series of interactions? The answer is that it also gives the world’s worst agreement with the cosmological constant. When you sum the effects of all these virtual particles over the cosmos, the expansion of the Universe is wrong by a factor of 10^120, that is, 1 followed by 120 zeros. Not exceptionally good agreement. To get the agreement it gets, something must be right, but as I see it, to get such a howling error, something must be wrong also. The problem is, what?

Dark Energy

Many people will have heard of dark energy, yet nobody knows what it is, apart from being something connected with the rate of expansion of the Universe. This is an interesting part of science. When Einstein formulated General Relativity, he found that if his equations were correct, the Universe should collapse due to gravity. It hasn’t so far, so to avoid that he introduced a term Λ, the so-called cosmological constant, which was a straight-out fudge with no basis other than avoiding the obvious problem that the universe had not collapsed and did not look like doing so. Then, when he found from observations that the Universe was actually expanding, he tore that up. In General Relativity, Λ represents the energy density of empty space.

We think the Universe’s expansion is accelerating because when we look back in time by looking at ancient galaxies, we can measure the velocity of their motion relative to us through the so-called red shift of light, and all the distant galaxies are going away from us, and seemingly faster the further away they are. We can also work out how far away they are by taking light sources of known brightness and measuring how bright they appear; the dimming gives us a measure of their distance. What two research groups found in 1998 was that the expansion of the Universe is accelerating, which won them the 2011 Nobel Prize for physics.

The next question is, how accurate are these measurements and what assumptions are inherent? The red shift can be measured accurately because the light contains spectral lines, and as long as the physical constants have remained constant, we know exactly their original frequencies, and consequently the shift when we measure the current frequencies. The brightness relies on what are called standard candles. We know of a class of supernovae called type 1a, and these are caused by one star gobbling the mass of another until it reaches the threshold to blow up. This mass is known to be fairly constant, so the energy output should be constant. Unfortunately, as often happens, the 1a supernovae are not quite as standard as you might think. They have been separated into three classes: standard 1a, dimmer 1a, and brighter 1a. We don’t know why, and there is an inherent problem that the stars of a very long time ago would have had a lower fraction of elements from previous supernovae. They get very bright, then dim with time, and we cannot be certain they always dim at the same rate. Some have different colour distributions, which makes specific luminosity difficult to measure. Accordingly, some consider the evidence is inadequate and it is possible there is no acceleration at all. There is no way for anyone outside the specialist field to resolve this. Such measurements are made at the limits of our ability, and a number of assumptions tend to be involved.
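
To make the two measurements concrete, here is a minimal sketch of the arithmetic behind them; the hydrogen-alpha wavelength, the type 1a absolute magnitude of about −19.3, and the “measured” numbers are illustrative assumptions, not values from the text:

    # Red shift: z = (observed wavelength - laboratory wavelength) / laboratory wavelength.
    lambda_lab = 656.3        # nm, hydrogen-alpha line in the laboratory (assumed value)
    lambda_obs = 721.9        # nm, a purely hypothetical measured wavelength
    z = (lambda_obs - lambda_lab) / lambda_lab
    print(f"red shift z = {z:.2f}")                  # about 0.10

    # Standard candle: brightness falls off as 1/distance^2, so a known absolute
    # magnitude M and a measured apparent magnitude m give the distance directly.
    M = -19.3                 # assumed typical absolute magnitude of a type 1a supernova
    m = 24.0                  # hypothetical apparent magnitude of a distant one
    d_parsec = 10 ** ((m - M + 5) / 5)
    print(f"distance ~ {d_parsec:.2e} parsecs")      # a few billion parsecs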

The net result of this is that if the universe is really expanding, we need a value for Λ because that will describe what is pushing everything apart. That energy of the vacuum is called dark energy, and if we consider the expansion and use relativity to compare this energy with the mass of the Universe we can see, dark energy makes up 70% of the total Universe. That is, assuming the expansion is real. If not, 70% of the Universe just disappears! So what is it, if real?

The only real theory that can explain why the vacuum has energy at all and has any independent value is quantum field theory. By independent value, I mean it explains something else. If you have one observation and you require one assumption, you effectively assume the answer. However, quantum field theory is not much help here because if you calculate Λ using it, the calculation differs from observation by a factor of 120 orders of magnitude, which means ten multiplied by itself 120 times. To put that in perspective, if you were to count all the protons, neutrons and electrons in the entire universe that we can see, you would multiply ten by itself about 83 times to express the answer. This is the most dramatic failed prediction in all theoretical physics and is so bad it tends to be put in the desk drawer and ignored or forgotten about.

So the short answer is, we haven’t got a clue what dark energy is, and to make matters worse, it is possible there is no need for it at all. But it most certainly is a great excuse for scientific speculation.

Living Near Ceres

Some will have heard of Gerard O’Neill’s book, “The High Frontier”. If not, see https://en.wikipedia.org/wiki/The_High_Frontier:_Human_Colonies_in_Space. The idea was to throw material up from the surface of the Moon to make giant cylinders that would get artificial gravity from rotation, and people could live their lives in the interior, with energy being obtained in part from solar energy. The concept was partly employed in the TV series “Babylon 5”, but the original concept was to have open farmland as well. Looks like science fiction, you say, and in fairness I have included such a proposition in a science fiction novel I am currently writing. However, I have also read a scientific paper on this topic (arXiv:2011.07487v3), which appears to have been posted on the 14th January, 2021. The concept is to build such a space settlement from material obtained from the asteroid Ceres, and to have it orbit near Ceres.

The proposal is ambitious, if nothing else. The idea is to build a number of habitats, and to ensure the habitats are not too big yet still stay together, they are tethered to a megasatellite, which in turn will grow as new settlements are built. The habitats spin in such a way as to attain a “gravity” of 1 g, and are attached to their tethers by magnetic bearings that have no physical contact between faces, and hence never wear. Travel between habitats proceeds along the tethers. Rockets would be unsustainable because the molecules they throw out to space would be lost forever.

The habitats would have a radius of 1 km, a length of 10 km, and a population of 56,700, with 2,000 square meters per person, just under 45% of which would be urban. Slightly more scary is the fact that each habitat has to rotate once every 1.06 minutes. The total mass per person would be just under 10,000 t, requiring an energy of about 1 MJ/kg to produce, or roughly 10 TJ per person.
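
Both of those figures follow from very simple arithmetic, which is worth seeing (a sketch; the 9.81 m/s² for 1 g is the only number not taken from the text):

    import math

    # Spin "gravity": the rim acceleration of a cylinder of radius r rotating with
    # period T is (2*pi/T)**2 * r, so for 1 g the period is T = 2*pi*sqrt(r/g).
    r = 1000.0                        # habitat radius, m (1 km)
    g = 9.81                          # target apparent gravity, m/s^2
    T = 2 * math.pi * math.sqrt(r / g)
    print(f"rotation period = {T:.1f} s = {T / 60:.2f} min")   # about 1.06 min

    # Energy to produce the structure: ~10,000 t per person at ~1 MJ/kg.
    mass_per_person = 1.0e7           # kg (10,000 t)
    energy_per_kg = 1.0e6             # J/kg (1 MJ/kg)
    print(f"energy per person ~ {mass_per_person * energy_per_kg:.0e} J")  # 1e13 J, i.e. ~10 TJ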

The design aims to produce an environment for the settlers that has Earth-like radiation shielding, gravity, and atmosphere. It will have day/night on a 24 hr cycle with 130 W/m^2 insolation, similar to southern Germany, and a population density of 500/km^2, similar to the Netherlands. There would be fields, parks, and forests, no adverse weather, and no natural disasters, and it is intended to be long-term sustainable. To achieve that, animals, birds and insects will be present, i.e. a proper ecosystem. Ultimately it could provide more living area than Earth. As can be seen, that is ambitious. The radiation shielding involves 7600 kg/m^2, of which 20% is water and the rest silicate regolith. The rural spaces have a 1.5 m depth of soil, which is illuminated by sunlight collected by mirrors and delivered through light guides. Ceres is 2.77 times as far from the sun as Earth, which means the sunlight is only about 13% as strong as at Earth, so nearly eight times the mirror collecting area is required for every unit area to be illuminated at equivalent intensity.
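
The mirror requirement is just the inverse square law applied to the 2.77 figure (a sketch):

    # Sunlight intensity falls off as the square of the distance from the sun.
    distance_ratio = 2.77                    # Ceres' distance in units of Earth's
    relative_flux = 1.0 / distance_ratio**2
    print(f"sunlight at Ceres ~ {relative_flux:.0%} of that at Earth")          # ~13%
    print(f"mirror area per unit illuminated area ~ {1 / relative_flux:.1f}")   # ~7.7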

The reason cited for proposing this to be at Ceres is that Ceres has nitrogen. Actually, there are other carbonaceous asteroids, and one that is at least 100 km in size could be suitable. Because Ceres’ gravity is 0.029 times that of Earth, a space elevator could be feasible to bring material cheaply from the dwarf planet, while a settlement 100,000 km from the surface would be expected to have a stable orbit.
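
The 0.029 figure is easy to reproduce from Ceres’ bulk properties (a sketch; the mass of about 9.4 × 10^20 kg and the mean radius of about 470 km are standard values assumed here, not given in the text):

    # Surface gravity g = G*M/R^2 for Ceres.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 9.4e20           # assumed mass of Ceres, kg
    R = 4.7e5            # assumed mean radius of Ceres, m
    g_ceres = G * M / R**2
    print(f"Ceres surface gravity ~ {g_ceres:.2f} m/s^2 ~ {g_ceres / 9.81:.3f} g")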

In principle, there could be any number of these habitats, all linked together. You could have more people living there than on Earth. Of course there are some issues with the calculation. Tethering the habitats and giving them sufficient strength requires about 5% of the total mass in the form of steel. Where does the iron come from? The asteroids have plenty of iron, but the form is important. How will it be refined? If it is in the form of olivine or pyroxene, then with difficulty. Vesta apparently has an iron core, but Vesta is not close, and most of the time, because it has a different orbital period, it is very far away.

But the real question is, would you want to live in such a place? How much would you pay for the privilege? The cost of all this was not estimated, but it would be enormous, so most people could not afford it. In my opinion, cost alone is sufficient that this idea will not see the light of day.

Free Will

You will see many discussions regarding free will. The question is, do we have it, or are we in some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands that the Universe works like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The presence of eclipses in the future is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so do snooker or pool balls, even if we cannot in practice calculate all that will happen on a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. You can bring in chaos theory, but this does nothing for you; the calculations may be utterly impossible to carry out, but they are governed solely by deterministic physics, so ultimately what happens was determined and it is just that we do not know how to calculate it. Electrodynamics and quantum theory are deterministic, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws, and the Schrödinger equation is actually deterministic: if you know ψ and know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them and if necessary heat them according to some procedure, then every time I follow exactly the same procedure I shall end up with the same result.

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brain works on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nuclei have angular uncertainty, so it follows that you cannot know in what direction a neutron will be emitted, because according to the laws of physics that is not determined until it is emitted. You may say, so what? That is trivial. No, the so what is that when you find one exception, you falsify the overall premise that everything was determined at the big bang. Which means something else introduced causes. Also, the emitted neutron may now generate new causes that could not be predetermined.

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking some quantum effect, such as molecular vibration, will offer momentarily weaker and stronger spots. The one with the greatest weakness will go, but thanks to the Uncertainty Principle, the given spot is unpredictable.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain a variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the “ideal wire”. Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines which mutation will arise where; it is a random event.

Once you cannot determine everything, even in principle, it follows you must accept that not every cause is determined by previous events. Once you accept that, since we have no idea how the mind works, you cannot insist the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily anything else. I have free will, and so do you. Use it well.

Unravelling Stellar Fusion

Trying to unravel many things in modern science is painstaking, as will be seen from the following example, compared with which looking for a needle in a haystack is relatively easy. Here, the requirement for careful work and analysis can be seen, although less obvious is the need for assumptions during the calculations, and these are not always obviously correct. The example involves how our sun works. The problem is, how do we form the neutrons needed for fusion in the star’s interior?

In the main process, the immense pressures force two protons to form the incredibly unstable 2He (a helium isotope). Besides giving off a lot of heat there are two options: a proton can absorb an electron and give off a neutrino (to conserve leptons), or a proton can give off a positron and a neutrino. The positron would react with an electron to give two gamma ray photons, which would be absorbed by the star and converted to heat. Either way, energy is conserved and we get the same result, except the neutrinos may have different energies.

This hydrogen burning starts to operate at about 4 million degrees C. Gravitational collapse of a star starts to reach this sort of temperature if the star has a mass at least 80 times that of Jupiter. These are the smaller of the red dwarfs. If it has a mass of approximately 16 – 20 times that of Jupiter, it can react deuterium with protons, and this supplies the heat of brown dwarfs. In this case, the deuterium had to come from the Big Bang, and hence is somewhat limited in supply, but again it only reacts in the centre where the pressure is high enough, so the system will continue for a very long time, even if not very strongly.

If the temperatures reach about 17 million degrees C, another reaction is possible, which is called the CNO cycle. What this does is start with 12C (standard carbon, which has to come from accretion dust). It then adds a proton to make 13N, which loses a positron and a neutrino to make 13C. Then comes a sequence of proton additions to make 14N (the most stable nitrogen), then 15O, which loses a positron and a neutrino to make 15N, and when this is struck by a proton, it spits out 4He and returns to 12C. We have gone around in a circle, BUT converted four hydrogen nuclei to 4He, and produced about 25 MeV of energy. So there are two ways of burning hydrogen; can the sun do both? Is it hot enough at the centre? How do we tell?
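
Before going on, the roughly 25 MeV can be checked from the mass difference between four hydrogen atoms and one helium-4 atom (a sketch; the atomic masses and the 931.5 MeV per atomic mass unit conversion are standard values assumed here):

    # Net result of one turn of the CNO cycle: four protons in, one 4He out.
    # Using atomic masses keeps the electron bookkeeping simple.
    m_H = 1.007825       # atomic mass of hydrogen-1, u (assumed standard value)
    m_He = 4.002603      # atomic mass of helium-4, u (assumed standard value)
    u_to_MeV = 931.5     # energy equivalent of one atomic mass unit, MeV

    delta_m = 4 * m_H - m_He
    print(f"mass defect = {delta_m:.6f} u -> {delta_m * u_to_MeV:.1f} MeV")
    # ~26.7 MeV in total; a couple of MeV is carried away by the neutrinos,
    # which is why about 25 MeV is quoted as heating the star.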

Obviously we cannot see the centre of the star, but we know from the heat generated that its temperature will be close to that needed for the second cycle. However, we can, in principle, tell by observing the neutrinos. Neutrinos from the 2He positron route can have any energy, but not more than a little over 0.4 MeV. The electron capture neutrinos are up to approximately 1.1 MeV, while the neutrinos from 15O are anywhere up to about 0.3 MeV more energetic than those of electron capture, and those from 13N anywhere up to 0.3 MeV less energetic. Since these should be of the same intensity, the energy difference allows a count. The sun puts out a flux where the last three are about the same intensity, while the 2He neutrino intensity is at least 100 times higher. (The use of “at least” and similar terms is because such determinations are very error prone, and you will see in the literature some relatively different values.) So all we have to do is detect the neutrinos. That is easier said than done when they can pass through a star unimpeded. The way it is done is that if a neutrino happens to hit certain substances capable of scintillation, it may give off a momentary flash of light.

The first problem then is that anything hitting those substances with enough energy will do it; cosmic rays or nuclear decay are particularly annoying. So in Italy they built a neutrino detector under 1400 meters of rock (to block cosmic rays). The detector is a sphere containing 300 t of suitable liquid, and the flashes are detected by photomultiplier tubes. While there is a huge flux of neutrinos from the star, very few actually collide. The signals from spurious sources had to be eliminated, and a “neutrino spectrum” was collected for the standard process. Spurious sources included radioactivity from the rocks and the liquid. These are rare, but so are the CNO neutrinos. Apparently only a few counts per day were recorded. However, the Italians ran the experiment for 1000 hours, and claimed to show that the sun does use the CNO cycle, which contributes about 1% of its energy. For bigger stars, the CNO cycle becomes more important. This is quite an incredible effort, right at the very edge of detection capability. Just think of the patience required, and the care needed to be sure spurious signals were not counted.

An Example of How Science Works: Where Does Gold Come From?

Most people seem to think that science marches on inexorably, gradually uncovering more and more knowledge, going in a straight line towards “the final truth”. Actually, it is far from that; it is really a lurch from one point to another. It is true science continues to make a lot of measurements, and these fill our banks of data. Thus in organic chemistry, over the odd century or so we have collected an enormous number of melting points. These were obtained so someone else could check whether something he had made could be the same material, so it was not pointless. However, our attempts to understand what is going on have been littered with arguments, false leads, wrong turns, debates, etc. Up until the mid twentieth century, such debates were common, but now much less so. The system has coalesced into acceptance of the major paradigms, until awkward information comes to light that is sufficiently important that it cannot be ignored.

As an example, there is currently a debate going on relating to how elements like gold were formed. All elements heavier than helium, apart from traces of lithium, were formed in stars. The standard theory says we start with hydrogen, and in the centre of a star, where the temperatures and pressures are sufficient, two hydrogen nuclei combine to form, for a very brief instant, helium 2 (two protons). An electron is absorbed, and we get deuterium, which is a proton and a neutron combined. The formation of a neutron from a proton and an electron is difficult because it needs about 1.3 MeV of energy to force it in, which is about a third of a million times bigger than the energy of any chemical bond. The diproton is a bit easier because the doubling of the positive field provides some supplementary energy. Once we get deuterium, we can do more and eventually get to helium 4 (two protons, two neutrons), and then it stops because the energy produced prevents the pressure from rising. The inside of the sun is in equilibrium, and in any given volume surprisingly few fusion reactions take place. The huge amount of energy is simply because of size. However, when the centre starts to run out of hydrogen, the star collapses further, and if it is big enough, it can start burning helium to make carbon and oxygen. Once the supply of helium becomes insufficient, if the star is large enough, a greater collapse happens, but this refuses to form an equilibrium. Atoms fuse at a great rate and produce the enormous amount of energy of a supernova.
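
Incidentally, the “third of a million” comparison is easy to verify (a sketch; the typical chemical bond energy of about 4 eV is an assumed value):

    # Compare the 1.3 MeV needed to make a neutron from a proton and an electron
    # with a typical chemical bond energy.
    E_needed_eV = 1.3e6       # 1.3 MeV, from the text
    bond_eV = 4.0             # assumed typical chemical bond energy, eV
    print(f"ratio ~ {E_needed_eV / bond_eV:,.0f}, i.e. roughly a third of a million")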

What has happened in the scientific community is that once the initial theory was made, it was noticed that iron is at an energy minimum, and making elements heavier than iron absorbs energy; nevertheless we know there are elements like uranium and gold because we use them. So how did they form? The real short answer is, we don’t know, but scientists with computers like to form models and publish lots of papers. The obvious way was that in stars we could add a sequence of helium nuclei, or protons, or even, maybe, neutrons, but these would be rare events. However, in the aftermath of a supernova, huge amounts of energy are released, and, it is calculated, a huge flux of neutrons. That 1.3 MeV is a bit of a squib compared with what is available in a supernova, so the flux of neutrons could gradually add to nuclei, and when a nucleus contained too many neutrons it would decay by turning a neutron into a proton, becoming the next element up, which would then be available to absorb further neutrons. The problem, though, is there are only so many steps that can be carried out before the rapidly expanding neutron flux itself decays. At first sight, this does not produce enough elements like gold or uranium, but since we see them, it must have.

Or must it? In 2017, we detected gravitational waves from an event that we could also observe and that had to be attributed to the collision of two neutron stars. The problem for heavy elements from supernovae is, how do you get enough time to add all the protons and neutrons, more or less one at a time? That problem does not arise for a neutron star. Once it starts ejecting material into space, there is no shortage of neutrons, and these are in huge assemblies that simply decay and explode into fragments, which could be a shower of heavy elements. While fusion reactions favour forming lighter elements, this source will favour heavier ones. According to the scientific community, problem solved.

There is a problem: where do all these neutron star collisions come from? If the elements come from supernovae, all we need is big enough stars. However, neutron stars are a slightly different matter because to get the elements, the stars have to collide, and space is a rather big place. Let, over all time, the space density of supernovae be x, the density of neutron stars y, and the density of stars z. All these are very small, but z is very much bigger than x, and x is almost certainly bigger than y. The probability of two neutron stars colliding is proportional to y squared, while the probability of a collision of a neutron star with another star would be correspondingly proportional to yz. Given that y is extremely small, and z much bigger but still small, most neutron stars will not collide with anything in a billion years, some will collide with a different star, while very few will collide with another neutron star. There have not been enough neutron star collisions to make our gold, or so the claims go.

So what is it? I don’t know, but my feeling is that the most likely outcome is that both mechanisms will have occurred, together with possible mechanisms we have yet to consider. In this last respect, we have made elements by smashing nuclei together. These take a lot of energy and momentum, but anything we can make on Earth is fairly trivial compared with the heart of a supernova. Some supernovae are calculated to produce enormous pressure waves, and these could fuse any nuclei together, which would subsequently decay because the heavy ones would be too proton rich. This is a story that is unfolding. In twenty years, it may be quite different again.
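
As an aside, the scaling argument about collisions is easy to put in numbers; the densities below are invented purely to illustrate the proportionality and are not measured values:

    # Collision rates scale with the product of the densities of the two populations:
    # neutron star + neutron star goes as y*y, neutron star + ordinary star as y*z.
    y = 1e-4      # hypothetical relative space density of neutron stars
    z = 1e-1      # hypothetical relative space density of ordinary stars

    rate_nn = y * y    # proportional to neutron star - neutron star collisions
    rate_ns = y * z    # proportional to neutron star - ordinary star collisions
    print(f"neutron star pairs are ~{rate_ns / rate_nn:.0f} times rarer than"
          f" neutron star - ordinary star encounters")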

The Fermi Paradox: Where are the Aliens?

This question, as much as anything, illustrates why people have trouble thinking through problems when they cannot put their own self-importance to one side. Let us look at this problem from a point of view other than our own.

The Fermi paradox is a statement that since there are so many stars, most of which probably have planets, and a reasonable number of those have life, more than half of those are likely to have been around longer than us and so should be more technically advanced, yet we have seen no clue as to their presence. Why not? That question begs the obvious counter: why should we? First, while the number of planets is huge, most of them are in other galaxies, and of those in the Milky Way, the stars are very well separated. The nearest, Alpha Centauri, is a three-star system: two rather close stars (a G-type star like our sun and a K1 star) and a more distant red dwarf, and these are 4.37 light years away. The separation of the two close stars varies between 11.2 AU and 35.6 AU, i.e. on closest approach they are still a little further apart than Saturn and the sun. That close approach means that planets corresponding to our giants could not exist in stable orbits, and astronomers are fairly confident there are no giants closer to either star. Proxima Centauri has one planet in the habitable zone, but those familiar with my ebook “Planetary Formation and Biogenesis” will know that in my opinion the prospect for life originating there, or around most red dwarfs, is extremely low. So, could there be Earth-like planets around the two larger stars? Maybe, but our technology cannot find them. As it happens, if there were aliens there, they could not detect Earth with technology at our level either. Since most stars are immensely further away, rocky planets are difficult to discover. We have found exoplanets, but they are generally giants, planets around M stars, or planets that happen to have their orbital planes aligned so we can see eclipses.

This is relevant, because if we are seeking a signal from another civilization, as Seti seeks, then either the signal is deliberate or accidental. An example of accidental is the electromagnetic radiation we send into space through radio and TV signals. According to tvtechnology.com “An average large transmitter transmits about 8kW per multiplex.” That will give “acceptable signal strength” over, say, 50 km. The signal strength attenuates according to the square of the distance, so while the signals will get to Alpha Centauri, they will be extremely weak, and because of bandwidth issues, broadcasts from well separated transmitters will interfere with each other. Weak signals can be amplified, but aliens at Alpha Centauri would get extremely faint noise that might be assignable to technology. 
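
How weak is “extremely weak”? The inverse square law gives a feel for it (a sketch; the 50 km figure comes from the text, and the length of a light year is the only outside number):

    # Received power per unit area falls off as 1/distance^2, so compare the flux
    # at the edge of "acceptable" reception with the flux at Alpha Centauri.
    light_year_m = 9.461e15                 # metres in one light year (standard value)
    d_acceptable = 50e3                     # 50 km, from the text
    d_alpha_cen = 4.37 * light_year_m       # distance to Alpha Centauri

    attenuation = (d_alpha_cen / d_acceptable) ** 2
    print(f"the signal is ~{attenuation:.1e} times weaker at Alpha Centauri")  # ~7e23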

Suppose you want to send a deliberate signal? Now you want to boost the power, and the easiest way to get over the inverse square attenuation is to focus the signal. Now, however, you need to know exactly where the intended recipient will be. You might do this for one of your space ships, in which case you would send a slightly broader signal at very high power on an agreed frequency, but as a short burst. To accidentally detect this, because you have a huge range of frequencies to monitor, you have to happen to be on that frequency at the time of the burst. There is some chance of Seti detecting such a signal if the space ship was heading to Earth, but then why listen for such a signal, as opposed to waiting for the ship?

The next possible deliberate signal would be aimed at us. To do that, they would need to know we had potential, but let us suppose they did. Suppose it takes something like 4.5 billion years to get technological life, and at that nice round number, they peppered Earth with signals. Oops! We are still in the Cretaceous. Such a move would require a huge power output so as to flood whatever we were using, a guess as to what frequencies we would find of interest, and big costs. Why would they do that, when it may take hundreds or thousands of years for a response? It makes little sense for any “person” to go to all that trouble and know they could never know whether it worked or not. We take the cheap option of listening with telescopes, but if everyone is listening, nobody is sending.

How do they choose a planet? My “Planetary Formation and Biogenesis” concludes you need a rocky planet with major felsic deposits, which is most probable around G-type stars (but still much less than 50% of them). So you would need some composition data, and in principle you can get that from spectroscopy (but with much better technology than we have). What could you possibly see? Oxygen is obvious, except it gives poor signals. In the infrared spectra you might detect ozone, and that would be definitive. You often see statements that methane should be detectable. Yes, but Titan has methane and no life. A very low level of carbon dioxide is a strong indication, as it suggests large amounts of water to fix it, and plate tectonics to renew it. Obviously, signals from chlorophyll would be proof, but they are not exactly strong. So if they are at anything but the very closest stars, they would not know whether we are here, so why go to the expense? The government accountants would never fund such a project with such a low probability of getting a return on investment. Finally, suppose you decided a planet might have technology, why would you send a signal? As Hawking remarked, an alien species might decide this would be a good planet to eradicate all life on and transform to suit themselves to settle. You say that is unlikely, but with all those planets, it only needs one such race. So simple game theory suggests “Don’t do it!” If we assume they are more intelligent than us, they won’t transmit, because there is no benefit for those transmitting.

Energy from the Sea. A Difficult Environmental Choice.

If you have many problems and you are forced to do something, it makes sense to choose any option that solves more than one problem. So now, thanks to a certain virus, changes to our economic system will be forced on us, so why not do something about carbon emissions at the same time? The enthusiast will tell us science offers us a number of options, so let’s get on with it. The enthusiast trots out what supports his view, but what about what he does not say? Look at the following.

An assessment from the US Energy Information Administration states the world will use 21,000 TWh of electricity in 2020. According to the International Energy Agency, the waves in the world’s oceans store about 80,000 TWh. Of course much of that is, well, out at sea, but they estimate about 4,000 TWh could be harvested. While that is less than 20% of what is needed, it is still a huge amount. They are a little coy on how this could be done, though. Wave power depends on wave height (the amplitude of the wave) and how fast the waves are moving (the phase velocity). One point is that waves usually move to the coast, and there are many parts of the world where there are usually waves of reasonable amplitude so an energy source is there.

Ocean currents also have power, and the oceans are really one giant heat engine. One estimate claimed that 0.1% of the power of the Gulf Stream running along the East Coast of the US would be equivalent to 150 nuclear power stations. Yes, but the obvious problem is the cross-sectional area of the Gulf Stream. Enormous amounts of energy may be present, but the water is moving fairly slowly, so a huge area has to be tapped to get that energy.

It is simpler to extract energy from tides, if you can find appropriate places. If a partial dam can be put across a narrow river mouth that has broad low-lying ground behind it, quite significant flows can be generated for most of the day. Further, unlike solar and wind power, tides are very predictable. Tides vary in amplitude, with a record apparently going to the Bay of Fundy in Canada: 15 meters in height.

So why don’t we use these forms of energy? Waves and tides are guaranteed renewable and we do not have to do anything to generate them. A surprising fraction of the population lives close to the sea, so transmission costs for them would be straightforward. Similarly, tidal power works well even at low water speeds because compared with wind, water is much denser, and the equipment lasts longer. La Rance, in France, has been operational since 1966. They also do not take up valuable agricultural land. On the other hand, they disturb sea life. A number of fish appear to use the Earth’s magnetic field to navigate and nobody knows if EMF emissions have an effect on marine life. Turbine blades most certainly will. They also tend to be needed near cities, which means they disturb fishing boats and commercial ships.

There are basically two problems. One is engineering. The sea is not a very forgiving place, and when storms come, the water has serious power. The history of wave power is littered with washed-up structures, smashed to pieces in storms. Apparently an underwater turbine was put in the Bay of Fundy, but it lasted less than a month. There is a second technical problem: how to make electricity? The usual way would be to move wire through a magnetic field, which is the usual form of a generator/dynamo. The issue here is that salt water must be kept completely out, which is less than easy. Since waves go up and down, an alternative is to have some sort of float that mechanically transmits the energy to a generator on shore. That can be made to work on a small scale, but it is less desirable on a larger scale.

The second problem is financial. Since history is littered with failed attempts, investors get wary, and perhaps rightly so. There may be huge energies present, but they are dispersed over huge areas, which means power densities are low, and the economics usually become unattractive. Further, while the environmentalists plead for something like this, inevitably it will be, “Somewhere else, please. Not in my line of sight.” So, my guess is this is not a practical solution now or anytime in the reasonable future, other than for small specialized efforts.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was this robot that could turn into a liquid then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal that, with sufficient force, can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, which is the temperature reached by fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed, the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape, then when it cools the metal crystallizes and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures such as a honeycomb, a spider’s web-like structure and a hand; these were all crushed, and when warmed they sprang back to their original form. At first sight this might seem to be designed to put panel beaters out of business. You have a minor prang but do not worry: just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical. The cost of manufacturing would also always limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or awkwardly shaped superstructures. The idea is they could be crushed down to a flat compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but they are single use. A spacecraft with such landing cushions can be used once, but landing cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the terminator robot did, and that is, if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole. Leaving aside the fact there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.