Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that it could not be rotating. An obvious experiment that Aristotle performed was to throw a stone high into the air so that it reached its maximum height directly above. When it dropped, it landed directly underneath that point, and its path was perpendicular to the horizontal. Aristotle reasoned that if the Earth were rotating, the stone's extra speed at that height should carry it eastwards as it fell, but it did not. Aristotle was a clever reasoner, but he was a poor experimenter. He also failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he knew its size reasonably well (a value later refined by Eratosthenes to fairly good accuracy). He had reasoned correctly why that was, which was that matter fell towards the centre. Accordingly, he should also have realised his stone should fall slightly to the south. (He lived in Greece; if he lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was to put numbers onto his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don’t believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
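
To put numbers on it: a minimal sketch (my own illustration, using the standard first-order result for the eastward drift of a body falling on a rotating Earth, with an assumed drop height and latitude) shows just how small the effect Aristotle was looking for actually is.

```python
import math

# Eastward deflection of a body dropped from rest, to first order in
# Earth's rotation rate: d = (1/3) * g * omega * t^3 * cos(latitude),
# with t = sqrt(2h/g) the fall time from height h.
g = 9.81                        # m/s^2
omega = 7.292e-5                # rad/s, Earth's rotation rate
latitude = math.radians(38.0)   # roughly the latitude of Athens
h = 20.0                        # m, a generously high throw

t = math.sqrt(2 * h / g)
deflection = g * omega * t**3 * math.cos(latitude) / 3

print(f"fall time: {t:.2f} s")
print(f"eastward deflection: {deflection*1000:.1f} mm")
# About 1.5 mm -- far below anything Aristotle could have measured by eye.
```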

He also concluded that Earth could not orbit the sun, and there seemed plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it. What you see is that the background appears to move, and it moves more the bigger the circle you walk, and less the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example was that Aristotle reasoned that if the Earth was rotating or orbiting the sun, because air rises, the Universe must be full of air, and therefore we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.
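
How unconvincing was the bleat? A quick sketch of the parallax involved (my own illustration, using the Earth-sun distance as the baseline and the modern distance to the nearest star) shows why nobody could possibly have seen the background shift:

```python
import math

# Annual parallax: the apparent shift of a star against the background as
# Earth moves across its orbit. Half-angle p (radians) ~ baseline / distance.
AU = 1.496e11                 # m, Earth-sun distance
ly = 9.461e15                 # m, one light year
distance = 4.37 * ly          # Alpha Centauri, the nearest star system

p_rad = AU / distance
p_arcsec = math.degrees(p_rad) * 3600

print(f"parallax: {p_arcsec:.2f} arcseconds")
# ~0.75 arcsecond -- the naked eye resolves about 60 arcseconds at best,
# so the "fixed" stars show no visible shift even for the nearest one.
```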

But had he done so he would have had a further problem, because another line of reasoning had “shown” that Earth could not orbit the sun. As was “well known”, heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn’t, therefore Earth does not move. Further, you could see the tails of comets. They were moving, which proved the reasoning. Of course it doesn’t, because the tail always points away from the sun, and at least half the time that is not behind the motion. This was a simple thing to check, and far easier to check than the other assumptions. Unfortunately, who bothers to check things that are “well known”? This shows a further aspect: a true proposition has everything that is relevant to it in accord with it. This is the basis of Popper’s falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going more slowly. That is obvious because while Earth takes a year to complete an orbit, it takes Mars over two years to complete a cycle. So we had two theories that both give the correct answer, but one has two assignable constants to explain each observation, while the other relies on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook “Athene’s Prophecy”, where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, “Legatus Legionis”) is to prove the Earth moves. We can do this with the tides. The part closest to the external source of gravity has the water fall sideways a little towards it; the part furthest has more centrifugal force, so it is trying to throw the water away. They may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook I had my Roman involved in the invasion of Britain, and hence he had to study the tides to know when to sail. There you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it and that Earth is stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
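
The two-tide argument can be put into numbers. A minimal sketch (modern values for the Moon's mass and the Earth-Moon distance, and the differential-gravity way of looking at it, which gives the same size of effect as the centrifugal picture):

```python
# Tidal (differential) acceleration from the Moon on the near side and far
# side of Earth, relative to the centre. The two sides feel nearly equal and
# opposite pulls, giving two bulges -- and hence two tides a day as Earth
# rotates under them.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_moon = 7.342e22      # kg
d = 3.844e8            # m, Earth-Moon distance
R = 6.371e6            # m, Earth's radius

g_centre = G * M_moon / d**2
g_near = G * M_moon / (d - R)**2
g_far = G * M_moon / (d + R)**2

print(f"near-side excess : {g_near - g_centre:+.2e} m/s^2 (towards the Moon)")
print(f"far-side deficit : {g_far - g_centre:+.2e} m/s^2 (away from the Moon)")
# Both are about 1.1e-6 m/s^2 in size, which is why a stationary-Earth,
# Moon-only picture (one bulge, one tide a day) fails.
```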

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us think. Thus the outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the dynamics expected. Worse, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations for this. One is that there is additional matter there we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small when compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each, although modified gravity does have the let-out that we might simply not yet understand how gravity should be modified.
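
To make "faster than they should" concrete, here is a toy sketch (a point-mass approximation with an illustrative figure for the visible mass; real mass models are more elaborate) of the Keplerian fall-off that the visible matter alone would predict:

```python
import math

# Keplerian prediction: v = sqrt(G*M/r) for orbits well outside most of the
# mass. Observed galactic rotation curves instead stay roughly flat.
G = 6.674e-11                     # m^3 kg^-1 s^-2
M_sun = 1.989e30                  # kg
kpc = 3.086e19                    # m
M_visible = 1e11 * M_sun          # illustrative: ~10^11 solar masses of visible matter

for r_kpc in (10, 20, 40, 80):
    v = math.sqrt(G * M_visible / (r_kpc * kpc))
    print(f"r = {r_kpc:3d} kpc: Keplerian v ~ {v/1000:.0f} km/s")
# The prediction drops from roughly 200 km/s at 10 kpc to ~70 km/s at 80 kpc,
# whereas measured curves typically stay roughly flat -- the discrepancy that
# dark matter (or modified gravity) is invoked to explain.
```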

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. However, so far all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or what its properties are. So far all we can say about dark matter is that its gravity affects galaxies. It is rather hard to do an experiment on a galaxy, so that is not exactly helpful. So what physicists have done is to make a guess as to what it will be, and, not surprisingly, make the guess in a form they can do something about if they are correct. The problem now is we know it has to have mass, because it exerts a gravitational effect, and we know it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars. There will not be dark matter stars. That is not exactly much to work on, but the usual approach has been to try and detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, that can get rid of the energy by giving off a photon. So one such detector had huge tanks containing 370 kg of liquid xenon. It was buried deep underground, and from theory massive particles of dark matter could be separated from occasional neutron events, because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.
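
How much energy could such a collision deposit? A rough sketch (the particle mass and speed here are the conventional guesses used in these searches, not measured values) of the maximum elastic recoil energy on a xenon nucleus:

```python
# Maximum energy transferred in an elastic collision between a dark matter
# particle of mass m_chi moving at speed v and a stationary nucleus of mass
# m_N: E_max = 2 * mu^2 * v^2 / m_N, with mu the reduced mass.
# Masses in GeV/c^2, v as a fraction of the speed of light.
m_chi = 100.0                 # GeV/c^2, a conventional (guessed) WIMP mass
m_xe = 131 * 0.9315           # GeV/c^2, xenon-131 nucleus
v = 230e3 / 3e8               # galactic halo speed ~230 km/s, as fraction of c

mu = m_chi * m_xe / (m_chi + m_xe)
E_max_keV = 2 * mu**2 * v**2 / m_xe * 1e6   # GeV -> keV

print(f"maximum recoil energy: {E_max_keV:.0f} keV")
# A few tens of keV at most -- which is why these detectors chase faint,
# rare scintillation flashes deep underground.
```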

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as the dark matter particles collide with the planet, they will deposit energy as they scatter and, eventually, annihilate within the planet. This additional energy will be detected as heat. The point of the giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 times Jupiter’s mass; temperatures above those otherwise expected can then be attributed to dark matter. Further, since dark matter density should be higher near the galactic centre, and collisional velocities higher, the difference in surface temperatures between comparable planets may signal the detection of dark matter.

Can you see problems? To me, the flaw lies in “what is expected?” One issue is the question of getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat. Once a planet gets to about 16 Jupiter masses it starts fusing deuterium. Another issue lies in estimating the heat given off by radioactive decay. That should be understandable from the age of the planet, but if it had accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy will come from. If galaxies all behave the same way, irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), then this suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. However, no doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, I would suggest one plausible approach, since so much else has failed by assuming large particles, is to look for small ones. Are there any unexplained momenta in collisions from the Large Hadron Collider? What most people overlook is that about 99% of the data generated is trashed (because there is so much of it), but would it hurt to spend just a little effort examining in fine detail that which you do not expect to see?

How can we exist?

One of the more annoying questions in physics is why are we here? Bear with me for a minute, as this is a real question. The Universe is supposed to have started with what Fred Hoyle called “The Big Bang”. Fred was being derisory, but the name stuck. Anyway, what happened is that a highly intense burst of energy began expanding, and as it did, perforce the energy became less dense. As that happened, out condensed elementary particles. On an extremely small scale, that happens in high-energy collisions, such as in the Large Hadron Collider. So we are reasonably convinced we know what happened up to this point, but there is a very big fly in the ointment. When such particles condense out we get an equal amount of matter and what we call antimatter. (In principle, we should get dark matter too, but since we do not know what that is, I shall leave that.)

Antimatter is, as you might guess, the opposite of matter. The most obvious example is the positron, which is exactly the same as the electron except it has positive electric charge, so when a positron is around an electron they attract. In principle, if they were to hit each other they would release an infinite amount of energy, but nature hates the infinities that come out of our equations so when they get so close they annihilate each other and you get two gamma ray photons that leave in opposite directions to conserve momentum. That is more or less what happens when antimatter generally meets matter – they annihilate each other, which is why, when we make antimatter in colliders, if we want to collect it we have to do it very carefully with magnetic traps and in a vacuum.
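
The energy of those two photons follows straight from the electron's rest mass; a minimal check with standard constants:

```python
# Electron-positron annihilation at rest: the pair's rest-mass energy is
# shared between two photons leaving back-to-back, E = m_e * c^2 each.
m_e = 9.109e-31      # kg, electron (and positron) rest mass
c = 2.998e8          # m/s
eV = 1.602e-19       # J per electron volt

E_per_photon_keV = m_e * c**2 / eV / 1000
print(f"each gamma photon: {E_per_photon_keV:.0f} keV")
# ~511 keV -- the tell-tale annihilation line astronomers look for.
```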

So now we get to the problem of why we are here: with all that antimatter made in equal proportions to matter, why do we have so much matter? As it happens, the symmetry is violated very slightly in kaon decay, but this is probably not particularly helpful because the effect is too slight. In the previous post on muon decay I mentioned that that could be a clue that there might be physics beyond the Standard Model to be unraveled. Right now, the fact that there is so much matter in the Universe should be a far stronger clue that something is wrong with the Standard Model. 

Or is it? One observation that throws that into doubt was published in Physical Review D 103, 083016, in April this year. But before coming to that, some background. A little over ten years ago, colliding heavy ions made a small amount of antihelium-3, and a little later, antihelium-4. The antihelium has two antiprotons, and one or two antineutrons. To make this, the problem is to get enough antiprotons and antineutrons close enough. To give some idea of the trouble, a billion collisions of gold ions with energies of two hundred billion and sixty-two billion electron volts produced 18 nuclei of antihelium-4, with masses of 3.73 billion electron volts. In such a collision, the energy requires a temperature of over 250,000 times that of the sun’s core.

Such antihelium can be detected through the gamma ray frequencies given off when the nuclei annihilate on striking matter, and apparently also through the Alpha Magnetic Spectrometer on the International Space Station, which tracks cosmic rays. The important point is that antihelium-4 behaves exactly the same as an alpha particle, except that, because the antiprotons have negative charge, their trajectories bend in the opposite direction to those of ordinary nuclei. These antinuclei can be made through the energies of cosmic rays hitting something; however, it has been calculated that the amount of antihelium-3 detected so far is 50 times too great to be explained by cosmic rays, and the amount of antihelium-4 detected is 100,000 times too much.

How can this be? The simple answer is that the antihelium is being made by antistars. Gamma ray detection indicates 5787 sources, and it has been proposed that at least fourteen of these are antistars; if we look at the oldest stars near the centre of the galaxy, estimates suggest up to a fifth of the stars there could be antistars, possibly with antiplanets. If there were people on these, giving them a hug would be outright disastrous for each of you.

Of course, caution here is required. It is always possible that this antihelium was made in a more mundane way that as yet we do not understand. On the other hand, if there are antistars, it automatically solves a huge problem, even if it creates a bigger one: how did the matter and antimatter separate? As is often the case in science, solving one problem creates even bigger problems. However, real antistars would alter our view of the universe, and as long as the antimatter is at a good distance, we can accept them.

Much Ado About Muons

You may or may not have heard that the Standard Model, which explains “all of particle physics”, is in trouble and “new physics” may be around the corner. All of this arises from a troublesome result from the muon, a particle that is very similar to an electron except it is about 207 times more massive and has a mean lifetime of 2.2 microseconds. If you think that is not very important for your current lifestyle (and it isn’t), wait, there’s more. Like the electron it has a charge of -1 and a spin of ½, which means it acts like a small magnet. Now, if the particle is placed in a strong magnetic field, the direction of the spin wobbles (technically, precesses), and the strength of this interaction is described by a number called the g factor, which for the classical situation is g = 2. Needless to say, in the quantum world that is wrong. For the electron, it is roughly 2.002 319 304 362; the numbers here stop where uncertainty starts. If nothing else, this shows the remarkable precision achieved by experimental physicists. Why is it not 2? The basic reason is the particle interacts with the vacuum, which is not quite “nothing”. You will see quantum electrodynamics has got this number down fairly precisely, and quantum electrodynamics, which is part of the standard model, is considered to be the most accurate theoretical calculation ever, or the greatest agreement between calculation and observation. All was well, until this wretched muon misbehaved.

Now, the standard model predicts the vacuum comprises a “quantum foam” of virtual particles popping in and out of existence, and these short-lived particles affect the g-factor, causing the muon’s wobble to speed up or slow down very slightly, which in turn leads to what is called an “anomalous magnetic moment”. The standard model should calculate these to the same agreement as with the electron, and the calculations give:

  • g-factor: 2.00233183620
  • anomalous magnetic moment: 0.00116591810

The experimental values announced by Fermilab and Brookhaven are:

  • g-factor: 2.00233184122(82)
  • anomalous magnetic moment: 0.00116592061(41)

The brackets indicate uncertainty. Notice a difference? Would you say it is striking? Apparently there is only a one in 40,000 chance that it is a statistical fluke. Nevertheless, apparently they will keep this experiment running at Fermilab for another two years to firm it up. That is persistence, if nothing else.
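
The two numbers quoted in each list are not independent: the anomalous magnetic moment is simply a = (g − 2)/2, the fractional excess over the classical value. A quick check against the values above:

```python
# The anomalous magnetic moment a is defined as a = (g - 2) / 2.
g_theory = 2.00233183620
g_experiment = 2.00233184122

a_theory = (g_theory - 2) / 2
a_experiment = (g_experiment - 2) / 2

print(f"a (theory)    : {a_theory:.11f}")
print(f"a (experiment): {a_experiment:.11f}")
print(f"difference    : {a_experiment - a_theory:.2e}")
# The difference is ~2.5e-9 -- tiny in absolute terms, but several times the
# quoted uncertainty, which is what all the excitement is about.
```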

This result is what has excited a lot of physicists because it means the calculation of how this particle interacts with the vacuum has underestimated the actual effect for the muon. That suggests more physics beyond the standard model, and in particular, a new particle may be the cause of the additional effect. Of course, there has to be a fly in the ointment. One rather fearsome calculation claims to be a lot closer to the observational value. To me the real problem is how can the same theory come up with two different answers when there is no arithmetical mistake?

Anyway, if the second one is right, problem gone? Again, not necessarily. At the Large Hadron Collider they have looked at B meson decay. This can produce electrons and positrons, or muons and antimuons. According to the standard model, these two particles are identical other than for mass, which means the rate of production of each should be identical, but it isn’t quite. Again, it appears we are looking at small deviations. The problem then is, hypothetical particles that might explain one experiment fail for the other. Worse, the calculations are fearsome, and can take years. The standard model has 19 parameters that have to be obtained from experiment, so the errors can mount up, and if you wish to give the three neutrinos mass, in come another eight parameters. If we introduce yet another particle, in comes at least one more parameter, and probably more. Which raises the question, since adding a new assignable parameter will always answer one problem, how do we know we are even on the right track?

All of which raises the question, is the standard model, which is a part of quantum field theory, itself too complicated, and maybe not going along the right path? You might say, how could I possibly question quantum field theory, which gives such agreeable results for the electron magnetic moment, admittedly after including a series of interactions? The answer is that it also gives the world’s worst agreement with the cosmological constant. When you sum the effects of all these virtual particles over the cosmos, the prediction for the expansion of the Universe is wrong by a factor of 10^120, that is, 1 followed by 120 zeros. Not exceptionally good agreement. To get the agreement it gets, something must be right, but as I see it, to get such a howling error, something must be wrong also. The problem is, what?

Dark Energy

Many people will have heard of dark energy, yet nobody knows what it is, apart from being something connected with the rate of expansion of the Universe. This is an interesting part of science. When Einstein formulated General Relativity, he found that if his equations were correct, the Universe should collapse due to gravity. It hasn’t so far, so to avoid that he introduced a term Λ, the so-called cosmological constant, which was a straight-out fudge with no basis other than avoiding the obvious problem that the universe had not collapsed and did not look like doing so. Then, when observations showed that the Universe was actually expanding, he tore that up. In General Relativity, Λ represents the energy density of empty space.

We think the Universe’s expansion is accelerating because when we look back in time by looking at ancient galaxies, we can measure the velocity of their motion relative to us through the so-called red shift of light, and all the distant galaxies are going away from us, seemingly faster the further away they are. We can also work out how far away they are by taking light sources of known output and measuring how bright they appear: provided we know how bright they were when the light started out, the dimming gives us a measure of the distance. What two research groups found in 1998 is that the expansion of the Universe was accelerating, which won them the 2011 Nobel prize for physics.

The next question is, how accurate are these measurements and what assumptions are inherent? The red shift can be measured accurately because the light contains spectral lines, and as long as the physical constants have remained constant, we know exactly their original frequencies, and consequently the shift when we measure the current frequencies. The brightness relies on what are called standard candles. We know of a class of supernovae called type 1a, and these are caused by one star gobbling the mass of another until it reaches the threshold to blow up. This mass is known to be fairly constant, so the energy output should be constant. Unfortunately, as often happens, the 1a supernovae are not quite as standard as you might think. They have been separated into three classes: standard 1a, dimmer 1a, and brighter 1a. We don’t know why, and there is an inherent problem that the stars of a very long time ago would have had a lower fraction of elements from previous supernovae. They get very bright, then dim with time, and we cannot be certain they always dim at the same rate. Some have different colour distributions, which makes specific luminosity difficult to measure. Accordingly, some consider the evidence is inadequate and it is possible there is no acceleration at all. There is no way for anyone outside the specialist field to resolve this. Such measurements are made at the limits of our ability, and a number of assumptions tend to be involved.
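
Both measurements come down to simple relations: a redshift is a fractional wavelength change, and a standard-candle distance follows from the inverse-square law. A minimal sketch (the observed wavelength, candle luminosity and measured flux below are made-up illustrative numbers, not a real supernova measurement):

```python
import math

# Two relations behind the acceleration claim:
#   redshift  z = (lambda_observed - lambda_emitted) / lambda_emitted
#   distance  from a standard candle: flux = L / (4*pi*d^2), so d = sqrt(L/(4*pi*f))
lam_emit = 656.3e-9          # m, hydrogen-alpha line in the lab
lam_obs = 722.0e-9           # m, where the same line is seen (made up)
z = (lam_obs - lam_emit) / lam_emit
print(f"redshift z = {z:.3f}")

L_candle = 1e36              # W, rough type 1a peak output (illustrative)
flux = 1e-16                 # W/m^2, measured brightness (made up)
d = math.sqrt(L_candle / (4 * math.pi * flux))
print(f"distance ~ {d / 9.461e15 / 1e9:.1f} billion light years")
# The whole argument hangs on L_candle really being "standard" -- which is
# exactly the assumption questioned above.
```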

The net result of this is that if the universe is really expanding, we need a value for Λ because that will describe what is pushing everything apart. That energy of the vacuum is called dark energy, and if we consider the expansion and use relativity to compare this energy with the mass of the Universe we can see, dark energy makes up 70% of the total Universe. That is, assuming the expansion is real. If not, 70% of the Universe just disappears! So what is it, if real?

The only real theory that can explain why the vacuum has energy at all and has any independent value is quantum field theory. By independent value, I mean it explains something else. If you have one observation and you require one assumption, you effectively assume the answer. However, quantum field theory is not much help here, because if you calculate Λ using it, the calculation differs from observation by 120 orders of magnitude, that is, a factor of 10^120. To put that in perspective, if you were to count all the protons, neutrons and electrons in the entire universe that we can see, the answer would be around 10^83. This is the most dramatic failed prediction in all theoretical physics and is so bad it tends to be put in the desk drawer and ignored or forgotten about.

So the short answer is, we haven’t got a clue what dark energy is, and to make matters worse, it is possible there is no need for it at all. But it most certainly is a great excuse for scientific speculation.

Living Near Ceres

Some will have heard of Gerard O’Neill’s book, “The High Frontier”. If not, see https://en.wikipedia.org/wiki/The_High_Frontier:_Human_Colonies_in_Space. The idea was to throw material up from the surface of the Moon to make giant cylinders that would get artificial gravity from rotation, and people could live their lives in the interior, with energy being obtained in part from solar energy. The concept was partly employed in the TV series “Babylon 5”, but the original concept was to have open farmland as well. Looks like science fiction, you say, and in fairness I have included such a proposition in a science fiction novel I am currently writing. However, I have also read a scientific paper on this topic (arXiv:2011.07487v3), which appears to have been posted on the 14th January, 2021. The concept is to build such a space settlement from material obtained from the asteroid Ceres, and to have it orbit near Ceres.

The proposal is ambitious, if nothing else. The idea is to build a number of habitats; to keep any one habitat from being too big while still keeping them together, they are tethered to a megasatellite, which in turn grows as new settlements are built. The habitats spin in such a way as to attain a “gravity” of 1 g, and are attached to their tethers by magnetic bearings that have no physical contact between faces, and hence never wear. Travel between habitats proceeds along the tethers. Rockets would be unsustainable because the molecules they throw out to space would be lost forever.

The habitats would have a radius of 1 km, a length of 10 km, and a population of 56,700, with 2,000 square meters per person, just under 45% of which would be urban. Slightly more scary would be the fact that each habitat has to rotate every 1.06 minutes. The total mass per person would be just under 10,000 t, requiring an energy of about 1 MJ/kg to produce, or roughly 10 TJ per person.
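
Both of those figures can be checked with a couple of lines of arithmetic (the 10 TJ follows from the stated 10,000 t and 1 MJ/kg; the spin period from requiring 1 g of centripetal acceleration at a 1 km radius):

```python
import math

# Two sanity checks on the quoted habitat figures.
g = 9.81                      # m/s^2, target artificial gravity
r = 1000.0                    # m, habitat radius

# Spin rate needed so that centripetal acceleration at the rim equals 1 g:
# g = omega^2 * r, so the period T = 2*pi*sqrt(r/g).
T = 2 * math.pi * math.sqrt(r / g)
print(f"rotation period: {T:.1f} s (~{T/60:.2f} minutes)")

# Energy to process the structural mass at the quoted 1 MJ/kg.
mass_per_person = 10_000 * 1000   # kg (10,000 tonnes)
energy = mass_per_person * 1e6    # J
print(f"energy per person: {energy:.0e} J (~{energy/1e12:.0f} TJ)")
```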

The design aims to produce an environment for the settlers that has Earth-like radiation shielding, gravity, and atmosphere. It will have day/night on a 24 hr cycle with 130 W/m^2 insolation, similar to southern Germany, and a population density of 500/km^2, similar to the Netherlands. There would be fields, parks, and forests, no adverse weather, no natural disasters, and ultimately it could have a greater living area than Earth. It will be long-term sustainable. To achieve that, animals, birds and insects will be present, i.e. a proper ecosystem. As can be seen, that is ambitious. The radiation shielding involves 7600 kg/m^2, of which 20% is water and the rest silicate regolith. The rural spaces have a 1.5 m depth of soil, which is illuminated by the sunlight. The sunlight is collected and delivered from mirrors into light guides. Ceres is 2.77 times as far from the sun as Earth, which means the sunlight is only about 13% as strong as at Earth, so nearly eight times the mirror collecting area is required for every unit of illuminated area to get equivalent energy.
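
The sunlight figures are just the inverse-square law; a quick check:

```python
# Sunlight falls off with the square of distance from the sun.
d_ceres = 2.77          # Ceres's distance in astronomical units (Earth = 1)
solar_constant = 1361   # W/m^2 at Earth

fraction = 1 / d_ceres**2
print(f"insolation at Ceres: {fraction*100:.0f}% of Earth's "
      f"({fraction*solar_constant:.0f} W/m^2)")
print(f"mirror area needed per unit of illuminated area: ~{1/fraction:.1f}x")
# ~13% and ~7.7x -- so roughly an eightfold mirror area, before any
# collection and guiding losses, which would push it higher.
```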

The reason cited for proposing this to be at Ceres is that Ceres has nitrogen. Actually, there are other carbonaceous asteroids, and one that is at least 100 km in size could be suitable. Because Ceres’ gravity is 0.029 times that of Earth, a space elevator could be feasible to bring material cheaply from the dwarf planet, while a settlement 100,000 km from the surface would be expected to have a stable orbit.

In principle, there could be any number of these habitats, all linked together. You could have more people living there than on Earth. Of course there are some issues with the calculation. Tethering the habitats, and giving them sufficient strength, requires about 5% of the total mass in the form of steel. Where does the iron come from? The asteroids have plenty of iron, but the form is important. How will it be refined? If it is in the form of olivine or pyroxene, then with difficulty. Vesta apparently has an iron core, but Vesta is not close, and most of the time, because it has a different orbital period, it is very far away.

But the real question is, would you want to live in such a place? How much would you pay for the privilege? The cost of all this was not estimated, but it would be enormous, so most people could not afford it. In my opinion, cost alone is sufficient to ensure this idea will not see the light of day.

Free Will

You will see many discussions regarding free will. The question is, do you have it, or are we in some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands that the Universe works like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The presence of eclipses in the future is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so do snooker or pool balls, even if we cannot in practice calculate all that will happen in a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. You may bring in chaos theory, but this does nothing for you; the calculations may be utterly impossible to carry out, but they are governed solely by deterministic physics, so ultimately what happens was determined and it is just that we do not know how to calculate it. Electrodynamics and quantum theory are deterministic, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws, and the Schrödinger equation is actually deterministic: if you know ψ and know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them and if necessary heat them according to some procedure, every time I follow exactly the same procedure I shall end up with the same result.
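
The determinism claim for quantum mechanics rests on the form of the time-dependent Schrödinger equation itself:

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\,\psi
```

Given ψ at one time and the Hamiltonian Ĥ describing the conditions, the equation fixes ψ at any later time; the randomness only enters when a definite outcome has to be realised, which is the loophole exploited below.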

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brain works on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nucleus has angular uncertainty, so it follows that you cannot know in which direction the particle will be emitted, because according to the laws of physics that is not determined until it is emitted. You may say, so what? That is trivial. No, the so what is that when you find one exception, you falsify the overall premise that everything was determined at the big bang. Which means something else introduced causes. Also, the emitted neutron may now generate new causes that could not be predetermined.

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking some quantum effect, such as molecular vibration, will offer momentarily weaker and stronger spots. The one with the greatest weakness will go, but thanks to the Uncertainty Principle the given spot is unpredictable.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain the variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the “ideal wire”. Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines where or what mutation will arise; it is a random event.

Once you cannot determine everything, even in principle, it follows you must accept that not every cause is determined by previous events. Once you accept that, since we have no idea how the mind works, you cannot insist the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily anything else. I have free will, and so do you. Use it well.

Unravelling Stellar Fusion

Trying to unravel many things in modern science is painstaking, as will be seen from the following example, which makes looking for a needle in a haystack seem relatively easy. Here the requirement for careful work and analysis can be seen, although less obvious is the need for assumptions during the calculations, and these are not always obviously correct. The example involves how our sun works. The problem is, how do we form the neutrons needed for fusion in the star’s interior?

In the main process, the immense pressures force two protons to form the incredibly unstable 2He (a helium isotope). Besides giving off a lot of heat, there are two options: a proton can absorb an electron and give off a neutrino (to conserve leptons), or a proton can give off a positron and a neutrino. The positron would react with an electron to give two gamma ray photons, which would be absorbed by the star and converted to heat. Either way, energy is conserved and we get the same result, except the neutrinos may have different energies.

This hydrogen fusion starts to operate at about 4 million degrees C. Gravitational collapse of a star starts to reach this sort of temperature if the star has a mass at least 80 times that of Jupiter. These are the smaller of the red dwarfs. If the body has a mass of approximately 16–20 times that of Jupiter, it can react deuterium with protons, and this supplies the heat of brown dwarfs. In this case, the deuterium had to come from the Big Bang, and hence is somewhat limited in supply, but again it only reacts in the centre where the pressure is high enough, so the system will continue for a very long time, even if not very strongly.

If the temperature reaches about 17 million degrees C, another reaction is possible, which is called the CNO cycle. What this does is start with 12C (standard carbon, which has to come from accretion dust). It then adds a proton to make 13N, which loses a positron and a neutrino to make 13C. Then comes a sequence of proton additions to make 14N (the most stable nitrogen), then 15O, which loses a positron and a neutrino to make 15N, and when this is struck by a proton, it spits out 4He and returns to 12C. We have gone around in a circle, BUT converted four hydrogen nuclei to helium-4, and produced about 25 MeV of energy. So there are two ways of burning hydrogen; can the sun do both? Is it hot enough at the centre? How do we tell?
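
That figure can be checked from the mass difference between four hydrogen atoms and one helium-4 atom (standard atomic masses; the gross yield is about 26.7 MeV, and the neutrinos carry a modest share of it straight out of the star):

```python
# Net result of one CNO loop: four protons in, one helium-4 nucleus out.
# The energy release is the mass defect, using atomic masses in u.
m_H = 1.007825      # u, hydrogen-1 atom
m_He4 = 4.002602    # u, helium-4 atom
u_to_MeV = 931.494  # MeV per atomic mass unit

Q = (4 * m_H - m_He4) * u_to_MeV
print(f"energy per cycle: {Q:.1f} MeV")
# ~26.7 MeV in total; the two neutrinos escape with a little of it, so the
# star keeps roughly 25 MeV per helium nucleus formed.
```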

Obviously we cannot see the centre of the star, but from the heat generated we know the core conditions will be close to what the second cycle needs. However, we can, in principle, tell by observing the neutrinos. Neutrinos from the 2He positron route can have any energy, but not more than a little over 0.4 MeV. The electron capture neutrinos are up to approximately 1.1 MeV, while the neutrinos from 15O are anywhere up to about 0.3 MeV more energetic, and those from 13N anywhere up to 0.3 MeV less energetic, than those from electron capture. Since these should be of the same intensity, the energy differences allow a count. The sun puts out a flux in which the last three are about the same intensity, while the 2He neutrino intensity is at least 100 times higher. (The use of “at least” and similar terms is because such determinations are very error prone, and you will see somewhat different values in the literature.) So all we have to do is detect the neutrinos. That is easier said than done for something that can pass through a star unimpeded. The way it is done is that if a neutrino happens to hit certain substances capable of scintillation, it may give off a momentary flash of light.

The first problem then is that anything hitting those substances with enough energy will do it. Cosmic rays or nuclear decay are particularly annoying. So in Italy they built a neutrino detector under 1400 meters of rock (to block cosmic rays). The detector is a sphere containing 300 t of suitable liquid, and the flashes are detected by photomultiplier tubes. While there is a huge flux of neutrinos from the star, very few actually collide. The signals from spurious sources had to be eliminated, and a “neutrino spectrum” was collected for the standard process. Spurious sources included radioactivity from the rocks and the liquid. These are rare, but so are the CNO neutrinos. Apparently only a few counts per day were recorded. However, the Italians ran the experiment for 1000 hours, and claimed to show that the sun does use this CNO cycle, which contributes about 1% of the energy. For bigger stars, the CNO cycle becomes more important. This is quite an incredible effort, right at the very edge of detection capability. Just think of the patience required, and the care needed to be sure spurious signals were not counted.

An Example of How Science Works: Where Does Gold Come From?

Most people seem to think that science marches on inexorably, gradually uncovering more and more knowledge, going in a straight line towards “the final truth”. Actually, it is far from that, and it is really a lurch from one point to another. It is true science continues to make a lot of measurements, and these fill our banks of data. Thus in organic chemistry, over the odd century or so we have collected an enormous number of melting points. These were obtained so that someone else could check whether a sample they had might be the same material, so it was not pointless. However, our attempts to understand what is going on have been littered with arguments, false leads, wrong turns, debates, etc. Up until the mid twentieth century, such debates were common, but now much less so. The system has coalesced into acceptance of the major paradigms, until awkward information comes to light that is sufficiently important that it cannot be ignored.

As an example, currently there is a debate going on relating to how elements like gold were formed. All elements heavier than helium, apart from traces of lithium, were formed in stars. The standard theory says we start with hydrogen, and in the centre of a star, where the temperatures and pressures are sufficient, two hydrogen nuclei combine to form, for a very brief instant, helium-2 (two protons). An electron is absorbed, and we get deuterium, which is a proton and a neutron combined. The formation of a neutron from a proton and an electron is difficult because it needs about 1.3 MeV of energy to force it in, which is about a third of a million times bigger than the energy of any chemical bond. The diproton is a bit easier because the doubling of the positive field provides some supplementary energy. Once we get deuterium, we can do more and eventually get to helium-4 (two protons, two neutrons), and then it stops because the energy produced prevents the pressure from rising further. The inside of the sun is an equilibrium, and in any given volume surprisingly few fusion reactions take place. The huge amount of energy is simply because of size. However, when the centre starts to run out of hydrogen, the star collapses further, and if it is big enough, it can start burning helium to make carbon and oxygen. Once the supply of helium becomes insufficient, if the star is large enough, a greater collapse happens, but this refuses to form an equilibrium. Atoms fuse at a great rate and produce the enormous amount of energy of a supernova.
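
The 1.3 MeV and the “third of a million” are easy to check: the energy cost is essentially the neutron-proton mass difference, and a typical chemical bond is of the order of a few electron volts (I take 4 eV as representative):

```python
# Energy cost of turning a proton into a neutron, versus a chemical bond.
m_n = 939.565    # MeV/c^2, neutron rest mass
m_p = 938.272    # MeV/c^2, proton rest mass
bond = 4.0       # eV, a typical chemical bond energy (illustrative)

delta_MeV = m_n - m_p
ratio = delta_MeV * 1e6 / bond
print(f"mass difference: {delta_MeV:.3f} MeV")
print(f"ratio to a chemical bond: ~{ratio:.0f} (about a third of a million)")
```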

What has happened in the scientific community is that once the initial theory was made, it was noticed that iron is at an energy minimum, and making elements heavier than iron absorbs energy; nevertheless we know there are elements like uranium, gold, etc., because we use them. So how did they form? The real short answer is, we don’t know, but scientists with computers like to form models and publish lots of papers. The obvious way was that in stars we could add a sequence of helium nuclei, or protons, or even, maybe, neutrons, but these would be rare events. However, in the aftermath of a supernova, huge amounts of energy are released, and, it is calculated, a huge flux of neutrons. That 1.3 MeV is a bit of a squib compared with what is available in a supernova, and so the flux of neutrons could gradually add to nuclei; when a nucleus contained too many neutrons it would decay by turning a neutron into a proton, becoming the next element up, and hence would be available to absorb further neutrons. The problem, though, is there are only so many steps that can be carried out before the rapidly expanding neutron flux itself decays. At first sight this does not produce enough elements like gold or uranium, but since we see them, it must have.

Or must it? In 2017 we detected gravitational waves from an event that we could also observe and that had to be attributed to the collision of two neutron stars. The problem for heavy elements from supernovae is, how do you get enough time to add all the protons and neutrons, more or less one at a time? That problem does not arise for a neutron star. Once it starts ejecting stuff into space, there is no shortage of neutrons, and these are in huge assemblies that simply decay and explode into fragments, which could be a shower of heavy elements. While fusion reactions favour forming lighter elements, this source will favour heavier ones. According to the scientific community, problem solved.

There is a problem: where did all the neutron stars come from? If the elements come from supernovae, all we need is big enough stars. However, neutron stars are a slightly different matter, because to get the elements the stars have to collide. Space is a rather big place. Let, over all time, the space density of supernovae be x, the density of neutron stars y, and the density of stars z. All these are very small, but z is very much bigger than x, and x is almost certainly bigger than y. The probability of two neutron stars colliding is proportional to y squared, while the probability of a collision of a neutron star with another star would be correspondingly proportional to yz. Given that y is extremely small, and z much bigger but still small, most neutron stars will not collide with anything in a billion years, some will collide with a different star, while very few will collide with another neutron star. There have not been enough neutron star collisions to make our gold, or so the claims go.

So what is it? I don’t know, but my feeling is that the most likely outcome is that both mechanisms will have occurred, together with possible mechanisms we have yet to consider. In this last respect, we have made elements by smashing nuclei together. These take a lot of energy and momentum, but anything we can make on Earth is fairly trivial compared with the heart of a supernova. Some supernovae are calculated to produce enormous pressure waves, and these could fuse any nuclei together, to subsequently decay, because the heavy ones would be too proton rich. This is a story that is unfolding. In twenty years, it may be quite different again.
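
The scaling can be made concrete with a toy calculation (the relative densities below are pure placeholders to show the proportionality, not astrophysical estimates):

```python
# Collision rates scale with the product of the number densities of the
# two colliding populations. Illustrative (made-up) relative densities:
z = 1.0        # ordinary stars, taken as the reference density
y = 1e-4       # neutron stars, far rarer (placeholder value)

rate_ns_ns = y * y      # neutron star + neutron star
rate_ns_star = y * z    # neutron star + ordinary star

print(f"NS-NS rate relative to NS-star rate: {rate_ns_ns / rate_ns_star:.0e}")
# With these placeholders the double-neutron-star channel is 10,000 times
# rarer -- the point being that y squared is punishingly small whatever y is.
```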

The Fermi Paradox: Where are the Aliens?

This question, as much as anything, illustrates why people have trouble thinking through problems when they cannot put their own self-importance to one side. Let us look at this problem from a point of view other than our own.

The Fermi paradox is a statement that since there are so many stars, most of which probably have planets, and a reasonable number of them have life, more than half of those are likely to have been around longer than us and so should be more technically advanced, yet we have seen no clue as to their presence. Why not? That question invites the obvious counter: why should we? First, while the number of planets is huge, most of them are in other galaxies, and of those in the Milky Way, stars are very well-separated. The nearest system, Alpha Centauri, is a three-star system: two rather close stars (a G-type star like our sun and a K1 star) and a more distant red dwarf, and these are 4.37 light years away. The two close stars have a separation that varies between 11.2 AU and 35.6 AU, i.e. at closest approach they come a little further apart than Saturn and the sun. That close approach means that planets corresponding to our giants could not exist in stable orbits, and astronomers are fairly confident there are no giants closer to either star. Proxima Centauri has one planet in the habitable zone, but those familiar with my ebook “Planetary Formation and Biogenesis” will know that in my opinion the prospect for life originating there, or around most red dwarfs, is extremely low. So, could there be Earth-like planets around the two larger stars? Maybe, but our technology cannot find them. As it happens, if there were aliens there, they could not detect Earth with technology at our level either. Since most stars are immensely further away, rocky planets are difficult to discover. We have found exoplanets, but they are generally giants, planets around M stars, or planets that happen to have their orbital planes aligned so we can see eclipses.

This is relevant, because if we are seeking a signal from another civilization, as Seti seeks, then either the signal is deliberate or accidental. An example of accidental is the electromagnetic radiation we send into space through radio and TV signals. According to tvtechnology.com “An average large transmitter transmits about 8kW per multiplex.” That will give “acceptable signal strength” over, say, 50 km. The signal strength attenuates according to the square of the distance, so while the signals will get to Alpha Centauri, they will be extremely weak, and because of bandwidth issues, broadcasts from well separated transmitters will interfere with each other. Weak signals can be amplified, but aliens at Alpha Centauri would get extremely faint noise that might be assignable to technology. 
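
"Extremely weak" can be quantified with the inverse-square law (taking the 8 kW figure at face value and treating the transmitter as radiating equally in all directions, which if anything overstates what would leak towards any particular star):

```python
import math

# Power flux from an isotropic transmitter: S = P / (4 * pi * d^2).
P = 8000.0                      # W, "8 kW per multiplex"
ly = 9.461e15                   # m per light year

for label, d in (("50 km (good reception)", 50e3),
                 ("Alpha Centauri", 4.37 * ly)):
    S = P / (4 * math.pi * d**2)
    print(f"{label:25s}: {S:.1e} W/m^2")
# The flux at Alpha Centauri comes out around 1e-31 W/m^2 -- some 24 orders
# of magnitude below the terrestrial reception level, which is why any such
# leakage would be buried in noise.
```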

Suppose you want to send a deliberate signal? Now you want to boost the power, and the easiest way to get over the inverse square attenuation is to focus the signal. Now, however, you need to know exactly where the intended recipient will be. You might do this for one of your space ships, in which case you would send a slightly broader signal at very high power on an agreed frequency, but as a short burst. To accidentally detect this, given the huge range of frequencies there are to monitor, you would have to happen to be on that frequency at the time of the burst. There is some chance of Seti detecting such a signal if the space ship were heading to Earth, but then why listen for such a signal, as opposed to waiting for the ship?

The next possible deliberate signal would be aimed at us. To do that, they would need to know we had potential, but let us suppose they did. Suppose it takes something like 4.5 billion years to get technological life, and at that nice round number, they peppered Earth with signals. Oops! We are still in the Cretaceous. Such a move would require a huge power output so as to flood whatever we were using, a guess as to what frequencies we would find of interest, and big costs. Why would they do that, when it may take hundreds or thousands of years for a response? It makes little sense for any “person” to go to all that trouble and know they could never know whether it worked or not. We take the cheap option of listening with telescopes, but if everyone is listening, nobody is sending.

How do they choose a planet? My “Planetary Formation and Biogenesis” concludes you need a rocky planet with major felsic deposits, which is most probable around G-type stars (but still much less than 50% of them). So you would need some composition data, and in principle you can get that from spectroscopy (but with much better technology than we have). What could you possibly see? Oxygen is obvious, except it gives poor signals. In the infrared spectra you might detect ozone, and that would be definitive. You often see statements that methane should be detectable. Yes, but Titan has methane and no life. Very low levels of carbon dioxide are a strong indication, as they suggest large amounts of water to fix it, and plate tectonics to renew it. Obviously, signals from chlorophyll would be proof, but they are not exactly strong. So if they are at anything but the very closest stars they would not know whether we are here, so why go to that expense? The government accountants would never fund such a project with such a low probability of getting a return on investment. Finally, suppose you decided a planet might have technology; why would you send a signal? As Hawking remarked, an alien species might decide this would be a good planet to eradicate all life and transform it to be suitable for the aliens to settle. You say that is unlikely, but with all those planets, it only needs one such race. So simple game theory suggests “Don’t do it!” If we assume they are more intelligent than us, they won’t transmit, because there is no benefit for those transmitting.