What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, making propositions in accord with that evidence, and, most importantly, rejecting those that are clearly false. That may seem too hard for ordinary people, but at least scientists would follow the method, right? Er, not necessarily. In 1962 Thomas Kuhn published “The Structure of Scientific Revolutions”, in which he argued that science itself has a very high level of conservatism: it is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway and put in the bottom drawer of the mind. The problem seems to be a roadblock to thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the “somethings” are removed. But is there “nothing” anywhere? The ancient Greek philosophers argued about the void, and the issue was “settled” by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. Having reasoned so well, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, and hence could be moving through a void, and that if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question: are such prejudices still around?

The usual concept of “nothing” is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has 2.5 x 10^19 molecules in it. That’s plenty. For those not used to “big figures”, 10^19 means the number you get when you write down 1 and follow it with 19 zeros, or equivalently when you multiply ten together nineteen times. Our vacuum cleaner gets the concentration of molecules down to 10^19 per cubic centimetre, that is, the air pressure inside the cleaner is about two and a half times lower than atmospheric. The Moon’s “atmosphere” has 4 x 10^5 molecules per cubic centimetre, so the Moon is not exactly in a vacuum. Interplanetary space has 11 molecules per cubic centimetre, interstellar space has 1 molecule per cubic centimetre, and the best vacuum, intergalactic space, needs a million cubic centimetres to find one molecule.
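
Those figures span some twenty-five orders of magnitude, which is hard to picture. As a minimal sketch, using only the figures quoted above, here is how each environment compares with sea-level air:

# Molecules per cubic centimetre, as quoted above.
densities = {
    "sea-level air": 2.5e19,
    "vacuum cleaner": 1.0e19,
    "Moon 'atmosphere'": 4.0e5,
    "interplanetary space": 11.0,
    "interstellar space": 1.0,
    "intergalactic space": 1.0e-6,   # one molecule per million cm^3
}

air = densities["sea-level air"]
for place, n in densities.items():
    print(f"{place:22s} {n:10.3g} molecules/cm^3   dilution vs air: {air/n:.3g}")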

The top of the Earth’s atmosphere, the thermosphere, spans number densities from about 10^14 down to 10^7 molecules per cubic centimetre. The figure at the top is a little suspect, because you would expect the density to fall gradually to that of interplanetary space. The quoted boundary is not a sharp one; rather, it is the point where gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres because there is a small drag on satellites.
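
A rough feel for why the density falls so far over a few hundred kilometres comes from the barometric formula, n(h) = n0 exp(-h/H), with scale height H = kT/mg. This is only a sketch under crude assumptions: it takes the temperature as constant, which is badly wrong in the thermosphere itself, and the 250 K below is an illustrative round number, not a measured value.

import math

k = 1.381e-23      # Boltzmann constant, J/K
g = 9.81           # m/s^2, taken as constant for simplicity
m = 4.8e-26        # kg, average mass of an air molecule (~29 atomic mass units)
T = 250.0          # K, a rough average; the real profile varies strongly
n0 = 2.5e19        # molecules per cm^3 at sea level

H = k * T / (m * g)               # scale height, about 7 km
drop = math.log(n0 / 1e7)         # scale heights needed to reach 10^7 per cm^3
print(f"scale height ~ {H/1000:.1f} km")
print(f"density reaches 10^7 per cm^3 at roughly {drop * H / 1000:.0f} km")

Even this crude estimate lands in the same “few hundred kilometres” region where satellite drag is observed.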

So, intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite there with “nothing”. If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (which are very rapidly moving atomic nuclei, usually stripped of some of their electrons, and accelerated by some extreme cosmic event) and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I just note that our nothing is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to consider quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and you know what else is around you, you also know the force you will feel. However, in quantum field theory there are a number of additional fields: thus there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these “carry” the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero point energy is at the lowest possible level.
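
For a harmonic oscillator, the allowed energies are E_n = (n + 1/2)hf, so even the ground state retains E_0 = hf/2. A minimal sketch of the size of that residue for a single molecular vibration (the 3000 cm^-1 wavenumber below is just a typical C–H stretch, chosen for illustration):

h = 6.626e-34          # Planck constant, J s
c_cm = 2.998e10        # speed of light in cm/s, to work with wavenumbers
wavenumber = 3000.0    # cm^-1, typical of a C-H stretching vibration

f = c_cm * wavenumber          # frequency in Hz, ~9 x 10^13
E0 = 0.5 * h * f               # zero point energy of this one oscillator
print(f"zero point energy = {E0:.2e} J = {E0/1.602e-19:.2f} eV")

Quantum field theory assigns a zero point term like this to every mode of every field, which is why the totals become so enormous.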

But there is a catch. Recall Einstein’s E/c^2 = m? That means, according to Einstein, all this zero point energy has the equivalent of inertial mass, and hence gravitational effects. If so, the gravity from all the zero point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is, if quantum field theory is correct, the Universe should have collapsed long ago. The discrepancy between prediction and observation is a factor of about 10^120, that is, one followed by 120 zeros, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and by doing it “properly” they managed to manipulate the error down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ from each other by a factor of 10^80, which is roughly the number of atoms in the known Universe.
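
Where does the 10^120 come from? A common back-of-envelope version sums zero point energies up to the Planck scale and compares the result with the observed dark energy density. The sketch below does exactly that; the observed density is a round-number value of order 10^-9 J/m^3, and the exact exponent depends on the cutoff chosen, which is why quoted figures cluster around 10^120.

hbar = 1.055e-34   # J s
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

planck_length = (hbar * G / c**3) ** 0.5          # ~1.6e-35 m
planck_energy = (hbar * c**5 / G) ** 0.5          # ~2e9 J
rho_predicted = planck_energy / planck_length**3  # one Planck energy per Planck volume

rho_observed = 6e-10   # J/m^3, rough observed vacuum (dark) energy density

print(f"predicted: {rho_predicted:.1e} J/m^3")
print(f"observed:  {rho_observed:.1e} J/m^3")
print(f"ratio:     {rho_predicted / rho_observed:.1e}")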

Some might say surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed in the bottom drawer of the mind. After all, the mathematics are so elegant, or difficult, depending on your point of view. Can’t let observed facts get in the road of elegant mathematics!

Evidence that the Standard Theory of Planetary Formation is Wrong

Every now and again, something happens that makes you feel both good and depressed at the same time. For me it was last week, when I looked up the then latest edition of Nature. There were two papers (Nature, vol. 541: Dauphas, pp. 521–524; Fischer-Gödde and Kleine, pp. 525–527) that falsified two of the most important propositions in the standard theory of planetary formation. What we actually know is that stars accrete from a disk of gas and dust, the disk lingers on for between a million and 30 million years, depending on the star, then the star’s solar winds clear out the dust and gas. Somewhere in there, planets form. We can see evidence of gas giants growing, where the gas is falling into the giant planet, but the process by which smaller planets or the cores of giants form is unobservable, because the bodies are too small and the dust too opaque. Accordingly, we can only form theories to fill in the intermediate process. The standard theory, also called oligarchic growth, explains planetary formation in terms of dust accreting to planetesimals by some unknown mechanism; these then collide to form embryos, which in turn form oligarchs or protoplanets (Mars-sized objects), and these collide to form planets. If this happened, they would do a lot of bouncing around and everything would get well mixed. Standard computer simulations argue that Earth would have formed from a distribution of matter from further out than Mars to inside Mercury’s orbit. Earth then gets its water from a “late veneer” of carbonaceous chondrites from the far side of the asteroid belt.

It is also well known that certain elements in solar system bodies have isotope ratios that vary depending on distance from the star. Thus meteorites from Mars have different isotope ratios from meteorites from the asteroid belt, and both are different again from rocks from the Earth and Moon. The cause of this isotope difference is unclear, but it is an established fact. This is where those two papers come in.

Dauphas showed that Earth accreted from a reasonably narrow zone throughout its entire accretion time. Furthermore, that zone was the same as the one that formed enstatite chondrites, which appear to have originated from a region that was much hotter than the material that, say, formed Mars. Thus enstatite chondrites are reduced; that is, their chemistry was such that there was less oxygen. Mars has only a small iron core, and most of its iron is present as iron oxide. Enstatite chondrites contain free iron as metal, and, of course, Earth has a very large iron core. Enstatite chondrites also contain silicates with less magnesium, which occurs when the temperatures are too hot to crystallize out forsterite. (Forsterite melts at 1890 degrees C, but it will also dissolve to some extent in silica melts at lower temperatures.) Enstatite chondrites are also amongst the driest objects, so they did not provide Earth’s water.

Fischer-Gödde and Kleine showed that most of Earth’s water did not come from carbonaceous chondrites. The reason is that if it had, the non-water part would have added about 5% to the mass of the Earth, and that last 5% is supposed to be the source of the bulk of the elements that dissolve in hot iron, because any arriving earlier would have dissolved in the iron and gone to the core. One of those elements is ruthenium, and the isotope ratios of Earth’s ruthenium rule out an origin from the asteroid belt.

Accordingly, this evidence rules out oligarchic growth. There used to be an alternative theory of planetary accretion called monarchic growth, but this was soon abandoned because it could not explain, first, why we have the number of planets we have, where they are, and second, where our water came from. Calculations show it is possible to have three to five planets in stable orbits between Earth and Mars, assuming none are larger than Earth, and more out to the asteroid belt. But they are not there, so the question is, if planets only grow from a narrow zone, why are those zones empty?

This is where I felt good. A few years ago I published an ebook called “Planetary Formation and Biogenesis” and it required monarchic growth. It also required the planets in our solar system to be roughly where they are, at least until they get big enough to play gravitational billiards. The mechanism is that the planets accreted in zones where the chemistry of the matter permitted accretion, and that in turn was temperature dependent, so specific sorts of planets form in zones at specific distances from the star. Earth formed by accretion of rocks formed during the hot stage, and being in a zone near that which formed enstatite chondrites, the iron was present as a metal, which is why Earth has an iron core. The reason Earth has so much water is that accretion occurred from rocks that had been heat treated to about 1550 degrees Centigrade, in which case certain aluminosilicates phase separated out. These, when they take up water, form cement that binds other rocks to form a concrete. As far as I am aware, my theory is the only current one that requires these results.

So, why do I feel depressed? My ebook contained a review of over 600 journal references, up to a few months before the ebook was published. The problem is, these references, if properly analysed, provided plenty of evidence that those two standard propositions were wrong, but the papers’ conclusions were ignored. In particular, there was a more convincing paper back in 2002 (Drake and Righter, Nature 416: 39–44) that came to exactly the same conclusions. As an example, to eliminate carbonaceous chondrites as the source of water, instead of ruthenium isotopes it used osmium isotopes and other compositional data, but you see the point. So why was this earlier work ignored? I firmly believe that scientists prefer to ignore evidence that falsifies their cherished beliefs rather than change their minds. What I find worse is that neither of these new papers cited the Drake and Righter paper. Either they did not want to admit they were confirming a previous conclusion, or they were not interested in looking thoroughly at past work other than that which supported their procedures.

So, I doubt these two papers will change much either. I might be wrong, but I am not holding my breath waiting for someone with enough prestige to come out and say enough to change the paradigm.

Chemical effects from strained molecules

The major evidence supporting the proposition that cyclopropane permits electron delocalization was that, like ethylene, it stabilizes adjacent positive charge, and it stabilizes the excited states of many molecules when the cyclopropane ring is adjacent to the unsaturation. My argument was that the same conclusions arise from standard electromagnetic theory.

Why is the ring strained (i.e. at a higher energy)? Either the molecule is based on distorted ordinary C – C bonds (the “strain” model) or it involves a totally new form of bonding (required for delocalization). If we assume the strain model, the orbitals around carbon are supposed to be at 109.5 degrees to each other, but the angle between the nuclei is 60 degrees. The orbitals try to follow the reduced angle, but as they move inwards there is increased electron-electron repulsion, and that is the source of the strain energy. That repulsion “lifts” each electron up from the line between the nuclei to form a “banana” shaped bond. Of the three ring atoms, each with two such orbitals, four of the orbitals come closer to a substituent when the bonds are bent, while the two on the atom to which the substituent is attached stay at a more or less constant distance, because their movement is more rotational.

If so, the strained system should be stabilized by adjacent positive charge. Those four orbitals are destabilized by the electron repulsion from other electrons in the ring; the positive charge gives the opposite effect by reducing the repulsion energy. Alternatively, if four orbitals move towards a substituent carrying positive charge, then as they come closer to a point, the negative electric field is stronger at that point, in which case positive charge is stabilized. The problem is to put numbers on such a theory.

My idea was simple. The energy of such an interaction is stored in the electric field, and therefore it is the same for any given change of electric field, irrespective of how the change of field is generated. Suppose you were sitting on the substituent with a means of measuring the electric field, and the electrons were on the other side of a wall. You see an increase in electric field, but what generates it? It could be that the electrons have moved closer, and work is done by their doing so (because the change of field strength requires a change of energy), OR you could have left them in the same place but added charge, and now the work corresponding to the strain energy would be done by adding the charge. There is, of course, no added charge, BUT if you pretend there is, it makes the calculation that relates the strain energy to the effects on adjacent substituents a lot simpler. The concept is a bit like using centrifugal force in an orbital calculation. Strictly speaking, there is no such force – if you use it, it is called a pseudoforce – but it makes the calculations a lot easier. The same applies here, because if one represents the change of electric field as due to a pseudocharge, there is an analytic solution to the integration. One constant still has to be fixed, but fix it for one molecule and it applies to all the others. So an alternative reason why adjacent positive charge is stabilized was obtained, and my calculated value was very close to the experimental one. So far, so good.
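
To illustrate the equivalence being exploited (this is a toy sketch, not the published calculation: the single point charge and the distances below are invented for illustration), the field increase an observer sees when an electron moves closer can be reproduced exactly by leaving the electron where it was and adding a fictitious extra charge:

import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
q = -1.602e-19            # one electron's charge, C

def field(charge, r):
    # Field of a point charge at distance r, in V/m.
    return charge / (4 * math.pi * EPS0 * r**2)

r_old, r_new = 2.0e-10, 1.8e-10   # hypothetical distances, m (2.0 and 1.8 angstroms)

# Field change seen at the observer when the electron moves closer:
dE = field(q, r_new) - field(q, r_old)

# Pseudocharge at the ORIGINAL position that produces the same field change:
q_pseudo = dE * 4 * math.pi * EPS0 * r_old**2
print(f"pseudocharge = {q_pseudo:.3e} C = {q_pseudo/q:.3f} electron charges")

Since the energy is stored in the field, any calculation done with the pseudocharge gives the same answer as one done with the moved electrons, and the pseudocharge version integrates analytically.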

The UV spectra could also be easily explained. From Maxwell’s electromagnetic theory, to absorb a photon and form an excited state, there has to be a change of dipole moment, so as long as the positive end of the dipole can be closer to the cyclopropane ring than the negative end, the excited state is stabilized. More importantly, when this effect was applied to various systems, the changes due to different strained rings were proportional to my calculated changes in electric field at substituents. Very good news.

If positive charge were stabilized due to delocalization, negative charge should be stabilized too, but if the effect were due to my proposed change of electric field, then negative charge should be destabilized. This is where the wheels fell off, because a big name published a paper asserting negative charge was stabilized (Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742–1759). They reported numerous experiments in which they tried to make the required anion, and all of them failed. Not exactly a great sign of stabilization. When they used a method that could not fail, the resultant anion rearranged. That is also not a great sign of stabilization, but equally it does not necessarily show destabilization, because stabilization could be there, yet the anion changes to something even more stable.

Their idea of a clinching experiment was to make an anion adjacent to a cyclopropane ring and two benzene rings. The anion could be made, provided potassium was the counterion. How that got through peer review I have no idea, because that anion would be far less stable than the same anion without the cyclopropane ring; even one benzene ring adjacent to an anion is well known to stabilize it. The reason potassium was required was that the large cation could not get near the nominal carbon atom carrying the charge, and that allowed the negative charge to move away from the cyclopropane ring. If lithium were used, it would get closer, and focus the negative charge nearer the cyclopropane ring. This was a case of a big name being able to publish just about anything, and everyone believed him because they wanted to.

Which is all very well, but arguing that an experiment could have been interpreted some other way is hardly conclusive. However, there was an “out”. The very lowest frequency ultraviolet absorptions of carbonyl compounds were found to involve charge moving from the oxygen towards the carbon atom, and the electric moment of the transition had been measured for formaldehyde. My theory could now make a prediction: strained systems should move the transition to higher frequency, whereas if delocalization were applicable, it should move to lower frequency. My calculations got the shift for cyclopropane as a substituent correct to within 1 nm, whereas the delocalization argument could not even get the direction of the shift right. It also explained another oddity: with a highly strained system such as bicyclobutyl as a substituent, you did not see this transition at all. My reason was simple: the signal moved to such a high frequency that it was buried under another signal. So, I was elated.

When my publications came out, however, there was silence. Nobody seemed to understand, or care about, what I had done. The issue was settled; no need to look further. So much for Popper’s philosophy. This is one of the reasons I am less than enthused with the way alternative theories to the mainstream are considered. However, there is a reason why this is so: besides the occasional good theory, there is a lot of quite spurious stuff circulating, and it is easy to understand why nobody wants to divert attention from the work required to get more funding. Self-interest triumphs.

Does it matter? It does if you want to devise new catalysts, or understand how enzymes work.

Dark Energy and Modern Science

Most people think the scientific endeavour is truly objective: scientists draw their conclusions from all the facts and are never swayed by fashion. Sorry, but that is not true, as I found out from my PhD work. I must post about that sometime, but the shortened version is that I entered a controversy; my results unambiguously supported one side, but the other side prevailed, for two reasons. Some “big names” chose that side, and the review that settled the issue conveniently left out all reference to about sixty different sorts of observations (including mine) that falsified their position. Even worse, some of the younger scientists who were on the wrong side simply abandoned the field and tried to conveniently forget their work. But before I bore people with my own history, I thought it would be worth noting another issue: dark energy. Dark energy is supposed to make up about 70% of the Universe, so it must be important, right?

Nobody knows, or can even speculate with any valid reason, what dark energy is, and there is only one reason for believing it even exists: it is believed that the expansion of the Universe is accelerating. We are reasonably certain the Universe is expanding. This was originally discovered by Hubble, who noticed that the spectra of distant galaxies are red-shifted in frequency, and the further away the galaxies are, the bigger the red shift. This implies that the whole Universe is expanding.

Let me digress and try to explain the Doppler shift. Think of someone beating a drum regularly; the number of beats per unit time is the frequency. Now suppose the drum is on the back of a truck. If you hear a beat and expect the next one, say, 1 second later, then if the truck starts to move away, that next beat will arrive slightly late, because the sound has had further to travel. If the truck recedes at a steady speed, the beats are all delayed from each other by the same interval, the frequency is lower, and that is called a red shift in the frequency. The sound will also become quieter with distance as it spreads out. Thus you can determine both how far away the drum is and how fast it is moving away. The same applies to light, and if the Universe is expanding regularly, then the red shift should also give you the distance. Similarly, provided you know the source intensity, the measured light intensity should give you the distance.
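
A minimal sketch of how astronomers use this, valid for small red shifts (the spectral line and shift below are invented round numbers, and H0 = 70 km/s per megaparsec is a representative value, not a measurement):

c = 2.998e5        # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s per megaparsec (representative value)

lam_emitted = 656.3    # nm, the hydrogen-alpha line as emitted
lam_observed = 660.0   # nm, as measured for some hypothetical galaxy

z = (lam_observed - lam_emitted) / lam_emitted   # the red shift
v = c * z                                        # recession speed, valid for small z
d = v / H0                                       # distance via Hubble's law

print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {d:.0f} Mpc")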

That requires us to measure light from stars that produce a known light output; these are called standard candles. Fortunately, there is a type of very bright standard candle, or so they say, and that is the type Ia supernova. It was observed in the 1990s that the very distant supernovae were dimmer than they should be according to the red shift, which means they are further away than they should be, which means the expansion must be accelerating. To accelerate, there must be some net force pushing everything apart. That something is called dark energy, and it is supposed to make up about two thirds of the Universe. The discoverers of this phenomenon won a Nobel Prize, and that, of course, in many people’s eyes means it must be true.
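
The logic rests on the inverse square law: a standard candle of known absolute brightness looks dimmer exactly in proportion to the square of its distance. A minimal sketch in the astronomers' magnitude notation (the −19.3 absolute magnitude is a commonly quoted typical peak for type Ia supernovae; the apparent magnitudes are invented for illustration):

M = -19.3      # absolute magnitude: how bright a type Ia looks from 10 parsecs

def distance_parsecs(m, M=M):
    # Invert the distance modulus relation m - M = 5*log10(d) - 5, d in parsecs.
    return 10 ** ((m - M + 5) / 5)

m_expected = 24.00   # apparent magnitude expected from the red shift (invented)
m_observed = 24.25   # observed: a quarter of a magnitude dimmer (invented)

d_expected = distance_parsecs(m_expected)
d_inferred = distance_parsecs(m_observed)
print(f"expected: {d_expected:.3e} pc")
print(f"inferred: {d_inferred:.3e} pc ({100 * (d_inferred/d_expected - 1):.0f}% further)")

A quarter of a magnitude of unexplained dimming thus translates into roughly twelve per cent of extra inferred distance, which is why anything that makes distant supernovae intrinsically dimmer matters so much.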

The type Ia supernova is considered to arise when a white dwarf star starts to strip gas from a neighbouring star. The dwarf gradually increases in mass; its nuclear cycle has already burnt helium into carbon and oxygen, and because the mass of the dwarf is too low, the reactions have stopped there. As the dwarf consumes its neighbour, eventually its mass becomes great enough to ignite the burning of the carbon and oxygen; this is uncontrollable and the whole thing explodes. The important point is that because the explosion point is reached by the gradual addition of fresh mass, it occurs at the same mass in all such situations, so you get a standard output, or so it is assumed.

My interest in this arose when I went to hear a talk on the topic and asked the speaker a question relating to the metallicity of the companion star. (Metallicity is the fraction of elements heavier than helium, which in turn means the amount of the star made up of material that has already been through supernovae.) What I considered was that if you think of the supernova as a bubble of extremely energetic material, what we actually see is the light from the outer surface nearest to us, and most of that surface will be the material of the companion. Since the light we see is the result of the heat and inner light promoting electrons to higher energy levels, the light should depend on the composition of the outer surface. Supporting that proposition, Lyman et al. (arXiv:1602.08098v1 [astro-ph.HE] 2016) have shown that calcium-rich supernovae are dimmer than iron-rich ones. Thus the type Ia supernova may not be such a standard candle: the earlier the supernova, the lower its metallicity will be, and low metallicity means lighter atoms, which do not have as many energy levels from which to radiate, so they are less efficient at converting energy to light.

Accordingly, my question was, “Given that low metallicity leads to dimmer type Ia supernovae, and given that the most distant stars are the youngest and hence will have the lowest metallicity, could not that be the reason the distant ones are dimmer?” The response was a crusty, “That was taken into account.” Implied: go away and learn the standard stuff. My problem with that was, how could they have taken into account something that was not discovered for another twenty years or so? Herein lies one of my gripes about modern science: the big names who are pledged to a position will strongly discourage anyone questioning that position if the question is a good one. Weak questions are highly desired, as the big name can deal with them satisfactorily and feel better about himself.

So, besides this issue of metallicity, how strong is the evidence for this dark energy? Maybe not as strong as everyone seems to say. In a recent paper (arXiv:1506.01354v2), Nielsen et al. analysed data for a much larger number of supernovae and came to a somewhat surprising conclusion: so far, you cannot actually tell whether the expansion is accelerating or not. One interesting point in this analysis is that we do not simply relate the measured magnitude to distance. In addition there are corrections for light curve shape and for colour, each with an empirical constant attached, and each “constant” is assumed to be constant. There must also be corrections for intervening dust, and again it is a sheer assumption that the dust in the early universe was the same as now, despite space being far more compact.
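
For concreteness, the standardization such analyses use has the form of the Tripp relation, in which the raw peak magnitude is corrected by a light-curve-shape term and a colour term. A minimal sketch (the alpha, beta and M values are representative of published fits, not taken from the paper; the x1 and colour for the example supernova are invented):

def distance_modulus(m_B, x1, colour, alpha=0.14, beta=3.1, M=-19.1):
    # Tripp-style standardization: correct the peak apparent magnitude m_B
    # using the light curve stretch x1 and the colour, then subtract the
    # assumed absolute magnitude M to get the distance modulus.
    return m_B - M + alpha * x1 - beta * colour

# An invented supernova: slightly stretched light curve, slightly red colour.
mu = distance_modulus(m_B=23.8, x1=0.5, colour=0.05)
d_parsecs = 10 ** ((mu + 5) / 5)
print(f"distance modulus = {mu:.2f}, distance = {d_parsecs:.2e} pc")

The point made above is that alpha and beta are fitted empirically, and their constancy across the whole sample, from nearby to very distant supernovae, is an assumption.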

If we now analyse all the observed data carefully (the initial claims actually chose a rather select few), we find that any acceptable acceleration consistent with the data does not deviate significantly from no acceleration out to red shift 1, and that the experimental errors are such that to this point we cannot distinguish between the options.

Try this. Cabbolet (Astrophys. Space Sci., DOI 10.1007/s10509-014-1791-4) argues from the Eöt-Wash experiment that if there is repulsive gravity (needed to accelerate the expansion), then quantum electrodynamics is falsified in its current formulation! Quantum electrodynamics is regarded as one of the most accurate theories ever produced. We can, of course, reject repulsive gravity, but that also rejects dark energy. So, if that argument is correct, at least one of the two has to go, and maybe dark energy is the one more likely to go.

Another problem is the assumption that type Ia supernovae are standard because they all form by the gradual accretion of extra matter from a companion. But Olling et al. (Nature, 521: 332–335, 2015) argue that they have found three supernovae where the evidence is that the explosion occurred through one dwarf simply swallowing another. In that case there is no standard mass, and the energy could be almost anything, depending on the mass of the companion.

Milne (ApJ 803: 20, doi:10.1088/0004-637X/803/1/20) has shown there are two classes of type Ia supernovae, and for one of them, the NUV-blue events, the optical luminosity is significantly underestimated, in particular for the high-redshift cosmological sample. Not accounting for this effect should thus produce a distance bias that increases with redshift and could significantly bias measurements of cosmological parameters.

So why am I going on like this? I apologize to some for the details, but I think this shows a problem: the scientific community is not always as objective as it should be. It appears they do not wish to rock the boat that carries the big names. All evidence should be subject to examination, and instead what we find is that all too much is “referred to the experts”. Experts are as capable of being wrong as anyone else when there is something they did not know at the time they made their decision.

Hawking’s Genius

How do people reach conclusions? How can these conclusions be swayed? How do you know you are correct, as opposed to being manipulated? How could a TV programme about physics and cosmology tell us something about our current way of life, including politics?

I have recently been watching a series of TV programmes entitled Genius, in which Stephen Hawking starts out by suggesting that anyone can answer the big questions of nature, given a little help. He gives his subjects some equipment with which to do experiments, and they are supposed to work out how to use it and reach the right conclusion. As an aside, this procedure greatly impressed me; Hawking would make a magnificent teacher, because he has the ability to make his subjects really register the points he is trying to make. Notwithstanding that, it was quite a problem to get them to see what they did not expect. In the first episode there was a flat lake, or that was what they thought. With more modern measuring devices, including a laser, they showed the surface of the lake was actually significantly curved. Even then, it was only the incontrovertible evidence that persuaded them the effect was real, despite the fact they also knew the earth is a sphere. In another programme, he gets the subjects to recognise relative distances: thus he gets the distance between the earth and the moon by eclipses, then the distance to the sun. The problem here is that the eclipses only really give you angles; somewhere along the line you need a distance, and Hawking cheats by giving the relative sizes of the moon and the sun. He makes the point about relative distances very well, but he overlooks how to find the real distances in the first place, although, to be fair, in a TV programme for the masses he probably felt only a limited amount could be covered in one hour.

It is a completely different thing to discover something like that for the first time. Hawking mentions that Aristarchus was the first to do this properly. His method of getting the earth-moon distance was to wait for an eclipse of the moon, and get two observers some distance apart (from memory, I believe it was 500 miles) to measure the angle from the horizontal when the umbra first made its appearance. Now he had two angles and a length. The method was valid, although there were errors in the measurements, with the result that he was out by a factor of two. To get the earth-sun distance, he measured the moon-earth-sun angle when the moon was precisely half shaded. The second angle would be a right angle, and he knew the distance to the moon, so with Pythagoras, or secants had trigonometry been available then, he could get the distance. There was a minor problem: the angle is so close to a right angle, the sun is not exactly a point, and the half-shading of the moon is rather difficult to judge, and of course his actual earth-moon distance was wrong, so he had errors here, and had the sun too close by a factor approaching 5. Nevertheless, with the primitive instruments he had, he was on the right track.
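
The reason the half-moon method fails so badly is its extreme sensitivity near 90 degrees: the sun-to-moon distance ratio is 1/cos(theta), where theta is the moon-earth-sun angle at exact half phase. A quick sketch of that sensitivity, using Aristarchus' 87 degrees against the modern value of about 89.85 degrees:

from math import cos, radians

for theta in (87.0, 89.0, 89.85):
    ratio = 1.0 / cos(radians(theta))   # (earth-sun distance) / (earth-moon distance)
    print(f"theta = {theta:5.2f} deg  ->  sun is {ratio:6.1f} times further than the moon")

A measurement error of under three degrees in theta changes the answer by a factor of twenty, which is why such a sound method could give such a poor number.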

Notwithstanding the slight cheat, Hawking’s demonstration made one important point. By giving the relative sizes and putting the moon about 5 metres away from the earth (the observer) to get a precise eclipse of the given sun, he showed what the immense distance really looks like proportionately. I know as a scientist I am often using quite monstrous numbers, but what do they mean? What does 10 to the power of twenty look like compared with 10? Hawking stunned his subjects when comparing four hundred billion with one, using grains of sand. Quite impressive.

All of which raises the question: how do you make a discovery, or perhaps, how do discoveries get made? One way, in my opinion, is to ask questions, then try to answer them. Not just once, but with every answer you can think of. The concept I put in my ebook on this topic was that for any phenomenon there is most likely to be more than one theoretical explanation, but as you increase the number of different observations, the false explanations start to drop out because they cannot account for some of the observations. Ultimately, you can prove a theory only in the event that you can say: if and only if this theory is correct will I see the set of observations X. The problem is to justify the “only if” part. This, of course, applies to any question that can be answered logically.

However, Hawking’s subjects would not have been capable of that, because the first step in forming a theory is to see that it is possible. Seeing something for the first time when you have not been told is not easy, whereas if you are told something should be there, but only faintly, most of the time you will see it, even if it is not really there. There are a number of psychological tests showing that people tend to see what they expect to see. Perhaps the most spectacular example was the canals on Mars. After the mention of canali, astronomers, and Lowell in particular, drew lovely maps of Mars with lines that are simply not there.

Unfortunately, I feel there was a little cheating in Hawking’s programmes, which showed up in one that looked at whether determinism rules our lives, i.e. are we pre-programmed to carry out our lives, or do we have free will? To examine this, he had a whole lot of people line up, and at given times each could move one square left or right, at their pleasure. After a few rounds, there was the expected scattered distribution. So, what did our “ordinary people” conclude? I expected them to conclude that that sort of behaviour was statistical, and that there was real choice in what people do. But no. These people concluded there is a multiverse, and all the choices are made somewhere. I don’t believe that for an instant, but I also don’t believe three people picked off the street would reach that conclusion unless they had been primed to reach it.
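
The “expected scattered distribution” is just a random walk, and you do not need a multiverse to produce it. A minimal simulation (the 100 people and 10 rounds are invented parameters):

import random
from collections import Counter

random.seed(1)       # fixed seed so the sketch is reproducible
people, rounds = 100, 10

final_positions = Counter()
for _ in range(people):
    # Each round, an unpredictable choice: one square left or right.
    position = sum(random.choice((-1, 1)) for _ in range(rounds))
    final_positions[position] += 1

for position in sorted(final_positions):
    print(f"{position:+3d}: {'*' * final_positions[position]}")

Independent unpredictable choices are quite enough to generate the bell-shaped spread the subjects saw.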

And the current relevance? Herein lies what I think is the biggest problem of our political system: people can be made to believe what they have been primed to believe, even if they really don’t understand anything relating to the issue.

Science and the afterlife

Science is not a collection of facts; rather, it is a way of analyzing what we observe. The concept is, if you have a good theory it will predict things; then you can go out, do experiments, and see if you are right. Of course, in general it is not so easy to form a theory without resorting to nature, which means that most theories largely explain what is already known. Nevertheless, the objective is to make a limited number of predictions of what we do not know. Someone then goes out and carries out the experiments, tests, or whatever, and if the theory is correct, the observations are just what the theory predicted. That is great news for the theoretician.

That is all very well, but what happens when there is a phenomenon that cannot be the subject of an experiment? In a previous post (http://wp.me/p2IwTC-6q) I remarked that my Guidance Wave interpretation of quantum mechanics permitted, but by no means required, life after death. This cannot be experimented on and reported; however, there are a number of reports from people who claim to have “almost died”, or momentarily died, been revived, and then given quite strange and explicit accounts. What can science say about them?

One comes from a book I was given to read. Written by a pastor, Todd Burpo, it tells of what his son, who was just short of four years old at the time, reported after he had nearly died in hospital. The son made several statements, all of which entailed an “out of body experience”, and most involved a short visit to heaven to sit on Jesus’ lap. However, two of the descriptions were of more interest to me, because they described how the “out of body” boy saw his parents, each in a different room, and described what they did. In principle, there is no way he could have had this information. The story also has a very frustrating element. The boy described the marks on Jesus’ hands from the crucifixion. The pastor took that as clear evidence, because how would the boy know where the marks were? The most obvious reply is that he lived in a religious house, and there were probably images around. Further, and this is the frustrating part, the standard Roman crucifixion did not put nails through the hands, so the boy was wrong, right? The problem is, Jesus did not have a standard crucifixion. What usually happened was that the victim was left on the cross until the flesh had more or less rotted away or been picked by the crows, and then the residue was disposed of, but not given a burial. To cut the body down and give it over for burial was never done, except this time. Accordingly, since it was non-standard, the soldiers may have put the nails where it would be easier to get them out later. In short, the killer evidence essentially ends up useless. There is then the added complication that, if the experience were real, Jesus may have given the image the pastor wanted.

The second is also interesting. The Harvard neurosurgeon Dr Eben Alexander was in a coma for several days, caused by severe bacterial meningitis. During his coma, he too had a vivid journey, first into other rooms, from which he described people’s actions that he had no possibility of knowing about through his physical body, and then into what he knew to be the afterlife. He had previously been a skeptic about such accounts and considered them hallucinations, but in his own case his neocortex was non-functional during the coma, and he gives nine different scientific reasons why what he experienced cannot have been hallucination or imagination. Since I am not an expert in brain function, I cannot comment usefully on his analysis. On one hand he is a prominent neurosurgeon, and should be an expert on brain function, so his analysis should be taken seriously; nevertheless, as Richard Feynman remarked about science, the easiest person to fool is yourself.

Perhaps the most spectacular accounts have been presented by Elisabeth Kübler-Ross, MD, a psychiatrist. She noted that when children have this experience, they see their mother or father only if the parent is dead, never if the parent is still alive. Christian children often see Jesus; Jewish ones never do. One particularly unusual account came from a woman who described what people were doing while trying to resuscitate her after an accident. She claimed to have had an out of body experience and to have watched everything. What is unusual is that the woman was blind; the out of body “her” could see everything, but when she was resuscitated, she reverted to being blind. Another unusual report was from someone who met one of his parents in this “afterlife”, who confirmed being dead. That parent had died only one hour previously, five hundred miles or so away.

At this point we should look at the structure of a scientific proposition. There are two conditional forms for a statement that apply to a proposition under a given set of conditions:
(a) If the hypothesis is correct, then we shall get a certain set of observations.
(b) If and only if the hypothesis is correct, then we shall get a certain set of observations.
The difference lies here. In (a) there may be a multiplicity of different hypotheses that could lead to the observation, such as a hallucination or a memory dump; this would apply to observations the person could have recalled or imagined. In (b) there is only one explanation possible, therefore the hypothesis must be correct. Obviously, it is difficult to assert there is only one possible explanation; nevertheless, seeing something in another room when nearly dead seems to be explainable only by part of the person (the soul, say) travelling out of the body into the other room.

So, where does this leave us? Essentially, in the position that there can be no proof until you die. Before that, it is all a matter of faith. Nevertheless, as I argued, my Guidance Wave interpretation of quantum mechanics at least makes this possible within the realms of physics, but it does not require it. Accordingly, you either believe or you do not. The one clear fact, though, is that if you do believe, it will almost certainly make dying easier, and that in itself is no bad thing.

Why I question many scientific statements

From a few of the previous posts, where I have ventured into science, it may be obvious that I am not putting forward standard views. That leaves three possibilities: I don’t know what I am talking about; I am wrong; I might even be right. One of those options makes a lot of people who listen to what I say uncomfortable. Comfort comes when everything falls into place with your preconceptions; a challenge to those preconceptions requires you to think, and it is surprising how few scientists want to be the first person to stand up and support a challenge. So, why am I like that?

It started with my PhD. My supervisor gave me a project; it was a good project, but unfortunately it got written up in the latest volume of the Journal of the American Chemical Society when I was three weeks into it. He gave me two new projects to choose from, whereupon he went away on summer holidays. One was, as far as I could see, hopeless, and worse than that, highly dangerous. The second I could finish straight away: he wanted me to measure the rates of a reaction of certain materials, and according to the scientific journals, the reaction did not go. So I was told to design my own project, which I did. I entered a controversy that had emerged. For those who know some chemistry, the question was, does a cyclopropane ring engage in electronic conjugative effects with adjacent unsaturated substituents? (Don’t worry if that means nothing to you; it hardly affects the story. A very rough explanation is, can the electrons slosh over to other groups outside the ring, or must they stay within the ring?) A number of compounds incorporating this structure had unusual properties, and there seemed to be two possible explanations: the proposed quantum effects, or the effects of strain.

This looked fairly straightforward, but I soon found out that my desire to do something that would not be easily done by someone else had its price: the chemical compounds I wanted to use were difficult to make, but I made them. The first series of compounds were not exceptionally helpful because a key one decomposed during measurement of the effect, but I soon got some definitive measurements through a route I had not expected when I started. (Isn’t it always the way that the best way of doing something is not what you started out trying to do?) The results were very clear and very definitive: the answer to the question was, “No.”

The problem then was that the big names had decided the answer was yes. My problem was that while I had shown conclusively (to my mind, at least) that it was no, there were nevertheless a number of properties that could not be explained by what everyone thought the alternative was, so I re-examined that alternative. I concluded that because the strain arose from electric charge being moved towards the centre of the ring, that movement of charge was responsible for the effects. Essentially, I was applying parts of Maxwell’s electromagnetic theory, which is a very sound part of physics.

What happened next was surprising. In my PhD thesis defence, there were no real questions about my theory; it was almost as if the examiner did not want to go down that path. I continued with my career, waiting for my supervisor to publish my work, but the only paper that appeared was one that kept away from the controversy. Accordingly, I decided to publish papers on my own. Unfortunately, my first one was not very good: I wanted to get plenty of material in, and I had been told to be brief. Brevity was not a virtue, because I later found out nobody really understood the first part. That was my fault, thanks to the brevity, but the good news, from my point of view, was that while that first paper used one piece of observational fact to fix a constant, and thus calculate the key variable, every time I subsequently took the theory into uncharted waters it came up with essentially correct agreement with observation. I calculated a sequence of spectral shifts in almost exact agreement with experiment, while the quantum theory everyone else was using could not even get the direction of the shifts right. So I should have been happy, right?

What happened next was that a few years later a review came out to settle the question, and it landed on the quantum side of things. It did so by ignoring everything that did not agree with it! I was meanwhile employed and could not devote time to this matter, but much later I wrote a different review. The journals I submitted it to did not want it: one rejected it because there was too much mathematics; others said they did not want logic analyses. I posted it on the Chemweb preprint server, but that seems to be history, because while it is supposedly still there, I cannot find it. If anyone wants to see it, enquire below. My key point is that the review shows over sixty different types of experiments that falsify the standard position, but nobody is interested. All the work that falsified the prevalent dogma has been buried. Yes, it is still in the literature, but if Google cannot even find my publication when I know the title, the date and the location, how can anyone else find what they do not know about?

So, this is an aberration? I wish. I shall continue in this vein from time to time.