A personal scientific low point.

When I started my PhD research, I was fairly enthusiastic about the future, but I soon became disillusioned. Before my supervisor went on summer holidays, he gave me a choice of two projects. Neither was any good, and when the Head of Department saw me, he suggested (probably to keep me quiet) that I find my own project. Accordingly, I elected to enter a major controversy, namely, were the wave functions of a cyclopropane ring localized (i.e., each chemical bond could be described by wave interference between a given pair of atoms, with no further wave interference) or were they delocalized (i.e., the wave function representing a pair of electrons spread over more than one pair of atoms), and in particular, did they delocalize into substituents? Now, without getting too technical, I knew my supervisor had done quite a bit of work on something called the Hammett equation, which measures the effect of substituents on reactive sites, and in which certain substituents take different values when such delocalization is involved. If I could make the right sort of compounds, this equation would actually settle the question.

This was not to be a fortunate project. First, my reserve synthetic method took 13 steps to get to the desired product, and while no organic synthesis gives a yield much better than 95%, one of these steps struggled to get over 35% and another was not as good as desirable, which meant that I had to start with a lot of material. I did explore some shorter routes. One involved a reaction published in a Letter by someone who would go on to win a Nobel prize; the key requirement for getting the reaction to work was omitted from the Letter. I got a second reaction to work, but I had to order special chemicals. They turned up after I had submitted my thesis, having travelled via Hong Kong, where they were put aside and forgotten. I soon discovered that my supervisor was not going to provide any useful advice on chemical synthesis; then he went on sabbatical, and I was on my own. After a lot of travail, I did what I had set out to do, but an unexpected problem arose. The standard compounds worked well and I got the required straight line with minimal deviation, but for the key compound at one extreme of the line, the substituent at one end reacted quickly with the other end when in the amine form. No clear result.

My supervisor made a cameo appearance before heading back to North America, where he was looking for a better paying job, and he made a suggestion, which involved reacting carboxylic acids that I already had, but in toluene. The corresponding measurements had already been reported in water and aqueous alcohol, but the slope of the line was too shallow to be conclusive. What the toluene did was greatly amplify the effect. The results were clear: there was no delocalization.

The next problem was that the controversy was settling down, and the general consensus was that there was such delocalization. This was based on one main observational fact, namely that adjacent positive charge was stabilized, and there were many papers stating that the delocalization must exist on theoretical grounds. The theory used was exactly the same type of program that “proved” the existence of polywater. Now, the interesting thing was that soon everybody admitted there was no polywater, yet the theory was “obviously” right in this case. Of course I still had to explain the stabilization of positive charge, and I found a way, namely that the strain involved mechanical polarization.

So, where did this get me? Largely, nowhere. My supervisor did not want to stick his head above the parapet, so he never published the work on the acids that was my key finding. I published a sequence of papers based on the polarization hypothesis, but in my first one I made an error: I left out what I thought was too obvious to waste the time of the scientific community, and in any case, I badly needed the space to keep within page limits. Being brief is NOT always a virtue.

The big gain was that while both explanations explained why positive charge was stabilized (and my theory got the energy of stabilization of the gas phase carbenium ion right, at least as measured by another PhD student in America), the two theories differed on adjacent negative charge. The theory involving quantum delocalization required it to be stabilized too, while mine required it to be destabilized. As it happens, negative charge adjacent to a cyclopropane ring is so unstable it is almost impossible to make, but that may not be convincing. However, there is one UV transition where the excited state has more negative charge adjacent to the cyclopropane ring, and my calculations gave the exact spectral shift, to within 1 nm. The delocalization theory cannot even get the direction of the shift right. That was published.

So, what did I learn from this? First, my supervisor did not have the nerve to go against the flow. (Neither, seemingly, did the supervisor of the student who measured the energy of the carbenium ion, and all I could do was to rely on the published thesis.) My spectral shifts were dismissed by one reviewer as “not important” and they were subsequently ignored. Something that falsifies the standard theory is unimportant? I later met a chemist who rose to the top of the academic tree, and he had started with a paper that falsified the standard theory, but when it too was ignored, he moved on. I asked him about this, and he seemed a little embarrassed as he said it was far better to ignore that and get a reputation doing something more in accord with a standard paradigm.

Much later (I had a living to earn) I had the time to write a review. I found over 60 different types of experiment that falsified the standard theory that was by now in the textbooks. It could not get published. There are few review journals that deal with chemistry, and one rejected the proposal on the grounds that the matter was settled. (No interest in finding out why that might be wrong.) For another, it exceeded their page limit. For another, there were not enough diagrams and too many equations. Others did not publish logical analyses. So that is what I have discovered about modern science: in practice it may not live up to its ideals.


Scientific low points: (2)

The second major low point from recent times is polywater. The history of polywater is brief and not particularly distinguished. Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, and found that tiny traces of such water could be obtained that had an elevated boiling point, a depressed freezing point, and a viscosity approaching that of a syrup. Boris Deryagin improved the production techniques (although he never produced more than very small amounts) and determined a freezing point of about −40 °C, a boiling point of about 150 °C, and a density of 1.1-1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour: (a) the water had dissolved quartz, or (b) the water had polymerized. Everybody “knew” water did not dissolve quartz, therefore it must have polymerized. In the vibrational spectrum of polywater, two new bands were observed at 1600 and 1400 cm^-1. From force constant considerations this was explained in terms of each OH bond having approximately 2/3 bond order. The spectrum was consistent with the water occurring in hexagonal planar units, and if so, the stabilization per water molecule was calculated to be of the order of 250-420 kJ/mol. For the benefit of the non-chemist, this is a massive change in energy; it meant the water molecules were joined together with a strength comparable to the carbon-carbon bonds in diamond. The fact that it had a reported boiling point of about 150 °C should have warned them that this had to be wrong, but when a bandwagon starts rolling, everyone wants to jump aboard without stopping to think. An NMR spectrum of polywater gave a broad, low-intensity signal approximately 300 Hz from the main proton signal, which meant that either a new species had formed or there was a significant impurity present. (This would have been a good time to check for impurities.) The first calculation employing “reliable” methodology used ab initio SCF LCAO methods, and the water polymers were found to be increasingly stabilized with size: the cyclic tetramer was stabilized by 177 kJ/mol, the cyclic pentamer by 244 kJ/mol, and the hexamer by 301.5 kJ/mol. One of the authors of this paper was John Pople, who went on to get a Nobel prize, although not for this little effort.

All of this drew incredible attention. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life, and that this had happened on Venus. We had to be careful! Much funding was devoted to polywater, even from the US navy, who apparently saw significant defence applications. (One can only imagine the trapping of enemy submarines in a polymeric syrup, prior to extinguishing all life on Earth!)

It took a while for this to fall over. Pity one poor PhD candidate who had to prepare polywater, when all he could prepare was solutions of silica. His supervisor told him to try harder. Then, suddenly, polywater died. Someone noticed that the infrared spectrum quoted above bore a striking resemblance to that of sweat. Oops.

However, if the experimentalists did not shine, the theory was extraordinarily dim. First, the same methods in different hands produced a very wide range of results with no explanation of why the results differed, although of course none of them concluded there was no polywater. If there were no differences in the implied physics between methods that gave such differing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. One problem was, as with only too many calculations in chemical theory, that the inherent physical relationships are never defined in the papers. It was almost amusing to see, once it was clear there was no polywater, a paper published in which ab initio LCAO SCF calculations with Slater-type orbitals provided evidence against the previous calculations supporting polywater. The planar symmetrical structure was found to be not stable, and a tetrahedral structure made by four water molecules resulted in instability because of excessive deformation of bond angles. What does that mean, apart from face-saving for the methodology? If you cannot have ring structures when the bond angles are tetrahedral, sugar is an impossible molecule. While there are health issues with sugar, the impossibility of its existence is not in debate.

One problem with the theory was that molecular orbital theory was used to verify large delocalization of electron motion over the polymers. The problem is, MO theory assumes that in the first place. Verifying what you assume is one of the big naughties pointed out by Aristotle, and you would think that after 2,400 years, something might have stuck. Part of the problem was that nobody could question any of these computations, because nobody had any idea what the assumed inputs and code were. We might also note that the more extreme of these claims tended to end up in what many would claim to be the most reputable of journals.

There were two major fall-outs from this. Anything that could be vaguely related to polywater was avoided. This has almost certainly done much to retard examination of close ordering on surfaces, or on very thin sections, which, of course, are of extreme importance to biochemistry. There is no doubt whatsoever that reproducible effects were produced in small capillaries. Water at normal temperatures and pressures does not dissolve quartz (try boiling a lump of quartz in water for however long) so why did it do so in small capillaries? The second was that suddenly journals became far more conservative. The referees now felt it was their God-given duty to ensure that another polywater did not see the light of day. This is not to say that the referee does not have a role, but it should not be to decide arbitrarily what is true and what is false, particularly on no better grounds than, “I don’t think this is right”. A new theory may not be true, but it may still add something.

Perhaps the most unfortunate fallout was to the career of Deryagin. Here was a scientist who was more capable than many of his detractors, but who made an unfortunate mistake. The price he paid in the eyes of his detractors seems out of all proportion to the failing. His detractors may well point out that they never made such a mistake. That might be true, but what did they make? Meanwhile, Pople, whose mistake was far worse, went on to win a Nobel Prize for developing molecular orbital theory, and for developing a cult following around it. Then there is the question, why avoid studying water in monolayers or bilayers? If it can dissolve quartz, it has some very weird properties, and understanding these monolayers and bilayers is surely critical if we want to understand enzymes and many biochemical and medical problems. In my opinion, the real failures here come from the crowd, who merely want to be comfortable. Understanding takes effort, and effort is often uncomfortable.

What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, of making propositions in accord with that evidence, and most importantly, rejecting those that are clearly false. It may appear that for ordinary people, that might be too hard, but at least that method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published a work, “The Structure of Scientific Revolutions”, and in it he argued that science itself has a very high level of conservatism. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway, and put in the bottom drawer of the mind. The problem seems to be, there is a roadblock towards thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing: it is what is left when all the “somethings” are removed. But is there “nothing” anywhere? The ancient Greek philosophers argued about the void, and the issue was “settled” by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. With such excellent thinking, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, so they could be moving through a void, and if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers did not fall as fast as stones because of wind resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question, are such prejudices still around?

The usual concept of “nothing” is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has 2.5 x 10^19 molecules in it. That’s plenty. For those not used to “big figures”, 10^19 means the number you get by writing 1 followed by 19 zeros, or by multiplying 10 by itself nineteen times. Our vacuum cleaner gets the concentration of molecules down to about 10^19 per cubic centimetre, that is, the air pressure is about two and a half times less inside the cleaner. The Moon’s “atmosphere” has 4 x 10^5 molecules per cubic centimetre, so the Moon is not exactly in a vacuum. Interplanetary space has about 11 molecules per cubic centimetre, interstellar space has about 1 molecule per cubic centimetre, and the best vacuum, intergalactic space, needs about a million cubic centimetres to find one molecule.
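As a rough sanity check on that first figure, here is a minimal sketch using the ideal gas law; the round values for pressure and temperature are my own assumptions for illustration, not anything from the sources quoted above.

```python
# A rough sanity check on the quoted number density, assuming the ideal gas law
# n = P / (k_B * T) with round values for standard pressure and room temperature
# (illustrative assumptions only).
k_B = 1.380649e-23   # Boltzmann constant, J/K
P = 101325.0         # standard atmospheric pressure, Pa
T = 298.0            # roughly room temperature, K

n_per_m3 = P / (k_B * T)      # molecules per cubic metre
n_per_cm3 = n_per_m3 * 1e-6   # convert to molecules per cubic centimetre
print(f"{n_per_cm3:.1e} molecules per cm^3")   # ~2.5e19, matching the figure above
```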

The top of the Earth’s atmosphere, the thermosphere, runs from about 10^14 down to about 10^7 molecules per cubic centimetre. The figure at the top is a little suspect, because you would expect the density to fall gradually to that of interplanetary space. The reason a boundary is quoted is not that there is a sharp edge, but rather that this is roughly where the gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres because there is a small drag on satellites.

So, intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite at “nothing”. If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (which are very rapidly moving atomic nuclei, usually stripped of some of their electrons, and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I merely note that our “nothing” is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to look at quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and you know what else is around you, you also know the force you will feel from it. However, in quantum field theory there are a number of additional fields: thus there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these “carry” the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again, according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero point energy is at the lowest possible level.

But there is a catch. Recall Einstein’s E/c^2 = m? That means, according to Einstein, all this zero point energy has the equivalent of inertial mass in terms of its effects on gravity. If so, then the gravity from all the zero point energy in the vacuum can be calculated, and we can predict whether the Universe is expanding or contracting. The answer is, if quantum field theory is correct, the Universe should have collapsed long ago. The difference between prediction and observation is a factor of about 10^120, that is, ten multiplied by itself 120 times, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and if it had been done “properly”, they justified manipulating the error down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by 10^80, which is roughly the same as the number of atoms in the known Universe.
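For the curious, here is a minimal back-of-the-envelope sketch of where a number of order 10^120 comes from. The Planck-scale cutoff for the zero point energy, the Hubble constant and the dark-energy fraction below are all illustrative assumptions of mine; gentler cutoffs give the smaller (but still enormous) discrepancies mentioned above.

```python
# A minimal sketch of the mismatch described above, under the assumption that the
# zero point energy is summed all the way to the Planck scale (the big assumption).
import math

hbar = 1.055e-34    # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

# Zero point energy density with a naive Planck-scale cutoff
rho_vacuum_predicted = c**7 / (hbar * G**2)           # J/m^3, roughly 5e113

# Observed dark-energy density: ~70% of the critical density for H0 ~ 70 km/s/Mpc
H0 = 70e3 / 3.086e22                                  # Hubble constant in s^-1
rho_critical = 3 * H0**2 * c**2 / (8 * math.pi * G)   # J/m^3
rho_vacuum_observed = 0.7 * rho_critical              # roughly 6e-10 J/m^3

ratio = rho_vacuum_predicted / rho_vacuum_observed
print(f"predicted / observed ~ 10^{math.log10(ratio):.0f}")   # of order 10^120
```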

Some might say that surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed in the bottom drawer of the mind. After all, the mathematics is so elegant, or so difficult, depending on your point of view. Can’t let observed facts get in the road of elegant mathematics!

Evidence that the Standard Theory of Planetary Formation is Wrong.

Every now and again something happens that makes you feel both good and depressed at the same time. For me it was last week, when I looked up the then latest edition of Nature. There were two papers (Nature, vol. 541: Dauphas, pp 521-524; Fischer-Gödde and Kleine, pp 525-527) that falsified two of the most important propositions in the standard theory of planetary formation. What we actually know is that stars accrete from a disk of gas and dust, the disk lingers on for between a million and 30 million years, depending on the star, and then the star’s solar winds clear out the dust and gas. Somewhere in there, planets form. We can see evidence of gas giants growing, where the gas is falling into the giant planet, but the process by which smaller planets or the cores of giants form is unobservable, because the bodies are too small and the dust too opaque. Accordingly, we can only form theories to fill in the intermediate process. The standard theory, also called oligarchic growth, explains planetary formation in terms of dust accreting to planetesimals by some unknown mechanism; these collide to form embryos, which in turn form oligarchs or protoplanets (Mars-sized objects), and these collide to form planets. If this happened, they would do a lot of bouncing around and everything would get well mixed. Standard computer simulations argue that Earth would have formed from a distribution of matter from further out than Mars to inside Mercury’s orbit. Earth then gets its water from a “late veneer” of carbonaceous chondrites from the far side of the asteroid belt.

It is also well known that certain elements in bodies in the solar system have isotope ratios that vary with distance from the star. Thus meteorites from Mars have different isotope ratios from meteorites from the asteroid belt, and both in turn differ from rocks from the Earth and Moon. The cause of this isotope difference is unclear, but it is an established fact. This is where those two papers come in.

Dauphas showed that Earth accreted from a reasonably narrow zone throughout its entire accretion time. Furthermore, that zone was the same as that which formed enstatite chondrites, which appear to have originated from a region that was much hotter than the material that, say, formed Mars. Thus enstatite chondrites are reduced. What that means is that their chemistry was such that there was less oxygen. Mars has only a small iron core, and most of its iron is as iron oxide. Enstatite chondrites have free iron as iron, and, of course, Earth has a very large iron core. Enstatite chondrites also contain silicates with less magnesium, which will occur when the temperatures were too hot to crystallize out forsterite. (Forsterite melts at 1890 degrees C, but it will also dissolve to some extent in silica melts at lower temperatures.) Enstatite chondrites also are amongst the driest, so they did not provide Earth’s water.

Fischer-Gödde and Kleine showed that most of Earth’s water did not come from carbonaceous chondrites. The reason is that if it had, the non-water part would have added about 5% to the mass of Earth, and that last 5% is supposed to be the source of the bulk of the elements that dissolve in hot iron, since the amounts arriving earlier would have dissolved in the iron and gone to the core. One of those elements is ruthenium, and the isotope ratios of Earth’s ruthenium rule out an origin from the asteroid belt.

Accordingly, this evidence rules out oligarchic growth. There used to be an alternative theory of planetary accretion called monarchic growth, but this was soon abandoned because it cannot explain first why we have the number of planets we have where they are, and second where our water came from. Calculations show it is possible to have three to five planets in stable orbit between Earth and Mars, assuming none are larger than Earth, and more out to the asteroid belt. But they are not there, so the question is, if planets only grow from a narrow zone, why are these zones empty?

This is where I felt good. A few years ago I published an ebook called “Planetary Formation and Biogenesis” and it required monarchic growth. It also required the planets in our solar system to be roughly where they are, at least until they get big enough to play gravitational billiards. The mechanism is that the planets accreted in zones where the chemistry of the matter permitted accretion, and that in turn was temperature dependent, so specific sorts of planets form in zones at specific distances from the star. Earth formed by accretion of rocks formed during the hot stage, and being in a zone near that which formed enstatite chondrites, the iron was present as a metal, which is why Earth has an iron core. The reason Earth has so much water is that accretion occurred from rocks that had been heat treated to about 1550 degrees Centigrade, in which case certain aluminosilicates phase separated out. These, when they take up water, form cement that binds other rocks to form a concrete. As far as I am aware, my theory is the only current one that requires these results.

So, why do I feel depressed? My ebook contained a review of over 600 references from journals up until a few months before the ebook was published. The problem is, these references, if properly analysed, provided plenty of evidence that these two standard theories were wrong, but in each case the papers’ conclusions were ignored. In particular, there was a more convincing paper back in 2002 (Drake and Righter, Nature 416: 39-44) that came to exactly the same conclusions. As an example, to eliminate carbonaceous chondrites as the source of water, instead of ruthenium isotopes it used osmium isotopes and other compositional data, but you see the point. So why was this earlier work ignored? I firmly believe that scientists prefer to ignore evidence that falsifies their cherished beliefs rather than change their minds. What I find worse is that neither of these new papers cited the Drake and Righter paper. Either they did not want to admit they were confirming a previous conclusion, or they were not interested in looking thoroughly at past work other than that which supported their procedures.

So, I doubt these two papers will change much either. I might be wrong, but I am not holding my breath waiting for someone with enough prestige to come out and say enough to change the paradigm.

Chemical effects from strained molecules

The major evidence supporting the proposition that cyclopropane permits electron delocalization was that, like ethylene, it stabilizes adjacent positive charge, and it stabilizes the excited states of many molecules when the cyclopropane ring is adjacent to the unsaturation. My argument was that the same conclusions arise from standard electromagnetic theory.

Why is the ring strained (i.e. at a higher energy)? Either the molecule is based on distorted ordinary C – C bonds (the “strain” model) or it involves a totally new form of bonding (required for delocalization). If we assume the strain model, the orbitals around carbon are supposed to be at 109.4 degrees to each other, but the angle between nuclei is 60 degrees. The orbitals try to follow the reduced angle, but as they move inwards, there is increased electron-electron repulsion, and that is the source of the strain energy. That repulsion “lifts” the electron up from the line between the nuclei to form a “banana” shaped bond. Of the three atoms, each with two orbitals, four of those orbitals come closer to a substituent when the bonds are bent, and the two on the atom to which the substituent is attached maintain a more or less similar distance, because the movement is more rotational.

If so, the strained system should be stabilized by adjacent positive charge. Those four orbitals are destabilized by the electron repulsion from other electrons in the ring; the positive charge gives the opposite effect by reducing the repulsion energy. Alternatively, if four orbitals move towards a substituent carrying positive charge, then as they come closer to a point, the negative electric field is stronger at that point, in which case positive charge is stabilized. The problem is to put numbers on such a theory.

My idea was simple. The energy of such an interaction is stored in the electric field, and therefore it is the same for any given change of electric field, irrespective of how the change of field is generated. Suppose you were sitting on the substituent with a means of measuring the electric field, and the electrons were on the other side of a wall. You see an increase in electric field, but what generates it? It could be that the electrons have moved closer, and work is done by their doing so (because the change of field strength requires a change of energy), OR you could have left them in the same place but added charge, and now the work corresponding to the strain energy would be done by adding the charge. There is, of course, no added charge, BUT if you pretend there is, it makes the calculation that relates the strain energy to the effects on adjacent substituents a lot simpler. The concept is a bit like using centrifugal force in an orbital calculation. Strictly speaking, there is no such force – if you use it, it is called a pseudoforce – but it makes the calculations a lot easier. The same here, because if one represents the change of electric field as due to a pseudocharge, there is an analytic solution to the integration. One constant still has to be fixed, but fix it for one molecule and it applies to all the others. So an alternative reason why adjacent positive charge is stabilized was available, and my calculation came very close to the experimental value. So far, so good.

The UV spectra could also be easily explained. From Maxwell’s electromagnetic theory, to absorb a photon and form an excited state, there has to be a change of dipole moment, so as long as the positive end of the dipole can be closer to the cyclopropane ring than the negative end, the excited state is stabilized. More importantly, when this effect was applied to various systems, the changes due to different strained rings were proportional to my calculated changes in electric field at substituents. Very good news.

If positive charge were stabilized by delocalization, so should negative charge be, but if it were due to my proposed change of electric field, then negative charge should be destabilized. This is where the wheels fell off, because a big name published a paper asserting that negative charge was stabilized (Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759). They reported numerous experiments in which they tried to make the required anion, and they all failed. Not exactly a great sign of stabilization. When they used a method that could not fail, the resultant anion rearranged. That is also not a great sign of stabilization, but equally it does not necessarily show destabilization, because stabilization could be there, but the anion changes to something even more stable.

Their idea of a clinching experiment was to make an anion adjacent to a cyclopropane ring and two benzene rings. The anion could be made provided potassium was the counterion. How that got through peer review I have no idea, because that anion would be far less stable than the same anion without the cyclopropane ring. Even one benzene ring adjacent to an anion is well known to stabilize it. The reason potassium was required was that the large cation could not get near the nominal carbon atom carrying the charge, and that allowed the negative charge to be delocalized away from the cyclopropane ring. If lithium were used, it would get closer, and focus the negative charge closer to the cyclopropane ring. This was a case of a big name being able to publish just about anything, and everyone believed him because they wanted to.

Which is all very well, but arguing that an experiment could have been interpreted some other way is hardly conclusive. However, there was an “out”. The very lowest frequency ultraviolet spectral absorptions of carbonyl compounds were found to involve charge moving from the oxygen towards the carbon atom, and the electric moment of the transition had been measured for formaldehyde. My theory could now make a prediction: strained systems should move the transition to higher frequency, whereas if delocalization were applicable, it should move to lower frequency. My calculations got the change of frequency for cyclopropane as a substituent correct to within 1 nm, whereas the delocalization argument could not even get the direction of the shift right. It also explained another oddity: with a highly strained system such as bicyclobutyl as a substituent, you did not see this transition at all. My reason was simple: the signal moved to such a high frequency that it was buried in another signal. So, I was elated.

When my publications came out, however, there was silence. Nobody seemed to understand, or care, about what I had done. The issue was settled; no need to look further. So much for Popper’s philosophy. And this is one of the reasons I am less than enthused at the way alternative theories to the mainstream are considered. However, there is a reason why this is so. Besides the occasional good theory, there is a lot of quite spurious stuff circulating. It is easy to understand why nobody wants to divert their attention from the work required for them to get more funding. Self-interest triumphs.

Does it matter? It does if you want to devise new catalysts, or understand how enzymes work.

Dark Energy and Modern Science

Most people think the scientific endeavour is truly objective; scientists draw their conclusions from all the facts and are never swayed by fashion. Sorry, but that is not true, as I found out from my PhD work. I must post about that sometime, but the shortened version is that I entered a controversy, my results unambiguously supported one side, but the other side prevailed for two reasons. Some “big names” chose that side, and the review that settled the issue conveniently left out all reference to about sixty different sorts of observations (including mine) that falsified their position. Even worse, some of the younger scientists who were on the wrong side simply abandoned the field and tried to conveniently forget their work. But before I bore people with my own history, I thought it would be worth noting another issue, dark energy. Dark energy is supposed to make up about 70% of the Universe so it must be important, right?

Nobody knows, or can even speculate with any valid reason, what dark energy is, and there is only one reason for believing it even exists: the belief that the expansion of the Universe is accelerating. We are reasonably certain the Universe is expanding. This was originally discovered by Hubble, who noticed that the spectra of distant galaxies show a red shift in their frequencies, and the further away the galaxies are, the bigger the red shift. This means that the whole universe must be expanding.

Let me digress and try to explain the Doppler shift. Think of someone beating a drum regularly; the number of beats per unit time is the frequency. Now, suppose the drum is on the back of a truck. You hear a beat and expect the next one, say, one second later. If the truck starts to move away, that next beat will come slightly later, because the sound has had further to travel. If the truck recedes at a steady speed, successive beats will each be delayed by the same interval, so the frequency is lower, and that is called a red shift in the frequency. The sound will also become quieter with distance as it spreads out. Thus you can determine how far away the drum is and how fast it is moving away. The same applies to light, and if the universe is expanding uniformly, then the red shift should also give you the distance. Similarly, provided you know the source intensity, the measured light intensity should give you the distance.
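To put the same bookkeeping in slightly more concrete terms, here is a minimal sketch using the usual low-redshift approximations; the wavelengths and the Hubble constant are illustrative values of my own, not figures from the post.

```python
# A minimal sketch of the redshift bookkeeping above, assuming the low-redshift
# approximations v ~ c*z and Hubble's law d = v / H0 (illustrative numbers only).
c = 2.998e5    # speed of light, km/s
H0 = 70.0      # assumed Hubble constant, km/s per Mpc

def redshift(lambda_observed, lambda_emitted):
    """Fractional wavelength shift: z = (observed - emitted) / emitted."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

# e.g. the hydrogen-alpha line emitted at 656.3 nm but observed at 660.0 nm
z = redshift(660.0, 656.3)
v = c * z      # recession velocity in km/s (only valid for small z)
d = v / H0     # distance in megaparsecs from Hubble's law
print(f"z = {z:.4f}, v ~ {v:.0f} km/s, d ~ {d:.0f} Mpc")
```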

That requires us to measure light from stars that produce a known light output, which are called standard candles. Fortunately, there is a type of very bright standard candle, or so they say, and that is the type 1A supernova. It was observed in the 1990s that the very distant supernovae were dimmer than they should be according to the red shift, which means they are further away than they should be, which means the expansion must be accelerating. To accelerate there must be some net force pushing everything apart. That something is called dark energy, and it is supposed to make up about two thirds of the Universe. The discoverers of this phenomenon won a Nobel prize, and that, of course, in many people’s eyes means it must be true.
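For readers who like to see the arithmetic, below is a minimal sketch of why “dimmer than expected” is read as “farther than expected”. It assumes the conventional distance modulus and a round peak absolute magnitude of about -19.3 for a type 1A supernova; both are assumptions of mine for illustration, not figures from the work discussed here.

```python
# A minimal sketch: dimmer by a fixed number of magnitudes means farther by a fixed
# factor, assuming the conventional distance modulus m - M = 5*log10(d / 10 pc).
M = -19.3   # assumed peak absolute magnitude of a type 1A supernova (illustrative)

def distance_parsecs(apparent_magnitude):
    """Invert the distance modulus to get the distance in parsecs."""
    return 10 ** ((apparent_magnitude - M + 5) / 5)

# A supernova that appears 0.25 magnitudes fainter than the redshift-based expectation
# is inferred to be about 12% farther away:
d_expected = distance_parsecs(24.0)
d_observed = distance_parsecs(24.25)
print(f"inferred to be {d_observed / d_expected:.2f}x farther away")
```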

The type 1A supernova is considered to arise when a white dwarf star starts to strip gas from a neighbouring star. The dwarf gradually increases in mass. Its nuclear cycle has already burnt helium into carbon and oxygen, at which point, because the mass of the dwarf is too low, the reactions stop. As the dwarf consumes its neighbour, eventually the mass becomes great enough to start the burning of carbon and oxygen; this is uncontrollable and the whole thing explodes. The important point here is that because the explosion point is reached through the gradual addition of fresh mass, it will occur at the same mass in all such situations, so you get a standard output, or so it is assumed.

My interest in this came when I went to hear a talk on the topic, and I asked the speaker a question relating to the metallicity of the companion star. (Metallicity is the fraction of elements heavier than helium, which in turn means the amount of the star made up of material that has already gone through supernovae.) My thought was that if you think of the supernova as a bubble of extremely energetic material, what we actually see is the light from the outer surface nearest to us, and most of that surface will be the material of the companion. Since the light we see is the result of the heat and inner light promoting electrons to higher energy levels, the light should depend on the composition of the outer surface. To support that proposition, Lyman et al. (arXiv:1602.08098v1 [astro-ph.HE] 2016) have shown that calcium-rich supernovae are dimmer than iron-rich ones. Thus the 1A supernova may not be such a standard candle. The earlier the supernova, the lower its metallicity will be, and low metallicity favours lighter atoms, which do not have as many energy levels from which to radiate, so they will be less efficient at converting energy to light.

Accordingly, my question was, “Given that low metallicity leads to dimmer 1a supernovae, and given that the most distant stars are the youngest and hence will have the lowest metallicity, could not that be the reason the distant ones are dimmer?” The response was a crusty, “That was taken into account.” Implied: go away and learn the standard stuff. My problem with that was, how could they take into account something that was not discovered for another twenty years or so? Herein lies one of my gripes about modern science: the big names who are pledged to a position will strongly discourage anyone questioning that position if the question is a good one. Weak questions are highly desired, as the big name can deal with them satisfactorily and make himself feel better.

So, besides this issue of metallicity, how strong is the evidence for this dark energy? Maybe not as strong as everyone seems to say. A recent paper (Nielsen et al., arXiv:1506.01354v2) analysed data for a much larger number of supernovae and came to a somewhat surprising conclusion: so far, you cannot actually tell whether the expansion is accelerating or not. One interesting point in this analysis is that we do not simply relate the measured magnitude to distance. In addition there are corrections for light curve shape and for colour, each with an empirical constant attached to it, and each “constant” is assumed to be constant. There must also be corrections for intervening dust, and again it is a sheer assumption that the dust was the same in the early universe as it is now, despite space being far more compact.
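For context, the standardization in which those empirical constants appear is conventionally written along the lines below. This is a sketch of the usual Tripp-style relation employed in analyses of this kind, with x_1 the light-curve stretch, c the colour, and alpha, beta and M_B the fitted constants; the notation is the conventional one, not anything specific to the papers quoted here.

$$\mu = m_B^{*} - \left(M_B - \alpha\, x_1 + \beta\, c\right)$$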

If we now analyse all the observed data carefully (the initial claims actually chose a rather select few) we find that any acceptable acceleration consistent with the data does not deviate significantly from no acceleration out to red shift 1, and that the experimental errors are such that to this point we cannot distinguish between the options.

Try this. Cabbolet (Astrophys. Space Sci., DOI 10.1007/s10509-014-1791-4) argues from the Eöt-Wash experiment that if there is repulsive gravity (needed to accelerate the expansion), then quantum electrodynamics is falsified in its current formulation! Quantum electrodynamics is regarded as one of the most accurate theories ever produced. We can, of course, reject repulsive gravity, but that also rejects dark energy. So, if that argument is correct, then at least one of the two has to go, and maybe dark energy is the one more likely to go.

Another problem is the assumption that type 1a supernovae are standard because they all form by the gradual accretion of extra matter from a companion. But Olling et al. (Nature, 521: 332-335, 2015) argue that they have found three supernovae where the evidence is that the explosion occurred through one dwarf simply swallowing another, in which case there is no standard mass, and the energy could be almost anything, depending on the mass of the companion.

Milne (ApJ 803: 20, doi:10.1088/0004-637X/803/1/20) has shown there are two classes of 1a supernovae, and for one of them there is a significant underestimation of the optical luminosity of the NUV-blue SNe Ia, in particular for the high-redshift cosmological sample. Not accounting for this effect should therefore produce a distance bias that increases with redshift, and could significantly bias measurements of cosmological parameters.

So why am I going on like this? I apologize to some for the details, but I think this shows a problem: the scientific community is not always as objective as it should be. It appears that nobody wishes to rock the boat holding the big names. All evidence should be subject to examination, and instead what we find is that only too much is “referred to the experts”. Experts are as capable of being wrong as anyone else when there is something they did not know when they made their decision.

Hawking’s Genius

How do people reach conclusions? How can these conclusions be swayed? How do you know you are correct, as opposed to being manipulated? How could a TV programme about physics and cosmology tell us something about our current way of life, including politics?

I have recently been watching a series of TV programmes entitled Genius, in which Stephen Hawking starts out by suggesting that anyone can answer the big questions of nature, given a little help. He gives his subjects some equipment with which to do some experiments, and they are supposed to work out how to use it and reach the right conclusion. As an aside, this procedure greatly impressed me; Hawking would make a magnificent teacher because he has the ability to make his subjects really register the points he is trying to make. Notwithstanding that, it was quite a problem to get them to see what they did not expect. In the first episode there was a flat lake, or that was what they thought. With more modern measuring devices, including a laser, they showed the surface of the lake was actually significantly curved. Even then, it was only the incontrovertible evidence that persuaded them the effect was real, despite the fact they also knew the earth is a sphere. In another programme, he gets the subjects to recognise relative distances: he gets the distance between the earth and the moon from eclipses, then the distance to the sun. The problem here is that the eclipses only really give you angles; somewhere along the line you need a distance, and Hawking cheats by giving the relative sizes of the moon and the sun. He makes the point about relative distances very well, but he glosses over how to find the real distances in the first place, although, to be fair, in a TV programme for the masses, he probably felt there was only a limited amount that could be covered in one hour.

It is a completely different thing to discover something like that for the first time. Hawking mentions that Aristarchus was the first to do this properly, and his method of getting the earth-moon distance was to wait for an eclipse of the moon and have two observers some distance apart (from memory, I believe it was 500 miles) measure the angle from the horizontal when the umbra first made its appearance. Now he had two angles and a length. The method was valid, although there were errors in the measurements, with the result that he was out by a factor of two. To get the earth-sun distance, he measured the moon-earth-sun angle when the moon was precisely half shaded. The second angle would be a right angle, and he knew the distance to the moon, so with Pythagoras, or, had trigonometry been available then, secants, he could get the distance. There was a minor problem: the angle is very close to a right angle, the sun is not exactly a point, the half-shading of the moon is rather difficult to judge, and of course his earth-moon distance was wrong, so he had errors here, and had the sun too close by a factor approaching 5. Nevertheless, with the primitive instruments he had, he was on the right track.
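To make the geometry concrete, here is a minimal sketch of the half-moon calculation. The angles are illustrative choices of mine (87 degrees is the value usually attributed to Aristarchus), included mainly to show how violently the answer swings as the measured angle creeps towards a right angle, which is exactly the source of error described above.

```python
# A minimal sketch of Aristarchus' half-moon geometry. When the moon is exactly half
# lit, the sun-moon-earth angle is a right angle, so the earth-sun distance equals the
# earth-moon distance divided by the cosine of the measured moon-earth-sun angle.
import math

def sun_distance_in_moon_distances(angle_deg):
    """Earth-sun distance expressed as a multiple of the earth-moon distance."""
    return 1.0 / math.cos(math.radians(angle_deg))

# Illustrative angles only: 87 degrees is the value usually attributed to Aristarchus.
for angle in (87.0, 88.0, 89.0, 89.5):
    ratio = sun_distance_in_moon_distances(angle)
    print(f"measured angle {angle:4.1f} deg -> sun ~{ratio:.0f}x as far away as the moon")
```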

Notwithstanding the slight cheat, Hawking’s demonstration made one important point. By giving the relative sizes and putting the moon about 5 meters away from the earth (the observer) to get a precise eclipse of the given sun, he showed what the immense distance really looks like proportionately. I know as a scientist I am often using quite monstrous numbers, but what do they mean? What does 10 to the power of twenty look like compared with 10? Hawking stunned his subjects when comparing four hundred billion with one, using grains of sand. Quite impressive.

All of which raises the question, how do you make a discovery, or perhaps how do discoveries get made? One way, in my opinion, is to ask questions, then try to answer them. Not just once, but with every answer you can think of. The concept I put in my ebook on this topic was that for any phenomenon there is most likely to be more than one theoretical explanation, but as you increase the number of different observations, the false explanations start to drop out because they cannot account for some of the observations. Ultimately, you can prove a theory only if you can say: if, and only if, this theory is correct will I see the set of observations X. The problem is to justify the “only if” part. This, of course, applies to any question that can be answered logically.

However, Hawking’s subjects would not have been capable of that, because the first step in forming a theory is to see that it is possible. Seeing something for the first time, when you have not been told it is there, is not easy, whereas if you are told what should be there, but only faintly, most of the time you will see it, even if it isn’t really there. There are a number of psychological tests showing that people tend to see what they expect to see. Perhaps the most spectacular example was the canals on Mars. After the mention of canali, astronomers, and Lowell in particular, drew lovely maps of Mars, with lines that are simply not there.

Unfortunately, I feel there was a little cheating in Hawking’s programmes, which showed up in a programme that looked at whether determinism rules our lives, i.e. are we pre-programmed to carry out our lives, or do we have free will? To examine this, he had a whole lot of people line up, and at given times they could move one square left or right, at their pleasure. After a few rounds, there was the expected scattered distribution. So, what did our “ordinary people” conclude? I expected them to conclude that that sort of behaviour was statistical, and that there was real choice in what people do. But no. These people concluded there is a multiverse, and all the choices are made somewhere. I don’t believe that for an instant, but I also don’t believe three people picked off the street would reach that conclusion unless they had been primed to reach it.

And the current relevance? Herein lies what I think is the biggest problem of our political system: people can be made to believe what they have been primed to believe, even if they really don’t understand anything relating to the issue.