Is science being carried out properly?

How do scientists carry out science, and how should they? These are questions that have been raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is that “resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence.” What does that mean? In my opinion, it is along the lines of “to those who have, much will be given”. “Much” here refers to much of what is available. Government funding can be tight, and in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards projects that can hardly fail, except maybe for some very large ones, like the Large Hadron Collider. The most important requirement is that something is measured, and that the something is more or less understandable and acceptable to a scientific journal, for that counts as a successful result. In some cases, the question, “Why was that measured?” would best be answered, “Because it was easy.” Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson and supersymmetry particles. They found the first, and I suppose when the question of building the collider arose, the reference (totally not apt) to the “God Particle” did not hurt.

However, while getting research funding for things to be measured is difficult, getting money for analyzing what we know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, particularly for analyzing what we know. We are in the quite strange position that, while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten.

Theory is thus stagnating. With the exception of cosmic inflation, no new major theory has taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course this may merely be because new theories have been proposed but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Not very encouraging. Another reason may be that a new theory simply failed to draw attention to itself. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you define carefully what you do not know exists?

Further information from this Science article provided some interest. One investigation led to what non-scientists might consider a highly odd result: for a scientific paper to be a hit, usually at least 90 per cent of what is written must be well established. Novelty might be prized, but unless it is well mixed with the familiar, nobody will read it, or even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach; rather, anything new will be a tweak on what is established. To add to this, a study of “star” scientists who died prematurely led to an interesting observation: the output of their collaborators fell away, which indicates that only the “star” was contributing much intellectual effort, and probably actively squashing dissenting views, whereas the new entrants to the field who then started to shine tended not to have done much in that field before the “star” died.

A different reviewer noticed that many scientists put very little effort into citing past discoveries, and when literature is cited, the most important of it is about five years old. There will be exceptions, usually citations of papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any subject illumination. Another reviewer noted that scientists appear to be narrowly channeled in their research by the need to get recognition, which requires work familiar to the readers and reviewers, particularly those that review funding applications. The important thing is to keep up an output of “good work”, and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn’s picture of science, in which the new activities are generally puzzles to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming what could have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Have you got what it takes to form a scientific theory?

Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data, and the interpretations of what everyone thinks that data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests it should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form a rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which allegedly established interpretation is in fact wrong. It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. The reason I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don’t, I like to think there is still an interesting story there.
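
To make the set idea concrete, here is a minimal sketch in Python (the data and the rule are invented purely for illustration): the rule is just a predicate, and the set is whatever data the predicate admits.

```python
# Hypothetical illustration: a "set" as data plus the rule that admits members.
observations = [
    {"body": "Mars", "retrogrades": True},
    {"body": "Venus", "retrogrades": True},
    {"body": "Moon", "retrogrades": False},
]

def rule(obs):
    """Membership rule: bodies that show apparent retrograde motion."""
    return obs["retrogrades"]

# The set is whatever the rule admits; a theory then has to explain why
# exactly these members, and no others, satisfy the rule.
retrograde_set = {obs["body"] for obs in observations if rule(obs)}
print(retrograde_set)  # {'Mars', 'Venus'}
```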

From September 13 – 20, my novel Athene’s Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier, as regards the actual story rather than this problem. In it, my fictional character Gaius Claudius Scaevola is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from “fake news” and a bad subsequent press. First the nickname: no Roman would call him Caligula, because even his worst enemies would recognize he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great grandfather etc. of the one you recognize. Caligulae was also Gaius Julius Caesar. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more names, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means “lefty”. It is less clear what Caesar means, because in Latin the “ar” ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means “hairy”. Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation, the Latin “C” is the equivalent of the Greek gamma, so it is pronounced as a “G” or “K” – the difference is small and we have no way of knowing which. “ae” is pronounced as in “pie”. So Caesar is pronounced something like the German Kaiser.

Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was personally responsible for only thirteen executions, and there were three failed coup attempts on his life, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this was prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators.

He is accused of making a fatuous invasion of Germany. Actually, the evidence is that he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is a textbook account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes described as decrepit ships, whereupon Caligulae gave them the choice between “conquering Neptune” and collecting a mass of sea shells, invading Britain, or facing decimation. They collected sea shells. The exercise was not madness: it was a total humiliation for the two legions to have to carry these through Rome in the form of a “triumph”. This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact that he despised many senators inevitably led to a bad press, because it was the senatorial class that wrote the histories, but like a certain president, he seemed to go out of his way to encourage it. However, he was not seen as a tyrant by the masses. When he died, the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency and, of course, to report it in English.

As for showing how scientific theory can be developed: in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when a theory is applied, all major theories start with relatively simple concepts. Take quantum mechanics as an example of a reasonably difficult piece of theoretical physics. To get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. The mathematician Hamilton had already shown you can manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) over a wavelength was constant, specifically in units of h (Planck’s quantum of action). All that Schrödinger had to do was to manipulate Hamilton’s waves and ensure that the action came in units of h per wavelength. That may seem easy, but everything had been present for some time before Schrödinger put it together. Coming up with an original concept is not at all easy.
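
For those who want to see the bones of it, here is the standard textbook sketch (not necessarily Schrödinger’s own route, and with all the subtleties omitted):

```latex
% Hamilton–Jacobi equation for the classical action S:
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V = 0
% de Broglie: one quantum of action h per wavelength, i.e. p = h/\lambda.
% Write the wave with the action as its phase, in units of \hbar = h/2\pi:
\psi = e^{iS/\hbar}
% Substituting this form recovers Hamilton–Jacobi plus a correction term
% in \hbar, which is Schrödinger's equation:
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
```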

Anyway, in the novel, Scaevola has to prove the Earth goes around the sun with what was available then. (No telescopes, which were what helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage of knowing it does. (And no, you do not have to invent calculus or Newtonian mechanics.)

The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots in Egypt, then the religious problems of Judea as Christianity starts.

Origin of the Rocky Planets’ Water, Carbon and Nitrogen

The most basic requirement for life to start is a supply of the necessary chemicals, mainly water, reduced carbon and reduced nitrogen, on a planet suitable for life. The word reduced means the elements are at least partly bound to hydrogen. Methane and ammonia are reduced, but so are hydrocarbons, and amino acids are at least partly reduced. The standard theory of planetary formation has it (wrongly, in my opinion) that none of these are found on a rocky planet as it forms, so they have to come from either comets or carbonaceous asteroids. So, why am I certain this is wrong? There are four requirements that must be met. The first is that the material delivered must match the proposed source; the second is that it must come in the same proportions; the third is that the delivery method must leave the solar system as it is now; and the fourth is that other things that should have happened must actually have happened.

As it happens, the oxygen, carbon, hydrogen and nitrogen are not the same throughout the solar system. Each element exists in more than one isotope (different isotopes have different numbers of neutrons), and the mix of isotopes in an element varies with radial distance from the star. Thus comets from beyond Neptune have far too much deuterium compared with hydrogen. There are mechanisms by which you can enhance the D/H ratio, such as UV radiation breaking bonds involving hydrogen, followed by the hydrogen escaping to space. The bonds to deuterium tend to be several kJ/mol stronger than bonds to hydrogen; the chemical bond strength is actually the same, but the lighter hydrogen has more zero-point energy, so its bonds break more easily and it gets lost to space. So while you can increase the deuterium-to-hydrogen ratio, there is no known natural way to decrease it. The comets around Jupiter also have more deuterium than our water, so they cannot be the source. The chondrites have the same D/H ratio as our water, which has encouraged people to believe that is where our water came from, but the nitrogen in the chondrites has too much 15N, so they cannot be the source of our nitrogen. Further, the isotope ratios of certain heavy elements such as osmium do not match those on Earth. Interestingly, it has been argued that if the material was subducted and mixed in the mantle, a match would be just possible. Given that the mantle mixes very poorly and the main sources of osmium now come from very ancient plutonic intrusions, I have my doubts about that.
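
The zero-point argument can be put in one line; a sketch, treating the bond as a harmonic oscillator (a simplification, but it gives the right direction and rough size of the effect):

```latex
% Zero-point energy of a bond treated as a harmonic oscillator
% with force constant k and reduced mass \mu:
E_0 = \tfrac{1}{2}\hbar\omega, \qquad \omega = \sqrt{k/\mu}
% Swapping H for the heavier D roughly doubles \mu for an O–H bond, so
% \omega, and hence E_0, falls by a factor of about \sqrt{2}: the O–D bond
% sits deeper in the same potential well and needs several kJ/mol more to break.
```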

If we look at the proportions: if comets delivered the water or carbon, we should have five times more nitrogen, and twenty thousand times more argon, than we do. Comets from the Jupiter zone get around this excess by having no significant nitrogen or argon, and insufficient carbon. For chondrites, there should be four times as much carbon and nitrogen to account for the hydrogen and chlorine on Earth. If these volatiles did come from chondrites, Earth has to have been struck by at least 10^23 kg of material (that is, a one followed by 23 zeros). Now, if we accept that these chondrites did not have some steering system, then based on cross-sectional area the Moon should have been struck by about 7×10^21 kg, which is approximately 9.5% of the Moon’s mass. The Moon does not subduct such material, and the moon rocks we have found have exactly the same isotope ratios as Earth. That mass of material is just not there. Further, the lunar anorthosite is magmatic in origin and hence primordial for the Moon, and would retain its original isotope ratios, which should give a set of isotopes that do not involve the late veneer, if it occurred at all.
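
The arithmetic is easy to check. A minimal sketch in Python, using standard textbook values for the radii and the lunar mass; the assumption that impacts scale with simple geometric cross-section (no steering, no gravitational focusing) is the one the argument itself makes:

```python
# Quick check of the chondrite bombardment numbers quoted above.
M_EARTH_HIT = 1e23   # kg of chondrite needed for Earth's volatiles (from the text)
R_EARTH = 6371e3     # m, mean radius of Earth
R_MOON = 1737e3      # m, mean radius of the Moon
M_MOON = 7.35e22     # kg, mass of the Moon

# If impacts scale with geometric cross-section:
area_ratio = (R_EARTH / R_MOON) ** 2        # ~13.5
m_moon_hit = M_EARTH_HIT / area_ratio       # ~7.4e21 kg
print(f"Moon should have received ~{m_moon_hit:.1e} kg")
print(f"That is {100 * m_moon_hit / M_MOON:.0f}% of the Moon's mass")
```

This reproduces the roughly 9.5-10% figure; gravitational focusing would shift the split somewhat in Earth’s favour, which is why the 9.5% should be read as an order-of-magnitude figure.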

The third problem is that we are asked to believe that there was a narrow zone in the asteroid belt that showered a deluge of asteroids onto the rocky planets, yet for no good reason these did not accrete into anything there, and while this was going on, they did not disturb the asteroids that remain, nor did they disturb or collide with the asteroids closer to the star, which now make up most of them. The hypothesis requires a huge number of asteroids formed in a narrow region for no good reason. Some argue the gravitational effect of Jupiter dislodged them, but the orbits of such asteroids ARE stable. Gravitational acceleration is independent of the body’s mass, and the remaining asteroids are quite untroubled. (The Equivalence Principle: all bodies fall at the same rate, other than when air resistance applies.)

Associated with this problem, there are a number of elements, like tungsten, that dissolve in liquid iron. The justification for this huge barrage of asteroids (called the late veneer) is that when Earth differentiated, the iron would have dissolved these elements and taken them to the core. However, they, and iron, are still here, so it is argued something must have brought them later. But wait. For the isotope ratios to work, this asteroid material has to be subducted; for these elements to be on the continents, it must not be subducted. We need to be self-consistent.

Finally, what should have happened? If all the volatiles came from these carbonaceous chondrites, the various planets should have the same ratios of volatiles, should they not? However, the water/carbon ratio of Earth appears to be more than two orders of magnitude greater than that originally on Venus, while the original water/carbon ratio of Mars is unclear, as neither is fully accounted for. The N/C ratios of Earth and Venus are 1% and 3.5% respectively; that of Mars is about two orders of magnitude lower. Thus if the atmospheres came from carbonaceous chondrites:

only the Earth is struck by large wet planetesimals;

Venus is struck by asteroidal bodies or chondrites that are rich in C, especially rich in N, and approximately three orders of magnitude drier than the large wet planetesimals;

either Earth is struck by a low proportion of relatively dry asteroidal bodies or chondrites that are rich in C and especially rich in N, together with large wet planetesimals that have moderate levels of C and essentially no N, or the very large wet planetesimals have moderate amounts of carbon and less nitrogen than the dry asteroidal bodies or chondrites, and Earth is not struck by the bodies that struck Venus;

Mars is struck only infrequently by a third type of asteroidal body or chondrite, one that is relatively wet but very nitrogen deficient, and this type does not strike the other bodies in significant amounts;

the Moon is struck by nothing.

See why I find this hard to swallow? Of course, these elements had to come from somewhere, so where? That is for a later post. In the meantime, see why I think science has at times lost hold of its methodology? It is almost as if people are too afraid to go against the establishment.

Science Communication and the 2018 Australasian Astrobiology Meeting

Earlier this week I presented a talk at the 2018 Australasian Astrobiology Meeting, with the objective of showing where life might be found elsewhere in the Universe, and as a consequence I shall do a number of posts here to expand on what I thought about this meeting. One presentation that made me think about how to start this series actually came near the end, and its topic included the question of why scientists write blogs like this for the general public. I thought about this a little, and I think at least part of the answer, at least for me, is to show how science works, and how scientists think. The fact of the matter is that there are a number of topics where the gap between what scientists think and what the general public thinks is very large. An obvious one is climate change; the presenter showed a figure that something like 50% of the general public do not think that carbon dioxide is responsible for climate change, while, if I recall her figures correctly, 98% of scientists are convinced that it is. So why is there a difference, and what should be done about it?

In my opinion, there are two major ways to go wrong. The first is to simply take someone else’s word. These days, you can find someone who will say anything. The problem then is that while it is all very well to say “look at the evidence”, most of the time the evidence is inaccessible, and even if you overcome that, the average person cannot make head or tail of it. Accordingly, you have to trust someone to interpret it for you. The second way to go wrong is to get swamped with information. The data can be confusing, but the key is to find the critical data. This means that when deciding what causes what, you put aside facts that could mean a lot of different things, and concentrate on those that have, at best, one explanation. Now, the average person cannot recognize such data, but they can recognize whether the “expert” recognizes them. As an example of a critical fact, back to climate change. The fact that I regard as critical is that a long-term series of measurements showed the world’s oceans were receiving a net power input of 0.6 watts per square metre. That may not sound like much, but multiply it over the Earth’s ocean area, and it is a rather awful lot of heat.
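
To see the size of it, a back-of-the-envelope sketch (the ocean area is the standard figure; the flux is the measurement quoted above):

```python
# Back-of-the-envelope: net power the world's oceans are absorbing.
FLUX = 0.6               # W per square metre, the measured net input
OCEAN_AREA = 3.6e14      # m^2, standard figure for Earth's ocean surface

total_power = FLUX * OCEAN_AREA
print(f"Net ocean heating: {total_power:.1e} W")  # ~2.2e14 W, i.e. ~220 TW
# For scale, total human primary energy use is very roughly 2e13 W,
# so the oceans are soaking up heat at about ten times that rate.
```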

Another difficulty is that for any given piece of information, either there may be several interpretations of what caused it, or there may be issues assigning significance. As a specific example from the conference, try to answer the question, “Are we alone?” The answer from Seth Shostak, of SETI, is, so far, yes, at least to the extent that we have no evidence to the contrary, though of course if you were looking for radio transmissions, Earth would have failed to show signs until about a hundred years ago. There were a number of other points made, but one that Seth made was that a civilization at a modest distance would have to devote a few hundred MW of power to send us a signal. Why would they do that? This reminds me of what I wrote in one of my SF novels: the exercise is a waste of time because everyone is listening; listening is cheap, but nobody is sending, and simple economics kills the scheme.

As Seth showed, there are an awful lot of reasons why SETI is not finding anything, and that proves nothing. Absence of evidence is not evidence of absence, but merely evidence that you haven’t hit the magic button yet. Which gets me back to scientific arguments. You will hear people say science cannot prove anything. That is rubbish. The second law of thermodynamics proves conclusively that if you put your dinner on the table it won’t spontaneously drop a couple of degrees in temperature as it shoots upwards and smears itself over the ceiling.

As an example of the problems involved with conveying such information, consider what it takes to get a proof. Basically, a theory starts with a statement. There are several forms of this, but the one I prefer is to say, “If theory A is correct, and I do a set of experiments B under conditions C, and if B and C are very large sets, then theory A will predict a set of results R.” You do the experiments and collect a large set of observations O. Now, if there is no element of O that is not an element of R, then your theory is plausible. If the sets are large enough, it is very plausible, but you still have to be careful that you have an adequate range of conditions. Thus Newtonian mechanics is correct within a useful range of conditions, but expand that range enough and you need either relativity or quantum mechanics. You can, however, prove a theory if you replace the “if” in the above with “if and only if”.
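
The set language makes the plausibility test almost mechanical. A minimal sketch in Python (the predictions and observations are invented placeholders, not real data):

```python
# Plausibility as set containment: every observation in O must be an
# element of the predicted result set R.
predictions_R = {"ellipse", "equal_areas", "period_law"}   # placeholder labels
observations_O = {"ellipse", "equal_areas"}

if observations_O <= predictions_R:      # O is a subset of R
    print("No element of O lies outside R: the theory is plausible.")
else:
    print("Falsified by:", observations_O - predictions_R)

# Note: plausible, not proven. Proof needs the "if and only if" form,
# i.e. showing that no rival theory also contains O.
```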

Of course, that could be said more simply. You could say a theory is plausible if every time you use it, what you see complies with your theory’s predictions, and you can prove a theory if you can show there is no alternative, although that is usually very difficult. So why do scientists not write in the simpler form? The answer is precision. The example I used above is general so it can be reduced to a simpler form, but sometimes the statements only apply under very special circumstances, and now the qualifiers can make for very turgid prose. The takeaway message now is, while a scientist likes to write in a way that is more precise, if you want to have notice taken, you have to be somewhat less formal. What do you think? Is that right?

Back to the conference, and the case of SETI: Seth will not be proven wrong, ever, because the hypothesis that there are civilizations out there but they are not broadcasting to us in a way we can detect cannot be faulted. So for the next few weeks I shall look more at what I gathered from this conference.

A personal scientific low point

When I started my PhD research, I was fairly enthusiastic about the future, but I soon became disillusioned. Before my supervisor went on summer holidays, he gave me a choice of two projects. Neither was any good, and when the Head of Department saw me, he suggested (probably to keep me quiet) that I find my own project. Accordingly, I elected to enter a major controversy, namely: were the wave functions of a cyclopropane ring localized (i.e., each chemical bond could be described by wave interference between a given pair of atoms, with no further wave interference) or were they delocalized (i.e., the wave function representing a pair of electrons spread over more than one pair of atoms), and in particular, did they delocalize into substituents? Now, without getting too technical, I knew my supervisor had done quite a bit of work on something called the Hammett equation, which measures the effect of substituents on reactive sites, and in which certain substituents have different values when such delocalization is involved. If I could make the right sort of compounds, this equation would actually solve the problem.

This was not to be a fortunate project. First, my reserve synthetic method took 13 steps to get to the desired product, and while no organic synthesis gives a yield much better than 95%, one of these steps struggled to get over 35%, and another was not as good as desirable, which meant that I had to start with a lot of material. I did explore some shorter routes. One involved a reaction published in a Letter by someone who would go on to win a Nobel prize; the key requirement for getting the reaction to work was omitted from the Letter. I got a second reaction to work, but I had to order special chemicals. They turned up after I had submitted my thesis, having travelled via Hong Kong, where they were put aside and forgotten. My supervisor, after showing he was not going to provide any useful advice on chemical synthesis, went on sabbatical, and I was on my own. After a lot of travail, I did what I had set out to do, but an unexpected problem arose. The standard compounds worked well and I got the required straight line with minimum deviation, but for the key compound at one extreme of the line, the substituent at one end reacted quickly with the other end in the amine form. No clear result.

My supervisor made a cameo appearance before heading back to North America, where he was looking for a better-paying job, and he made a suggestion: react, in toluene, some carboxylic acids that I already had. These had already been studied in water and aqueous alcohol, but the slope of the line was too shallow to be conclusive. What the toluene did was greatly amplify the effect. The results were clear: there was no delocalization.

The next problem was that the controversy was settling down, and the general consensus was that there was such delocalization. This was based on one main observational fact, namely that adjacent positive charge was stabilized, plus many papers stating that it must be so on theoretical grounds. The theory used was exactly the same type of computational program that “proved” the existence of polywater. Now, the interesting thing was that soon everybody admitted there was no polywater, yet the theory was “obviously” right in this case. Of course I still had to explain the stabilization of positive charge, and I found a way, namely that strain involved mechanical polarization.

So, where did this get me? Largely, nowhere. My supervisor did not want to stick his head above the parapet, so he never published the work on the acids that was my key finding. I published a sequence of papers based on the polarization hypothesis, but in my first one I made an error: I left out what I thought was too obvious to waste the time of the scientific community, and in any case, I badly needed the space to keep within page limits. Being brief is NOT always a virtue.

The big gain was that while both theories explained why positive charge was stabilized (and mine got the stabilization energy of the gas-phase carbenium ion right, at least as measured by another PhD student in America), the two differed on adjacent negative charge. The theory involving quantum delocalization required it to be stabilized too, while mine required it to be destabilized. As it happens, negative charge adjacent to a cyclopropane ring is so unstable it is almost impossible to generate, but that may not be convincing. However, there is one UV transition where the excited state has more negative charge adjacent to the cyclopropane ring, and my calculations gave the exact spectral shift, to within 1 nm. The delocalization theory cannot even get the direction of the shift right. That was published.

So, what did I learn from this? First, my supervisor did not have the nerve to go against the flow. (Neither, seemingly, did the supervisor of the student who measured the energy of the carbenium ion, and all I could do was to rely on the published thesis.) My spectral shifts were dismissed by one reviewer as “not important” and they were subsequently ignored. Something that falsifies the standard theory is unimportant? I later met a chemist who rose to the top of the academic tree, and he had started with a paper that falsified the standard theory, but when it too was ignored, he moved on. I asked him about this, and he seemed a little embarrassed as he said it was far better to ignore that and get a reputation doing something more in accord with a standard paradigm.

Much later (I had a living to earn) I had the time to write a review. I found over 60 different types of experiment that falsified the standard theory that was by now in the textbooks. It could not get published. There are few review journals that deal with chemistry, and one rejected the proposal on the grounds that the matter was settled. (No interest in finding out why that might be wrong.) For another, it exceeded their page limit. For another, there were not enough diagrams and too many equations. Others did not publish logic analyses. So there is what I have discovered about modern science: in practice it may not live up to its ideals.

Scientific low points (2)

The second major low point from recent times is polywater. The history of polywater is brief and not particularly distinguished. Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, and found that tiny traces of such water could be obtained that had an elevated boiling point, a depressed freezing point, and a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1-1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour: (a) the water had dissolved quartz, or (b) the water had polymerized. Everybody “knew” water did not dissolve quartz, therefore it must have polymerized. In the vibrational spectrum of polywater, two new bands were observed, at 1600 and 1400 cm⁻¹. From force-constant considerations this was explained in terms of each OH bond having approximately 2/3 bond order. The spectrum was consistent with the water occurring in hexagonal planar units, and if so, the stabilization per water molecule was calculated to be of the order of 250-420 kJ/mol. For the benefit of the non-chemist, this is a massive change in energy; it means the water molecules would be joined together with a strength comparable to the carbon-carbon bonds in diamond. The fact that it had a reported boiling point of ≈150 °C should have warned them that this had to be wrong, but when a bandwagon starts rolling, everyone wants to jump aboard without stopping to think. An NMR spectrum of polywater gave a broad, low-intensity signal approximately 300 Hz from the main proton signal, which meant that either a new species had formed or there was a significant impurity present. (This would have been a good time to check for impurities.) The first calculation employing “reliable” methodology involved ab initio SCF LCAO methods, and the stabilization of water polymers was found to increase with polymer size: the cyclic tetramer was stabilized by 177 kJ/mol, the cyclic pentamer by 244 kJ/mol, and the hexamer by 301.5 kJ/mol. One of the authors of this paper was John Pople, who went on to get a Nobel prize, although not for this little effort.

All of this drew incredible attention. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life, and that this had happened on Venus. We had to be careful! Much funding was devoted to polywater, even from the US Navy, who apparently saw significant defence applications. (One can only imagine the trapping of enemy submarines in a polymeric syrup, prior to the extinguishing of all life on Earth!)

It took a while for this to fall over. Pity the poor PhD candidate who had to prepare polywater, and all he could prepare was solutions of silica. His supervisor told him to try harder. Then, suddenly, polywater died. Someone noticed that the infrared spectrum quoted above bore a striking resemblance to that of sweat. Oops.

However, if the experimentalists did not shine, theory was extraordinarily dim. First, the same methods in different hands produced a very wide range of results with no explanation of why the results differed, although of course none of them concluded there was no polywater. If there were no differences in the implied physics between methods that gave such differing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. One problem was, as with only too many calculations in chemical theory, that the inherent physical relationships are never defined in the papers. It was almost amusing to see, once it was clear there was no polywater, a paper published in which ab initio LCAO SCF calculations with Slater-type orbitals provided evidence against the previous calculations supporting polywater. The planar symmetrical structure was found to be not stable, and a tetrahedral structure made by four water molecules was unstable because of excessive deformation of bond angles. What does that mean, apart from face-saving for the methodology? If you cannot have ring structures when the bond angles are tetrahedral, sugar is an impossible molecule. While there are health issues with sugar, the impossibility of its existence is not in debate.

One problem with the theory was that molecular orbital theory was used to verify large delocalization of electron motion over the polymers. The problem is, MO theory assumes such delocalization in the first place. Verifying what you assume is one of the big naughties pointed out by Aristotle, and you would think that after 2,400 years, something might have stuck. Part of the problem was that nobody could question any of these computations, because nobody had any idea of what the assumed inputs and code were. We might also note that the more extreme of these claims tended to end up in what many would claim to be the most reputable of journals.

There were two major fall-outs from this. The first was that anything that could be even vaguely related to polywater was avoided. This has almost certainly done much to retard the examination of close ordering on surfaces, or in very thin sections, which, of course, is of extreme importance to biochemistry. There is no doubt whatsoever that reproducible effects were produced in small capillaries. Water at normal temperatures and pressures does not dissolve quartz (try boiling a lump of quartz in water for however long), so why did it do so in small capillaries? The second was that journals suddenly became far more conservative. The referees now felt it was their God-given duty to ensure that another polywater did not see the light of day. This is not to say that the referee does not have a role, but it should not be to decide arbitrarily what is true and what is false, particularly on no better grounds than, “I don’t think this is right”. A new theory may not be true, but it may still add something.

Perhaps the most unfortunate fallout was to the career of Deryagin. Here was a scientist who was more capable than many of his detractors, but who made an unfortunate mistake. The price he paid in the eyes of his detractors seems out of all proportion to the failing. His detractors may well point out that they never made such a mistake. That might be true, but what did they make? Meanwhile, Pople, whose mistake was far worse, went on to win a Nobel Prize for developing molecular orbital theory, and acquired a cult following for it. Then there is the question: why avoid studying water in monolayers or bilayers? If it can dissolve quartz, it has some very weird properties, and understanding these monolayers and bilayers is surely critical if we want to understand enzymes and many biochemical and medical problems. In my opinion, the real failures here come from the crowd, who merely want to be comfortable. Understanding takes effort, and effort is often uncomfortable.

What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, the making of propositions in accord with that evidence, and, most importantly, the rejection of those that are clearly false. It may appear that for ordinary people that might be too hard, but at least the method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published a work, “The Structure of Scientific Revolutions”, in which he argued that science itself has a very high level of conservatism. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway, and put in the bottom drawer of the mind. The problem seems to be that there is a roadblock against thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the “somethings” are removed. But is there “nothing” anywhere? The ancient Greek philosophers argued about the void, and the issue was “settled” by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. With such excellent thinking, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, and hence could be moving through a void, and that if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question: are such prejudices still around?

The usual concept of “nothing” is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has 2.5 x 10^19 molecules in it. That’s plenty. For those not used to “big figures”, 10^19 means a one followed by 19 zeros, that is, nineteen tens multiplied together. Our vacuum cleaner gets the concentration of molecules down to 10^19, that is, the air pressure is two and a half times less inside the cleaner. The Moon’s “atmosphere” has 4 x 10^5 molecules per cubic centimetre, so the Moon is not exactly in vacuum. Interplanetary space has 11 molecules per cubic centimetre, interstellar space has 1 molecule per cubic centimetre, and in the best vacuum, intergalactic space, you need a million cubic centimetres to find one molecule.
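
That first figure is easy to reproduce from the ideal gas law, n = P/(kT); a quick sketch, taking roughly room temperature:

```python
# Number density of air at about room conditions, from n = P / (kT).
K_B = 1.380649e-23   # J/K, Boltzmann constant
P = 101325.0         # Pa, one standard atmosphere
T = 293.0            # K, about 20 degrees C

n_per_m3 = P / (K_B * T)
n_per_cm3 = n_per_m3 * 1e-6
print(f"{n_per_cm3:.2e} molecules per cubic centimetre")  # ~2.5e19
```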

At the top of the Earth’s atmosphere, the thermosphere goes from 10^14 down to 10^7 molecules per cubic centimetre. That upper figure is a little suspect, because you would expect the density to go down gradually to that of interplanetary space. There is a quoted boundary not because the boundary is sharp, but because that is the point where gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres because there is a small drag on satellites.

So, intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite there with “nothing”. If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (which are very rapidly moving atomic nuclei, usually stripped of some of their electrons, and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I just note that our “nothing” is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to look at quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that, if you know where you are and what else is around you, you also know the force you will feel from it. However, in quantum field theory there are a number of additional fields: thus there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these “carry” the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero-point energy: a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero-point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero-point energy is at the lowest possible level.
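
For the record, the zero-point energy in question is, per mode of the field, the following (a sketch, for a single polarization state; the cutoff is where the trouble starts):

```latex
% Zero-point energy per mode of angular frequency \omega:
E_0 = \tfrac{1}{2}\hbar\omega
% Summing over all modes up to a cutoff wavenumber k_{max} gives a vacuum
% energy density growing as the fourth power of the cutoff:
\rho_{vac} \sim \int_0^{k_{max}} \tfrac{1}{2}\hbar c k \;\frac{k^2\,dk}{2\pi^2}
            = \frac{\hbar c\,k_{max}^4}{16\pi^2}
% Put k_{max} at the Planck scale and the result overshoots the observed
% vacuum energy density by the notorious factor of ~10^{120}.
```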

But there is a catch. Recall Einstein’s E = mc², which can be read as m = E/c²? That means, according to Einstein, all this zero-point energy has the equivalent of inertial mass, and hence an effect on gravity. If so, the gravity from all the zero-point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is, if quantum field theory is correct, the Universe should have collapsed long ago. The discrepancy between prediction and observation is a factor of about 10^120, that is, a one followed by 120 zeros, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and that if it had been done “properly” the error comes down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by a factor of 10^80, which is roughly the number of atoms in the known Universe.

Some might say that surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed on the bottom shelf of the mind. After all, the mathematics is so elegant, or difficult, depending on your point of view. Can’t let observed facts get in the road of elegant mathematics!