Is science being carried out properly?

How do scientists carry out science, and how should they? These questions were raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is: “resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence.” What does that mean? In my opinion, it is along the lines of: to those who have, much will be given. “Much” here refers to much of what is available. Government funding can be tight, and in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards projects that can hardly fail, except perhaps for some very large projects, like the Large Hadron Collider. The most important requirement is that something is measured, and that the something is more or less understandable and acceptable to a scientific journal, for that counts as a successful result. In some cases, the question, “Why was that measured?” would best be answered, “Because it was easy.” Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson and supersymmetry particles. They found the first but not the second, and I suppose when the question of building the collider arose, the (totally inapt) reference to the “God Particle” did not hurt.

However, while getting research funding for things to be measured is difficult, getting money for analyzing what we already know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, particularly for analyzing what we know. We are in the quite strange position that while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten.

Theory is thus stagnating. With the exception of cosmic inflation, no new major theory has taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course, this may merely be because new theories have been proposed but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Not very encouraging. Another reason may be that a new theory simply failed to draw attention to itself. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you define carefully what you do not know exists?

Further information from this Science article provided some interest. One investigation led to what non-scientists might consider a highly odd result: for a scientific paper to be a hit, usually at least 90 per cent of what is written must be well established. Novelty might be prized, but unless it is well mixed with the familiar, nobody will read it, or, even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach; rather, anything new will be a tweak on what is established. To add to this, a study of “star” scientists who died prematurely led to an interesting observation: the output of their collaborators fell away, which indicates that only the “star” was contributing much intellectual effort, and probably actively squashing dissenting views, whereas new entrants to the field who were starting to shine tended not to have done much in that field before the “star” died.

A different reviewer noticed that many scientists put very little effort into citing past discoveries, and when they do cite the literature, the most important citations are to papers about five years old. There will be exceptions, usually through citing papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any illumination of the subject. Another reviewer noted that scientists appear to be narrowly channeled in their research by the need for recognition, which requires work familiar to the readers and reviewers, particularly those who review funding applications. The important thing is to keep up an output of “good work”, and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn’s picture of science, where the new activities are generally puzzles to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming what would have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Volatiles on Rocky Planets

If we accept that the mechanism I posted about previously is how the rocky planets formed, we still do not have the chemicals for life. So far, all we have is water and rocks, with some planets having an iron core. The mechanism means that until the planet gets gravitationally big enough to attract gas, it only accretes solids, together with the water that bonded to the silicates. There are two issues: how the carbon and nitrogen arrived, and, if they arrived as solids, which is the only available mechanism, what happened next?

In the outer parts of the solar system the carbon occurs as carbon monoxide, methanol, some carbon dioxide, and “carbon”, which takes many forms but essentially looks like tar, is partially graphite, and even includes mini diamonds. There are also polyaromatic hydrocarbons, even alkanes, and some other miscellaneous organic chemicals. Nitrogen occurs as nitrogen gas, ammonia, and some cyanide. Closer to the star, in the region of the carbonaceous chondrites, it gets hot enough for some of this material to condense and react on the silicates, which is why those bodies have the amino acids, etc. Still closer to the star it gets too hot, and seemingly the inner asteroids are mainly just silicates. At this point, the carbon is largely converted to carbon monoxide, and the nitrogenous compounds to nitrogen. However, on some metal oxides or metals, carbon forms carbides and nitrogen forms nitrides, and some other materials, such as cyanamides, are also formed. These are solids, and accordingly these too will be accreted with the dust and be incorporated within the planet.

As the interior of the planet gets hotter, the water is released from the silicates, which lose their amorphous structure and become rocks. The water reacts with these chemicals and, to a first approximation, initially produces carbon monoxide, methane and ammonia. Carbon monoxide reacts with water on certain metals and silicates to make hydrocarbons and formaldehyde, which in turn condenses to other aldehydes (on the path to making sugars) and reacts with ammonia (on the path to making amino acids), and so on. The chemistry is fairly involved, but basically, given the initial mix, temperature and pressure, both in ready supply below the Earth’s surface, what we need for life emerges and makes its way to the surface. Assuming this mechanism is correct, then provided everything is present in an adequate mix, life should evolve. That leaves open the question: how broad is the “right mix” zone?

Before considering that, it is obvious this mechanism relies on the temperature being right during at least two periods of the planetary evolution. Initially it has to get hot enough to make the cements, and the nitrides and carbides. Superficially, that applies to all rocky planets, but maybe not for the nitrides. The problem here is that Mars has very little nitrogen, so either it has gone somewhere, or it was never there. If Mars had ammonia, then since ammonia dissolved in water keeps it liquid down to about minus 80 degrees C, ammonia would solve the problem of how water could flow on Mars when it is so cold. However, if that is the case, the nitrogen has to be in some solid form buried below the surface. In my opinion, it was carried there as urea dissolved in water, which is why I would love to see some deep digging there.

The second requirement is that later the temperature has to be cool enough that water can set the cements. For Venus, the argument is that it was hotter, so it only just managed to absorb some water, and not enough. One counter to that is that the hydrogen on Venus has an extremely high deuterium content. The usual explanation is that if water gets to the top of the atmosphere, it may be hit with UV, which may knock off a hydrogen atom that is lost to space, or the solar wind may take the whole molecule. Water containing deuterium is less likely to get there, because the heavier molecules are enhanced in the lower atmosphere, or the oceans. If this were true, then for Venus to have its deuterium levels it must have started with a huge amount of water, and the mechanism above would be wrong. An embarrassing problem for that explanation: where is the oxygen from that massive amount of water?

However, the proposed mechanism also predicts a very large deuterium enhancement. The carbon and nitrogen in the atmosphere and in living things have to be liberated from rocks by reaction with water, and as the water transfers hydrogen to either carbon or nitrogen, it also leaves a hydroxyl attached to a metal. Two hydroxyls liberate water and leave an oxide. At this point we recall that the chemical bond to deuterium is stronger than that to hydrogen. The reason is that although in theory the two are identical in their electromagnetic interactions, quantum mechanics requires there to be a zero point energy, and, somewhat oversimplifying, the amount of such energy is inversely proportional to the square root of the mass of the light atom. Since deuterium is twice the mass of hydrogen, its zero point energy is less, and being less, its bond is stronger. That means there is a preference for hydrogen to be the atom that transfers, while the deuterium eventually turns up in the water. This preferential retention of deuterium is called the chemical isotope effect. The resultant gases, methane and ammonia as examples, break down under UV radiation to make molecular nitrogen and carbon dioxide, with the hydrogen going to space. The net result is that the rocky planet’s hydrogen gradually becomes richer in deuterium.
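
To put a rough number on that, here is a minimal sketch of the harmonic-oscillator arithmetic (the ~3657 cm⁻¹ O–H stretch is a standard textbook value; using the reduced mass of the O–H pair is a slight refinement of the square-root-of-the-light-atom statement above):

```python
# Rough harmonic-oscillator estimate of the H/D zero-point-energy difference
# for an O-H versus O-D bond. ZPE = (1/2) h c nu, with nu proportional to
# sqrt(k/mu), so nu(OD) = nu(OH) * sqrt(mu_OH / mu_OD) for the same force constant.
h = 6.62607e-34      # Planck constant, J s
c = 2.99792e10       # speed of light, cm/s (to pair with wavenumbers)
N_A = 6.02214e23     # Avogadro's number

nu_OH = 3657.0                      # O-H stretch of water, cm^-1 (textbook value)
mu_OH = 16.0 * 1.0 / (16.0 + 1.0)   # reduced mass of O-H, amu
mu_OD = 16.0 * 2.0 / (16.0 + 2.0)   # reduced mass of O-D, amu

nu_OD = nu_OH * (mu_OH / mu_OD) ** 0.5
zpe_diff_kJ = 0.5 * h * c * (nu_OH - nu_OD) * N_A / 1000.0

print(f"O-D stretch: {nu_OD:.0f} cm^-1")                  # ~2660 cm^-1
print(f"ZPE(O-H) - ZPE(O-D): {zpe_diff_kJ:.1f} kJ/mol")   # ~6 kJ/mol
```

A few kJ/mol per exchange event is small, but over geological time it steadily biases hydrogen, rather than deuterium, into the gases that are eventually lost to space.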

The effects of the two mechanisms are different. For Venus, the first one requires huge oceans; the second, little more than enough water to liberate the gases. If we look at the rocky planets, Earth should have a modest deuterium enhancement under both mechanisms, because we know it has retained a very large amount of water. Mars is trickier, because it started with less water under the proposed water-accretion mechanism, and it has less gravity, and we know that all gases there, including carbon dioxide and nitrogen, have enhanced heavier isotopes. That its deuterium is enhanced is simply expected from the other enhancements. Venus has about half as much CO2 again as Earth, three times the amount of nitrogen, little water, and a very high deuterium enhancement. In my mechanism, Venus never had much water in the first place because it was too hot. Most of what it had was used up forming the atmosphere, and then providing the oxygen for the CO2; there was never much on the surface. To start with, Venus was only a bit warmer than Earth, but then the CO2 began to build. On Earth, much of that CO2 would be dissolved in the ocean, where it would react with calcium silicate, and it would also begin weathering the rocks more susceptible to weathering, such as dunite and peridotite (I have discussed this previously: https://wordpress.com/post/ianmillerblog.wordpress.com/833 ). On Venus there were no oceans, and liquid water is needed to form these carbonates.

So, where will life be found? The answer is around any star where rocky planets formed with the two favourable temperature profiles and ended up in the habitable zone. If the further details in my ebook “Planetary Formation and Biogenesis” are correct, this is most likely to occur around a G type star, like our sun, or a heavy K type star. The star also has to be one of the few that ejects its accretion disk remains early. Accordingly, life should be fairly well spaced out, which may be why we have yet to run into other life forms.

What do Organic Compounds Found on Mars Mean?

Last week, NASA announced that organic compounds had been found on Mars. The question then is, what does this mean? First, organic compounds are essentially chemicals based on carbon, which means Mars has carbon besides the carbon dioxide in the atmosphere. The name “organic” comes from the fact that such compounds found by early chemists, with the exception of a very few such as carbon dioxide, came from organisms. Hence the question: do these materials indicate that Mars had life? The short answer is, the issue remains unresolved. One argument is that if there were no organic compounds on Mars, it obviously did not have life. That it has taken so long to find organic compounds says nothing about the probability, though, because the surface of Mars is strongly oxidizing, and had any organic compounds been there, they would have been turned into carbon dioxide; the atmosphere already has a lot of that. The reason none had been found earlier, therefore, may simply be that most of the rovers have not been able to dig very deeply.

I shall try to summarise the results that were reported [Eigenbrode et al., Science 360, 1096–1101 (2018)]. One important point is that the volatiles analysed were obtained by pyrolysing the mudstone the rover dug up, so what was detected may not be the same as what was in the rock. The first compounds were identified as aliphatic hydrocarbons, from C1 (methane) to C5, and these were stated to be typical of those obtained from kerogen or coal on Earth. One problem I had with these data was that there were odd-numbered masses, but they all indicated that the cause was a fractured hydrocarbon, i.e. the pyrolysis had chopped that bit off something else and produced a radical.

One big problem was that they could not say whether nitrogen or oxygen was present “because mass spectra are not resolvable in EGA and other molecules share the diagnostic m/z values.” I really don’t understand that. First, the identification of aliphatic hydrocarbons was almost certainly correct, because they form series of signals that are very recognizable to anyone who has done a bit of this work before. They stick out like an organ stop, so to speak. However, the presence of nitrogen species in any reasonable amount should be just as easily identified, because while hydrocarbons, and their analogues with oxygen, basically give even-mass signals, nitrogen, because of its valency of 3, gives odd-numbered mass signals, 1 mass unit bigger than the corresponding hydrocarbon. Now, a few of the fragmentation patterns of hydrocarbons give odd-numbered mass signals, but if you cannot tell where the molecular ion is, you do not know the mass of your molecule, and if all you have are fragmentation ions, then the instrument was somewhat poorly designed to go to Mars. With any experience, you can also tell whether you have oxygenated materials, because hydrocarbon series go up by adding 14 to the basic ion, while the atomic weight of oxygen is 16; if a molecule has oxygen, it and the fragments containing oxygen have entirely different masses.
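
The even/odd point is the “nitrogen rule” of mass spectrometry, and it is easy to illustrate with a little arithmetic. The molecules below are merely illustrative choices on my part, not compounds claimed in the paper:

```python
# The "nitrogen rule": a neutral organic molecule with an even number of
# nitrogen atoms (including zero) has an even nominal mass; an odd number
# of nitrogens gives an odd nominal mass.
NOMINAL_MASS = {"C": 12, "H": 1, "N": 14, "O": 16, "S": 32}

def nominal_mass(formula: dict) -> int:
    """Sum integer (nominal) atomic masses over a formula dict."""
    return sum(NOMINAL_MASS[el] * n for el, n in formula.items())

examples = {
    "pentane C5H12":   {"C": 5, "H": 12},          # hydrocarbon -> even
    "acetone C3H6O":   {"C": 3, "H": 6, "O": 1},   # oxygen keeps it even
    "pyridine C5H5N":  {"C": 5, "H": 5, "N": 1},   # one N -> odd
    "pyrazine C4H4N2": {"C": 4, "H": 4, "N": 2},   # two N -> even again
}
for name, f in examples.items():
    m = nominal_mass(f)
    print(f"{name}: nominal mass {m} ({'even' if m % 2 == 0 else 'odd'})")
```

An intact molecular ion containing a single nitrogen should therefore stand out at an odd mass, whereas hydrocarbon fragment radicals are the usual innocent source of odd-mass signals.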

Of course the authors did note the presence of CO2 and CO. These could arise from the pyrolysis of carboxylic acids and ketones, but that does not mean life. Carboxylic acids would pyrolyse at about 400–550 degrees C, and ketones a bit higher. They also found aromatic hydrocarbons, thiophenes and some other sulphur-containing species. These were explained in terms of sulphur-bearing gases coming into contact with the deposit, and further chemical reactions then taking place; in other words, sulphur-containing species such as hydrogen sulphide do not necessarily provide any information regarding what formed the original deposit. The sulphurization, however, was claimed to provide a preservative function by protecting against mild oxidation. But if it carried out that function, it would itself be oxidized, and none of the observed materials were.

Unfortunately, the material is not directly associated with anything related to life. The remains of life can give rise to these sorts of chemicals, as shown by our crude oil, which is basically hydrocarbon, formed from life but then altered by tens of millions of years of change. These Martian deposits are believed to be in rocks 3.5 billion years old. However, the materials were also obtained by pyrolysis at temperatures exceeding 500 degrees C. The original molecules could have rearranged, and what we saw may be the sort of compounds that organic compounds rearrange to. Nevertheless, the absence of nitrogen is not encouraging. Nitrogen is present in all proteins and nucleic acids, and there tend to be high levels of these in primitive life. Pyrolysis would be expected to produce pyrazines and pyridines, and these should be detectable. Pyrazines, having two nitrogen atoms, tend to give even-numbered ions of the same mass as a ketone, but since neither was seen, that is irrelevant. Had there been such signals, the fragmentation patterns are quite distinctive if you have done this sort of work before.

Other possible sources of the organic compounds, besides life, are chondrites that have landed, and geochemistry. It is hard to assess the chondrite option, because we do not have the other information needed. It is possible to tell the difference between oxygen from chondrites and oxygen from other places (because of the different ratios of the isotopes of mass 17 and 18 compared with 16), but they never found oxygen. The materials could be geochemical as well. The same reaction Germany used to make synthetic petrol during WW2 (the Fischer–Tropsch reaction) can occur underground and make hydrocarbons. So overall, while this is certainly interesting, as is often the case it raises more questions than it answers.

A Response to Climate Change, But Will it Work?

By now, if you have not heard that climate change is regarded as a problem, you must have been living under a flat rock. At least some politicians have recognized that it is a serious problem, and they do what politicians do best: ban something. The current craze is to ban the manufacture of vehicles powered by liquid fuels in favour of electric vehicles, the electricity to be made from renewable resources. That sounds virtuous, but have they thought out the consequences?

The world consumption of petroleum for motor vehicles is of the order of 23,000 bbl/day. By my calculation, given various conversion factors from the web, that requires approximately 1.6 GW of continuous extra electric generation. In fact much more would be needed, because that assumes 100% efficiency throughout. Note that if you are relying on solar power, as many environmentalists want, you would need more than three times that capacity, because the sun does not shine at night; worse, since this is to charge electric vehicles, which tend to be running in daytime, such electric energy would have to be stored for use at night. How do you store it?
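
For what it is worth, here is that arithmetic as a sketch. The 6.1 GJ per barrel energy content is one of those web conversion factors, and the 100% efficiency assumption is made explicit:

```python
# Back-of-envelope: continuous electric power to replace a given petroleum
# consumption, assuming (unrealistically) 100% conversion efficiency.
BBL_PER_DAY = 23_000      # consumption figure used in the text
GJ_PER_BBL = 6.1          # approximate energy content of a barrel of oil
SECONDS_PER_DAY = 86_400

energy_per_day_J = BBL_PER_DAY * GJ_PER_BBL * 1e9
continuous_W = energy_per_day_J / SECONDS_PER_DAY
print(f"Continuous power: {continuous_W / 1e9:.1f} GW")   # ~1.6 GW

# Solar-only version: panels deliver for very roughly a third of the day at
# best, so the installed capacity must be about three times larger, and the
# energy for night charging must be stored somewhere.
print(f"Solar capacity needed: > {3 * continuous_W / 1e9:.1f} GW")
```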

The next problem is whether the grid could take that additional power. This is hardly an insurmountable problem, but it most definitely needs serious attention, and it would be more comforting if we thought the politicians had thought of this and were going to do something about it. Another argument is that since most cars would be charged at night, the normal grid could be used, because there is significantly less consumption then. I think the peaks would still be a problem, and then we are back to where the power is coming from. Of course nuclear power, or even better, fusion power, would meet production targets easily. But suppose, like New Zealand, you use hydro power? That is great for generating on demand, but each kWh still requires the same amount of water. If the water is fully used now, and you use it to charge at night, then you need some other source during the day.
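
To make the water constraint concrete, here is a minimal sketch of the volume of water one kWh represents; the 100 m head is an assumed, purely illustrative figure:

```python
# Hydro: the energy in stored water is E = rho * g * h * V, so each kWh
# corresponds to a fixed volume of water for a given dam head.
RHO = 1000.0      # kg/m^3, density of water
G = 9.81          # m/s^2, gravitational acceleration
HEAD_M = 100.0    # assumed illustrative dam head, m
KWH_J = 3.6e6     # joules per kWh

volume_per_kwh = KWH_J / (RHO * G * HEAD_M)   # m^3, ignoring turbine losses
print(f"~{volume_per_kwh:.1f} m^3 of water per kWh at {HEAD_M:.0f} m head")
```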

The next problem for the politicians is the batteries, and this problem doubles if you use batteries to store solar electricity for use at night. Currently, electric vehicles have ranges that are ideal for going to and from work each day, but not so ideal for long-distance travel. The answer here is said to be “fast-charging” stops. The problem is, how do you get fast charging? The batteries have a fixed internal resistance, and you cannot do much about that. From Ohm’s law, given the resistance, the current flow, which effectively delivers the charge, can only be increased by increasing the voltage. At first sight that may seem hardly a problem, but in fact there are two problems, both of which affect battery life. The first is that, in general, an overvoltage permits fresh electrochemistry to happen. Thus for the lithium ion battery you run the risk of what is called lithium plating. The lithium ions are supposed to go between what are called intercalation layers in the carbon anode, but if the current is too high, the ions cannot get in there quickly enough and they deposit outside, causing irreversible damage. The second problem is that charging too fast generates heat, and that partially destroys the structural integrity of the electrodes.
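
The heating penalty is quadratic, which follows directly from Ohm’s law. A minimal sketch, with an assumed pack voltage and internal resistance (illustrative figures, not from any manufacturer’s datasheet):

```python
# Resistive heating in a battery pack during charging: P_heat = I^2 * R,
# with I = P_charge / V. Tripling the charge rate multiplies the heat by nine.
# All numbers below are illustrative assumptions.
PACK_VOLTAGE = 400.0        # V (assumed)
INTERNAL_RESISTANCE = 0.05  # ohms (assumed pack-level value)

for charge_kW in (10, 50, 150):
    current = charge_kW * 1000.0 / PACK_VOLTAGE     # A
    heat_W = current ** 2 * INTERNAL_RESISTANCE     # W dissipated in the pack
    print(f"{charge_kW:>4} kW charge -> {current:6.0f} A, {heat_W:8.0f} W of heat")
```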

The next problem is that batteries can be up to half the cost of a purely electric vehicle. Everybody claims battery prices are coming down, and they are. The lithium ion battery is about seven times cheaper than it was, but it will not necessarily get much cheaper, because at present ingredients make up 70% of the cost, and ingredient prices are more likely to increase. Lithium is not particularly common, and a massive increase in production may be difficult. There are large deposits in Bolivia, but, as might be expected, other salts are present in addition to the lithium salts. There is probably enough lithium, but it has to be concentrated from brines, and the salts you do not want have to be disposed of, which reduces the “green-ness” of the exercise. Lithium prices can be assumed to go up significantly.

But the real elephant in the room is cobalt. Cobalt does not take part in the shuttling chemistry of the battery, but it is necessary for the cathode. The battery works by shuttling lithium ions backwards and forwards between the cathode and anode. The cathode material needs to have the right structure to accommodate the ions, be stable so the ions can move in and out, have valence orbitals to accommodate the electron transfer, and have the capacity to store as many lithium ions as possible. There are other materials that could replace cobalt, but cobalt is the only one where, when the lithium moves out, something does not move in to fill the spaces. Cobalt is essential for top performance. There are alternatives for use in current technology, but the cost is poorer lifetimes, and there are alternative technologies, but nobody is sure they work. At present, a car needs somewhere between 7 and 20 kg of cobalt in its batteries, and as you reduce the cobalt content, you appear to reduce the life of the battery.

Cobalt is a problem because current usage of cobalt in batteries is 48,000 t/a, while world production is about 100,000 t/a. The price is increasing rapidly as electric vehicles become more popular. At the beginning of 2017, a tonne of cobalt cost $US 32,500; now it is at least $US 80,000. Over half the world’s production comes from the Democratic Republic of Congo, which may not be the most stable of countries, and worse, most of that 100,000 t/a comes as a byproduct of copper or nickel production. If there were a recession and the demand for stainless steel fell, the production of cobalt would drop. The lithium ion batteries that would not be affected are those in laptops and phones; they only need about 10–20 g of cobalt each. Even worse, a lot of these batteries are currently not being recycled.
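
Taking the figures above at face value, the supply ceiling is easy to sketch:

```python
# How many electric cars per year the cobalt supply could support, taking the
# figures quoted above at face value and assuming all non-battery uses and
# existing battery demand continue unchanged.
WORLD_PRODUCTION_T = 100_000    # t/a
CURRENT_BATTERY_USE_T = 48_000  # t/a already going into batteries
headroom_t = WORLD_PRODUCTION_T - CURRENT_BATTERY_USE_T

for kg_per_car in (7, 20):      # the per-car range quoted above
    cars_per_year = headroom_t * 1000 / kg_per_car
    print(f"At {kg_per_car} kg/car: ~{cars_per_year / 1e6:.1f} million cars/year")
# Roughly 2.6 to 7.4 million cars a year, well short of world vehicle
# production, which is of the order of 90 million a year.
```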

In a previous post I noted there was not a single magic bullet to solve this problem. I stick to that opinion. We need a much broader approach than most of the politicians are considering. By broader, I do not mean the approach of denying we even have a problem.

This post is later than usual, thanks to time demands approaching Easter, and I hope all my readers have a relaxing and pleasant Easter.

A personal scientific low point.

When I started my PhD research, I was fairly enthusiastic about the future, but I soon became disillusioned. Before my supervisor went on summer holidays, he gave me a choice of two projects. Neither was any good, and when the Head of Department saw me, he suggested (probably to keep me quiet) that I find my own project. Accordingly, I elected to enter a major controversy, namely: were the wave functions of a cyclopropane ring localized (i.e., each chemical bond could be described by wave interference between a given pair of atoms, with no further wave interference) or were they delocalized (i.e., the wave function representing a pair of electrons spread over more than one pair of atoms), and in particular, did they delocalize into substituents? Now, without getting too technical, I knew my supervisor had done quite a bit of work on something called the Hammett equation, which measures the effect of substituents on reactive sites, and in which certain substituents have different values when such delocalization is involved. If I could make the right sort of compounds, this equation would actually solve the problem.

This was not to be a fortunate project. First, my reserve synthetic method took 13 steps to get to the desired product, and while no organic synthesis gives a yield much better than 95%, one of these steps struggled to get over 35%, and another was not as good as desirable, which meant that I had to start with a lot of material. I did explore some shorter routes. One involved a reaction published in a Letter by someone who would go on to win a Nobel prize; the key requirement for getting the reaction to work was omitted from the Letter. I got a second reaction to work, but I had to order special chemicals. They turned up after I had submitted my thesis, having travelled via Hong Kong, where they were put aside and forgotten. I discovered that my supervisor was not going to provide any useful advice on chemical synthesis; then he went on sabbatical, and I was on my own. After a lot of travail, I did what I had set out to do, but an unexpected problem arose. The standard compounds worked well and I got the required straight line with minimum deviation, but for the key compound at one extreme of the line, the substituent at one end reacted quickly with the other end in the amine form. No clear result.

My supervisor made a cameo appearance before heading back to North America, where he was looking for a better-paying job, and he made a suggestion: study the carboxylic acids I already had, but in toluene. These had already been reported in water and aqueous alcohol, but the slope of the line was too shallow to be conclusive. What the toluene did was greatly amplify the effect. The results were clear: there was no delocalization.

The next problem was that the controversy was settling down, and the general consensus was that there was such delocalization. This was based on one main observational fact, namely that adjacent positive charge was stabilized, and there were many papers stating that it must be so on theoretical grounds. The theory used exactly the same type of programs that “proved” the existence of polywater. Now, the interesting thing was that soon everybody admitted there was no polywater, but the theory was “obviously” right in this case. Of course I still had to explain the stabilization of positive charge, and I found a way, namely that strain involved mechanical polarization.

So, where did this get me? Largely, nowhere. My supervisor did not want to stick his head above the parapet, so he never published the work on the acids that was my key finding. I published a sequence of papers based on the polarization hypothesis, but in my first one I made an error: I left out what I thought was too obvious to waste the time of the scientific community, and in any case, I badly needed the space to keep within page limits. Being brief is NOT always a virtue.

The big gain was that while both explanations accounted for why positive charge was stabilized (and my theory got the stabilization energy of the gas phase carbenium ion right, at least as measured by another PhD student in America), the two theories differed on adjacent negative charge. The theory involving quantum delocalization required it to be stabilized too, while mine required it to be destabilized. As it happens, negative charge adjacent to a cyclopropane ring is so unstable it is almost impossible to create, but that may not be convincing. However, there is one UV transition where the excited state has more negative charge adjacent to the cyclopropane ring, and my calculations gave the exact spectral shift, to within 1 nm. The delocalization theory cannot even get the direction of the shift right. That was published.

So, what did I learn from this? First, my supervisor did not have the nerve to go against the flow. (Neither, seemingly, did the supervisor of the student who measured the energy of the carbenium ion, and all I could do was to rely on the published thesis.) My spectral shifts were dismissed by one reviewer as “not important” and they were subsequently ignored. Something that falsifies the standard theory is unimportant? I later met a chemist who rose to the top of the academic tree, and he had started with a paper that falsified the standard theory, but when it too was ignored, he moved on. I asked him about this, and he seemed a little embarrassed as he said it was far better to ignore that and get a reputation doing something more in accord with a standard paradigm.

Much later (I had a living to earn) I had the time to write a review. I found over 60 different types of experiment that falsified the standard theory that is now in textbooks. That could not get published. There are few review journals that deal with chemistry, and one rejected the proposal on the grounds that the matter was settled. (No interest in finding out why that might be wrong.) For another, it exceeded their page limit. For another, not enough diagrams and too many equations. For others, they did not publish logic analyses. So that is what I have discovered about modern science: in practice it may not live up to its ideals.

Scientific low points: (2)

The second major low point from recent times is polywater. The history of polywater is brief and not particularly distinguished. Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, and found that tiny traces of such water could be obtained that had an elevated boiling point, a depressed freezing point, and a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈ 150 °C, and a density of 1.1–1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour: (a) the water had dissolved quartz, or (b) the water had polymerized. Everybody “knew” water did not dissolve quartz, therefore it must have polymerized. In the vibrational spectrum of polywater, two new bands were observed at 1600 and 1400 cm⁻¹. From force constant considerations this was explained in terms of each OH bond having approximately 2/3 bond order. The spectrum was consistent with the water occurring in hexagonal planar units, and if so, the stabilization per water molecule was calculated to be of the order of 250–420 kJ/mol. For the benefit of the non-chemist, this is a massive change in energy; it would mean the water molecules were joined together with a strength comparable to the carbon–carbon bonds in diamond. The fact that it had a reported boiling point of ≈ 150 °C should have warned them that this had to be wrong, but when a bandwagon starts rolling, everyone wants to jump aboard without stopping to think. An NMR spectrum of polywater gave a broad, low-intensity signal approximately 300 Hz from the main proton signal, which meant that either a new species had formed, or there was a significant impurity present. (This would have been a good time to check for impurities.) The first calculation employing “reliable” methodology involved ab initio SCF LCAO methodology, and water polymers were found to be increasingly stabilized with polymer size: the cyclic tetramer was stabilized by 177 kJ/mol, the cyclic pentamer by 244 kJ/mol, and the hexamer by 301.5 kJ/mol. One of the authors of this paper was John Pople, who went on to get a Nobel prize, although not for this little effort.

All of this drew incredible attention. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life, and that this had happened on Venus. We had to be careful! Much funding was devoted to polywater, even from the US navy, who apparently saw significant defence applications. (One can only imagine the trapping of enemy submarines in a polymeric syrup, prior to extinguishing all life on Earth!)

It took a while for this to fall over. Pity the poor PhD candidate who had to prepare polywater, when all he could prepare was solutions of silica. His supervisor told him to try harder. Then, suddenly, polywater died. Someone noticed that the infrared spectrum quoted above bore a striking resemblance to that of sweat. Oops.

However, if the experimentalists did not shine, theory was extraordinarily dim. First, the same methods in different hands produced a very wide range of results with no explanation of why the results differed, although of course none of them concluded there was no polywater. If there were no differences in the implied physics between methods that gave such differing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. One problem was, as with only too many calculations in chemical theory, the inherent physical relationships are never defined in the papers. It was almost amusing to see, once it was clear there was no polywater, a paper published in which ab initio LCAO SCF calculations with Slater-type orbitals provided evidence against the previous calculations supporting polywater. The planar symmetrical structure was found to be unstable, and a tetrahedral structure made by four water molecules was unstable because of excessive deformation of bond angles. What does that mean, apart from face-saving for the methodology? If you cannot have ring structures when the bond angles are tetrahedral, sugar is an impossible molecule. While there are health issues with sugar, the impossibility of its existence is not in debate.

One problem with the theory was that molecular orbital theory was used to verify large delocalization of electron motion over the polymers. The problem is, MO theory assumes such delocalization in the first place. Verifying what you assume is one of the big naughties pointed out by Aristotle, and you would think that after 2,400 years, something might have stuck. Part of the problem was that nobody could question any of these computations, because nobody had any idea of what the assumed inputs and code were. We might also note that the more extreme of these claims tended to end up in what many would regard as the most reputable of journals.

There were two major fall-outs from this. Anything that could be vaguely related to polywater was avoided. This has almost certainly done much to retard examination of close ordering on surfaces, or on very thin sections, which, of course, are of extreme importance to biochemistry. There is no doubt whatsoever that reproducible effects were produced in small capillaries. Water at normal temperatures and pressures does not dissolve quartz (try boiling a lump of quartz in water for however long) so why did it do so in small capillaries? The second was that suddenly journals became far more conservative. The referees now felt it was their God-given duty to ensure that another polywater did not see the light of day. This is not to say that the referee does not have a role, but it should not be to decide arbitrarily what is true and what is false, particularly on no better grounds than, “I don’t think this is right”. A new theory may not be true, but it may still add something.

Perhaps the most unfortunate fallout was to the career of Deryagin. Here was a scientist who was more capable than many of his detractors, but who made an unfortunate mistake. The price he paid in the eyes of his detractors seems out of all proportion to the failing. His detractors may well point out that they never made such a mistake. That might be true, but what did they make? Meanwhile, Pople, whose mistake was far worse, went on to win a Nobel Prize for developing molecular orbital theory and developing a cult following about it. Then there is the question, why avoid studying water in monolayers or bilayers? If it can dissolve quartz, it has some very weird properties, and understanding these monolayers and bilayers is surely critical if we want to understand enzymes and many biochemical and medical problems. In my opinion, the real failures here come from the crowd, who merely want to be comfortable. Understanding takes effort, and effort is often uncomfortable.

Scientific low points: (1)

A question that should be asked more often is, do scientists make mistakes? Of course they do. The good news, however, is that when it comes to measuring something, they tend to be meticulous, and published measurements are usually correct, or, if they matter, they are soon found out if they are wrong. There are a number of papers, of course, where the findings are complicated and not very important, and these could well go for a long time, be wrong, and nobody would know. The point is also, nobody would care.

On the other hand, are the interpretations of experimental work correct? History is littered with examples of where the interpretations that were popular at the time are now considered a little laughable. Once upon a time, and it really was a long time ago, I did a post doctoral fellowship at The University, Southampton, and towards the end of the year I was informed that I was required to write a light-hearted or amusing article for a journal that would come out next year. (I may have had one put over me in this respect because I did not see the other post docs doing much.) Anyway, I elected to comply, and wrote an article called Famous Fatuous Failures.

As it happened, this article hardly became famous, but it was something of a fatuous failure itself. The problem was, I finished writing it a little before I left the country, and an editor got hold of it. In those days you wrote with pen on paper, unless you owned a typewriter, but when you are travelling from country to country you travel light, and a typewriter is not light. Anyway, the editor decided my spelling of two French scientists’ names (Berthollet and Berthelot) was terrible and that they were “obviously” one scientist. The net result was a section containing a bitter argument in which one scientist argued with himself. But leaving that aside, I had found that science was continually “correcting” itself, but not always correctly.

An example that many will have heard of is phlogiston. This was a weightless substance that metals and carbon gave off to the air, and in one version, such phlogisticated air was attracted to and stuck to metals to form a calx. This theory was rubbished by Lavoisier, who showed that the so-called calxes were combinations of the metal with oxygen, which was part of the air. A great advance? That is debatable. The main contribution of Lavoisier was that he invented the analytical balance, and he decided it was so accurate there could be nothing that was “weightless”. There was no weight for phlogiston, therefore it did not exist. But think about it: if you replace the word “phlogiston” with “electron”, you have an essential description of the chemical ionic bond, and how do you weigh an electron? Of course there were other versions of the phlogiston theory, but getting rid of that version may well have held chemistry back for quite some time.

Have we improved? I should add that many of my cited failures were in not recognizing, or even worse, not accepting truth when shown. There are numerous examples where past scientists almost got there, but then somehow found a reason to get it wrong. Does that happen now? Since 1970, apart from cosmic inflation, as far as I can tell there have been no substantially new theoretical advances, although of course there have been many extensions of previous work. However, that may merely mean that some new truths have been uncovered, but nobody believes them so we know nothing of them. However, there have been two serious bloopers.

The first was “cold fusion”. Martin Fleischmann, a world-leading electrochemist, and Stanley Pons decided that if heavy water was electrolyzed under appropriate conditions, you could get nuclear fusion. They did a range of experiments with palladium electrodes, which strongly adsorb deuterium, and sometimes they got unexplained but significant temperature rises. Thus they claimed they had achieved nuclear fusion at room temperature. They also claimed to get helium and neutrons. The problem with this experiment was that they themselves admitted that whatever it was worked only occasionally; at other times, the only heat generated corresponded to the electrical power input. Worse, even when it worked, it worked for only so long, and that electrode would never do it again, which is perhaps a sign that there was some sort of impurity in their palladium that produced the heat through some additional chemical reaction.

What happened next was that nobody could repeat their results. The problem then was that being unable to repeat a result that is erratic at best may mean very little, other than, perhaps, that better electrodes did not have the impurity. Also, the heat they got raised the temperature of their solutions from thirty to fifty degrees Centigrade. That would mean that, at best, very few actual nuclei fused. Eventually, it was decided that while something might have happened, it was not nuclear fusion, because nobody could get the required neutrons. That in turn is not entirely logical. The real problem is that fusion should not occur at all, because there was no obvious way to overcome the Coulomb repulsion between nuclei, and it required the palladium to do “something magic”. If in fact palladium could do that, it follows that the repulsion energy is not overcome by impact force. And if there were some other way to overcome the repulsive force, there is no reason why the nuclei would not form 4He, which is far more stable than 3He, and if so, there would be no neutrons. Of course I do not believe palladium could overcome that electrical repulsion, so no fusion would be possible.
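
To see the scale of the problem, here is a rough sketch comparing the Coulomb barrier between two deuterons with the thermal energy available at fifty degrees Centigrade (the 3 fm contact distance is an assumed round number):

```python
# Coulomb repulsion energy between two deuterons (charge +1 each) at roughly
# nuclear contact distance, versus thermal energy at ~50 degrees C.
E2_OVER_4PI_EPS0 = 1.44   # e^2/(4*pi*eps0) in MeV*fm, a standard constant
CONTACT_FM = 3.0          # assumed separation for nuclear contact, fm
K_B_EV = 8.617e-5         # Boltzmann constant, eV/K

barrier_MeV = E2_OVER_4PI_EPS0 / CONTACT_FM   # ~0.5 MeV
thermal_eV = K_B_EV * (273.15 + 50.0)         # ~0.03 eV

print(f"Coulomb barrier: ~{barrier_MeV:.2f} MeV")
print(f"Thermal energy at 50 C: ~{thermal_eV:.3f} eV")
print(f"Ratio: ~{barrier_MeV * 1e6 / thermal_eV:.1e}")    # ~10^7
```

Quantum tunnelling softens this somewhat, but not by seven orders of magnitude, which is why the palladium would have had to do “something magic”.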

Interestingly, the chemists who did this experiment and believed it would work protected themselves with a safety shield of Perspex. The physicists decided it had no show, but they protected themselves with massive lead shielding. They knew what neutrons were. All in all, a rather sad ending to the career of a genuinely skillful electrochemist.

More to follow.