Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and that everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment, which Aristotle performed, was to throw a stone high into the air so that it reached its maximum height directly above him. When it dropped, it landed directly underneath, its path vertical to the horizontal. Aristotle recognised that if the Earth were rotating, the stone's angular momentum at that height should carry it eastwards, but it did not. Aristotle was a clever reasoner, but he was a poor experimenter, and he failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he knew its approximate size (a value later refined by Eratosthenes into a fairly accurate one). He had reasoned correctly why it was a sphere: matter falls towards the centre. Accordingly, he should also have realised his stone should fall slightly to the south. (He lived in Greece; if he had lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was put numbers onto his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don’t believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
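It is easy today to put the missing numbers on this. Here is a rough sketch (the 20 m drop height is an invented figure and the latitude is roughly that of Athens; the formula is the standard first-order result for the eastward deflection of an object dropped from rest on a rotating Earth):

```python
import math

# First-order Coriolis result for a stone dropped from rest through height h
# at latitude phi: eastward deflection d = (1/3) * omega * g * t**3 * cos(phi),
# where t = sqrt(2h/g) is the fall time and omega is Earth's rotation rate.
omega = 7.292e-5          # Earth's angular velocity, rad/s
g = 9.81                  # gravitational acceleration, m/s^2
h = 20.0                  # drop height, m (an assumed figure for illustration)
phi = math.radians(38.0)  # roughly the latitude of Athens

t = math.sqrt(2 * h / g)
deflection = (1.0 / 3.0) * omega * g * t**3 * math.cos(phi)
print(f"fall time: {t:.2f} s, eastward deflection: {deflection * 1000:.2f} mm")
```

The deflection comes out at roughly a millimetre and a half, far below anything Aristotle could have detected, which is exactly why the null result told him nothing.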

He had also reasoned that the Earth could not orbit the sun, and there was plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it. What you see is that the background moves, and it moves more the bigger the radius of your circle, and less the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example: Aristotle reasoned that because air rises, the Universe must be full of air, and therefore if the Earth were rotating or orbiting the sun we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.
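Aristarchus' "unconvincing bleat" can be put in numbers with modern data. A sketch (the distance to Alpha Centauri is of course a modern value, which is precisely the number the ancients lacked):

```python
import math

AU = 1.496e11                    # Earth-Sun distance, m
LIGHT_YEAR = 9.461e15            # m
d_alpha_cen = 4.37 * LIGHT_YEAR  # distance to the nearest star system, m

# Annual parallax: the apparent shift of a star as Earth moves
# one orbital radius sideways.
parallax_rad = math.atan(AU / d_alpha_cen)
parallax_arcsec = math.degrees(parallax_rad) * 3600
print(f"parallax of the nearest star: {parallax_arcsec:.2f} arcseconds")

# The naked eye resolves roughly one arcminute (60 arcseconds) at best.
naked_eye_limit = 60.0
print(f"detectable without a telescope? {parallax_arcsec > naked_eye_limit}")
```

Even for the nearest star system the annual parallax is under one arcsecond, while the naked eye resolves about an arcminute at best, so the stars' failure to shift proved nothing either way.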

But if he had, he would have faced a further problem, because he had shown that the Earth could not orbit the sun through another line of reasoning. As was “well known”, heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurling off into space. There isn’t, therefore the Earth does not move. Further, you could see the tails of comets. Comets were moving, and their tails were taken to be light material streaming off behind them, which seemed to prove the reasoning. Of course it does not, because the tail always points away from the sun, and not behind the motion at least half the time. This was a simple thing to check, and far easier to check than the other failed assumptions. Unfortunately, who bothers to check things that are “well known”? This shows a further aspect: a true proposition has everything that is relevant to it in accord with it. This is the basis of Popper’s falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going slower. That is obvious because while the Earth takes a year to complete an orbit, Mars takes over two years to complete a cycle. So we had two theories that both give the correct answer, but one has two assignable constants to explain each observation, while the other relies on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook “Athene’s Prophecy”, where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, “Legatus Legionis”) is to prove the Earth moves. We can do this with the tides. The part of the ocean closest to the external source of gravity has its water fall sideways a little towards it; the part furthest away experiences more centrifugal force, which tries to throw the water outwards. The ancients may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook my Roman was with the invasion of Britain and hence had to study the tides to know when to sail. There you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it while the Earth stays stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
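The two-tide argument can be made quantitative. A sketch of the differential lunar pull on the near and far sides of the Earth, using standard modern values and taking the Earth's centre as the reference:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22  # mass of the Moon, kg
D = 3.844e8        # Earth-Moon distance, m
R = 6.371e6        # Earth's radius, m

def lunar_pull(r):
    """Gravitational acceleration towards the Moon at distance r."""
    return G * M_MOON / r**2

a_centre = lunar_pull(D)
near_excess = lunar_pull(D - R) - a_centre  # water pulled ahead of the Earth
far_deficit = lunar_pull(D + R) - a_centre  # water "left behind" on the far side

print(f"near-side excess: {near_excess:+.2e} m/s^2")
print(f"far-side deficit: {far_deficit:+.2e} m/s^2")
```

The near-side excess and the far-side deficit are nearly equal and opposite, so the water bulges on both sides at once: two tides per day, which a stationary Earth with the Moon simply pulling the water towards itself cannot give.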

Free Will

You will see many discussions regarding free will. The question is, do we have it, or are we in some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands that the Universe works like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The occurrence of eclipses in the future is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so do snooker or pool balls, even if we cannot in practice calculate all that will happen on a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. You might bring in chaos theory, but that does nothing for you; the calculations may be utterly impossible to carry out, but they are governed solely by deterministic physics, so ultimately what happens was determined and it is just that we do not know how to calculate it. Electrodynamics and quantum theory are also deterministic, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws, and the Schrödinger equation is actually deterministic: if you know ψ and know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them and if necessary heat them according to some procedure, then every time I follow exactly the same procedure I shall end up with the same result.
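The point about chaos, deterministic laws yet practically incalculable outcomes, is easy to demonstrate with the logistic map, a standard toy model (the parameter r = 4 puts it in the chaotic regime; the initial conditions are arbitrary):

```python
def logistic(x, r=4.0):
    """One deterministic step of the logistic map."""
    return r * x * (1.0 - x)

# Two initial conditions differing by one part in ten billion.
x_a, x_b = 0.2, 0.2 + 1e-10
max_gap = 0.0
for step in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
    max_gap = max(max_gap, abs(x_a - x_b))

# Every step is exactly determined, yet the two trajectories have long since
# diverged completely: determinism does not imply predictability in practice.
print(f"largest divergence over 60 steps: {max_gap:.3f}")
```

Any error in knowing the initial state, however small, eventually swamps the calculation, which is exactly why "governed by deterministic physics" and "calculable by us" are not the same thing.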

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brain works on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nuclei also have angular uncertainty, so you cannot know in what direction a neutron will be emitted; according to the laws of physics, that is not determined until it is emitted. You may say, so what? That is trivial. No, the “so what” is that when you find one exception, you falsify the overall premise that everything was determined at the big bang. Which means something else introduces causes. Also, the emitted neutron may now generate new causes that could not be predetermined.
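The unpredictability of individual decays is easy to simulate. A sketch drawing decay times from the exponential distribution that governs them (the 10-minute half-life and the seed are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)                 # arbitrary seed, for repeatability
half_life = 10.0               # minutes (a nominal figure for illustration)
tau = half_life / math.log(2)  # mean lifetime

# Every decay time is drawn from the same distribution, yet no individual
# time can be predicted -- only the statistics of the ensemble.
times = [random.expovariate(1.0 / tau) for _ in range(10000)]
mean_time = sum(times) / len(times)

print(f"mean lifetime: {mean_time:.2f} min (theory: {tau:.2f} min)")
print(f"earliest decay: {min(times):.4f} min, latest: {max(times):.1f} min")
```

The ensemble statistics are exactly as determined as the conservation laws demand, yet no individual decay time could have been predicted in advance.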

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever-increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking some quantum effect, such as molecular vibration, will offer momentarily weaker and stronger spots. The one with the greatest weakness will go, but thanks to the Uncertainty Principle, which spot that is remains unpredictable.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain a variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the “ideal wire”. Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines where which mutation will arise; it is a random event.

Once you cannot determine everything, even in principle, it follows that you must accept that not every cause is determined by previous events. Once you accept that, then since we have no idea how the mind works, you cannot insist that the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily in anything else. I have free will, and so do you. Use it well.

Science is No Better than its Practitioners

Perhaps I am getting grumpy as I age, but I feel that much in science is not right. One problem lies in the fallacy ad verecundiam, the fallacy of resorting to authority. As the motto of the Royal Society puts it, nullius in verba. Now, nobody expects you to personally check everything, and if someone has measured something and either clearly shows how he or she did it, or it is something that is done reasonably often, then you take their word for it. Thus if I want to know the melting point of benzoic acid I look it up, and know that if the reported value were wrong, someone would have noticed. However, a different problem arises with theory, because you cannot measure a theory. Further, science has got so complicated that any expert is usually an expert in a very narrow field. The net result is that most scientists find theories too difficult to examine in detail and do defer to experts. In physics, this tends to happen because the theory descends into obscure mathematics and, worse, the proponents seem to believe that mathematics IS the basis of nature. That means there is no need to think of causes. There is another problem, one that also drifts over to chemistry: the belief that the results of a computer-driven calculation must be right. True, there will be no arithmetical mistake, but as was driven into our heads in my early computer lectures: garbage in, garbage out.

This post was sparked by an answer I gave to a chemistry question on Quora. Chemical bonds are usually formed by taking two atoms, each with a single electron in an orbital. Think of an orbital as a wave that can hold only one or two electrons; the reason it can hold only two is the Pauli Exclusion Principle, a very fundamental principle in physics. If each atom has only one electron in such an orbital, the orbitals can combine and form a wave with two electrons, and that binds the two atoms. Yes, oversimplified. So the question was, how does phosphorus pentafluoride form? The fluorine atoms have one such unpaired electron each, and the phosphorus has three, plus a pair in one wave. Accordingly, you expect phosphorus to form a trifluoride, which it does, but how come the pentafluoride? Without going into too many details, my answer was that the paired electrons are unpaired, one is put into another wave, and to make this legitimate an extra node is placed in the second wave, a process called hybridization. This has been a fairly standard answer in textbooks.

So, what happened next? I posted that, and also shared it to something called “The Chemistry Space”. A moderator there rejected it, and said he did so because he did not believe it: computer calculations showed there was no extra node. Eh?? So I replied and asked how this computation got around the Exclusion Principle, then, to be additionally annoying, I asked how the computation set the constants of integration. If you look at John Pople’s Nobel lecture, you will see he set these constants for hydrocarbons by optimizing the results for 250 different hydrocarbons. Leaving aside that this simply degenerates into a glorified empirical procedure, for phosphorus pentafluoride there is only one relevant compound. Needless to say, I received no answer, but I find this annoying. Sure, this issue is somewhat trivial, but it highlights the greater problem that some scientists are perfectly happy to hide behind obscure mathematics, or even more obscure computer programming.

It is interesting to consider what a theory should do. First, it should be consistent with what we see. Second, it should encompass as many different types of observation as possible. To show what I mean, in the phosphorus pentafluoride example the method I described can be transferred to the structures of other molecules. That does not make it right, but at least it is not obviously wrong. The problem with a computation is that unless you know the details of how it was carried out, it cannot be applied elsewhere, and interestingly I saw a recent comment in a publication by the Royal Society of Chemistry that computations from a couple of decades ago cannot be checked or used because the details of the code are lost. Oops. A third requirement, in my opinion, is that a theory should assist in understanding what we see, and even lead to a prediction of something new.

Fundamental theories cannot be deduced; the principles have to come from nature. Thus mathematics describes what we see in quantum mechanics, but you could find an alternative mathematical description for anything else nature decided to do; classical mechanics, for example, is also fully self-consistent. For relativity, velocities are either additive or they are not, and you can find mathematics either way. The problem then is that if someone adopts a wrong premise early, mathematics can be made to fit a lot of other material to it. A major discovery and change of paradigm occurs only when a major fault is discovered that cannot be papered over.

So, to finish this post in a slightly different way to usual: a challenge. I once wrote a novel, Athene’s Prophecy, in which the main character in the first century was asked by the “Goddess” Athene to prove that the Earth went around the sun. Can you do it, with what could reasonably be seen at the time? The details had already been worked out by Aristarchus of Samos, who also worked out the size and distance of the Moon and Sun, and the huge distances are a possible clue. (Thanks to the limits of his equipment, Aristarchus’ measurements are erroneous, but good enough to show the huge distances.) So there was already a theory that showed it might work. The problem was that the alternative also worked, as shown by Claudius Ptolemy. So you have to show why one is the true one. 

Problems you might encounter are as follows. Aristotle had shown that the Earth cannot rotate. The argument was that if the Earth rotated and you threw a ball into the air so that when it reached the top of its flight it was directly above you, then when the ball fell to the ground it would land to the east of you. He did it, and it did not, so the Earth does not rotate. (Can you see what is wrong? Hint: the argument implies the conservation of angular momentum, and that is correct.) Further, if the Earth went around the sun, orbital motion involves falling, and since heavier things supposedly fall faster than light things, the Earth would fall to pieces. Comets may well fall around the Sun. Another point was that since air rises, the cosmos must be full of air, and if the Earth went around the Sun, there would be a continual easterly wind.

So part of the problem in overturning any theory is first to find out what is wrong with the existing one. Then, to assert you are correct, your theory has to do something the other theory cannot do, or show that the other theory contains something that falsifies it. The point of this challenge is to show by example just how difficult forming a scientific theory actually is, until you hear the answer, and then it is easy.

Scientific Discoveries, How to Make Them, and COVID 19

An interesting problem for a scientist is how to discover something. The mediocre, of course, never even try, and it is probably only a small percentage that gets there. Basically, it is done by observing clues and then using logic to interpret them. The method is called induction, and it can lead to erroneous conclusions. Aristotle worked out how to do it, and then dropped the ball at least twice in his two biggest blunders, when he forgot to follow his own advice. (In fairness, he probably made his blunders before he worked out his methodology, and lost interest in correcting them. The Physica was one of his earliest works.)

The clues come from nature, and picking them up relies on keeping the eyes open and, more importantly, the mind open. The first step is to seek patterns in what you observe, and try to correlate your observations. The key here is Aristotle’s comment that the whole is more than the sum of the parts. That looks like New Age nonsense, but look at it from the mathematics of set theory. A set is simply a collection of data, usually expressed as numbers, but not just anything should go into it. As an example, I could list all the green things I can see, but that would be pointless. I could list all plants, and now I am making progress into botany. The point is, the set comprises all the elements inside it, together with the rule that conveys set membership. It is the rule that we seek if we wish to make a discovery, and in effect we have to guess it by examining the data. This process is called induction, and once we get some true statements, we can move on to deduction.
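The set-membership point can be made concrete. A toy sketch (the observations and their tags are invented for illustration):

```python
# Invented observations, each tagged with properties we can test.
observations = [
    {"name": "oak",     "is_plant": True,  "colour": "green"},
    {"name": "moss",    "is_plant": True,  "colour": "green"},
    {"name": "seaweed", "is_plant": True,  "colour": "red"},
    {"name": "beetle",  "is_plant": False, "colour": "green"},
]

# A set is its elements plus the rule that confers membership.
# Listing green things mixes plants and beetles -- a pointless rule:
green_things = {o["name"] for o in observations if o["colour"] == "green"}

# Listing plants captures a rule worth having -- note it keeps red seaweed:
plants = {o["name"] for o in observations if o["is_plant"]}

print(green_things)
print(plants)
```

Induction, in these terms, is guessing the membership rule from the elements: the rule "green" would wrongly admit the beetle and wrongly exclude the red seaweed.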

There are, of course, problems. Thus we could say:

All plants have chlorophyll

Chlorophyll is green

Therefore all plants are green.

That is untrue. The chlorophyll will be green, but the plant may have additional dyes/pigments. An obvious case is red seaweed. The problem here is the lazy “therefore”. Usually it is somewhat more difficult, especially in medicine.

Which, naturally in these times, brings me to COVID-19. What we find is that very young people, especially girls, are more or less untroubled. The old have a lot more trouble, and, it turns out, old men more so. Now part of the trouble will be that the old have weaker immune systems, and often other weaknesses in their bodies. Unlike wine, age does not improve the body. That, though, is a confounding observation, because it leads nowhere and is somewhat obvious.

Anyway, we have a new observation: if we restrict ourselves to severe cases in hospitals, there is a serious excess of bald men. Now, a correlation is not causative, and trying to work out the cause can be fraught with difficulty. In this case, we can immediately dismiss the idea that hair has anything to do with it. However, baldness is also correlated with higher levels of androgens, which are male sex hormones, and it was also found that the severe male cases usually had high levels of androgens. We can show that this by itself is not a cause either.

So this leads to a deeper investigation, and it is found that the virus uses an enzyme called TMPRSS2 to cleave the SARS-CoV-2 spike protein, and the cleaved spike can then attack the ACE2 receptors on the patient’s cells, permitting the viral RNA to enter the cell and begin replicating. What the androgens do is activate the gene in the patient’s cells that expresses TMPRSS2, so androgens increase the amount of the enzyme needed to attack a cell. This suggests as a treatment something that will inhibit that gene so no TMPRSS2 is expressed. We await developments. (Suppressing androgens in men is not a good idea; they start to grow breasts. However, it also suggests that ACE inhibitors, used to reduce hypertension, might offer some assistance.) Now, the value of a theory can be shown by whether it helps explain something else. In this case, it argues that pre-puberty children should be more resistant, and girls should keep this benefit longer. That is found. It does not prove we are correct, but it is comforting. That is an example of induced science. Induction does not necessarily produce the truth, and conclusions can be wrong. We find out by pursuing the consequences, and either find we have discovered something, or go back to the drawing board.

The Sociodynamics of Science

The title is a bit of an exaggeration as to the importance of this post, nevertheless since I was at what was probably my last scientific conference (NZ Institute of Chemistry, at Christchurch) I could not resist looking around at behaviour as well as the science. I also gave two presentations. Speaking to an audience gives the speaker an opportunity to order the presentation so as to give the most force to the surprising parts of it, not that many took advantage of this. Overall, very few, if any (apart from yours truly) seemed to want to provide their audience with something that might be uncomfortable for their preconceived notions.

First, the general part provided great support for Thomas Kuhn’s analysis. I found most of the invited speakers and keynote speakers to illustrate an interesting aspect: why are they speaking? Very few actually wished to educate or convince anyone of anything in particular, and personally, I found the few that did to be by far the most interesting. Most of the presentations from academics could be summarised as, “I have a huge number of research students and here is what they have done.” What then followed was a very large amount of results, but there was seldom an interesting unifying principle. Chemistry tends to be susceptible to this, as a very common student research program is to try to make a variety of related compounds. This may well have been very useful, but if we do not see why this approach was taken, it tends to feel like filling up some compendium of compounds, or, as Rutherford put it rather acidly, “stamp collecting”. These types of talks are characterised by the speaker trying to get in as many compounds as they can, so they keep talking and use up the allocated question time. I suspect that one of the purposes of these presentations is to say, “Look at what we have done. This has given our graduate students a good number of scientific publications, so if you are thinking of being a grad student, why not come here?” I can readily understand that line of thinking, but its relevance for older scientists is questionable. There were a few presentations where the output would be of more general interest, though. I found the odd presentation that showed how to do something new, where it could have quite wide applications, to be of particular interest.

Now to the personal. My first presentation was a summary of my biogenesis approach. It may have had too much information across too wide a field, but the interesting point was that it generated a discussion at the end relating to my concept of how homochirality was generated. My argument is that reproduction depends on it because the geometry prevents the formation of a second strand if the first strand is not either entirely left-handed or right-handed in its pitch. So the issue then was, it was pure chance that D-ribose containing helices predominated, in part because the chance of getting a long-enough homochiral strand is very remote, and when one arises, then it takes up all the resources and predominates. The legitimate question then is, why doesn’t the other handed helix eventually arise? It may be slower to do so, but it is not necessarily impossible. My partial answer to that is the mer units are also used to bind to some other important units for life to give them solubility, and the wrong sort gets used up and does not build up concentration. Maybe that is so, but there is no evidence.

It was my second presentation that would be controversial, and it was interesting to watch the expressions. Part of the problem for me was it was the last such presentation (there were some closing speakers after me, and after morning tea) and there is something about conferences at the end – everyone is busy thinking about how to get to the airport, etc, so they tend to lose concentration. My first slide put up three propositions: the wave functions everyone uses for atomic orbitals are wrong; because of that, the calculation of the chemical bond requires the use of a hitherto unrecognised quantum effect (which is a very specific expression involving only universally recognised quantum numbers) and finally, the commonly held belief that relativistic effects on the inner electrons make a major effect on the valence electron of the heaviest elements is wrong. 

As you might expect, this was greeted initially with yawns and disinterest: this was going to be wrong, or at least that seemed to be written over their faces. I then diverted to explain my guidance wave interpretation, which is essentially the de Broglie pilot wave concept but with two additions: an application of Euler’s complex number theory that everyone seems to have missed, and secondly an argument that if the wave really causes diffraction in the two-slit-type experiment, it has to travel at the same speed as the particle. These two points lead to serious simplifications in calculating the properties of chemical bonds. The next step was to put up a lot of evidence for the different wave functions, with about 70 data points spanning a selection of atoms, of which about twenty supported the absence of any significant relativistic effect. (This does not say relativity is wrong, merely that its effects on valence electrons are too small to be noticed at this level of analysis.) What this was effectively saying was that most of the current calculations only give agreement with observation when liberal use is made of assignable constants, which can conveniently be adjusted so you get the “right” answer.

So, question time. One question surprised me: does my new approach do anything new? I argued that the facts that everyone is using the wrong wave functions, that there is a quantum effect nobody has recognised, and that everyone is wrong about those relativistic effects could be considered new. Yes, but have you got a prediction? This was someone who was difficult to satisfy. Well, if you have access to a good physics lab, I suggested, here is where you can show it: assuming my theory is correct, make an adjustment to the delayed-choice quantum eraser experiment (and I outlined the simple change) and you will reach the opposite conclusion. If you don’t agree with me, then you should do the experiment to prove I am wrong.
The stunned expressions were worth the cost of going to the conference. Not that anyone will do the experiment. That would show interest in finding the truth, and in fairness, it is more a job for a physicist.

An Ugly Turn for Science

I suspect there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, “The Structure of Scientific Revolutions”, that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and a result that cannot be tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported in a very small section two-thirds of the way through the published paper with no comment, where nobody will notice it. I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment. This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc, and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on and they want to be comfortable. One can criticise that but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong. 

The bad news is that much deeper rot is appearing, as highlighted by an article in the journal “Science”, vol. 365, p. 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin, and which led, two days later, to Trump asserting that he knew unquestionably that Assad did it, whereupon he fired 59 cruise missiles at a Syrian base.

It then appeared that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned with what he found: if his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor from MIT and a world expert on missile defence, and after discussion Chen, Postol, and five other scientists carried out an investigation. The end result was that they wrote a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal “Science and Global Security” (SGS), and following peer review it was authorised for publication. So far, science working as it should. The next step is that if people do not agree, they should either dispute the evidence by providing contrary evidence, or dispute the analysis of it, but that is not what happened.

Apparently the manuscript was put online as an “advanced publication”, and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard was a major in the US military and had been deployed in Syria in a sufficiently senior position to have a realistic idea of what went on. She has stated she believed the evidence was that Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated, and if evidence is found he should be accused of war crimes, but if evidence is not found he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She apparently found the preprint and put it on her blog, which she is using in her Presidential candidate run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Elliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that to be well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: “You must approach this latest analysis with great caution”, the paper would be “misused to cover up the [Assad] regime’s crimes” and “permanently stain the reputation of your journal”. The journal then pulled the paper from publication, at first saying they would edit it, but then backtracking completely. The editor of the journal is quoted in Science as saying, “In hindsight we probably should have sent it to a different set of reviewers.” I find this comment particularly abhorrent. The editor should not select reviewers on the grounds they will deliver the verdict that the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider repressing an analysis solely on grounds of political expediency, with no interest in finding the truth. It is also true that I hold a similar view relating to the incident. I saw a TV clip taken within a day of the event in which people were taking samples from the hole where the sarin was allegedly delivered, without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the primary site to still do serious damage, but nobody was affected.

But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some “don’t approve”. The reason for rejection of a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if the paper is demonstrably wrong, then it should be easy to demonstrate where it is wrong. What do you all think?

Some Shortcomings of Science

In a previous post, in reference to the blog repost, I stated I would show some of the shortcomings of science, so here goes.

One of the obvious failings is that people seem happy to ignore what should convince them. The first sign I saw of this type of problem was in my very early years as a scientist. Sir Richard Doll produced a report that convincingly (at least to me) linked smoking to cancer. Out came a number of papers rubbishing this, largely from people employed by the tobacco industry. Here we have a clear conflict, and while it is ethically correct to try to show that some hypothesis is wrong, it should be based on sound logic. Now I believe that there are usually a very few results, maybe as few as one specific result, that make the conclusion unassailable. In this case, chemists isolated the constituents of cigarette smoke and found over 200 suspected carcinogens, and trials with some of these on lab rats were conclusive: as an example, one dab of pure 3,4-benzopyrene gave an almost 100% probability of inducing a tumour. Now that is a far greater concentration than any person will get from smoking, and people are not rats; nevertheless this showed me that on any reasonable assessment, smoking is a bad idea. (It was also a bad idea for a young organic chemist: who needs an ignition source a few centimetres in front of the face when handling volatile solvents?) Yet fifty years or so later, people continue to smoke. It seems to be a Faustian attitude: the cancer will come decades later, or for some lucky ones, not at all, so ignore the warning.

A similar situation is occurring now with climate change. The critical piece of information for me is that during the 1990s and early 2000s (the period of the study) it was shown there is a net power input to the oceans of 0.64 W/m². If there is a continuing net energy input to the oceans, they must be warming. Actually, the Tasman has been clearly warming, and the evidence from other oceans supports that. So the planet is heating. Yet there are a small number of “deniers” who put their heads in the sand and refuse to acknowledge this, as if by doing so, the problem goes away. Scientists seem unable to make people face up to the fact that the problem must be dealt with now, but the price is not paid until much later. As an example, in 2014 US Senate majority leader Mitch McConnell said: “I am not a scientist. I’m interested in protecting Kentucky’s economy.” He forgot to add, now.
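To get a feel for what 0.64 W/m² means, a rough back-of-the-envelope calculation helps. The ocean surface area used below is a standard approximate figure, not something taken from the study:

```python
# Rough scale of ocean heat uptake implied by a net input of 0.64 W/m^2.
# The ocean area (~3.6e14 m^2) is a standard approximate value, an assumption here.

OCEAN_AREA_M2 = 3.6e14        # approximate global ocean surface area
NET_FLUX_W_M2 = 0.64          # net power input per square metre (the figure cited above)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

joules_per_year = NET_FLUX_W_M2 * OCEAN_AREA_M2 * SECONDS_PER_YEAR
print(f"{joules_per_year:.2e} J per year")  # prints: 7.27e+21 J per year
```

That is of the order of 7 × 10²¹ joules every year, roughly ten times total human primary energy consumption (about 6 × 10²⁰ J per year); a sustained positive flux of that size cannot plausibly be dismissed as noise.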

The problem of ignoring what you do not like is general and pervasive, as I quickly learned while doing my PhD. My PhD was somewhat unusual in that I chose the topic and designed the project. No need for details here, but I knew the department, and my supervisor, had spent a lot of effort establishing constants for something called the Hammett equation. There was a great debate going on over whether the cyclopropane ring could delocalise electronic charge in the same way as a double bond, only more weakly. This equation could actually address that question. The very limited use of it by others at the start of my project was inconclusive, for reasons we need not go into here. Anyway, by the time I finished, my results showed quite conclusively that it did not, but the general consensus, based essentially on the observation that positive electric charge was strongly stabilised by the ring, and on molecular orbital theory (which assumes the delocalisation initially, so was hardly conclusive on this question), was that it did. My supervisor made one really good suggestion as to what to do when I ran into trouble, and this was the part that showed the effect the most. But when it became clear that everyone else was concluding the opposite, and he had moved to a new position, he refused to publish that part.

This was an example of what I believe is the biggest failing. The observation everyone clung to was unexpected and needed a new explanation, and what they came up with most certainly gave the right answer for that specific case. However, many times there is more than one possible explanation, and I came up with an alternative based on classical electric field theory that also predicted positive charge would be stabilised, and by how much, but it also predicted negative charge would be destabilised. The delocalisation concept required both to be stabilised. So there was a means of distinguishing them, and there was a very small amount of clear evidence that negative charge was destabilised. Why only a small amount of evidence? Well, most attempts at making such compounds failed outright, which is in accord with the compounds being unstable, but it is not definitive.

So what happened? A review came out that “convincingly showed” the answer was yes. The convincing part was that it cited a deluge of “me too” work on the stabilization of positive charge. It ignored my work, and as I later found out when I wrote a review, it ignored over 60 different types of evidence that showed results that contradicted the “yes” answer. My review was not published because it appears chemistry journals do not publish logic analyses. I could not be bothered rewriting, although the draft document is on the web if anyone is interested.

The point this shows is that once a paradigm is embedded, even if on shaky grounds, it is very hard to dislodge, in accord with what Thomas Kuhn noted in “The Structure of Scientific Revolutions”. One of the points Kuhn noted was that if the paradigm had evidence, scientists would rush to write papers confirming the paradigm by doing minor variations on what worked. That happened above: they were not interested in testing the hypothesis; they were interested in getting easy papers published to advance their careers. Kuhn also noted that observations that contradict the paradigm are ignored as long as they can be. Maybe over 60 different types of observations that contradict, or falsify, the paradigm is a record? I don’t know, but I suspect the chemical community will not be interested in finding out.

Repost from Sabine Hossenfelder’s blog “Backreaction”

Two posts this week. The first is more for scientists, but I think it mentions points that people reading about science should recognise as possibly there. Sabine has been somewhat critical of some of modern science, and I feel she has a point. I shall do a post of my own on this topic soon, but it might be of interest to read the post following this to see what sort of things can go wrong.

Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.

As a scientist:

  • Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
  • Prevent social and cognitive biases: If you organize conferences, encourage speakers to not only list motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
  • Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
  • Build a culture of criticism: Ignoring bad ideas doesn’t make them go away, they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
  • Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.

As a higher ed administrator, science policy maker, journal editor, representative of funding body:

  • Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
  • Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem because this conflict of interest is presently everywhere. See next 3 points to alleviate it.
  • Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
  • Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support, 1-2 year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
  • Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
  • Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
  • Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
  • Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.

As a science writer or member of the public, ask questions:

  • You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
  • Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
  • And finally, like you are used to scrutinize statistics you should also ask whether the scientists have taken means to address their cognitive biases. Have they provided a balanced account of pros and cons or have they just advertised their own research?

You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.


Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.

Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions, or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime?? Anyway, they then observed that metals did something when heated in air:

M + air + heat → M(calx) ± ???   (A calx was what we call an oxide.)

They deduced there had to be a metallic principle that gives the metallic properties, such as ductility, lustre, malleability, etc., but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture: thus rust, unlike iron, is not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint, you get green, and they cannot readily be unmixed; nevertheless it is a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials. Even more important was the work by Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalised that acid plus metal, or acid plus metal calx, gave salts, and each salt was specifically different, and depended only on the acid and metal used. He also recognised that what we call chemical compounds were individual entities, which could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (but some did not), and the calx was usually heavier than the metal. The theory was that the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was lighter than the metal. For consistency, they asserted it should have gained weight, but as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have was correct, and indeed it did, but this avoiding of “data you don’t like” leads to many problems, not the least of which is “inventing” reasons why observations do not fit the theory without taking the trouble to abandon the theory. This time they were right, but that only encourages the act. As to why there was a problem at all: zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?

Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases, could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back from where it came that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of this theory. He then went wrong by arguing the more phlogiston, the more flammable the body, which is odd, because if he said some but not all such materials could release phlogiston, he might have thought that some might release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He also realized that respiration was essentially the same process, and fire or breathing releases phlogiston, to make phlogisticated air, and he also realized that plants absorbed such phlogiston, to make dephlogisticated air.

For those who know, this is all reasonable, but happens to be a strange mix of good and bad conclusions. The big problem for Stahl was that he did not know that “air” was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships to which numerical values can be attached, and at least some of these are known in advance. Stahl’s theory was interesting because it got chemistry going in a systematic way, but because we don’t believe in phlogiston, Stahl is essentially forgotten.

People have blind spots. Priestley also carried out Lavoisier’s experiment: 2HgO + heat ⇌ 2Hg + O2, and found that mercury was lighter than the calx, so argued phlogiston was lighter than air. He knew there was a gas there, but the fact it must also have weight eluded him. Lavoisier’s explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion, the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get “exactly”. Measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions, various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded chemistry.
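Lavoisier’s conservation claim can be checked directly against modern atomic weights, which of course he did not have. A minimal sketch for the mercury calx reaction:

```python
# Mass balance for Lavoisier's decomposition: 2 HgO -> 2 Hg + O2.
# Atomic weights (g/mol) are standard modern values, not from the text.
HG = 200.59   # mercury
O = 16.00     # oxygen

mass_reactants = 2 * (HG + O)     # two moles of HgO (the calx)
mass_products = 2 * HG + 2 * O    # two moles of Hg plus one mole of O2

# The two sides balance: matter is conserved.
print(f"reactants {mass_reactants:.2f} g, products {mass_products:.2f} g")

# Fraction of the calx's weight contributed by oxygen taken from the air:
oxygen_fraction = 2 * O / mass_reactants
print(f"oxygen fraction of the calx: {oxygen_fraction:.3f}")  # about 0.074
```

The oxygen accounts for only about 7% of the calx’s weight, which gives some idea of how accurate a balance had to be to notice the discrepancy at all.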

So, looking back, did phlogiston have merit as a concept? Most certainly! “The metal gives off a weightless substance that sticks to a particular gas” can be replaced with “the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion”. Opposite charges attract and try to bind together. This is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could have determined the weight difference between a metal and a metal less one electron, even if they could have worked out how to make the comparison. Of course the next step is to say that the phlogiston is a discrete particle, and then valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. There are very few ever that are comparable to Newton.

Is Science in as Good a Place as it Might Be?

Most people probably think that science progresses through all scientists diligently seeking the truth, but that illusion was shattered when Thomas Kuhn published “The Structure of Scientific Revolutions.” Two quotes:

(a) “Under normal conditions the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition.”

(b) “Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are the men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

Is that true, and if so, why? I think it follows from the way science is learned and then funded. In general, scientists gain their expertise by learning from a mentor, and if you do a PhD, you work for several years in a very narrow field, where most of the time the student follows the instructions of the supervisor. He will, of course, discuss issues with the supervisor, but basically the young scientist will have acquired a range of techniques when finished. He will then go on a series of post-doctoral fellowships, generally in the same area, because he has to persuade the new team leaders he is sufficiently skilled to be worth hiring. So he gains more skill in the same area, but invariably he also becomes more deeply submerged in the standard paradigm. At this stage of his life, it is extremely unusual for the young scientist to question whether the foundations of what he is doing are right, and since most continue in this field, they have the various mentors’ paradigm well ingrained.

To continue, either they find a company or other organisation to provide an income, or they stay in a research organisation, where they need funding. When they apply for it they keep well within the paradigm: first, it is the easiest route to success, and also boat-rockers generally get sunk right then. To get funding, you have to show you have been successful, and success is measured mainly by the number of scientific papers and the number of citations. Accordingly, you choose projects that you know will work and should not upset any apple-carts. You cite those close to you, and they will cite you; accuse them of being wrong and you will be ignored, and with no funding, tough.

What all this means is that the system seems to have been designed to generate papers that confirm what you already suspect. There will be exceptions, such as “discovering dark matter”, but all that has done so far is to designate a parking place for what we do not understand. Because we do not understand, all we can do is make guesses as to what it is, and the guesses are guided by our current paradigm, and so far our guesses are wrong.

One small example follows to show what I mean. By itself, it may not seem important, and perhaps it isn’t. There is an emerging area of chemistry called molecular dynamics. What this tries to do is work out how energy is distributed in molecules, as this distribution alters chemical reaction rates, and that can be important for some biological processes. One such feature is to try to relate how molecules, especially polymers, can bend in solution. I once went to hear a conference presentation where this was discussed, and the form of the bending vibrations was assumed to be simple harmonic, because for that the maths are simple, and anything wrong gets buried in various “constants”. All question time was taken up by patsy questions from friends, but I got hold of the speaker later and pointed out that I had published a paper long previously showing that the vibrations were not simple harmonic, although that was a good approximation for small vibrations. The problem is that small vibrations are irrelevant if you want to see significant chemical effects; those come from large vibrations. Now the “errors” can be fixed with a sequence of anharmonicity terms, each with their own constant, and each constant is worked around until the desired answer is obtained. In short, you get the answer you need by adjusting the constants.
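The small-versus-large vibration point can be illustrated with a Morse potential, the standard textbook model of an anharmonic bond. This is a sketch, not the speaker’s model, and the parameter values are purely illustrative. The force constant is chosen so the two potentials agree in the limit of small displacement:

```python
import math

# Compare a harmonic potential with a Morse potential matched to it at
# small displacement. Parameters are illustrative, not from a real molecule.
D = 5.0            # well depth (arbitrary units)
a = 1.0            # range parameter
k = 2 * D * a**2   # harmonic force constant chosen so both agree as x -> 0

def v_harmonic(x):
    return 0.5 * k * x**2

def v_morse(x):
    return D * (1.0 - math.exp(-a * x))**2

# Relative error of the harmonic form grows rapidly with displacement.
for x in (0.1, 0.5, 1.0):
    h, m = v_harmonic(x), v_morse(x)
    print(f"x={x}: harmonic={h:.3f} morse={m:.3f} error={(h - m) / m:.1%}")
```

For a small stretch the two agree to within about ten percent, but for the large displacements that matter chemically the harmonic form overestimates the energy by more than a factor of two, which is exactly the regime the fitted “constants” are being used to patch up.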

The net result is that good agreement with observation is claimed once the “constants” are found for the given situation. The “constants” appear to be constant only for a given situation, so arguably they are not constants at all, and worse, it can be near impossible to find out what they are from the average paper. Now, there is nothing wrong with using empirical relationships, since if they work, they make it a lot easier to carry out calculations. The problem starts when, if you do not know why it works, you may use it under circumstances where it no longer works.

Now, before you say that surely scientists want to understand, consider the problem for the scientist: maybe there is a better relationship, but to change to use it would involve re-writing a huge amount of computer code. That may take a year or so, in which time no publications are generated, and when the time for applications for further funding comes up, besides having to explain the inactivity, you have to explain why you were wrong before. Who is going to do that? Better to keep cranking the handle because nobody is going to know the difference. Does this matter? In most cases, no, because most science involves making something or measuring something, and most of the time it makes no difference, and also most of the time the underpinning theory is actually well established. The NASA rockets that go to Mars very successfully go exactly where planned using nothing but good old Newtonian dynamics, some established chemistry, some established structural and material properties, and established electromagnetism. Your pharmaceuticals work because they have been empirically tested and found to work (at least most of the time).

The point I am making is that nobody has time to go back and check whether anything is wrong at the fundamental level. Over history, science has been marked by a number of debates, and a number of treasured ideas overthrown. As far as I can make out, since 1970, far more scientific output has been made than in all previous history, yet there have been no fundamental ideas generated during this period that have been accepted, nor have any older ones been overturned. Either we have reached a stage of perfection, or we have ceased looking for flaws. Guess which!