The Sociodynamics of Science

The title is a bit of an exaggeration of the importance of this post; nevertheless, since I was at what was probably my last scientific conference (NZ Institute of Chemistry, at Christchurch) I could not resist looking around at behaviour as well as the science. I also gave two presentations. Speaking to an audience gives the speaker an opportunity to order the presentation so as to give the most force to its surprising parts, though not many took advantage of this. Overall, very few, if any (apart from yours truly), seemed to want to provide their audience with something that might be uncomfortable for their preconceived notions.

First, the general part provided great support for Thomas Kuhn’s analysis. I found most of the invited and keynote speakers to illustrate an interesting question: why are they speaking? Very few actually wished to educate or convince anyone of anything in particular, and personally, I found the few that did to be by far the most interesting. Most of the presentations from academics could be summarised as, “I have a huge number of research students and here is what they have done.” What then followed was a very large number of results, but seldom an interesting unifying principle. Chemistry tends to be susceptible to this, as a very common student research program is to try to make a variety of related compounds. This may well have been very useful, but if we are not shown why the approach was taken, it tends to feel like filling up some compendium of compounds, or, as Rutherford put it rather acidly, “stamp collecting”. Talks of this type are characterised by the speaker trying to get in as many compounds as they can, so they keep talking and use up the allocated question time. I suspect that one of the purposes of these presentations is to say, “Look at what we have done. This has given our graduate students a good number of scientific publications, so if you are thinking of being a grad student, why not come here?” I can readily understand that line of thinking, but its relevance for older scientists is questionable. There were a few presentations whose output was of more general interest, though; I found the odd one that showed how to do something new, with potentially wide applications, to be of particular interest.

Now to the personal. My first presentation was a summary of my biogenesis approach. It may have had too much information across too wide a field, but the interesting point was that it generated a discussion at the end relating to my concept of how homochirality was generated. My argument is that reproduction depends on homochirality because the geometry prevents the formation of a second strand unless the first strand is either entirely left-handed or entirely right-handed in its pitch. The issue, then, was that it was pure chance that D-ribose-containing helices predominated: the chance of getting a long-enough homochiral strand is very remote, and once one arises, it takes up all the resources and predominates. The legitimate question then is, why doesn’t the other-handed helix eventually arise? It may be slower to do so, but it is not necessarily impossible. My partial answer is that the mer units are also used to bind to some other units important for life, to give them solubility, and the wrong sort gets used up and never builds up in concentration. Maybe that is so, but there is no evidence.
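To get a feel for how remote that chance is, here is a toy calculation of my own (an illustration only, not part of the argument above): if we assume each monomer joins a growing strand with left or right handedness independently and with equal probability, the probability that an n-mer is entirely one handedness is 2 × (1/2)^n, which collapses very quickly with strand length.

```python
def homochiral_probability(n: int) -> float:
    """Probability that a random n-mer is entirely one handedness
    (all-left or all-right), assuming each monomer's handedness is
    chosen independently with probability 1/2 each way."""
    return 2 * 0.5 ** n

# The chance falls off geometrically with strand length:
for n in (10, 50, 100):
    print(n, homochiral_probability(n))
# n = 10  gives about 2e-3
# n = 100 gives about 1.6e-30 -- "very remote" indeed
```

The equal-and-independent assumption is of course the crudest possible model; any autocatalytic bias would change the numbers, but not the qualitative point that long homochiral strands are vanishingly unlikely by chance alone.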

It was my second presentation that would be controversial, and it was interesting to watch the expressions. Part of the problem for me was that it was the last such presentation (there were some closing speakers after me, and after morning tea), and there is something about the end of a conference – everyone is busy thinking about how to get to the airport, etc., so they tend to lose concentration. My first slide put up three propositions: the wave functions everyone uses for atomic orbitals are wrong; because of that, the calculation of the chemical bond requires the use of a hitherto unrecognised quantum effect (a very specific expression involving only universally recognised quantum numbers); and finally, the commonly held belief that relativistic effects on the inner electrons have a major effect on the valence electrons of the heaviest elements is wrong.

As you might expect, this was greeted initially with yawns and disinterest: this was going to be wrong. At least, that seemed to be written on their faces. I then diverted to explain my guidance wave interpretation, which is essentially the de Broglie pilot wave concept, but with two additions: an application of Euler’s complex number theory that everyone seems to have missed, and an argument that if the wave really causes diffraction in the two-slit-type experiment, it has to travel at the same speed as the particle. These two points lead to serious simplifications in the calculation of the properties of chemical bonds. The next step was to put up a lot of evidence for the different wave functions, with about 70 data points spanning a selection of atoms, of which about twenty supported the absence of any significant relativistic effect. (This does not say relativity is wrong, merely that its effects on valence electrons are too small to be noticed at this level of analysis.) What this was effectively saying was that most of the current calculations only give agreement with observation when liberal use is made of assignable constants, which can conveniently be adjusted so you get the “right” answer.

So, question time. One question surprised me: does my new approach do anything new? I argued that the facts that everyone is using the wrong wave functions, that there is a quantum effect nobody has recognised, and that everyone is wrong about those relativistic effects could be considered new. Yes, but have you got a prediction? This was someone difficult to satisfy. Well, I suggested, if you have access to a good physics lab, here is one: make an adjustment to the delayed-choice quantum eraser experiment (and I outlined the simple change) and, if my theory is correct, you will reach the opposite conclusion to the accepted one. If you don’t agree with me, then you should do the experiment to prove me wrong.
The stunned expressions were worth the cost of going to the conference. Not that anyone will do the experiment. That would show interest in finding the truth, and in fairness, it is more a job for a physicist.

An Ugly Turn for Science

I suspect that there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, “The Structure of Scientific Revolutions”, that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and a result that cannot be tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported without comment in a very small section two-thirds of the way through the published paper, where nobody will notice it. (I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment.) This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times to be merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc, and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on and they want to be comfortable. One can criticise that but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong. 

The bad news is that a much deeper rot is appearing, as highlighted by an article in the journal “Science”, vol. 365, p. 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin, and which led, two days later, to Trump asserting that he knew unquestionably that Assad did it, whereupon he fired 59 cruise missiles at a Syrian base.

It then emerged that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned by what he found. If his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor from MIT and a world expert on missile defence, and after discussion Chen, Postol, and five other scientists carried out an investigation. The end result was a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal “Science and Global Security” (SGS) and, following peer review, was authorised for publication. So far, science working as it should. The next step, for anyone who did not agree, should have been either to dispute the evidence by providing contrary evidence, or to dispute the analysis of that evidence. That is not what happened.

Apparently the manuscript was put online as an “advanced publication”, and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard was a major in the US military and had been deployed in Syria in a sufficiently senior position to have a realistic idea of what went on. She has stated she believed the evidence was that Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated, and if evidence is found he should be accused of war crimes, but if evidence is not found he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She apparently found the preprint and put it on her blog, which she is using in her Presidential run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Elliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that to be well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: “You must approach this latest analysis with great caution”, the paper would be “misused to cover up the [Assad] regime’s crimes” and would “permanently stain the reputation of your journal”. The journal then pulled the paper from publication, at first saying they would edit it, but then they backtracked completely. The editor of the journal is quoted in Science as saying, “In hindsight we probably should have sent it to a different set of reviewers.” I find this comment particularly abhorrent. The editor should not select reviewers on the grounds that they will deliver the verdict the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider suppressing an analysis solely on grounds of political expediency, with no interest in finding the truth. It is also true that I hold a similar view of the incident itself. I saw a TV clip, taken within a day of the event, of people taking samples from the hole where the sarin was allegedly delivered, without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the primary site to still do serious damage, but nobody was affected. But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some “don’t approve”. The reason for rejecting a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if the paper is demonstrably wrong, then it should be easy to demonstrate where it is wrong. What do you all think?