More on Disruptive Science

No sooner do I publish a blog post on disruptive science than what does Nature do? It publishes an editorial (https://doi.org/10.1038/d41586-023-00183-1) questioning whether science really is becoming less disruptive (which is fair enough) and then, rather bizarrely, whether it matters. Does the journal not actually want science to advance, but merely restrict itself to the comfortable? Interestingly, the disruptive examples it cites are the structure of DNA (Crick, 1953) and the first planet found orbiting another star. In my opinion, these are not disruptive. In Crick's case, he merely used Rosalind Franklin's data, and in the second case the result had been expected for years; indeed, I had seen a claim about twenty years earlier for a Jupiter-style planet around Epsilon Eridani. (Unfortunately, I did not record the reference because I was not working in that area at the time.) That result was rubbished on the grounds that the data were too inaccurate, yet the values I noted down agree quite well with what we now accept. I am always suspicious of discounting a result that not only got a good value for the semimajor axis but also proposed a significantly eccentric orbit. For me, these two papers are merely obvious advances on previous theory or logic.

The test Nature proposes for a disruptive paper is based on citations, the idea being that if a disruptive paper is cited, its predecessors are less likely to be cited; if the paper is consolidating, the previously disruptive papers continue to be cited. If this were the criterion, probably one of the most disruptive papers would be the one on the EPR paradox (Einstein, A., Podolsky, B., Rosen, N. 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47: 777–780.) Yet the remarkable thing about this paper is that people fall over themselves to point out that Einstein "got it wrong". (That they do not actually answer Einstein's point seems to be irrelevant to them.)
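
Incidentally, the citation-based measure described above can be sketched in a few lines of code. This is a simplified toy version of my own, not the published index (which also counts papers that cite only the focal paper's predecessors), and the data are invented:

```python
# A simplified toy sketch of a citation-based disruption score: for each
# paper citing the focal paper, score +1 if it ignores the focal paper's
# references (disruptive signal) and -1 if it also cites them
# (consolidating signal). The real published index is more elaborate.
def disruption_score(citers_of_focal, references_of_focal, citations):
    """citations maps each citing paper to the set of papers it cites."""
    score = 0
    for paper in citers_of_focal:
        cites_predecessors = bool(references_of_focal & citations[paper])
        score += -1 if cites_predecessors else +1
    return score / len(citers_of_focal)   # ranges from -1 to +1

# Invented data: focal paper "F" has references {"A", "B"}.
citations = {
    "P1": {"F"},         # cites F alone: disruptive signal
    "P2": {"F", "A"},    # cites F and a predecessor: consolidating signal
    "P3": {"F"},
}
print(disruption_score(set(citations), {"A", "B"}, citations))  # 0.33...
```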

Nature spoke to a number of scholars who study science and innovation. Some were worried by Park's paper, one worry being that declining disruptiveness could be linked to the sluggish productivity and economic growth seen in many parts of the world. Sorry, but I find that quite strange. It is true that an absence of discoveries is not exactly helpful, but economic use of a scientific discovery usually comes decades after the discovery. There is prolonged engineering, and if the product is novel, a market for it has to be developed. Then it usually has to displace something else. Very little economic growth follows quickly from scientific discovery. No need to go down this rabbit hole.

Information overload was considered a reason, and it was suggested that artificial intelligence could sift and sort useful information to identify projects with breakthrough potential. I completely disagree with this as far as disruption is concerned. Anyone who has done a computer search of scientific papers will know that unless you have a very clear idea of what you are looking for, you get a bewildering amount of irrelevant material. Thus if I want to know the specific value of some measurement, the computer search will give me in seconds what previously could have taken days. But if the search constraints are abstract, almost anything can come out, including erroneous material; I gave examples in my previous post. The computer, so far, cannot make value judgments because it has no criteria for doing so. What it will do is comply with established thinking, because established thinking supplies the constraints for the search. Disruption is something that you did not expect. How can a computer search for what is neither expected nor known, particularly when the unexpected is usually mentioned as an uncomfortable aside in papers and not mentioned in abstracts or keywords? The computer would have to thoroughly understand the entire subject to appreciate the anomaly, and artificial intelligence is still a long way from that.

In a similar vein, Nature published a news item dated January 18. Apparently, people have been analysing advertisements and have come across something both remarkable and depressing: there are hundreds of advertisements offering authorship slots on scientific papers in reputable journals for sale. Prices range from hundreds to thousands of US dollars depending on the research area and the journal's prestige, and the advertisement often cites the title of the paper, the journal, when it will be published (how do they know that?) and the position of the authorship slots. This is apparently a multimillion-dollar industry. Interestingly, advertising that specifies a given title in a given journal immediately raises suspicion, and a number of papers have been retracted. Another flag is when further authors are added after peer review; if the authors actually contributed to the paper, they should have been known at the start. The question then is, why would anyone pay good coin for that? Unfortunately, the reason is depressingly simple: you need more publications and citations to get more money, promotion, prizes, tenure, etc. It is a scheme to make money from those whose desire for position exceeds their skill level. And it works because nobody ever reads these papers anyway. The chance of being asked by anyone for details is so low it would be extremely unlucky to be caught out that way. Such an industry, of course, will be anything but disruptive. It only works as long as nobody with enough skill to recognize an anomaly actually reads the papers, because then the paper becomes famous, and thoroughly examined. This industry thrives because counting citations without understanding the content is the method of evaluating science. In short, evaluation by ignorant committee.

My Introduction to the Scientific Method – as actually practised

It is often said that if you can't explain something to a six-year-old, you don't understand it.

I am not convinced, but maybe I don't understand. Anyway, I thought I would follow my previous post with an account of my PhD thesis. It started dramatically. My young supervisor gave me a choice of projects, but only one looked sensible. I started on that, then found the answer had just been published. Fortunately, only a month was wasted, but I had no project and my supervisor was off on vacation. The Head of Department suggested I find myself a project, so I did. There was a great debate going on about whether the electrons in cyclopropane could delocalize into other parts of a molecule. To explain: carbon forms bonds at an angle of 109.5 degrees, but the three carbons of cyclopropane have to be formally at 60 degrees. In bending the bonds around, the electrons come closer together, and the resultant electric repulsions mean the overall energy is higher. The energy difference is called strain energy. One theory was that the strain energy could be relieved if the electrons could get out and spread themselves over more space. Against that, there was no evidence that single bonds could do this.

My proposal was to put a substituted benzene ring on one corner of the cyclopropane ring, and an amine on the other. The idea was that amines are bases and react with acid, and when they do, the electrons on the amine are trapped. If the cyclopropane ring could delocalize electrons, there was one substituent I could put on the benzene ring that would have different effects on that basicity depending on whether the cyclopropane ring delocalized electrons or not. There was a test through something called the Hammett equation (see below). My supervisor had published on this, but this would be the first time the equation might be used to do something of significance. Someone had tried that scheme with carboxylic acids, but with an extra carbon atom they were not very responsive, and there were two reports with conflicting answers. My supervisor, when he came back, was not very thrilled with this proposal, but his best alternative was to measure the rates of a sequence of reactions for which I had found a report that said the reaction did not go. So he agreed. Maybe I should have been warned. Anyway, I had some long-winded syntheses to do.
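
For readers who have not met it, the Hammett equation in its standard textbook form is

\[
\log_{10}\!\left(\frac{K}{K_0}\right) = \sigma\rho
\]

where \(K\) is the equilibrium constant (here, for protonation of the substituted amine), \(K_0\) the corresponding constant for the unsubstituted parent, \(\sigma\) a constant characteristic of the substituent, and \(\rho\) a constant characteristic of the reaction. A substituent that conjugates directly with the reaction centre requires a different \(\sigma\) value from one that cannot, which is what makes the equation usable as a test for delocalization.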

When it came to the final stages, my supervisor went to North America on sabbatical, and then went looking for a new position in North America, so I was on my own. The amine results did not yield the desired result because the key substituent, a nitro group, reacted with the amine very quickly. That was a complete surprise. I could make the salt, but a solution containing free amine quickly discoloured. However, on a fleeting visit my supervisor made a useful suggestion: react the corresponding acids in toluene with a diazo compound. While the acids had previously been too similar in their properties in water, it turned out that toluene greatly amplified the differences. The results were clear: the cyclopropane ring did not delocalize electrons.

However, all did not go well. The quantum mechanical people who had shown the extreme stability of polywater through electron delocalization turned their techniques to this problem and asserted that the ring did delocalize electrons. In support, they showed that the cyclopropane ring stabilized adjacent positive charge. However, if the strain energy arose through increased electron repulsion, a positive charge would reduce that repulsion. There would be extra stability with a positive charge adjacent, BUT negative charge would destabilize it. So there were two possible explanations, and a clear means of telling the difference.

Anions on a carbon atom are common in organic chemistry, yet all attempts at making such an anion adjacent to a cyclopropane ring failed. A single carbon atom bearing two hydrogen atoms and an attached benzene ring forms a very stable anion (called a benzyl anion). A big name replaced one of the hydrogen atoms of a benzyl anion with a cyclopropane ring, and finally made something that existed, although only barely. He published a paper and stated it was stabilized by delocalization. Yes, it was, but the stabilization would have come from the benzene ring; compared with any other benzyl anion it was remarkably unstable. But the big names had spoken.

Interestingly, there is another test, from certain spectra. In what is called an n→π* transition (don't worry if that means nothing to you) there is a change of dipole moment, with the negative end becoming stronger close to a substituent. I calculated the change based on the polarization theory and came up with very nearly the correct answer. The standard theory, using delocalization, predicts the spectral shift due to the substituent to be in the opposite direction.

My supervisor, who never spoke to me again and was not present during the thesis write-up, wrote a paper on the amines, which was safe because it showed nothing that would annoy anyone, but he never published the data that came from his only contribution!

So, what happened? Delocalization won. A review came out that ignored every paper that disagreed with its interpretation, including mine. Another review dismissed the unexpected spectral shift I mentioned by saying "it is unimportant". I ended up writing an analysis to show that there were approximately 60 different sorts of observation that were not in accord with the delocalization proposition. It was rejected by review journals with "This is settled" (that it was settled wrongly was apparently irrelevant) and "We do not publish logic analyses." Well, no, it seems they do not, and do not care that much.

The point I am trying to make here is that while this could be regarded as not exceptionally important, if this sort of wrong behaviour happens to one person, how much happens across the board? I believe I now know why science has stopped making big advances. None of those who are established want to hear anyone question their own work. The sad part is, that is not the only example I have.

Ossified Science

There was an interesting paper in the Proceedings of the National Academy of Sciences (118, e2021636118, https://doi.org/10.1073/pnas.2021636118) which argued that science is becoming ossified, and new ideas simply cannot emerge. My question is, why has it taken them this long to recognize it? That may seem a strange thing to say, but over the life of my career I have seen no radically new ideas get acknowledgement.

The argument in the paper basically came down to one simple fact: over this period there has been a huge expansion in the number of scientists, the amount of research funding, and the number of publications. Progress in a scientist's career depends on the number of papers produced. However, the more papers produced, the more likely the science is to stagnate, because nobody has the time to read everything. People pick and choose what to read, the selection biased by the need not to omit people who may read your funding application. Reading is thus focused on established thinking. As the number of papers increases, citations flow increasingly towards the already well-cited papers. Lesser-known authors are unlikely ever to become highly cited, and if they do, it is not through a cumulative process of analysis. New material is extremely unlikely to disrupt existing work, with the result that progress in large established scientific fields may be trapped in the existing canon. That is fairly stern stuff.
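
The "citations flow to the already well-cited" dynamic is easy to demonstrate. Below is a minimal simulation sketch of preferential attachment; this is my own toy construction, not the PNAS paper's model, and all the numbers are arbitrary assumptions:

```python
import random

# Toy preferential-attachment model: each new paper cites earlier papers
# with probability proportional to the citations they already have, plus
# a small baseline so uncited papers are not impossible to find.
def simulate(n_papers=10000, cites_per_paper=10, baseline=1.0, seed=1):
    random.seed(seed)
    citations = [0, 0]                       # two seed papers
    for _ in range(n_papers):
        weights = [c + baseline for c in citations]
        cited = random.choices(range(len(citations)),
                               weights=weights, k=cites_per_paper)
        for idx in set(cited):               # count each target once per paper
            citations[idx] += 1
        citations.append(0)                  # the new paper enters uncited
    return sorted(citations, reverse=True)

counts = simulate()
print("top 10 papers hold", sum(counts[:10]), "citations")
print("median paper holds", counts[len(counts) // 2], "citations")
```

Run it and the early, lucky papers accumulate a grossly disproportionate share while the typical paper languishes, which is exactly the trap the paper describes.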

It is important to note there are at least three major objectives relating to science. The first is developing methods to gain information, or, if you prefer, developing new experimental or observational techniques. The second is using those techniques to record more facts. The more scientists there are, the more successful these efforts are, and over the period we have most certainly been successful in both objectives. The rapid provision of new vaccines for SARS-CoV-2 shows that when pushed, we find ways to do it. When I started my career, a very large, clunky computer that was incredibly slow and had internal memory measured in bytes occupied a room. Now we have memory that stores terabytes in something you can hold in your hand. So yes, we have learned how to do it, and we have acquired a huge amount of knowledge. There is a glut of facts available.

The third objective is to analyse those facts and derive theories so we can understand nature, and do not have to examine that mountain of data for any reason other than to verify that we are on the right track. That is where little has happened.

As the PNAS paper points out, policy reflects the "more is better" approach. Rewards are for the number of articles, with citations supposedly reflecting their quality. The number of publications is easily counted, but citations are more problematic. To get the numbers up, people carry out the work most likely to reach a fast result. The citations that accrue go to the papers most easily found, which means those that get a good start gather citations like crazy. There are also "citation games": you cite mine, I'll cite yours. Such citations may add nothing in terms of science or logic, but they do add to career prospects.

What happens when a paper is published? As the PNAS paper says, "cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars". If a new paper does not fit the existing dynamic, it will be ignored. If the young researcher wants to advance, he or she must avoid rocking the boat. You may feel that the authors are overplaying a non-problem. Not so. One example shows how the scientific hierarchy thinks. One of the two major advances in theoretical physics in the twentieth century was quantum mechanics. Basically, all our advanced electronic technology depends on that theory, and in turn the theory is based on one equation published by Erwin Schrödinger. This equation is effectively a statement that energy is conserved, and that the energy is determined by a wave function ψ. It is too much to go into here, but the immediate consequence was the problem: what exactly does ψ represent?
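
For those who want to see it, the equation in its usual modern forms is

\[
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\psi,
\qquad\text{or, time-independently,}\qquad
\hat{H}\psi = E\psi,
\]

where \(\hat{H}\) is the Hamiltonian operator representing the total (kinetic plus potential) energy, which is why the equation can be read as a statement about energy.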

Louis de Broglie was the first to propose that quantum motion was represented by a wave, and he came up with a different equation, which stated that the product of the momentum and the wavelength was Planck's constant, the quantum of action. De Broglie then proposed that ψ was a physical wave, which he called the pilot wave. This was promptly ignored in favour of a far more complicated mathematical procedure that we can ignore for the present. Then, in the early 1950s, David Bohm more or less came up with the same idea as de Broglie, which was quite different from the standard paradigm. So how was that received? I found a 1953 quote from J. R. Oppenheimer: "We consider it juvenile deviationism … we don't waste our time … [by] actually read[ing] the paper. If we cannot disprove Bohm, then we must agree to ignore him." So much for rational analysis.
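
De Broglie's relation, mentioned above, is for reference simply

\[
p\lambda = h, \qquad\text{i.e.}\qquad \lambda = \frac{h}{p},
\]

where \(p\) is the particle's momentum, \(\lambda\) the wavelength of the associated wave, and \(h\) Planck's constant.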

The standard theory states that if an electron is fired at two slits it goes through BOTH of them and then gives an interference pattern. The pilot wave says the electron has a trajectory and goes through one slit only, and while the same interference pattern forms, a particle going through the left slit never ends up in the right-hand part of the pattern. Observations have proved this to be correct (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 332: 1170–1173.) Does that change anyone's mind? Actually, no. The pilot wave is totally ignored, except by the odd character like me, although my version is a little different (called a guidance wave) and it is ignored even more.

Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment, which Aristotle performed, was to throw a stone high into the air so that it reached its maximum height directly above. When it dropped, it landed directly underneath, its path vertical. Aristotle reasoned that if the Earth were rotating, the angular momentum the stone carried at that height should take it eastwards, but it did not. Aristotle was a clever reasoner, but he was a poor experimenter. He also failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he had an estimate of its size (a value later refined to fair accuracy by Eratosthenes). He had reasoned correctly why that was so: matter falls towards the centre. Accordingly, he should also have realised his stone should fall slightly to the south. (He lived in Greece; if he had lived here it would move slightly northwards.) When he failed to notice that, he should have realized his technique was insufficiently accurate. What he failed to do was put numbers onto his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don't believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
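
To put rough numbers on it with modern mechanics (which Aristotle of course did not have): for a stone thrown straight up, the standard textbook Coriolis result gives a net sideways drift of \((4/3)\,\omega v_0^3 \cos\phi / g^2\). A quick sketch, where the 10 m throw and the latitude are illustrative assumptions of mine:

```python
import math

# Back-of-envelope: how far does a stone thrown straight up drift
# because of Earth's rotation? Standard textbook Coriolis result for a
# vertical launch: drift = (4/3) * omega * v0**3 * cos(latitude) / g**2.
g = 9.81                        # m/s^2
omega = 7.292e-5                # Earth's angular velocity, rad/s
lat = math.radians(38.0)        # roughly the latitude of Athens
height = 10.0                   # assume a 10 m throw
v0 = math.sqrt(2 * g * height)  # launch speed needed to reach that height

drift = (4 / 3) * omega * v0**3 * math.cos(lat) / g**2
print(f"launch speed: {v0:.1f} m/s")          # about 14 m/s
print(f"drift:        {drift * 1000:.1f} mm") # about 2 mm
```

A couple of millimetres was never going to be visible against the wobble of a human throw, which is exactly the point about putting numbers on reasoning.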

He also reasoned that the Earth could not orbit the sun, and there was plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it. What you see is that the background moves: it moves more the bigger the radius of your circle, and less the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example: Aristotle reasoned that because air rises, the Universe must be full of air, and therefore if the Earth were rotating or orbiting the sun, we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.

But had he done so, he would have run into a further problem, because he had shown that the Earth could not orbit the sun through another line of reasoning. As was "well known", heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn't, therefore the Earth does not move. Further, you could see the tails of comets. They were moving, which seemed to prove the reasoning. Of course it does not, because the tail always points away from the sun, and hence is not trailing behind the motion at least half the time. This was a simple thing to check, and far easier to check than the other assumptions. Unfortunately, who bothers to check things that are "well known"? This shows a further aspect: a true proposition has everything relevant to it in accord with it. This is the basis of Popper's falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going more slowly. That it is going more slowly is obvious: while Earth takes a year to complete an orbit, Mars takes over two years to complete a cycle. So we had two theories that both gave the correct answer, but one had two assignable constants to explain each observation, while the other relied on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook "Athene's Prophecy", where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, "Legatus Legionis") is to prove the Earth moves. We can do this with the tides. The part of the ocean closest to the external source of gravity has its water fall sideways a little towards it; the part furthest away has more centrifugal force, so it is trying to throw the water away. The ancients may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook my Roman was with the British invasion and hence had to study the tides to know when to sail, and there you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it while the Earth stays stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
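
To make the two-bulge argument explicit in modern terms (a sketch the ancients could not have written down), compare the Moon's gravitational pull at the near and far sides of the Earth with the pull at the centre:

\[
\Delta a_{\text{near}} = \frac{GM}{(d-R)^2} - \frac{GM}{d^2} \approx +\frac{2GMR}{d^3},
\qquad
\Delta a_{\text{far}} = \frac{GM}{(d+R)^2} - \frac{GM}{d^2} \approx -\frac{2GMR}{d^3},
\]

where \(M\) is the Moon's mass, \(d\) the Earth–Moon distance, and \(R\) the Earth's radius. Relative to the Earth's centre, the near side is pulled towards the Moon and the far side is effectively pushed away, giving two bulges, and hence two tides a day as the Earth rotates under them.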

Free Will

You will see many discussions regarding free will. The question is, do we have it, or are we in some giant computer program? The problem is that classical physics is deterministic, and you will often see claims that Newtonian physics demands that the Universe works like some finely tuned machine, following precise laws of motion. And indeed, we can predict quite accurately when eclipses of the sun will occur, and where we should go to view them. The occurrence of future eclipses is determined now. Now let us extrapolate. If planets follow physical laws, and hence their behaviour can be determined, then so do snooker or pool balls, even if we cannot in practice calculate everything that will happen on a given break. Let us take this further. Heat is merely random kinetic energy, but is it truly random? It seems that way, but the laws of motion are quite clear: we can calculate exactly what will happen in any collision; it is just that in practice the calculations are too complicated to even consider doing. Bringing in chaos theory does nothing for you: the calculations may be utterly impossible to carry out, but they are governed solely by deterministic physics, so ultimately what happens was determined, and it is just that we do not know how to calculate it. Electrodynamics and quantum theory are deterministic too, even if quantum theory has random probability distributions. Quantum behaviour always follows strict conservation laws, and the Schrödinger equation is actually deterministic: if you know ψ and you know the change of conditions, you know the new ψ. Further, all chemistry is deterministic. If I go into the lab, take some chemicals, mix them and if necessary heat them according to some procedure, then every time I follow exactly the same procedure I shall end up with the same result.

So far, so good. Every physical effect follows from a physical cause. Therefore, the argument goes, since our brain works on physical and chemical effects and these are deterministic, what our brains do is determined exactly by those conditions. But those conditions were determined by what went before, and those before that, and so on. Extrapolating, everything was predetermined at the time of the big bang! At this point the perceptive may feel that does not seem right, and it is not. Consider nuclear decay. We know that particles, say neutrons, are emitted with a certain probability over an extended period of time. They will be emitted, but we cannot say exactly, or even roughly, when. The nuclei have angular uncertainty, so you cannot know in what direction a neutron will be emitted; according to the laws of physics, that is not determined until it is emitted. You may say, so what? That is trivial. No: the "so what" is that when you find one exception, you falsify the overall premise that everything was determined at the big bang. Which means something else introduces causes. Also, the emitted neutron may now generate new causes that could not have been predetermined.

Now we start to see a way out. Every physical effect follows from a physical cause, but where do the causes come from? Consider stretching a wire with ever-increasing force; eventually it breaks. It usually breaks at the weakest point, which in principle is predictable, but suppose we have a perfect wire with no point weaker than any other. It must still break, but where? At the instant of breaking, some quantum effect, such as molecular vibration, will offer momentarily weaker and stronger spots. The spot with the greatest momentary weakness will go, but thanks to the Uncertainty Principle, which spot that is is unpredictable.

Take evolution. This proceeds by variation in the nucleic acids, but where in the chain the variation occurs is almost certainly random, because each phosphate ester linkage that has to be broken is equivalent, just like the points in the "ideal wire". Most resultant mutations die out. Some survive, and those that survive long enough to reproduce contribute to an evolutionary change. But again, which survives depends on where it is. Thus a change that provides better heat insulation at the expense of mobility may survive in polar regions, but it offers nothing in the equatorial rain forest. There is nothing that determines where which mutation will arise; it is a random event.

Once you cannot determine everything, even in principle, it follows that you must accept that not every cause is determined by previous events. Once you accept that, then since we have no idea how the mind works, you cannot insist that the way my mind works was determined at the time of the big bang. The Universe is mechanical and predictable in terms of properties obeying the conservation laws, but not necessarily in anything else. I have free will, and so do you. Use it well.

Science is No Better than its Practitioners

Perhaps I am getting grumpy as I age, but I feel that much in science is not right. One problem lies in the fallacy ad verecundiam, the fallacy of resorting to authority. As the motto of the Royal Society puts it, nullius in verba. Now, nobody expects you to personally check everything, and if someone has measured something and either clearly shows how he or she did it, or it is something that is done reasonably often, then you take their word for it. Thus if I want to know the melting point of benzoic acid I look it up, and know that if the reported value were wrong, someone would have noticed. However, a different problem arises with theory, because you cannot measure a theory. Further, science has become so complicated that any expert is usually an expert in a very narrow field, with the net result that most scientists find theories too difficult to examine in detail and do defer to experts. In physics, this tends to happen because theory descends into obscure mathematics and, worse, the proponents seem to believe that mathematics IS the basis of nature. That means there is no need to think of causes. There is another problem, which also drifts over to chemistry: the belief that the results of a computer-driven calculation must be right. True, there will be no arithmetical mistake, but as was driven into our heads in my early computer lectures: garbage in, garbage out.

This post was sparked by an answer I gave to a chemistry question on Quora. Chemical bonds are usually formed by taking two atoms, each with a single electron in an orbital. Think of an orbital as a wave that can hold only one or two electrons. The reason it can hold only two electrons is the Pauli Exclusion Principle, which is a very fundamental principle in physics. If each atom has only one electron in such an orbital, the orbitals can combine and form a wave with two electrons, and that binds the two atoms. Yes, oversimplified. So the question was, how does phosphorus pentafluoride form? The fluorine atoms have one such unpaired electron each, while the phosphorus has three, plus a pair in one wave. Accordingly, you expect phosphorus to form a trifluoride, which it does, but how come the pentafluoride? Without going into too many details, my answer was that the paired electrons are unpaired, one is put into another wave, and to make this legitimate an extra node is placed in the second wave, a process called hybridization. This has been a fairly standard answer in textbooks.
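
The electron bookkeeping behind that answer is simple enough to set out explicitly; the snippet below is just that arithmetic (standard valence-shell counting, nothing deeper):

```python
# Valence electron bookkeeping for the PF5 puzzle.
# Phosphorus: five valence electrons, as one pair plus three unpaired.
phosphorus = {"3s": 2, "3p": 3}

unpaired = phosphorus["3p"]
print("bonds available without unpairing:", unpaired)   # 3 -> PF3

# Unpair the 3s pair, promoting one electron into a further wave (the
# hybridization step described above): the former pair now occupies two
# waves singly, so all five electrons can pair with fluorine electrons.
unpaired_after_promotion = unpaired + 2
print("bonds available after promotion:", unpaired_after_promotion)  # 5 -> PF5
```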

So, what happened next? I posted that, and also shared it to something called "The Chemistry Space". A moderator there rejected it, saying he did so because he did not believe it: computer calculations showed there was no extra node. Eh?? So I replied and asked how this computation got around the Exclusion Principle, and then, to be additionally annoying, I asked how the computation set the constants of integration. If you look at John Pople's Nobel lecture, you will see he set these constants for hydrocarbons by optimizing the results for 250 different hydrocarbons; leaving aside that this simply degenerates into a glorified empirical procedure, for phosphorus pentafluoride there is only one relevant compound. Needless to say, I received no answer, but I find this annoying. Sure, this issue is somewhat trivial, but it highlights the greater problem that some scientists are perfectly happy to hide behind obscure mathematics, or even more obscure computer programming.

It is interesting to consider what a theory should do. First, it should be consistent with what we see. Second, it should encompass as many different types of observation as possible. To show what I mean: in the phosphorus pentafluoride example, the method I described can be transferred to other structures of different molecules. That does not make it right, but at least it is not obviously wrong. The problem with a computation is that unless you know the details of how it was carried out, it cannot be applied elsewhere; interestingly, I saw a recent comment in a publication by the Royal Society of Chemistry that computations from a couple of decades ago cannot be checked or used because the details of the code are lost. Oops. A third requirement, in my opinion, is that a theory should assist in understanding what we see, and even lead to a prediction of something new.

Fundamental theories cannot be deduced; the principles have to come from nature. Thus mathematics describes what we see in quantum mechanics, but you could find an alternative mathematical description for anything else nature decided to do; classical mechanics, for example, is also fully self-consistent. For relativity, velocities are either additive or they are not, and you can find mathematics either way. The problem then is that if someone adopts a wrong premise early, mathematics can be made to fit a lot of other material to it. A major discovery and change of paradigm occurs only if a major fault is discovered that cannot be papered over.

So, to finish this post in a slightly different way than usual: a challenge. I once wrote a novel, Athene's Prophecy, in which the main character, in the first century, was asked by the "Goddess" Athene to prove that the Earth went around the sun. Can you do it, with what could reasonably be seen at the time? The details had already been worked out by Aristarchus of Samos, who also worked out the size and distance of the Moon and the Sun, and the huge distances are a possible clue. (Thanks to the limits of his equipment, Aristarchus' measurements were erroneous, but good enough to show the huge distances.) So there was already a theory that showed it might work. The problem was that the alternative also worked, as shown by Claudius Ptolemy. So you have to show why one is the true one.

Problems you might encounter are as follows. Aristotle had shown that the Earth cannot rotate. The argument was that if you threw a ball into the air so that when it reached the top of its flight it was directly above you, then, were the Earth rotating, the ball would land to the east of you. He did it, and it didn't, so the Earth does not rotate. (Can you see what is wrong? Hint: the argument implies the conservation of angular momentum, and that is correct.) Further, if the Earth went around the sun, orbital motion involves falling, and since heavier things fall faster than light things, the Earth would fall to pieces. Comets may well fall around the Sun. Another point was that since air rises, the cosmos must be full of air, and if the Earth went around the Sun, there would be a continual easterly wind.

So part of the problem in overturning any theory is first to find out what is wrong with the existing one. Then to assert you are correct, your theory has to do something the other theory cannot do, or show the other theory has something that falsifies it. The point of this challenge is to show by example just how difficult forming a scientific theory actually is, until you hear the answer and then it is easy.

Scientific Discoveries, How to Make Them, and COVID-19

An interesting problem for a scientist is how to discover something. The mediocre, of course, never even try, and it is probably only a small percentage that gets there. Basically, it is done by observing clues and then using logic to interpret them. The method is called induction, and it can lead to erroneous conclusions. Aristotle worked out how to do it, and then dropped the ball at least twice, in his two biggest blunders, when he forgot to follow his own advice. (In fairness, he probably made his blunders before he worked out his methodology, and lost interest in correcting them. The Physica was one of his earliest works.)

The clues come from nature, and picking them up relies on keeping the eyes open and, more importantly, the mind open. The first step is to seek patterns in what you observe, and try to correlate your observations. The key here is Aristotle's comment that the whole is more than the sum of the parts. That looks like New Age nonsense, but look at it from the mathematics of set theory. A set is simply a collection of data, usually expressed as numbers, but not just anything should go into it. As an example, I could list all green things I can see, but that would be pointless. I could list all plants, and now I am making progress into botany. The point is, the set comprises all the elements inside it together with the rule that conveys set membership. It is the rule we seek if we wish to make a discovery, and in effect we have to guess it by examining the data. This process is called induction, and if we get some true statements, we can move on to deduction.

There are, of course, problems. Thus we could say:

All plants have chlorophyll

Chlorophyll is green

Therefore all plants are green.

That is untrue. The chlorophyll will be green, but the plant may have additional dyes/pigments; an obvious case is red seaweed. The problem here is the lazy "therefore". Usually it is somewhat more difficult, especially in medicine.
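
The failure of the lazy "therefore" is easy to make concrete. Here is a toy version with invented data; phycoerythrin is the red pigment that masks the chlorophyll in red seaweed:

```python
# Toy counterexample to the syllogism: having chlorophyll (the true
# premise) does not force the visible colour to be green (the faulty
# conclusion) when another pigment masks it.
plants = {
    "oak":         {"chlorophyll"},
    "red seaweed": {"chlorophyll", "phycoerythrin"},  # red pigment dominates
}

def visible_colour(pigments):
    # Crude rule: chlorophyll's green shows only when nothing masks it.
    return "red" if "phycoerythrin" in pigments else "green"

for name, pigments in plants.items():
    assert "chlorophyll" in pigments             # every plant here has it...
    print(name, "->", visible_colour(pigments))  # ...yet not all are green
```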

Which, naturally in these times, brings me to COVID-19. What we find is that very young people, especially girls, are more or less untroubled. The old have a lot more trouble, and, it turns out, old men more so. Now, part of the trouble will be that the old have weaker immune systems, and often other weaknesses in their bodies. Unlike wine, age does not improve the body. That is admittedly a confusing observation to offer, because it leads nowhere and is somewhat obvious.

Anyway, we have a new observation: if we restrict ourselves to severe cases in hospitals, there is a serious excess of bald men. Now, a correlation is not causation, and trying to work out the cause can be fraught with difficulty. In this case, we can immediately dismiss the idea that hair itself has anything to do with it. However, baldness is also correlated with higher levels of androgens, the male sex hormones, and it was also found that the severe male cases usually had high levels of androgens. By itself, though, we can show this is not a cause either.

So this leads to a deeper investigation, and it is found that the virus uses an enzyme called TMPRSS2 to cleave the SARS-CoV-2 spike protein, which permits the cleaved spike to attach to the ACE2 receptors on the patient's cells, and thus permits the viral RNA to enter the cell and begin replicating. What the androgens do is activate the gene in the patient's cells that expresses TMPRSS2, thereby increasing the amount of the enzyme needed to attack a cell. This suggests as a treatment something that will inhibit that gene so no TMPRSS2 is expressed. We await developments. (Suppressing androgens in men is not a good idea; they start to grow breasts. However, it also suggests that ACE inhibitors, used to reduce hypertension, might offer some assistance.) Now, the value of a theory can be shown by whether it helps explain something else. In this case, it argues that pre-puberty children should be more resistant, and that girls should keep this benefit longer. That is found. It does not prove we are correct, but it is comforting. That is an example of induced science. Induction does not necessarily produce the truth, and conclusions can be wrong. We find out by pursuing the consequences, and either finding we have discovered something, or going back to the drawing board.

The Sociodynamics of Science

The title is a bit of an exaggeration of the importance of this post; nevertheless, since I was at what was probably my last scientific conference (NZ Institute of Chemistry, at Christchurch) I could not resist looking around at the behaviour as well as the science. I also gave two presentations. Speaking to an audience gives the speaker an opportunity to order the presentation so as to give the most force to its surprising parts, though not many took advantage of this. Overall, very few, if any (apart from yours truly), seemed to want to provide their audience with something that might be uncomfortable for their preconceived notions.

First, the general part provided great support for Thomas Kuhn's analysis. Most of the invited and keynote speakers illustrated an interesting question: why are they speaking? Very few actually wished to educate or convince anyone of anything in particular, and personally I found the few who did to be by far the most interesting. Most of the presentations from academics could be summarised as, "I have a huge number of research students and here is what they have done." What followed was a very large number of results, but seldom an interesting unifying principle. Chemistry tends to be susceptible to this, as a very common student research program is to try to make a variety of related compounds. This may well be useful, but if we do not see why the approach was taken, it tends to feel like filling up some compendium of compounds, or, as Rutherford put it rather acidly, "stamp collecting". These talks are characterised by the speaker trying to get in as many compounds as possible, so they keep talking and use up the allocated question time. I suspect that one of the purposes of these presentations is to say, "Look at what we have done. This has given our graduate students a good number of scientific publications, so if you are thinking of being a grad student, why not come here?" I can readily understand that line of thinking, but its relevance for older scientists is questionable. There were a few presentations where the output was of more general interest, though. I found the odd presentation that showed how to do something new, with potentially wide applications, to be of particular interest.

Now to the personal. My first presentation was a summary of my biogenesis approach. It may have had too much information across too wide a field, but the interesting point was that it generated a discussion at the end relating to my concept of how homochirality was generated. My argument is that reproduction depends on it, because the geometry prevents the formation of a second strand if the first strand is not either entirely left-handed or entirely right-handed in its pitch. The issue then was that it was pure chance that D-ribose-containing helices predominated, in part because the chance of getting a long-enough homochiral strand is very remote, and when one arises it takes up all the resources and predominates. The legitimate question then is, why doesn't the other-handed helix eventually arise? It may be slower to do so, but it is not necessarily impossible. My partial answer is that the mer units are also used to bind to some other units important for life, to give them solubility, and the wrong sort gets used up and does not build up in concentration. Maybe that is so, but there is no evidence.

It was my second presentation that would be controversial, and it was interesting to watch the expressions. Part of the problem for me was that it was the last such presentation (there were some closing speakers after me, and after morning tea), and there is something about the end of a conference: everyone is busy thinking about how to get to the airport, and so on, so they tend to lose concentration. My first slide put up three propositions: the wave functions everyone uses for atomic orbitals are wrong; because of that, the calculation of the chemical bond requires the use of a hitherto unrecognised quantum effect (a very specific expression involving only universally recognised quantum numbers); and finally, the commonly held belief that relativistic effects on the inner electrons have a major effect on the valence electrons of the heaviest elements is wrong.

As you might expect, this was greeted initially with yawns and disinterest: this was going to be wrong. At least, that seemed to be written over their faces. I then diverted to explain my guidance wave interpretation, which is essentially the de Broglie pilot wave concept, but with two additions: an application of Euler's complex number theory that everyone seems to have missed, and, secondly, the argument that if the wave really causes diffraction in the two-slit-type experiment, it has to travel at the same speed as the particle. These two points lead to serious simplifications in the calculation of the properties of chemical bonds. The next step was to put up a lot of evidence for the different wave functions, with about 70 data points spanning a selection of atoms, of which about twenty supported the absence of any significant relativistic effect. (This does not say relativity is wrong, merely that its effects on valence electrons are too small to be noticed at this level of analysis.) What this was effectively saying was that most of the current calculations give agreement with observation only when liberal use is made of assignable constants, which can conveniently be adjusted so you get the "right" answer.

So, question time. One question surprised me: does my new approach do anything new? I argued that the facts that everyone is using the wrong wave functions, that there is a quantum effect nobody has recognised, and that everyone is wrong about those relativistic effects could be considered new. Yes, but have you got a prediction? This was someone difficult to satisfy. Well, if you have access to a good physics lab, I suggested, make an adjustment to the delayed-choice quantum eraser experiment (and I outlined the simple change) and, if my theory is correct, you will reach the opposite conclusion. If you don't agree with me, then you should do the experiment to prove I am wrong. The stunned expressions were worth the cost of going to the conference. Not that anyone will do the experiment. That would show interest in finding the truth, and in fairness, it is more a job for a physicist.

An Ugly Turn for Science

I suspect there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, "The Structure of Scientific Revolutions", that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and if a result cannot be, it tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported without comment in a very small section two-thirds of the way through the published paper, where nobody will notice it. I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment. This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc, and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on and they want to be comfortable. One can criticise that but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong. 

The bad news is that much deeper rot is appearing, as highlighted by an article in the journal "Science", vol. 365, p. 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin, and which led, two days later, to Trump asserting that he knew unquestionably that Assad did it, and firing 59 cruise missiles at a Syrian base.

It then appeared that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned with what he found. If his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor from MIT and a world expert on missile defence, and after discussion he, Postol, and five other scientists carried out an investigation. The end result was that they wrote a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal "Science and Global Security" (SGS), and following peer review was authorised for publication. So far, science working as it should. The next step is that if people do not agree, they should either dispute the evidence by providing contrary evidence, or dispute the analysis of the evidence. But that is not what happened.

Apparently the manuscript was put online as an "advanced publication", and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard was a major in the US military and had been deployed in Syria in a sufficiently senior position to have a realistic idea of what went on. She has stated she believed the evidence was that Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated, and if evidence is found he should be accused of war crimes, but if evidence is not found he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She apparently found the preprint and put it on her blog, which she is using in her Presidential run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Elliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that to be well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: "You must approach this latest analysis with great caution", the paper would be "misused to cover up the [Assad] regime's crimes" and would "permanently stain the reputation of your journal". The journal then pulled the paper from its publication queue, at first saying it would be edited, but then backtracking completely. The editor of the journal is quoted in Science as saying, "In hindsight we probably should have sent it to a different set of reviewers." I find this comment particularly abhorrent. The editor should not select reviewers on the grounds that they will deliver the verdict the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider suppressing an analysis solely on grounds of political expediency, with no interest in finding the truth. It is also true that I hold a similar view of the incident itself. I saw a TV clip, taken within a day of the event, in which people were taking samples from the hole where the sarin was allegedly delivered without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the primary site to do serious damage, but nobody was affected. But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some "don't approve". The reasons for rejecting a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if the paper is demonstrably wrong, then it should be easy to demonstrate where it is wrong. What do you all think?

Some Shortcomings of Science

In a previous post, in reference to the blog repost, I stated I would show some of the shortcomings of science, so here goes.

One of the obvious failings is that people seem happy to ignore what should convince them. The first sign I saw of this type of problem was in my very early years as a scientist. Sir Richard Doll produced a report that convincingly (at least to me) linked smoking to cancer. Out came a number of papers rubbishing this, largely from people employed by the tobacco industry. Here we have a clear conflict, and while it is ethically correct to try to show that some hypothesis is wrong, the attempt should be based on sound logic. Now, I believe that there are usually a very few results, maybe as few as one specific result, that make a conclusion unassailable. In this case, chemists isolated the constituents of cigarette smoke and found over 200 suspected carcinogens, and trials with some of these on lab rats were conclusive: as an example, one dab of pure 3,4-benzopyrene gave an almost 100% probability of inducing a tumour. Now, that is a far greater concentration than any person will get from smoking, and people are not rats; nevertheless, this showed me that on any reasonable assessment, smoking is a bad idea. (It was also a bad idea for a young organic chemist: who needs an ignition source a few centimetres in front of the face when handling volatile solvents?) Yet fifty years or so later, people continue to smoke. It seems to be a Faustian attitude: the cancer will come decades later, or for some lucky ones not at all, so ignore the warning.

A similar situation is occurring now with climate change. The critical piece of information for me is that during the 1990s and early 2000s (the period of the study) it was shown there is a net power input to the oceans of 0.64 W/m². If there is a continuing net energy input to the oceans, they must be warming. Actually, the Tasman has been clearly warming, and the evidence from other oceans supports that. So the planet is heating. Yet there are a small number of "deniers" who put their heads in the sand and refuse to acknowledge this, as if by doing so the problem goes away. Scientists seem unable to make people face up to the fact that the problem must be dealt with now, while the price is not paid until much later. As an example, in 2014 US Senate majority leader Mitch McConnell said: "I am not a scientist. I'm interested in protecting Kentucky's economy." He forgot to add, now.
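
To see what 0.64 W/m² means in absolute terms, here is my own back-of-envelope arithmetic with rounded figures (the ocean area and layer depth are illustrative assumptions, not values from the study):

```python
# What does a net input of 0.64 W/m^2 to the oceans add up to per year?
flux = 0.64                  # W/m^2, the figure quoted above
ocean_area = 3.6e14          # m^2, approximate area of the world's oceans
seconds_per_year = 3.156e7

joules_per_year = flux * ocean_area * seconds_per_year
print(f"{joules_per_year:.1e} J per year")   # roughly 7e21 J

# For scale, if that heat were confined to the top 100 m of ocean
# (density ~1025 kg/m^3, specific heat ~4000 J/kg/K):
mass = 1025 * ocean_area * 100.0             # kg of water in the layer
dT = joules_per_year / (mass * 4000)
print(f"~{dT * 1000:.0f} millikelvin of warming per year in that layer")
```

A few hundredths of a degree a year sounds small until you remember how much water that is, and that it accumulates year after year.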

The problem of ignoring what you do not like is general and pervasive, as I quickly learned while doing my PhD. My PhD was somewhat unusual in that I chose the topic and designed the project. No need for details here, but I knew the department, and my supervisor, had spent a lot of effort establishing constants for something called the Hammett equation. There was a great debate going on about whether the cyclopropane ring could delocalise electronic charge in the same way as a double bond, only more weakly. This equation could actually address that question. The very limited use of it by others at the start of my project had been inconclusive, for reasons we need not go into here. Anyway, by the time I finished, my results showed quite conclusively that it did not, but the general consensus, based essentially on the observation that positive electric charge was strongly stabilised by the ring, and on molecular orbital theory (which assumes the delocalization initially, so was hardly conclusive on this question), was that it did. My supervisor made one really good suggestion as to what to do when I ran into trouble, and this was the part that showed the effect most clearly. But when it became clear that everyone else was agreeing on the opposite, and he had moved to a new position, he refused to publish that part.

This was an example of what I believe is the biggest failing. The observation everyone clung to was unexpected and needed a new explanation, and what they came up with most certainly gave the right answer for that specific case. However, many times there is more than one possible explanation, and I came up with an alternative based on classical electric field theory that also predicted positive charge would be stabilised, and by how much, but it further predicted negative charge would be destabilised. The delocalization concept required both to be stabilised. So there was a means of distinguishing them, and there was a very small amount of clear evidence that negative charge was destabilised. Why only a small amount of evidence? Well, most attempts at making such compounds failed outright, which is in accord with the compounds being unstable, but it is not definitive.

So what happened? A review came out that “convincingly showed” the answer was yes. The convincing part was that it cited a deluge of “me too” work on the stabilization of positive charge. It ignored my work, and as I later found out when I wrote a review, it ignored over 60 different types of evidence that showed results that contradicted the “yes” answer. My review was not published because it appears chemistry journals do not publish logic analyses. I could not be bothered rewriting, although the draft document is on the web if anyone is interested.

The point this shows is that once a paradigm is embedded, even if on shaky grounds, it is very hard to dislodge, in accord with what Thomas Kuhn noted in “The structure of scientific revolutions”. One of the points Kuhn noted was if the paradigm had evidence, scientists would rush to write papers confirming the paradigm by doing minor variations on what worked. That happened above: they were not interested in testing the hypothesis; they were interested in getting easy papers published to advance their careers. Kuhn also noted that observations that contradict the paradigm are ignored as long as they can be. Maybe over 60 different types of observations that contradict, or falsify, the paradigm is a record? I don’t know, but I suspect the chemical community will not be interested in finding out.