No sooner do I publish a blog on disruptive science than what does Nature do? It publishes an editorial https://doi.org/10.1038/d41586-023-00183-1 questioning whether science really is becoming less disruptive (which is fair enough) and then, rather bizarrely, asking whether it matters. Does the journal not actually want science to advance, but merely to restrict itself to the comfortable? Interestingly, the disruptive examples they cite are Watson and Crick (1953, the structure of DNA) and the first planet found orbiting another star. In my opinion, these are not disruptive. In Crick’s case, he merely used Rosalind Franklin’s data, and in the second case, such a discovery had been expected for years; indeed, I had seen a claim about twenty years earlier for a Jupiter-style planet around Epsilon Eridani. (Unfortunately, I did not write down the reference because I was not involved in that area yet.) That result was rubbished on the grounds that the data were too inaccurate, yet the values I wrote down agree quite well with what we now accept. I am always suspicious of discounting a result that not only got a good value for the semimajor axis but also proposed a significantly eccentric orbit. For me, these two papers are merely obvious advances on previous theory or logic.
The test proposed by Nature for a disruptive paper is based on citations: if a disruptive paper is cited, its predecessors are less likely to be cited alongside it, whereas if the paper is consolidating, the previously disruptive papers continue to be cited. If this were the criterion, probably one of the most disruptive papers would be the one on the EPR paradox (Einstein, A., Podolsky, B., Rosen, N. 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47: 777-780.) Yet the remarkable thing about this paper is that people fall over themselves to point out that Einstein “got it wrong”. (That they do not actually answer Einstein’s point seems to be irrelevant to them.)
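To make the citation test concrete, here is a minimal sketch of how such a disruption score can be computed. This is a simplification in the spirit of the "CD index" used in the disruptiveness literature, not the exact formula (which also accounts for papers that cite only the focal paper's predecessors); the function name and data layout are my own illustration.

```python
def disruption_index(focal_refs, citing_papers):
    """Simplified disruption score for a focal paper.

    focal_refs: set of papers the focal paper itself cites (its predecessors).
    citing_papers: list of sets, one per paper that cites the focal paper;
                   each set holds that citing paper's own references.

    A citing paper that ignores the predecessors counts as evidence of
    disruption (+1); one that cites the focal paper alongside its
    predecessors counts as evidence of consolidation (-1).
    """
    n_focal_only = 0  # cite the focal paper but none of its predecessors
    n_both = 0        # cite the focal paper and at least one predecessor
    for refs in citing_papers:
        if refs & focal_refs:
            n_both += 1
        else:
            n_focal_only += 1
    total = n_focal_only + n_both
    if total == 0:
        return 0.0
    # +1 = fully disruptive (predecessors forgotten),
    # -1 = fully consolidating (predecessors still cited every time)
    return (n_focal_only - n_both) / total
```

On this toy measure, a paper whose citers stop citing its predecessors scores near +1, while a paper always cited together with its sources scores near -1, which is exactly why the EPR paper would rank as highly disruptive under such a scheme.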
Nature spoke to a number of scholars who study science and innovation. Some were worried by Park’s paper, one worry being that declining disruptiveness could be linked to the sluggish productivity and economic growth seen in many parts of the world. Sorry, but I find that quite strange. It is true that an absence of discoveries is not exactly helpful, but economic use of a scientific discovery usually comes decades after the discovery. There is prolonged engineering, and if the product is novel, a market for it has to be developed; then it usually has to displace something else. Very little economic growth follows quickly from scientific discovery. No need to go down this rabbit hole.
Information overload was considered a reason, and it was suggested that artificial intelligence could sift and sort useful information to identify projects with breakthrough potential. I completely disagree with this as far as disruption is concerned. Anyone who has done a computer search of scientific papers will know that unless you have a very clear idea of what you are looking for, you get a bewildering amount of irrelevant material. Thus, if I want the specific value of some measurement, the computer search will give me in seconds what previously could have taken days. But if the search constraints are abstract, almost anything can come out, including erroneous material, examples of which appeared in my previous post. The computer, so far, cannot make value judgments because it has no criteria for doing so. What it will do is comply with established thinking, because established ideas supply the constraints for the search. Disruption is something you did not expect. How can a computer search for what is neither expected nor known, particularly when the unexpected is usually mentioned as an uncomfortable aside in papers and appears in neither abstracts nor keywords? The computer would have to thoroughly understand the entire subject to appreciate the anomaly, and artificial intelligence is still a long way from that.
In a similar vein, Nature published a news item dated January 18. Apparently, people have been analysing advertisements and have come across something both remarkable and depressing: there are hundreds of advertisements offering authorship of a scientific paper in a reputable journal for sale. Prices range from hundreds to thousands of US dollars depending on the research area and the journal’s prestige, and the advertisement often cites the title of the paper, the journal, when it will be published (how do they know that?) and the position of the authorship slots on offer. This is apparently a multimillion-dollar industry. Interestingly, advertising that specifies a title and a journal immediately raises suspicion, and a number of papers have been retracted. Another red flag is when further authors are added after peer review; if the authors had actually contributed to the paper, they should have been known at the start. The question then is, why would anyone pay good coin for that? Unfortunately, the reason is depressingly simple: you need more publications and citations to get more money, promotion, prizes, tenure, etc. It is a scheme to make money from those whose desire for position exceeds their skill level. And it works because nobody ever reads these papers anyway; the chances of being asked by anyone for details are so low it would be extremely unlucky to be caught out that way. Such an industry, of course, will be anything but disruptive. It only works as long as nobody with enough skill to recognize an anomaly actually reads the papers, because then the paper would become famous, and thoroughly examined. This industry flourishes because counting citations, rather than understanding content, has become the method of evaluating science. In short, evaluation by ignorant committee.