Scientific Journals Accused of Behaving Badly

I discussed peer review in my previous post, and immediately came across an article on “Predatory Publishing” (Science 367, p 129). It reports that six out of ten articles published in a sample of what it calls “predatory” journals received no citations, i.e. nobody published a further paper referring to the work in them. The only reasonable inference to be taken from what followed was that this work was very much worse than that published in more established journals. So, first, what are “predatory” journals? Are they inherently bad, is the work described there seriously worse than in the established ones, or is this criticism more a defence of the elite positions of some? I must immediately state I don’t know, because the article gave no specific examples, of either science or journals, that I could analyse, although the chances are I would not have been able to read such journals anyway. There are so many journals out there that libraries are generally restricted by finance in what they purchase.

Which gets to the first response. Maybe there are no citations because nobody is reading the articles, because libraries do not buy the journals. There can, of course, be other good reasons why a paper is not cited: the subject may be of very narrow interest, and the paper was published simply to archive a fact. I have some papers that fit that description. For a while I had a contract to establish the chemical structures of polysaccharides from some New Zealand seaweeds and publish the results. If the end result is clearly correct, and if the polysaccharide is unique to a seaweed found only in New Zealand and has no immediate use, why would anyone reference it? One can argue that the work ended up being not that interesting, but before starting I did not know that; after completion I did, and by publishing, so will everyone else. If the results never find a use, well, at least we know why. From my point of view, they were useful; I had a contract and I fulfilled it. When you are independent and do not have such things as a secure salary, contracts are valuable.

The article defined a “predatory” journal as one that (a) charges to publish (page charges were well established in the mainstream journals); (b) uses aggressive marketing tactics (so do the mainstream journals); and (c) offers “little or no peer review” (I have no idea how they reached this conclusion, because peer review is not open to examination). As an aside, the marketing tactics of the big conglomerates are not pretty either, but they have the advantage of being established, and libraries cannot usually bring themselves to stop using them, as the subscription is “all or nothing” with a lot of journals involved, at least one or two of which are essential for a University.

The next criticism was that these upstarts were getting too much attention. And horrors, 40% of the articles drew at least one citation. You can’t win against this sort of critic: it is bad because articles are not cited, and bad because they are. I find citations a poor indication of importance. Many scientists in the West cite their friends frequently, irrespective of whether the article cited has any relevance, because they know nobody checks, and the number of citations is important in the West for getting grants. You cite them, they cite you, everybody wins, except those not in the loop. This is a self-help game.

The next criticism is that there are too many of them. Actually, the same could be said of mainstream journals; take a look at the number of journals from Elsevier. Even worse, many of these upstarts come from Africa and Asia. How dare they challenge our established superiority! Another criticism: the articles are not cited in Wikipedia. As if citations in Wikipedia were important. So why do scientists in Africa and Asia publish in such journals? The article suggests an answer: publication is faster. Hmm, fancy going for better performance! If that is a problem, the answer would surely be to fix the speed of the “approved” journals, but that is not going to happen any time soon. Also, from the Africans’ perspective, their papers may well be more likely to be rejected in the peer review of Western journals because they are not using the most modern equipment, in part because they cannot afford it. The work may be less interesting to Western eyes, but is that relevant if it is interesting in Africa? I can’t help but think this article was more a sign of “protecting their turf” than of trying to improve the situation.

Peer Review – a Flawed Process for Science

Back from a Christmas break, and I hope all my readers had a happy and satisfying break. 2020 has arrived, more with a bang than a whimper, but while languishing in what has started off as a dreadful summer here, thanks to Australia (the heat over Central Australia has stirred up the Southern Ocean to give us cold air, while their bush fires have given us smoky air, even though we are about 2/3 the width of the Atlantic away) I have been thinking of how science progresses, or doesn’t. One of the thoughts that crossed my mind was the assertion that we must believe climate change is real because the data are published in peer-reviewed journals. Climate change is certainly real, Australia is certainly on fire, but what do you make of the reference to peer-reviewed journals? Does such publication mean it is right, and that peer review is some sort of gold standard?

Unfortunately, that is not necessarily so, and while the process filters out some abysmal rubbish it also lets through some fairly mediocre stuff, although we can live with that. If the work reports experimental observations we should have more faith in it, right? After all, it will have been looked at by experts in the field who use the same techniques, and they will filter out errors. There are two reasons why that is not so. The first is that the modern scientific paper, written to save space, usually gives insufficient evidence to tell. The second is illustrated by climate change: there are a few outlets populated solely by deniers, in which one denier reviews another denier’s work; in other words, prejudice rules.

Chemistry World reported a study carried out by the Royal Society of Chemistry that reviewed the performance of peer review and came to the conclusion that peer review is sexist. Female corresponding authors made up 23.9% of submissions, but 25.6% of the rejections without peer review, and only 22.9% of the papers accepted after peer review. Female corresponding authors are less likely to receive an immediate “accept” or “accept with minor revisions”, but interestingly, if the reviewer is female, males are less likely to receive that. These figures come from 700,000 submissions, so although the differences are not very big, the question remains: are they meaningful, and if so, what do they mean?

There is a danger in drawing conclusions from statistics because correlation does not imply cause. It may be nothing more than that women are more likely to be younger, and hence, being early in their careers, more likely to need help, or that they are more likely to have sent the paper to a less than appropriate journal, since journals tend to publish only in very narrow fields. It could also indicate that style is more important than substance, because the only conceivable difference with a gender bias is the style used in presentation. It would be of greater interest to check how status affects the decision. Is a paper from Harvard, say, more likely to be accepted than a paper from a minor college, or from somewhere non-academic, such as a patent office?

One of my post-doc supervisors once advised me that a new idea will not get published, but publication is straightforward if you report the melting point of a new compound. Maybe he was a little bitter, but it raises the question: does peer review filter out critical material because it does not conform to the required degree of comfort and compliance with standard thinking? Is important material rejected simply because of the prejudices or incompetence of the reviewer? What happens if the reviewer is not a true peer? Irrespective of what the editor tells the author, is a paper that criticizes the current paradigm rejected on that ground? I have had some rather pathetic experiences, and I expect a number of others have too, but the counter to that is, maybe the papers had insufficient merit. That is the simple out; after all, who am I?

Accordingly, I shall end by citing someone else. This relates to a paper about spacetime, which at its minimum is a useful trick for solving the equations of General Relativity. However, for some people spacetime is actually a “thing”; you hear about the “fabric of spacetime”, and in an attempt to quantize it, scientists have postulated that it exists in tiny lumps. In 1952 an essay was written that argued against the prevailing view that spacetime is filled with fields that froth with “virtual particles”. I don’t know whether it was right or not because nobody would publish it, so it is not to be discussed in polite scientific society. It was probably rejected because it went totally against the prevailing view, and we must not challenge that. And no, it was not written by an ignorant fool, although it should have been judged on content and not on the status of the person. The author was Albert Einstein, who could legitimately argue that he knew a thing or two about General Relativity, so nobody is immune to such rejection. If you want to see such prejudice in action, try arguing that quantum field theory is flawed in front of an advocate. You will be sent to the corner wearing a conical hat. The advocate will argue that the theory has calculated the magnetic moment of the electron, and that this is the most accurate calculation in physics. The counter is yes, but only through some rather questionable mathematics (like cancelling out infinities), while the same theory’s calculation of the cosmological constant that appears in Einstein’s relativity is out by about 120 orders of magnitude (a factor of 1 followed by 120 zeros), the worst error in all of physics. Oops!

A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start, there are interpretive issues to sort out. The first issue is that either there is a physical wave or there is not. While there is no definitive answer to this, I argue there is, because something has to go through both slits to generate the interference pattern in the two-slit experiment, yet the evidence is that the particle always goes through only one slit. That means something else must go through the other.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means the value includes the square root of minus one, sometimes called an imaginary number for obvious reasons.) However, Euler, the mathematician who really developed the theory of complex numbers, showed that if the phase evolves, as wave phases always do by definition of an oscillation, and as action does with time, there are two points per cycle at which the function is real, and these are at the antinodes of the wave. That means the amplitude of the quantal matter wave should have a real value there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm’s quantum potential, except for the clear difference that it has a definite value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.
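To put that in standard notation (this is just Euler’s relation, with the phase written, as is conventional, in terms of the action; reading the real points as the wave antinodes is my interpretation above):

\[ \psi = A e^{i\phi} = A(\cos\phi + i\sin\phi), \qquad \phi = \frac{2\pi S}{h}, \]

so ψ is real, and equal to ±A, exactly when φ = nπ, i.e. when the action S is a half-integral multiple of h.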

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory merely says that A^2 = 1 per particle, that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle’s motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular the chemical bond. The usual way of calculating a bond involves calculating the probabilities of where all the electrons will be, then calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, and so on; and throughout this procedure, getting the positional probabilities requires double integrations, because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.
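For comparison, a minimal statement of the two readings, using nothing beyond textbook definitions:

\[ E_{\text{wave}} \propto A^2 \quad \text{(classical wave)}, \qquad \int \psi^{*}\psi \, d^3x = 1 \quad \text{(standard normalisation: one particle, somewhere).} \]

The first statement carries energy information; the second, by construction, only says the particle is somewhere.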

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to lying between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero, otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added at the antinode. For the hydrogen molecule, the electrons are indistinguishable, so the energy of the two new interactions is equivalent to that of the two from the linear addition, and therefore the bond energy of H2 is 1/3 the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and by introducing electric field renormalisation to comply with Maxwell’s equations. Needless to say, this did not generate much excitement in the conference audience. You cannot dazzle with obscure mathematics doing it this way.
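For what the arithmetic is worth (the Rydberg energy is the textbook value; the roughly 0.3% comparison with the measured bond energy is my claim above):

\[ E_{\text{bond}}(\mathrm{H_2}) \approx \tfrac{1}{3}E_R = \tfrac{1}{3} \times 13.606\ \mathrm{eV} \approx 4.54\ \mathrm{eV}. \]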

The first shock for the audience came when I introduced my Certainty Principle which, roughly stated, is that the action of a stationary state must be quantized, and further, because of the force relationship introduced above, the action must be the sum of that of the participating atoms. Further, the periodic time of the initial electron interactions must stay constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extra interactions as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron-electron repulsion compensates. This action relationship is also the first time a real cause has been proposed for there being a bond radius that is constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect: the waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.
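The constant-action logic can be illustrated, though certainly not derived, from the Bohr-level hydrogen atom, where the textbook relations give

\[ |E_n|\,\tau_n = \frac{nh}{2}, \]

so that at fixed action (fixed n), halving the periodic time τ doubles the available energy. Applying that logic to electron pairing in a bond is my proposition, not standard theory.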

The next piece of “shock” comes with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation for a Coulomb field. However, if you do that, you find the calculated values are too small; by caesium, the calculated energy is out by an order of magnitude. The standard answer is that the outer electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be right, because the energy is determined at the major antinode. (If you accept Maxwell’s equations, I argue it cannot be right either, but that is too complicated to put here.) So my answer is that the wave is actually a sum of component waves with different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers. I produced tables showing that the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory argues that heavy elements like gold have unusual properties because these “penetrations” lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that the properties of elements like francium, gold, thallium, and lead fitted my functions as well as or better than those of any of the other elements.

What I did not tell them was that the tables of data I showed them had been published in a peer-reviewed journal over thirty years before, but nobody took any notice. As someone said to me when I mentioned this as an aside, “If it isn’t published in the Physical Review or J. Chem. Phys., nobody reads it.” So much for publishing in academic journals. As to why I skipped those two, I published that work while I was starting my company, and $US 1,000 per page did not strike me as a good investment.

So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid-January, as this is the summer holiday season here.

What does Quantum Mechanics Mean?

Patrice Ayme gave a long comment on my previous post that effectively asked me to explain in some detail the significance of some of my comments on my conference talk involving quantum mechanics. But before that, I should explain why there is even a problem, and I apologise if the following potted history seems a little turgid. Unfortunately, the background is important.

First, we are familiar with classical mechanics, where, given all the necessary conditions, exact values of the position and momentum of something can be calculated for any future time, and thanks to Newton and Leibniz, we do this through differential equations involving familiar concepts such as force, time, position, etc. Thus suppose we shoot an arrow into the air, ignore friction, and want to know where it is, and when. Velocity is the derivative of position with respect to time, so we take the velocity and integrate it. However, to get an answer, because there are two degrees of freedom (assuming we know which direction it was shot), we get two constants from the two integrations. In classical mechanics these are easily assigned: the horizontal constant depends on where it was fired from, and the other comes from the angle of elevation.
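A minimal sketch of that arrow, assuming a launch point (x_0, y_0), speed v_0, elevation θ, and no friction (the bookkeeping of which constant is which differs slightly from my wording above, but the point is the same):

\[ \dot{x} = v_0\cos\theta, \quad \dot{y} = v_0\sin\theta - g t \;\;\Rightarrow\;\; x(t) = x_0 + v_0\cos\theta\, t, \quad y(t) = y_0 + v_0\sin\theta\, t - \tfrac{1}{2}g t^2 , \]

with the constants of integration x_0 and y_0 fixed by where it was fired from.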

Classical mechanics reached a mathematical peak through Lagrange and Hamilton. Lagrange introduced a term, usually the difference between the kinetic and potential energy, and thus converted the problem from one of forces to one of energy. Hamilton and Jacobi converted it to one involving action, which is the time integral of the Lagrangian. The significance of this is that, in one sense, action summarises everything involved in our particle going from A to B. All of these variations are equivalent and merely reflect alternative ways of going about the problem; however, the Hamilton-Jacobi equation is of special significance because it can be transformed into a wave expression. When Hamilton did this, there were undoubtedly a lot of yawns. Only an abstract mathematician would want to represent a cannonball as a wave.
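In the standard notation, the objects referred to are

\[ L = T - V, \qquad S = \int L\, dt, \qquad \frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}\right) = 0, \]

the last being the Hamilton-Jacobi equation, which is the form that can be recast as a wave equation.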

So what is a wave? While energy can be transmitted by particles moving (like a cannon ball) waves transmit energy without moving matter, apart from a small local oscillation. Thus if you place a cork on the sea far from land, the cork basically goes around in a circle, but on average stays in the same place. If there is an ocean current, that will be superimposed on the circular motion without affecting it. The wave has two terms required to describe it: an amplitude (how big is the oscillation?) and a phase (where on the circle is it?).
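A one-dimensional example of those two descriptors, in the usual notation:

\[ y(x,t) = A\sin(kx - \omega t + \varphi_0), \]

where A is the amplitude (how big the oscillation is) and the argument of the sine is the phase (where on the cycle the oscillation is).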

Then, at the end of the 19th century, classical mechanics suddenly gave wrong answers for what was occurring at the atomic level. As a hot body cools, it should give off radiation from all possible oscillators, and it does not. To explain this, Planck assumed radiation was given off in discrete packets, and introduced the quantum of action h. Einstein, recognizing that the Principle of Microscopic Reversibility should apply, argued that light should be absorbed in discrete packets as well, which solved the problem of the photoelectric effect. A big problem arose with atoms, which have a positively charged nucleus with electrons moving around it. To move, electrons must accelerate, and hence should radiate energy and spiral into the nucleus. They don’t. Bohr “solved” this problem with the ad hoc assumption that angular momentum was quantised; nevertheless his circular orbits (like planetary orbits) are wrong. For example, if they occurred, hydrogen would be a powerful magnet, and it isn’t. Oops. Undeterred, Sommerfeld recognised that angular momentum is dimensionally equivalent to action, and he recast the theory in terms of action integrals. So near, but so far.
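The quantisation steps mentioned, in their textbook forms:

\[ E = h\nu \ \text{(Planck, Einstein)}, \qquad mvr = n\frac{h}{2\pi} \ \text{(Bohr)}, \qquad \oint p\, dq = nh \ \text{(Sommerfeld)}. \]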

The next step involved the French physicist de Broglie. With a little algebra and a bit more inspiration, he represented the motion in terms of momentum and a wavelength, linked by the quantum of action. At this point it was noted that if you fired very few electrons through two slits an appropriate distance apart and let them travel to a screen, each electron was registered as a point, but if you kept going, the points started to form a diffraction pattern, the characteristic of waves. The way to accommodate this was that if you take Hamilton’s wave approach, do a couple of pages of algebra, and quantise the period by making the phase complex and proportional to the action divided by ħ, i.e. h/2π (to be dimensionally correct, because a phase must be a pure number), you arrive at the Schrödinger equation, which is a partial differential equation and thus fiendishly difficult to solve. About the same time, Heisenberg introduced what we call the Uncertainty Principle, which states that you cannot know both the position and the momentum such that the product of their uncertainties is less than of order h/2π. Mathematicians then reformulated the Schrödinger equation into what we call the state vector formalism, in part to ensure that there are no cunning tricks to get around the Uncertainty Principle.
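The relations referred to, in their standard forms (the way the phase is written is the usual convention):

\[ \lambda = \frac{h}{p}, \qquad \psi = A\,e^{\,iS/\hbar} \ \ (\hbar = h/2\pi), \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2}. \]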

The Schrödinger equation expresses the energy in terms of a wave function ψ. That immediately raised the question: what does ψ mean? The square of a wave amplitude usually indicates the energy transmitted by the wave. Because ψ is complex, Born interpreted ψ.ψ* as indicating the probability that you would find the particle at the nominated point. The state vector formalism then proposed that ψ.ψ* gives the probabilities of a state showing certain properties at that point. There was an immediate problem: no experiment could detect the wave. Either there is a wave or there is not. De Broglie and Bohm assumed there was and developed what we call the pilot wave theory, but almost all physicists assume that, because you cannot detect it, there is no actual wave.
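Born’s rule in its usual statement: the probability of finding the particle in a small volume d^3x around the point x is

\[ P(\mathbf{x})\, d^3x = \psi^{*}(\mathbf{x})\,\psi(\mathbf{x})\, d^3x . \]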

What do we know happens? First, the particle is always detected as a point, and it is the sum of the points that gives the diffraction pattern characteristic of waves. You never see half a particle. This becomes significant because you can get this diffraction pattern using molecules made from 60 carbon atoms. In the two-slit experiment, what are called weak measurements have shown that the particle always goes through only one slit, and not only that, it does so with exactly the pattern predicted by David Bohm. That triumph appears to be ignored. Another odd feature is that while momentum and energy are part of uncertainty relationships, unlike the random variation in something like Brownian motion, the uncertainty never grows.

Now for the problems. The state vector formalism considers ψ to represent states. Further, because waves add linearly, the state may be a linear superposition of possibilities. If this merely meant that the probabilities represented what you do not know, there would be no problem, but instead there is a near-mystical assertion that all possibilities are present until the subject is observed, at which point the state collapses to what you see. Schrödinger could not tolerate this, not least because the derivation of his equation is incompatible with this interpretation, and he presented his famous cat paradox, in which a cat is neither dead nor alive but in some sort of quantum superposition until observed. The result was the opposite of what he expected: this ridiculous outcome was asserted to be true, and we have the peculiar logic that you cannot prove it is not true (because the state collapses if you try to observe the cat). Equally, you cannot prove it is true, but that does not deter the mystics.

However, there is worse. Recall I noted that when we integrate we have to assign the necessary constants. When all positions are uncertain, and when we are merely dealing with probabilities in superposition, how do you do this? As John Pople stated in his Nobel lecture, for the chemical bonds of hydrocarbons he assigned values to the constants by validating them against over two hundred reference compounds. But suppose there is something fundamentally wrong? You can always get the right answer if you have enough assignable constants.

The same logic applies to the two-slit experiment. Because the particle could go through either slit, and the wave must go through both to get the diffraction pattern, when you assume there is no wave it is argued that the particle goes through both slits as a superposition of the possibilities. This is asserted even though it has clearly been demonstrated that it does not. There is another problem. The assertion that the wave function collapses on observation, and all other probabilities are lost, actually lies outside the theory. How does that actually happen? That is called the measurement problem, and as far as I am aware nobody has an answer, although the obvious answer, that the probabilities merely reflected possibilities and the system was always in just one state but we did not know which, is always rejected.

Confused? You should be. Next week I shall get around to some results from my conference talk that caused stunned concern in the audience.

The Year of Elements, and a Crisis

This is the International Year of the Periodic Table, and since it is almost over, one can debate how useful it was. I wonder how many readers were aware of this, and how many really understand what the periodic table means. Basically, it is a means of ordering elements with respect to their atomic number in a way that allows you to make predictions of properties. Atomic number counts how many protons and electrons a neutral atom has. The number of electrons and the way they are arranged determines the atom’s chemical properties, and thanks to quantum mechanics, these properties repeat according to a given pattern. So, if it were that obvious, why did it take so long to discover it?

There are two basic reasons. The first is that it took a long time to discover what the elements were. John Dalton, who put the concept of atoms on a sound footing, made a list containing twenty-one, and some of those, like potash, were not elements, although they did contain atoms different from the others, from which he inferred a new element was present. The problem is that some elements are difficult to isolate from the molecules they are in, so Dalton, unable to break them down but able to see from their effect on flames that they were different, labelled them as elements. The second reason is that although the electron configurations have common features, and there are repeats in behaviour, they are not exact repeats, and sometimes quite small differences in electron behaviour make very significant differences to chemical properties. The most obvious example is the pair of very common elements carbon and silicon. Both form dioxides of formula XO2. Carbon dioxide is a gas; you see silicon dioxide as quartz. (Extreme high pressure forces CO2 into a quartz-like structure, though, so the similarity does emerge when forced.) Both are extremely stable, and silicon does not readily form a monoxide, while carbon monoxide has an anomalous electronic structure. At the other end of the “family”, lead does not behave particularly like carbon or silicon, and while it forms a dioxide, this is not at all colourless like the others. The main oxide of lead is the monoxide, and the relative instability of the dioxide is what makes the anode work in lead-acid batteries.

The reason I have gone on like this is to explain that while elements have periodic properties, these are only indicative of the potential, and in detail each element is unique in many ways. If you number them on the way down a column, there may be significant changes, depending on whether the number is odd or even, superimposed on a general trend. As an example: copper, silver, gold. Copper and gold are coloured; silver is not. The properties of silicon are wildly different from those of carbon; there is an equally dramatic change in properties from germanium to tin. What this means is that it is very difficult to find a substitute material for an element that is used for a very specific property. Further, the amounts of given elements on the planet depend partly on how the planet accreted, which is why we do not have much helium or neon, despite these being extremely common elements in the Universe as a whole, and partly on the fact that nucleosynthesis gives variable yields for different elements. The heavier elements in a periodic column are generally formed in lower amounts, while elements with a greater number of stable isotopes, or particularly stable isotopes, tend to be made in greater amounts. On the other hand, their general availability tends to depend on what routes there are for their isolation during geochemical processing. Some elements, such as lead, form a very insoluble sulphide that separates from the rock during geothermal processing, but others are much more resistant and remain distributed throughout the rock in highly dilute form, so even though they are there, they are not available in concentrated forms. The problem arises when we need some of these more difficult-to-obtain elements for specific uses. A typical mobile phone contains more than thirty different elements.

The Royal Society of Chemistry has found that at least six elements used in mobile phones are at risk of being mined out within the next 100 years. These have other uses as well. Gallium is used in microchips, but also in LEDs and solar panels. Arsenic is also used in microchips, as well as in wood preservation and, believe it or not, poultry feed. Silver is used in microelectrical components, but also in photochromic lenses, antibacterial clothing, mirrors, and other applications. Indium is used in touchscreens and microchips, but also in solar panels and specialist ball bearings. Yttrium is used for screen colours and backlighting, but also for white LED lights, camera lenses, and anticancer drugs, e.g. against liver cancer. Finally, there is tantalum, used for surgical implants, turbine blades, hearing aids, pacemakers, and nose caps for supersonic aircraft. Thus mobile phones will put a lot of stress on other manufacturing. To add to the problems, cell phones tend to have a life averaging two years. (There is the odd dinosaur like me who keeps using them until technology makes it difficult to keep doing so. I am on my third mobile phone.)

A couple of other facts. 23% of UK households have an unused mobile phone, while 52% of UK 16-24-year-olds have TEN or more electronic devices in their home. The RSC estimates that in the UK there are as many as 40 million old and unused such devices in people’s homes. I have no doubt that many other countries, including the US, have the same problem.

So, is the obvious answer that we should promote recycling? There are recycling schemes around the world, but it is not clear what is being done with what is collected. Recovering the above elements from such a mixture is anything but easy. I suspect that the recyclers go for the gold and one or two other materials, and then discard the rest. I hope I am wrong, but from the chemical point of view, getting such small amounts of so many different elements from such a mix is anything but easy. Different elements tend to be in different parts of the phone, so the phones can be dismantled and the parts chemically processed separately, but this is labour-intensive. They can be melted down and separated chemically, but that is a very complicated process. No matter how you do it, the recovered elements will be very expensive. My guess is most are still not recovered. All we can hope is that they are discarded somewhere where they will lie inertly until they can be used economically.

An Ugly Turn for Science

I suspect that there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, “The Structure of Scientific Revolutions”, that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and if a result cannot be so interpreted, it tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported in a very small section two-thirds of the way through the published paper with no comment, where nobody will notice it. I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment. This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc, and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on and they want to be comfortable. One can criticise that but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong. 

The bad news is that much deeper rot is appearing, as highlighted by an article in the journal “Science”, vol 365, p 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin, and which led, two days later, to Trump asserting that he knew unquestionably that Assad did it, whereupon he fired 59 cruise missiles at a Syrian base.

It then transpired that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned with what he found. If his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor at MIT and a world expert on missile defence, and after discussion he, Postol, and five other scientists carried out an investigation. The end result was that they wrote a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal “Science and Global Security” (SGS), and following peer review was authorised for publication. So far, science working as it should. The next step, if people do not agree, is that they should either dispute the evidence by providing contrary evidence, or dispute the analysis of the evidence, but that is not what happened.

Apparently the manuscript was put online as an “advanced publication”, and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard was a major in the US military and had been deployed in Syria in a sufficiently senior position to have a realistic idea of what went on. She has stated she believed the evidence was that Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated, and if evidence is found he should be accused of war crimes, but if evidence is not found he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She apparently found the preprint and put it on her blog, which she is using in her Presidential candidate run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Elliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that to be well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: “You must approach this latest analysis with great caution”, the paper would be “misused to cover up the [Assad] regime’s crimes” and would “permanently stain the reputation of your journal”. The journal then pulled the paper from its publication queue, at first saying they would edit it, but then they backtracked completely. The editor of the journal is quoted in Science as saying, “In hindsight we probably should have sent it to a different set of reviewers.” I find this comment particularly abhorrent. The editor should not select reviewers on the grounds they will deliver the verdict that the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider suppressing an analysis solely on grounds of political expediency, with no interest in finding the truth. It is also true that I hold a similar view of the incident itself. I saw a TV clip, taken within a day of the event, where people were taking samples from the hole where the sarin was allegedly delivered without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the primary site to do serious damage, but nobody was affected. But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some “don’t approve”. The reason for rejection of a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if it is demonstrably wrong, then it should be easy to demonstrate where. What do you all think?

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, which are objects of about asteroid size, and then mutual gravity brings these together to form planets. A small industry has sprung up in the scientific community doing computerised simulations of this sort of thing, with a very large number of scientific papers as output, which results in a number of grants to keep the industry going, lots of conferences to attend, and a strong “academic reputation”. The mere fact that nobody knows how the planetesimals get to their initial positions appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, etc. have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually easily defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try to understand.

Initially, these simulations predicted solar systems similar to ours in that there were planets in circular orbits around their stars, although most simulations actually showed a different number of planets, usually more in the rocky planet zone. The outer zone has been strangely ignored, in part because simulations indicate that because of the greater separation of planetesimals, everything is extremely slow. The Grand Tack simulations indicate that planets cannot form further than about 10 A.U. from the star. That is actually demonstrably wrong, because giants larger than Jupiter and very much further out are observed. What some simulations have argued for is that there is planetary formation activity limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the NICE model, or Grand Tack model (which is very close to being the same thing) is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets being in near circular motion around the star because whatever happened was more or less happening equally at all angles to some fixed background. The gas was spiralling into the star so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when a star came by too close – a rather uncommon occurrence. 

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of “star-burners”: planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, and bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow, because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel out each other’s effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if it did not, why did the others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer was that they moved in and stopped, but that raises the question: why did they stop? If we go back to the Grand Tack model, Jupiter was argued to migrate in towards Mars, and while doing so, to throw a whole lot of planetesimals out; then Saturn did much the same; then for some reason Saturn turned around and began throwing planetesimals inwards, and Jupiter continued the act and moved back out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why. The reason Saturn began throwing planetesimals in was that Uranus and Neptune started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn’s and Jupiter’s outward movement. Note that this does depend on a particular starting point, and it is not clear to me why, since planetesimals are supposed to collide and form planets, a population of planetesimals with a mass equivalent to that of Jupiter and Saturn did not simply form a planet.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significant elliptical orbits. If you draw a line through the major axis, on one side of the star the planet moves faster and comes closer to it than on the other side. There is a directional preference. How did that come about? The answer appears to be simple. A circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form by collecting a huge number of planetesimals, or a large amount of gas, and these collections occur more or less continuously as the planet orbits the star. An elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that as planets grow, if they get big enough, their gravity alters their orbits, and if two planets come quite close to each other they exchange energy: one goes outwards, usually leaving the system altogether, and the other moves towards the star, or even into it. If it comes close enough to the star, the star’s tidal forces circularise the orbit and the planet remains close to the star, and if it is moving prograde, like our Moon the tidal forces will push the planet out. Equally, if the orbit is highly elliptical, the planet might “flip” and become circularised with a retrograde orbit. If so, it is eventually doomed, because the tidal forces cause it to fall into the star.
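For reference, the geometry behind that description, in standard orbital notation (these are textbook relations, nothing new):

\[ r(\theta) = \frac{a(1-e^2)}{1 + e\cos\theta}, \qquad r_{\min} = a(1-e), \qquad r_{\max} = a(1+e), \]

and Kepler’s second law (equal areas in equal times) then makes the planet move fastest at r_min, closest to the star, which is the directional preference referred to.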

All of which may seem somewhat speculative, but the more interesting point is that we have now found evidence this happens, namely evidence that the star M67 Y2235 has ingested a “superearth”. The technique goes by the name “differential stellar spectroscopy”, and it works provided you can realistically estimate what the composition of the star should be, which can be done with reasonable confidence if the stars formed in a cluster and can reasonably be assumed to have started from the same gas. M67 is a cluster with over 1200 known members, and it is close enough that reasonable details can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun’s. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. The enhancement corresponds to an amount of rocky-planet material 5.2 times the mass of the Earth in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star, such that the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.

For those who wish to go deeper, being biased, I recommend my ebook “Planetary Formation and Biogenesis”. Besides showing what I think happened, it analyses over 600 scientific papers, most of which deal with different aspects of the problem.