Why is there no Disruptive Science Being Published?

One paper (Park et al. Nature 613: 138) that caught my attention over the post-Christmas period made the proposition that scientific papers are getting less disruptive over time, to the point where, in the physical sciences, there is now essentially very little disruption at all. First, what do I mean by disruption? To me, this is a publication that is at cross-purposes with established thought. Thus the recent claim that there is no sterile neutrino is at best barely disruptive, because the sterile neutrino was never more than a “maybe” solution to another problem. So why has this happened? One answer might be that we know everything, so there is neither room nor need for disruption. I can’t accept that. I feel that scientists do not wish to change: they wish to keep the current supply of funds coming their way. Disruptive papers keep getting rejected, because what reviewer who has spent decades on a line of research wants to let through a paper that essentially says he is wrong? Who is the peer reviewer for a disruptive paper?

Let me give a personal example. I made a few efforts to publish my theory of planetary formation in scientific journals. The standard theory is that the dust of the accretion disk formed planetesimals by some totally unknown mechanism, and these eventually collided to form planets. There is a small industry in running computer simulations of such collisions. My paper was usually rejected, the only stated reason being that it did not have computer simulations. However, the proposition was that the growth was driven chemically, and it used the approximation that there were no collisions. There was no evidence the reviewer read past the absence of any mention of simulations in the abstract. There was no comment on the fact that here was the very first stated mechanism for how accretion starts, together with a testable mathematical relationship for planetary spacing.

If that is bad, there is worse. The American Physical Society has published a report of a survey relating to ethics (Houle, F. H., Kirby, K. P. & Marder, M. P. Physics Today 76, 28 (2023)). In a 2003 survey, 3.9% of early-career physicists admitted that they had been required to falsify data, or had done so anyway, to get to publication faster and to get more papers. By 2020, that number had risen to 7.3%. Now, falsifying data will only be done to get a result that fits in with standard thinking, because if it doesn’t, someone will check it.

There is an even worse problem: that of assertion. The correct data are obtained, any reasonable interpretation would say they contradict the standard thinking, but they are reported in a way that makes them appear to comply. This will be a bit obscure for some, but I shall try to make it understandable. The paper is: Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759. At the time there was a debate over whether cyclopropane could delocalize electrons. Strange effects were observed and there were two possible explanations: (1) it did delocalize electrons; (2) there were electric field effects. The difference was that both would stabilize positive charge on an adjacent centre, but the electric field effects would be reversed if the charge were opposite. So while it was known that putting a cyclopropyl ring adjacent to a cationic centre stabilized it, what happened to an anionic centre? The short answer is that most efforts to make R – (C-) – Δ, where Δ means cyclopropyl, failed, whereas R – (C-) – H is easy to make. Does that look as if we are seeing stabilization? Nevertheless, if we put the cyclopropyl group on a benzylic carbon by changing R to a phenyl group φ, so we have φ – (C-) – Δ, an anion could just be made if potassium was the counterion. Accordingly, the fact that the anion could be made at all was attributed to the stabilizing effect of cyclopropyl. No thought was given to the fact that any chemist who cannot make the benzyl anion φ – (C-) – H should be sent home in disgrace. One might at least compare like with like, but apparently not if you would get the answer you don’t want. What is even more interesting is that this rather bizarre conclusion has gone unremarked (apart from by me) ever since.

This issue was once the source of strong debate, but a review came out and “settled” the issue. How? By ignoring every paper that disagreed with it, and by citing the authority of “quantum mechanics”. I would not disagree that quantum mechanics is correct, but computations can be wrong. In this case, they used the same computer programmes that “proved” the exceptional stability of polywater. Oops. As for the overlooked papers, I later wrote a review with a logic analysis. Chemistry journals do not publish logic analyses. So in my view, the reason there are no disruptive papers in the physical sciences is quite clear: nobody really wants them. Not enough to ask for them.

Finally, some examples of papers that in my opinion really should have done better. Weihs et al. (1998) arXiv:quant-ph/9810080 v1 claimed to demonstrate clear violations of Bell’s inequality, but the analysis involved only 5% of the photons; what happened to the other 95% is not disclosed. The formation of life is critically dependent on reduced chemicals being available. A large proportion of ammonia was found in ancient seawater trapped in rocks at Barberton (de Ronde et al. Geochim. Cosmochim. Acta 61: 4025-4042). Thus information critical for an understanding of biogenesis was obtained, but it was not even mentioned in the abstract or in the keywords, so it is not searchable by computer. This would have disrupted the standard thinking on the ancient atmosphere, but nobody knew about it. In another paper, spectroscopy coupled with the standard theory predicted strong bathochromic shifts (to longer wavelengths) for a limited number of carbenium ions, but strong hypsochromic shifts were observed, without comment (Schmitze, L. R.; Sorensen, T. S. J. Am. Chem. Soc. 1982, 104, 2600-2604). So why was no fuss made about these things by the discoverers? Quite simply, they wanted to be in with the crowd. Be good, get papers, get funded. Don’t rock the boat! After all, nature does not care whether we understand or not.

Fusion Energy on the Horizon? Again?

The big news recently as reported in Nature (13 December) was that researchers at the US National Ignition Facility carried out a reaction that made more energy than was put in. That looks great, right? The energy crisis is solved. Not so fast. What actually happened was that 192 lasers delivered 2.05 MJ of energy onto a pea-sized gold cylinder containing a frozen pellet of deuterium and tritium. The reason for all the lasers was to ensure that the input energy was symmetrical, and that caused the capsule to collapse under pressures and temperatures only seen in stars, and thermonuclear weapons. The hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory claimed the reaction released 3.15 MJ of energy, roughly 54% more than was delivered by the lasers, and double the previous record of 1.3 MJ.

Unfortunately, the situation is a little less rosy than it might appear. While the actual reaction was a net energy producer based on the energy delivered to the hydrogen, the lasers were consuming power even when not firing at the target, and between start-up and shut-down they consumed 322 MJ of energy. So while more energy came out of the target than went in to compress it, if we count the additional energy consumed elsewhere but necessary to do the experiment, then slightly less than 1% of what went in came out. That is not such a roaring success. However, before we get critical, the setup was not designed to produce power. Rather, it was designed to produce data to better understand what is required to achieve fusion. That is the platitudinous answer. The real reason was to help nuclear weapons scientists understand what happens under the intense heat and pressure of a fusion reaction. So the first question is, “What next?” Weapons research, or a contribution towards fusion energy for peaceful purposes?

The next question is, will this approach contribute to an energy program? If we stop and think, the gold capsule containing the frozen deuterium-tritium pellet had to be inserted, then everything lined up for one concentrated burst. You get a burst of heat, but that was still only about 3 MJ of heat. You would be quite fortunate to convert that to 1 MJ of electricity. Now, if it takes, say, a thousand seconds before you can fire up the next capsule, you average 1 kW of power. Would what you could sell that for pay for the consumption of gold capsules?
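
To make the arithmetic above explicit, here is a minimal sketch using only the figures quoted in this post (2.05 MJ of laser energy on target, 3.15 MJ of fusion output, 322 MJ of wall-plug energy) plus the assumed 1 MJ of electricity recovered once every thousand seconds.

```python
# Back-of-envelope accounting for the NIF shot, using the numbers quoted above.
laser_on_target_MJ = 2.05      # energy delivered to the capsule
fusion_output_MJ   = 3.15      # energy released by the fusion reactions
wall_plug_MJ       = 322.0     # energy the laser system drew between start-up and shut-down

target_gain = fusion_output_MJ / laser_on_target_MJ
wall_plug_fraction = fusion_output_MJ / wall_plug_MJ

# Hypothetical power-plant numbers assumed in the post: ~1 MJ of electricity
# recovered per shot, one shot every 1000 s.
electricity_per_shot_MJ = 1.0
seconds_between_shots = 1000.0
average_power_kW = electricity_per_shot_MJ * 1e6 / seconds_between_shots / 1e3

print(f"Gain over laser energy on target: {target_gain:.2f} (about 54% more out than in)")
print(f"Fraction of wall-plug energy recovered: {wall_plug_fraction:.1%}")
print(f"Average electrical power at one shot per 1000 s: {average_power_kW:.1f} kW")
```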

That raises the question, how do you convert the heat to electricity? The most common answer offered appears to be to use it to boil water and use the steam to drive a turbine. A smarter way might be to use magnetohydrodynamics. The concept is that the hot gas is made to generate a high-velocity plasma, and as that is slowed down, the kinetic energy of the plasma is converted to electricity. The Russians tried to make electricity this way by burning coal in oxygen to make a plasma at about 4000 degrees K. The theoretical maximum fraction of the energy that can be extracted, U, is given by

    U  =  (T – T*)/T

where T is the maximum temperature and T* is the temperature at which the plasma degrades and the extraction of further electricity becomes impossible. With T around 4000 K, the approximately 60% energy conversion that was possible corresponds to a T* of roughly 1600 K. Ultimately, this power source failed, mainly because the coal produces a slag which damaged the electrodes. In theory, the energy could be drawn at almost 100% efficiency.
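
As a quick check of that formula, here is a minimal sketch; the T* value below is an assumption back-calculated from the roughly 60% figure above, not a measured number.

```python
# Maximum fraction of the plasma's energy extractable by MHD: U = (T - T_star)/T.
def mhd_max_fraction(T_kelvin: float, T_star_kelvin: float) -> float:
    """T is the peak plasma temperature; T_star is where the plasma stops conducting."""
    return (T_kelvin - T_star_kelvin) / T_kelvin

T = 4000.0        # approximate coal-in-oxygen plasma temperature quoted above
T_star = 1600.0   # assumed degradation temperature implied by the ~60% figure
print(f"U = {mhd_max_fraction(T, T_star):.0%}")   # prints U = 60%
```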

Once the recovery of energy is solved, there remains the problem of increasing the burn rate. Waiting for everything to cool down and then adding a new pellet cannot work, but expecting a pellet of hydrogen to remain in condensed form when inserted into a temperature of, say, a million degrees is asking a lot.

This will be my last post for the year, so let me wish you all a Very Merry Christmas, and a prosperous and successful New Year. I shall post again in mid-January, after a summer vacation.

Meanwhile, for any who feel they have an interest in physics, in the Facebook Theoretical Physics group I am posting a series that demonstrates why this year’s Nobel Prize was wrongly assigned, as Alain Aspect did not demonstrate violations of Bell’s inequality. Your challenge, for the Christmas period, is to prove me wrong and stand up for the Swedish Academy. If it is too difficult to find, I may post the sequence here if there is interest.

Solar Cycles

As you may know, our sun has a solar cycle of about 11 years, during which the sun’s magnetic field oscillates between a strong maximum and a minimum, and then back again for the next cycle. During the minimum there are far fewer sunspots, and the power output is also at a minimum. The last minimum started about 2017, so now we can expect increased activity. It may come as something of a disappointment that some of our peak temperatures here happened during a solar minimum, as it suggests the next few years will be even hotter and the effects of climate change more dramatic, but that is not what this post is about. The question is, is our sun a typical star, or is it unusual?

That raises the question, if it were unusual, how can we tell?

The power output may vary, but not extremely; the output generally is reasonably constant. We can attribute to the cycle a variation of about 0.1% in the solar output we receive over different years, which corresponds to roughly a tenth of a degree Kelvin (or Centigrade) here. The changes may appear greater because the frequency and strength of aurorae vary far more noticeably. So how do we tell whether other stars have similar cycles? As you might guess, the power we receive from other stars is trivial compared even with that small variation. Any variation in total power output would be extremely difficult to detect, especially over time, since instrument calibration could easily drift by more. A non-scientist may have trouble with this statement, but it would be extremely difficult to make a sensitive instrument that would record a dead flat line for a tiny constant power source over an eleven-year period. Over shorter time periods the power received from a star does vary in a clearly detectable way, and that has been the basis of the Kepler telescope’s planet detections.

However, as outlined in Physics World (April 5) there is a way to detect changes in magnetic fields. Stars are so hot they ionize elements, and some absorption lines in the spectrum due to ionized calcium happen to be sensitive to the stellar magnetic field. One survey showed that about half the stars surveyed appeared to have such starspot cycles, and the periodic time could be measured for half of those with the cycles. It should be noted that the inability to detect the lines does not mean the star does not have such a cycle; it may mean that, working at the limits of detection anyway, the signals were too weak to be certain of their presence.

The average length of such starspot cycles was about ten years, which is similar to our sun’s eleven-year cycle, although one star had a cycle lasting only four years. One star, HD 166620, had a cycle seventeen years long, although “had” is the operative tense. Somewhere between 1995 and 2004, HD 166620’s starspot cycle simply turned off. (The uncertainty in the timing arises because the study was interrupted by a change of observatories, and the new observatory was receiving an upgrade that was not completed until 2004.) We now await it starting up again.

Maybe that could be a long wait. In 1645 the Sun entered what we call the Maunder minimum. During the bottom of a solar cycle we would expect at least a dozen or so sunspots per year, and at the maximum, over 100. Between 1672 and 1699 fewer than 50 sunspots were observed. It appeared that for about 70 years the sun’s magnetic field was mostly turned off. So maybe HD 166620 is sustaining a similar minimum. Maybe there is a planet with citizens complaining about the cold.

What causes that? Interestingly, Metcalfe et al. (Astrophys. J. Lett. 826, L2, 2016), by correlating stellar rotation with age for stars older than the sun, showed that while stars start out spinning rapidly, magnetic braking gradually slows them down. It is argued that as they slow, Maunder Minimum events become more frequent, until eventually the star slows so much that the dynamo effect is insufficient and it enters a grand minimum. So eventually the Sun’s magnetic dynamo may shut down completely. Apparently, some stars display somewhat chaotic activity and some have spells of lethargy; thus HD 101501 shut down between 1980 and 1990 before reactivating, a rather short Maunder Minimum.

So when you hear people say the sun is just an average sort of star, they are fairly close to the truth. But when you hear them say the power output will steadily increase, that may not be exactly correct.

Is Science Sometimes Settled Wrongly?

In a post two weeks ago I raised the issue of “settled science”. The concept was that there have to be things you are not persistently rechecking. Obviously, you do not want everyone wasting time rechecking the melting point of benzoic acid, but fundamental theory is different. Who settles that, and how? What is sufficient to say, “We know it must be that!”? In my opinion, admittedly biased, there really is something rotten in the state of science. Once upon a time, namely in the 19th century, there were many mistakes made by scientists, but they were sorted out by vigorous debate. The Solvay conferences continued that tradition in the 1920s for quantum mechanics, but something went wrong in the second half of the twentieth century. A prime example occurred in 1952, when David Bohm decided the mysticism inherent in the Copenhagen Interpretation of quantum mechanics required a causal interpretation, and he published a paper in the Physical Review. He expected a storm of controversy and he received – silence. What had happened was that J. Robert Oppenheimer, previously Bohm’s mentor, had called together a group of leading physicists to find an error in the paper. When they failed, Oppenheimer told the group, “If we cannot disprove Bohm we can all agree to ignore him”. Some physicists are quite happy to say Bohm is wrong; they don’t actually know what Bohm said, but they know he is wrong. (https://www.infinitepotential.com/david-bohm-silenced-by-his-colleagues/) If that were one isolated example, it would be wrong, but not exactly a crisis. Unfortunately, it is not an isolated case. We cannot know how bad the problem is because we cannot sample it properly.

A complicating issue is how science works. There are millions of scientific papers produced every year. Thanks to time constraints, very few are read by more than a handful of people. The answer to that would be to publish in-depth reviews, but nobody appears to publish logic analysis reviews. I believe science can be “settled” by quite unreasonable means. As an example, my career went “off the standard rails” with my PhD thesis.

My supervisor’s projects would not work, so I selected my own. There was a serious debate at the time over whether strained systems could delocalize their electrons into adjacent unsaturation in the same way double bonds do. My results showed they did not, but it became obvious that cyclopropane stabilized adjacent positive charge. Since olefins do this by delocalizing electrons, it was decided that cyclopropane did that too. When the time came for my supervisor to publish, he refused to publish the most compelling results, even though suggesting this sequence of experiments had been his only contribution, because the debate was settling down on the other side. An important point of logic must now be considered. Suppose we can say: if theory A is correct, then we shall see Q. If we see Q we can say that the result is consistent with A, but we cannot say that theory B would not predict Q also. So the question is, is there an alternative?

The answer is yes. The strain arises from orbitals containing electrons being bent inwards towards the centre of the system, hence coming closer to each other. Electrons repel each other. But it also becomes obvious that if you put positive charge adjacent to the ring, that charge will attract the electrons, partially overriding the repulsion, and electron density will move towards the positive charge. That lowers the energy, and hence stabilizes the system. I actually used an alternative way of looking at it: if you move charge by bending the orbitals, you should generate a polarization field, and that stabilizes the positive charge. So why look at it like that? Because if the cause of a changing electric field is behind a wall, say, you cannot tell the difference between charge moving and charge being added. Since the field contains the energy, the two approaches give the same strain energy, but by considering an added pseudocharge it was easy to put numbers on the effects.

However, the other side won, by “proving” delocalization through molecular orbital theory, which, as an aside, assumes delocalization in the first place. Aristotle had harsh words for people who prove what they assume after a tortuous path. As another aside, the same quantum methodology proved the stability of “polywater” – where your water could turn into a toffee-like consistency. A review came out and confirmed the “other side” by showing numerous examples of the strained ring stabilizing positive charge. It also ignored everything that contradicted it.

Much later I wrote a review that showed this first one had ignored up to sixty different types of experimental result that contradicted its conclusion. That was never published by a journal – the three reasons for rejection, in order, were: not enough pictures and too much mathematics; this is settled; and, from some other journals, “We do not publish logic analyses”.

I most certainly do not want this post to turn into a general whinge, so I shall stop there, other than to say I could make five other similar cases that I know about. If that much happens to, or comes to the attention of, one person, how general is it? Perhaps a final comment might be of interest. As those who have followed my earlier posts may know, I concluded that the argument that the present Nobel prize winners in physics found violations of Bell’s Inequality is incorrect in logic. (Their procedure violates Noether’s theorem.) When the prize was announced, I sent a polite communication to the Swedish Academy of Science, stating one reason why the logic was wrong, and asking them, if I had missed something, to inform me where I was wrong. So far, over five weeks later, no response. It appears no response might be the new standard way of dealing with those who question the standard approach.

Success! Defence Against Asteroids

Most people will know that about 66 million years ago an asteroid with a diameter of about 10 km struck the Yucatán peninsula and exterminated the dinosaurs, or at least did great damage to them from which they never recovered. The shock-wave probably also initiated the formation of the Deccan Traps, and the unpleasant emission of poisonous gases would finish off any remaining dinosaurs. The crater is 180 km wide and 20 km deep. That was a very sizeable excavation. Rather wisely, we would like to avoid a similar fate, and the question is, can we do anything about it? NASA thinks so, and they carried out an experiment.

I would be extremely surprised if, five years ago, anyone reading this had heard of Dimorphos. Dimorphos is a small asteroid with dimensions about those of the original Colosseum, i.e. before vandals, like the Catholic Church, took stones away to make their own buildings. By now you will be aware that Dimorphos orbits a larger asteroid called Didymos. What NASA did was to send a metallic object of dimensions 1.8 x 1.9 x 2.6 meters, of mass 570 kg, and velocity 22,530 km/hr to crash into Dimorphos to slightly slow its orbital speed, which would change its orbital parameters. It would also change the orbital characteristics of the pair around the sun. Dimorphos has a “diameter” of about 160 m, Didymos about 780 m. Neither is spherical, hence the quotation marks.

This explains why NASA selected Dimorphos for the collision. First, it is not that far from Earth, while the two on their current orbits will not collide with Earth. Being close to Earth, at least when their orbits bring them close, lowers the energy requirement to send an object there. It is also easier to observe what happens, and hence more accurately determine the consequences. The second reason is that Dimorphos is reasonably small, so if a collision changes its dynamics we shall be able to see by how much. At first sight you might say that conservation of momentum makes that obvious, but it is actually more difficult to know because it depends on what takes the momentum away after the collision. If the collision is perfectly inelastic, the object gets “absorbed” by the target, which stays intact, and we simply add its relative momentum to that of the target. However, real collisions are seldom perfectly inelastic, and it would have been considered important to determine how inelastic this one was. A further possibility is that the asteroid could fragment and send bits in different directions. Think of Newton’s cradle: you hit one end and the ball stops, but another flies off from the other end, and the total stationary mass is the same. NASA would wish to know how well the asteroid held together. A final reason for selecting Dimorphos would be that, by being tethered gravitationally to Didymos, it could not go flying off in some unfortunate direction and eventually collide with Earth. It is interesting that the change of momentum is shared between the two bodies through their gravitational interaction.
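
To give a feel for the size of the effect, here is a minimal sketch of the perfectly inelastic case described above. The impactor mass and speed are the figures quoted in this post; the mass of Dimorphos is an assumption (a few billion kilograms, based on its roughly 160 m size and an assumed density), so the answer is only an order-of-magnitude estimate, and any ejecta thrown off the surface would push the real change higher.

```python
import math

# Order-of-magnitude estimate of the velocity change from a perfectly inelastic impact.
impactor_mass_kg = 570.0
impactor_speed_ms = 22_530 * 1000 / 3600      # 22,530 km/h converted to m/s (~6.3 km/s)

# Assumed mass of Dimorphos: ~160 m across, treated as a sphere of radius 80 m
# with an assumed density of 2000 kg/m^3, giving a few billion kilograms.
radius_m = 80.0
assumed_density = 2000.0                      # kg/m^3, an assumption
dimorphos_mass_kg = assumed_density * (4 / 3) * math.pi * radius_m ** 3

delta_v = impactor_mass_kg * impactor_speed_ms / dimorphos_mass_kg
print(f"Assumed Dimorphos mass: {dimorphos_mass_kg:.2e} kg")
print(f"Velocity change if perfectly inelastic: {delta_v * 1000:.2f} mm/s")
```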

So, what happened, apart from the collision? There was another spacecraft trailing behind: the Italian LICIACube (don’t you like these names? It is an acronym for “Light Italian Cubesat for Imaging Asteroids”, and I guess they were so proud of the shape they had to have “cube” in there twice!). Anyway, this took photographs before and after impact, and after impact Dimorphos was surrounded by a shower of material flung up from the asteroid. You could no longer see the asteroid for the cloud of debris. Of course Dimorphos survived, and the good news is we now know that the periodic time of Dimorphos around Didymos has been shortened by 32 minutes. That is a genuine success. (Apparently, a change of as little as 73 seconds would initially have been considered a success!) Also, very importantly, Dimorphos held together. It is not a loosely bound rubble pile, which would be no surprise to anyone who has read my ebook “Planetary Formation and Biogenesis”.

This raises another interesting fact. The impact slowed Dimorphos down relative to Didymos, so Dimorphos fell closer to Didymos and sped up. That is why the periodic time was shortened. The speeding up happens because bringing the objects closer together lowers the potential energy and thus the total energy, and for a roughly circular orbit the total energy equals the kinetic energy with the opposite sign, so the kinetic energy increases. (The smaller orbit also shortens the path length, which further lowers the periodic time.)

The reason for all this is to develop a planetary protection system. If you know that an asteroid is going to strike Earth, what do you do? The obvious answer is to divert it, but how? The answer NASA has tested is to strike it with a fast-moving small object. But, you might protest, an object like that would not make much of a change in the orbit of a dinosaur killer. The point is, it doesn’t have to. Take a laser pointer and aim it at a screen. Now give it a gentle nudge so it changes where it impacts. If the screen is a few centimeters away the lateral shift is trivial, but if the screen is a kilometer away the lateral shift is now significant; in fact the lateral shift is proportional to the distance. The idea is that if you can catch the asteroid far enough away, it won’t strike Earth because the lateral shift will be sufficient.
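
A minimal sketch of that “small nudge, long lever arm” argument, in its crudest straight-line form: the miss distance is roughly the velocity change multiplied by the time remaining before the predicted impact. The millimetres-per-second nudges and the lead times below are illustrative assumptions, not figures from the DART mission.

```python
# Crude straight-line estimate: miss distance ~ delta_v * warning time.
# Real trajectories are elliptical and the along-track drift actually grows faster,
# so this only gives the flavour of the scaling, not a mission design number.
def miss_distance_km(delta_v_mm_per_s: float, warning_years: float) -> float:
    seconds = warning_years * 365.25 * 24 * 3600
    return delta_v_mm_per_s * 1e-3 * seconds / 1000.0   # mm/s -> m/s, then m -> km

for dv in (1.0, 10.0):              # nudges of 1 mm/s and 1 cm/s (illustrative values)
    for years in (1, 10, 50):
        print(f"{dv:4.0f} mm/s, {years:2d} years out -> shift of ~{miss_distance_km(dv, years):>9,.0f} km")
# For comparison, Earth's radius is about 6,400 km.
```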

You might protest that asteroids do not travel in a straight line. No, they don’t, and in fact have trajectories that are part of an ellipse. However, this is still a line, and will still shift laterally. The mathematics are a bit more complicated because the asteroid will return to somewhere fairly close to where it was impacted, but if you can nudge it sufficiently far away from Earth it will miss. How big a nudge? That is the question about which this collision was designed to provide us with clues.

If something like Dimorphos struck Earth it would produce a crater about 1.6 km wide and 370 m deep, while the pressure wave would knock buildings over tens of km away. If it struck the centre of London, windows would break all over South-East England. There would be no survivors in central London, but maybe some on the outskirts. This small asteroid would be the equivalent of a good-sized hydrogen bomb, and, as you should realize, a much larger asteroid would do far more damage. If you are interested in further information, I have some data and a discussion of such collisions in my ebook noted above.

2022 Nobel Prize in Physics

When I woke up on Wednesday, I heard the Physics prize being announced: it was given for unravelling quantum entanglement, and specifically to Alain Aspect for the first “convincing demonstration of violations of Bell’s Inequalities”. In the unlikely event you recall my posts https://wordpress.com/post/ianmillerblog.wordpress.com/542 and https://wordpress.com/post/ianmillerblog.wordpress.com/547, you will realize that I argue he did no such thing. In my ebook “Guidance Waves” I made this point ten years ago.

So, how do these Inequalities work? Basically, to use them you need measurements on a sequence of equivalent things (in Aspect’s case, photons), where each measurement can take one of two values (such as pass/fail) and is made under three DIFFERENT conditions. Bell illustrated this need for different conditions by washing socks at 25 degrees, 35 degrees and 45 degrees C. What Bell did was to derive a relationship that mixed up the passes and fails from the different conditions.

The issue I dispute involves rotational invariance. Each photon is polarized, which means it will go through a filter aligned with its polarization and not through one at right angles to it. In between, there is a probabilistic relation called the Malus Law. So what Aspect did was to pass photons through polarizing filters oriented at different angles; for example, a + result was obtained with a filter that started vertical and a fail with a filter that started horizontal. Those configurations were set A. Set B involved rotating a filter clockwise by 22.5 degrees, and set C by rotating a filter by 45 degrees. Three joint sets were collected: A+.B-, B+.C-, and A+.C-. These sets have to be different for the inequality to be used.

My objection is simple. Aspect showed that rotating one filter on its own always gave a constant count rate, so the source was giving off photons with all polarizations equally. If so, B+.C- is exactly the same as A+.B-, except that the key parts of the apparatus have been rotated by 22.5 degrees. If the background has no angular preference, it should not matter at what angle to it the measurement is made. Thus, if you are playing billiards, you do not have to worry about whether the table is pointing north or east. (If you use that as an excuse for a poor shot, quite rightly you will be put in your place.) The fact that the orientation in which you measure something does not affect the answer unless there is an angular asymmetry follows from Noether’s theorem, on which all the conservation laws of physics depend. The angular symmetry leads to the conservation of angular momentum, a physical law on which, ironically enough, Aspect’s experiment depends to get the entanglement in the first place, and which was then ignored when evaluating the results.

Interestingly, if the background has a directional preference, the above argument does not apply. Thus a compass needle always tries to point north because, unlike the billiards example, there is a force that interacts with the needle, coming from the planet’s magnetic field. Now, suppose you employ a polarized source. Then there is a preferred background direction, and the Aspect experiment does generate sufficient independent variables. However, calculations show that if you properly count the number of photons and apply the Malus Law, the results should comply with Bell’s inequality.
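
For readers who want to play with this, here is a toy numerical sketch, not a reconstruction of Aspect’s analysis: photon pairs share a single random polarization, each photon passes its analyser with the Malus-law probability, and the standard CHSH combination is evaluated at the 0, 22.5, 45 and 67.5 degree settings. A local, shared-polarization model of this kind stays at or below the Bell/CHSH bound of 2, whereas the textbook quantum prediction is about 2.83.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200_000
lam = rng.uniform(0.0, np.pi, n_pairs)     # shared polarization of each photon pair

def outcome(analyser_angle, lam):
    """+1 if the photon passes an analyser at this angle, else -1.
    Pass probability is the Malus law, cos^2(angle - polarization)."""
    p_pass = np.cos(analyser_angle - lam) ** 2
    return np.where(rng.uniform(size=lam.size) < p_pass, 1, -1)

def E(a, b):
    """Correlation of the two photons' outcomes at analyser angles a and b."""
    return np.mean(outcome(a, lam) * outcome(b, lam))

a, a2 = 0.0, np.pi / 4                     # 0 and 45 degrees
b, b2 = np.pi / 8, 3 * np.pi / 8           # 22.5 and 67.5 degrees
S = abs(E(a, b) - E(a, b2)) + abs(E(a2, b) + E(a2, b2))
print(f"CHSH S for the Malus-law model: {S:.2f} (Bell bound 2, quantum prediction ~2.83)")
```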

Accordingly, I am in the weird position of having published an argument that shows something in the scientific literature is wrong, nobody has ever found a flaw in my argument, yet the something has been awarded a Nobel Prize. Not many people get into that position. Does that mean that entanglement does not occur? No, it does not. The entanglement is simply a consequence of the law of conservation of angular momentum. There are further consequences of my argument, but they go beyond this post.

Nuclear War is Not Good

Yes, well that is sort of obvious, but how not good? Ukraine has brought the scenario of a nuclear war to the forefront, which raises the question, what would the outcome be? You may have heard estimates from military hawks that apart from those killed due to the blasts and those who got excessively irradiated, all would be well. Americans tend to be more hawkish because the blasts would be “over there”, although if the enemy were Russia, Russia should be able to bring it to America. There is an article in Nature (579, pp 485 – 487) that paints a worse picture. In the worst case, they estimate deaths of up to 5 billion, and none of these are due to the actual blasts or the radiation; they are additional extras. The problem lies in the food supply.

Suppose there was a war between India and Pakistan. Each fires nuclear weapons, first against military targets, then against cities. Tens of millions die in the blasts. However, a band of soot rises into the air, and temperatures drop. Crop yields drop dramatically from California to China, affecting dozens of countries. Because of the limited food yields, more than a billion people would suffer from food shortages. The question then is, how valid are these sorts of predictions?

Nuclear winter was first studied during the cold war. The first efforts described how such smoke would drop the planet into a deep freeze lasting for months, even in summer. Later studies argued this effect was overdone and it would not end up as such a horrific chill, which has unfortunately encouraged some politicians who are less mathematically inclined and do not realize that “less than a horrific chill” can still be bad.

India and Pakistan each have around 150 nuclear warheads, so a study in the US looked into what would happen if the countries set off 100 Hiroshima-sized bombs. The direct casualties would be about 21 million people. But by looking at how volcanic eruptions cool the planet, and how soot goes into the atmosphere following major forest fires, modelling can predict the outcome. An India-Pakistan war would put 5 million tonnes of soot into the atmosphere, while a US-Russian war would loft 150 million tonnes. The first war would lower global temperatures by a little more than 1 degree C, but the second would lower them by 10 degrees C, temperatures not seen since the last Ice Age. One problem that may not be appreciated is that sunlight heats the soot, and by heating the adjacent air it causes it to rise, and therefore persist longer.

The oceans tell a different story. Global cooling would affect the oceans’ acidity, and the pH would soar upwards (making them more alkaline). The model also suggested that would make it harder to form aragonite, making life difficult for shellfish. Currently, the shellfish are in the same danger from too much acidity; depending on aragonite is a bad option! The biggest danger would come to regions that are home to coral reefs. There are some places that cannot win. However, there is worse to come: possibly a “Nuclear Niño”, which is described as a “turbo-charged El Niño”. In the case of a Russia/US war, the trade winds would reverse direction and water would pool in the eastern Pacific Ocean. Droughts and heavy rain would plague different parts of the world for up to seven years.

One unfortunate aspect is that all of this is modelled. Immediately, another group, from Los Alamos, carried out different calculations and came to a less disastrous result. The difference depends in part on how they simulate the amount of fuel, and how that is converted to smoke. Soot comes from partial combustion, and what happens where in a nuclear blast is difficult to calculate.

The effects on food production could be dramatic. Even following the smaller India-Pakistan war, grain production could drop by approximately 12% and soya bean production by 17%. The worst effects would come in the mid-latitudes, such as the US Midwest and Ukraine. The trade in food would dry up because each country would be struggling to feed itself. A major war would be devastating for other reasons as well. It is all very well to say your region might survive the climate change, and somewhere like Australia might grow more grain if it gets adequate water, as at present heat is its biggest problem. But if the war also took out industrial production and oil production and distribution, now what? Tractors are not very helpful if you cannot purchase diesel. A return to the old ways of harvesting? Even if you could find one, how many people know how to use a scythe? How do you plough? Leaving aside the problems of knowing where to find a plough that a horse could pull and of how to set it up, where do you find the horses? It really would not be easy.

Rotation, Rotation

You have probably heard of dark matter. It is the stuff that is supposed to be the predominant matter of the Universe, but nobody has ever managed to find any, which is a little embarrassing when there is supposed to be something like six times more dark matter in the Universe than ordinary matter. Even more embarrassing is the fact that nobody has any real idea what it could be. Every time someone postulates what it is and works out a way to detect it, they find absolutely nothing. On the other hand, there may be a simpler reason for this. Just maybe they postulated what they thought they could find, as opposed to what it is; in other words, it was a proposal to get more funds, with uncovering the nature of the Universe as a hoped-for by-product.

The first reason why there might be dark matter came from the rotation of galaxies. Newtonian mechanics makes some specific predictions. Very specifically, the periodic time for an object orbiting the centre of mass at a distance r varies as r^1.5. That means that if there are two orbiting objects, say Earth and Mars, where Mars is about 1.52 times more distant, the Martian year is about 1.88 Earth years. The relationship works very well in our solar system, and it was from the unexpected effects on Uranus that Neptune was predicted, and found to be in the expected place. However, when we take this up to the galactic level, things come unstuck. As we move out from the centre, stars move faster than predicted from the speed of those near the centre. This is quite unambiguous, and has been found in many galaxies. The conventional explanation is that enormous quantities of cold dark matter provide the additional gravitational binding.
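
A minimal sketch of that r^1.5 relation, using only the Earth/Mars example quoted above (Kepler’s third law for bodies orbiting the same central mass):

```python
# Period ratio for two orbits around the same central mass: P2/P1 = (r2/r1)**1.5
def period_ratio(distance_ratio: float) -> float:
    return distance_ratio ** 1.5

print(f"Mars at 1.52 times Earth's distance: {period_ratio(1.52):.2f} Earth years")  # ~1.87
# Flat galactic rotation curves break this pattern: outside the central mass the
# orbital speed should fall off with distance, yet measured speeds stay roughly constant.
```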

However, that explanation also has problems. A study of 175 galaxies showed that the radial acceleration at different distances correlates with the amount of visible matter attracting it, although the relationship does not match Newtonian dynamics. If the discrepancies were due to dark matter, one might expect the dark matter to be present in different amounts in different galaxies, and in different parts of the same galaxy. Any such relationship should therefore have a lot of scatter, but it doesn’t. Of course, that might be a result of dark matter being attracted to ordinary matter.
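
Assuming the study referred to is the radial-acceleration-relation work usually quoted in this context (an assumption on my part, since the post does not give the reference), the correlation is commonly fitted by a single function of the acceleration expected from visible matter alone, with one acceleration scale for all galaxies. A sketch:

```python
import math

G_DAGGER = 1.2e-10   # acceleration scale in m/s^2 commonly quoted for this fit (assumed here)

def observed_acceleration(g_baryonic: float) -> float:
    """Fitted observed acceleration as a function of the Newtonian acceleration
    from visible (baryonic) matter alone; one curve is claimed to fit all galaxies."""
    return g_baryonic / (1.0 - math.exp(-math.sqrt(g_baryonic / G_DAGGER)))

for g_bar in (1e-9, 1e-10, 1e-11, 1e-12):
    print(f"g_bar = {g_bar:.0e} m/s^2 -> g_obs = {observed_acceleration(g_bar):.2e} m/s^2")
# At high accelerations g_obs ~ g_bar (Newtonian); at low accelerations the observed
# value exceeds the visible-matter prediction, which is the anomaly attributed either
# to dark matter or to modified dynamics.
```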

There is an alternative explanation called MOND, which stands for Modified Newtonian Dynamics, and which proposes that at large distances and small accelerations gravity decays more slowly than the inverse square law. The correlation of the radial acceleration with the amount of visible matter is exactly what something like MOND requires, so that is a big plus for it, although the only reason it was postulated in this form was to account for what we see. However, a further study has shown there is no simple scale factor. What this means is that if MOND is correct, the effects in different galaxies should depend essentially on the mass of visible matter, but they don’t. MOND can explain any single galaxy, but the results don’t translate to other galaxies in any simple way. This should rule out MOND without amending the underlying dynamics, in other words without altering the Newtonian laws of motion as well as gravity. This may be no problem for dark matter, as different distributions would give different effects. But wait: in the previous paragraph it was claimed there was no scatter.

The net result: there are two sides to this, one saying MOND is ruled out and the other saying no it isn’t, and the problem is that the answer hinges on observational uncertainties. The two sides of the argument seem to be either using different data or interpreting the same data differently. I am no wiser.

Astronomers have also observed one of the most distant galaxies ever, MACS1149-JD1, which is over ten billion light years away, and it too is rotating, although its rotational velocity is much lower than in the much closer, and much more mature, galaxies we usually see. So why is it slower? Possible reasons include that it has much less mass, hence weaker gravity.

However, this galaxy is of significant interest because its age makes it one of the earliest galaxies to form. It also has stars in it estimated to be 300 million years old, which puts the star formation at just 270 million years after the Big Bang. The problem with that is that it falls within the dark period, before matter as we know it had presumably organized into anything, so how did a collection of stars get started? For gravity to cause a star to accrete, the collapsing material has to give off radiation, but supposedly no radiation was being given off then. Again, something seems to be wrong. That most of the stars are just this age makes it appear that the galaxy formed at about the same time as the stars; to put it another way, something made a whole lot of stars form at the same time in places where the net result was a galaxy. How did that happen? And where did the angular momentum come from? Then again, did it happen? This is at the limit of observational techniques, so have we drawn an invalid conclusion from difficult-to-interpret data? Again, I have no idea, but I mention this to show there is still a lot to learn about how things started.

Betelgeuse Dimmed

First, I apologize for the initial bizarre appearance of my last post. For some reason, some computer decided to slice and dice. I have no idea why, or for that matter, how. Hopefully, this post will have better luck.

Some will recall that around October 2019 the red supergiant Betelgeuse dimmed, specifically from magnitude +0.5 down to +1.64. As a variable star its brightness oscillates, but it had never dimmed like this before, at least within our records. This generated a certain degree of nervousness or excitement, because a significant dimming is probably what happens initially before a supernova. There has been no supernova that close to us since the one that produced the Crab Nebula in 1054 AD.

To put this star into perspective, if Betelgeuse replaced the sun, its size is such that it would swallow Mars, and its photosphere might almost reach Saturn. Its mass is estimated at least ten times, or possibly up to twenty times, the mass of the sun. Such a range sparks my interest because, when I pointed out that my proposed dependence of characteristic planetary orbital semimajor axes on the cube of the mass of the star ran into trouble because stellar masses were not known that well, I was criticised by an astronomer: they knew the masses to within a few percent. The difference between ten times the sun’s mass and twenty times is more than a few percent. This is a characteristic of science: stellar masses can be measured fairly accurately in double star systems, and then the results are “carried over” to other stars.

But back to Betelgeuse. Our best guess as to distance is between 500 – 600 light years. Interestingly, we have observed its photosphere, the outer “shell” of the star that is transparent to photons, at least to a degree, and this is non-spherical, presumably due to stellar pulsations that send matter out from the star. The star may seem “stable” but actually its surface (whatever that means) is extremely turbulent. It is also surrounded by something we could call an atmosphere, an envelope of matter about 250 times the size of the star. We don’t really know its size because these asymmetric pulsations can add several astronomical units (the Earth-sun distance) in selected directions.

Anyway, back to the dimming. Two rival theories were produced: one involved the development of a large cooler cell that came to the surface and was dimmer than the rest of Betelgeuse’s surface. The other was the partial obscuring of the star by a dust cloud. Neither proposition really explained the dimming, nor did they explain why Betelgeuse was back to normal by the end of February, 2020. Rather unsurprisingly, the next proposition was that the dimming was caused by both of those effects.

Perhaps the biggest problem was that telescopes could only observe the star some of the time; however, a Japanese weather satellite ended up providing just the data needed. This was somewhat inadvertent. The weather satellite was in geostationary orbit 35,786 km above the Western Pacific. It was always looking at half of Earth, and always the same half, but the background was also always constant, and in the background was Betelgeuse. The satellite revealed that the star overall cooled by 140 degrees C. This was sufficient to reduce the heating of a nearby gas cloud, and when that cooled, dust condensed and obscured part of the star. So both theories were right, and even more strangely, both contributed roughly equally to what was called “the Great Dimming”.

It also suggested more was happening to the atmospheric structure of the star before the dimming. By looking at the infrared lines, it became apparent that water molecules in the upper atmosphere, which would normally create absorption lines in the star’s spectrum, suddenly switched to forming emission lines. Something had made them unexpectedly hotter. The current thinking is that a shock-wave from the interior propelled a lot of gas outwards from the star, leading to a cooler surface while heating the outer atmosphere. That is regarded as the best current explanation. It is possible that there was a similar dimming event in the 1940s, but otherwise we have not noticed much; similar events could have occurred, but our detection methods may not have been accurate enough, and people may not want to get carried away with, “I think it might be dimmer.” Anyway, for the present, no supernova. But one will occur, probably within the next 100,000 years. Keep looking upwards!

Some Scientific Curiosities

This week I thought I would try to be entertaining, to distract myself and others from what has happened in Ukraine. So to start with, how big is a bacterium? As you might guess, it depends on which one, but I bet you didn’t guess the biggest. According to a recent article in Science Magazine (doi: 10.1126/science.ada1620), a bacterium has been discovered living in Caribbean mangroves that, while it is a single cell, is 2 cm long. You can see it (proposed name Thiomargarita magnifica) with the naked eye.

More than that, think of the difference between prokaryotes (most bacteria and single-cell microbes) and eukaryotes (most everything else that is bigger). Prokaryotes have free-floating DNA, while eukaryotes package their DNA in a nucleus and put various cell functions into separate vesicles, moving molecules between the vesicles. But this bacterium’s cell includes two membrane sacs, only one of which contains DNA. The other sac accounts for 73% of the total volume and seems to be filled with water. The genome is nearly three times bigger than those of most bacteria.

Now, from Chemistry World. You go to the Moon or Mars, and you need oxygen to breathe. Where do you get it from? One answer is electrolysis, so do you see any problems, assuming you have water and you have electricity? The answer is that it will be up to 11% less efficient. The reason is the lower gravity. If you try to electrolyse water at zero g, such as on the space station, it is known to be less efficient because the gas bubbles have no net force on them. The force arises because different densities generate a weight difference and the lighter gas rises, but at zero g there is no lighter gas – the gases might have different masses, but they all have no weight. So how do they know this effect will apply on Mars or the Moon? They carried out such experiments on board free-fall flights with the help of the European Space Agency. Of course, these free-fall experiments are somewhat brief, as the pilot of the aircraft will have this desire not to fly into the Earth.

The reason the electrolysis is slower is that gas bubble desorption is hindered. Getting the gas off the electrodes relies on density differences, and hence a buoyant force, but in zero gravity there is no such force. One possible solution being considered is a shaking electrolyser. The next thing we shall see is requests for funding to build different sorts of electrolysers. The researchers have considered using centrifuges to construct models of what lower gravity would do, but an alternative might be to have the process itself operating within a centrifuge. It does not need to be a fast-spinning centrifuge, as all you are trying to do is generate the equivalent of 1 g. Also, one suggestion is that people on Mars or the Moon might want to spend a reasonable fraction of their time inside one such large centrifuge, to help keep their bone density up.
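
To see why lower gravity hinders bubble detachment, here is a minimal sketch of the buoyant force on a single bubble, scaled to Earth, Mars and the Moon. The bubble size and the fluid densities are illustrative assumptions, not figures from the study.

```python
import math

RHO_WATER = 1000.0          # kg/m^3 (electrolyte approximated as water; an assumption)
RHO_GAS = 0.09              # kg/m^3, roughly hydrogen at ambient conditions (assumed)
BUBBLE_RADIUS = 0.5e-3      # 0.5 mm bubble, an illustrative size

def buoyant_force(g: float) -> float:
    """Net upward force on a bubble: (rho_liquid - rho_gas) * volume * g."""
    volume = (4 / 3) * math.pi * BUBBLE_RADIUS ** 3
    return (RHO_WATER - RHO_GAS) * volume * g

for body, g in (("Earth", 9.81), ("Mars", 3.71), ("Moon", 1.62), ("orbit (0 g)", 0.0)):
    print(f"{body:12s} g = {g:4.2f} m/s^2 -> buoyant force ~ {buoyant_force(g):.2e} N")
# The force that lifts bubbles off the electrodes scales directly with g,
# which is why electrolysis is slower on the Moon or Mars and stalls at 0 g.
```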

The final oddity comes from Physics World. As you may be aware, according to Einstein’s relativity, time, or more specifically clocks, run slower as the gravity increases. Apparently this was once tested by taking a clock up a mountain and comparing it with one kept at the base, and General Relativity was shown to predict the correct result. However, now we have improved clocks. Apparently the best atomic clocks are so stable they would be out by less than a second after running for the age of the universe. This precision is astonishing. In 2018 researchers at the US National Institute of Standards and Technology compared two such clocks and found their precision was about 1 part in ten to the power of eighteen. It permits a rather astonishing outcome: it is possible to detect the tiny frequency difference between two such clocks if one is just a centimeter higher than the other. This will permit “relativistic geodesy”, which could be used to measure the earth’s shape and the nature of the interior more accurately, as variations in the density of the underlying rock cause minute changes in gravitational potential. Needless to say, there is a catch: the clocks may be very precise but they are not very robust. Taking them outside the lab leads to difficulties, like stopping.
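
The one-centimetre claim can be checked with the standard weak-field formula for gravitational time dilation: the fractional frequency shift between two heights is roughly g times the height difference, divided by the speed of light squared. A minimal sketch:

```python
# Fractional frequency shift between two clocks separated by height h in gravity g:
# delta_f / f ~ g * h / c^2  (weak-field approximation)
G_EARTH = 9.81          # m/s^2
C_LIGHT = 2.998e8       # m/s

def fractional_shift(height_m: float) -> float:
    return G_EARTH * height_m / C_LIGHT ** 2

print(f"1 cm height difference: {fractional_shift(0.01):.1e}")   # ~1.1e-18
print(f"1 m height difference:  {fractional_shift(1.0):.1e}")    # ~1.1e-16
# A clock precision of about 1 part in 10^18 is therefore just enough to resolve
# the shift from a single centimetre of height, as claimed above.
```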

Now they have done better – using strontium atoms, with uncertainty of less than 1 part in ten to the power of twenty! They now claim they can test for quantum gravity. We shall see more in the not too distant future.