Cosmic Catastrophes

Astronomers have found a record explosion (Monthly Notices of the Royal Astronomical Society, Volume 522, Issue 3, July 2023, Pages 3992-4002, https://doi.org/10.1093/mnras/stad1000). The total energy output was stated to be 1.5 x 10^53 ergs. (People still use ergs??) An erg is 10^-7 of a Joule, so the explosion generated 1.5 x 10^46 Joules. These numbers are sort of mind-boggling. Try thinking of tonnes of TNT. The equivalent would be about 3.6 x 10^36 tonnes, or well over 10^28 of the largest hydrogen bombs ever exploded. That is a 1 followed by 28 zeros. Of course we can hardly see it. It is about eight billion light years away, which is probably just as well. It is more than ten times brighter than any supernova ever recorded and so far has been going for three years. The “fireball” is about 100 times the size of the solar system and is about two trillion times brighter than the sun. If you want to impress your friends, the explosion is known as AT2021lwx. Astronomers have charming names for things.
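
For anyone who wants to check the arithmetic, here is a minimal sketch of the unit conversions; the 4.184 GJ per tonne of TNT equivalence and the roughly 50-megatonne yield I assume for the largest hydrogen bomb are my own assumptions, not figures from the MNRAS paper.

```python
# Sanity check of the energy conversions quoted above. The TNT equivalence
# and the ~50 Mt H-bomb yield are assumptions, not values from the paper.
E_ergs = 1.5e53                               # reported total energy output, ergs
E_joules = E_ergs * 1e-7                      # 1 erg = 1e-7 J  ->  1.5e46 J

J_PER_TONNE_TNT = 4.184e9                     # assumed TNT equivalence
tonnes_tnt = E_joules / J_PER_TONNE_TNT       # ~3.6e36 tonnes

LARGEST_H_BOMB_TONNES = 50e6                  # assumed ~50 megatonnes (Tsar Bomba)
bombs = tonnes_tnt / LARGEST_H_BOMB_TONNES    # ~7e28 bombs

print(f"{E_joules:.2e} J = {tonnes_tnt:.2e} t TNT = {bombs:.1e} large H-bombs")
```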

So what could have caused it? One option would be a tidal disruption event. This is essentially when a star strays too close to a black hole and tidal forces tear it apart, leading to stellar matter pouring into the black hole. Oddly enough, this depends on a tidal radius, which in turn depends on the density of the star, so that for a given stellar mass and radius there is a corresponding maximum black hole mass above which disruption cannot occur outside the event horizon. Larger is not better here. For this to be the cause of this event, the star would have to be almost fifteen times the mass of the sun, which is somewhat unlikely because such a massive star only lives for about 15 million years. It is hard to see how such a massive star could be born there, because the black hole would consume the gas first, and it is hard to see how it could move there, so that is probably not the cause.
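
To see why “larger is not better”, here is a rough sketch of the standard tidal-radius argument, r_t ≈ R★(M_BH/M★)^(1/3), compared with the Schwarzschild radius. The scaling is textbook material, but the specific black hole masses are illustrative assumptions, not values from the paper.

```python
# Rough tidal-disruption check: a visible disruption requires the tidal radius
# r_t ~ R_star * (M_bh / M_star)**(1/3) to lie outside the Schwarzschild
# radius r_s = 2 G M_bh / c**2. Illustrative numbers only (sun-like star).
G, c = 6.674e-11, 3.0e8
M_sun, R_sun = 1.989e30, 6.96e8

def tidal_radius(M_bh, M_star=M_sun, R_star=R_sun):
    return R_star * (M_bh / M_star) ** (1 / 3)

def schwarzschild_radius(M_bh):
    return 2 * G * M_bh / c**2

for M_bh in (1e6 * M_sun, 1e8 * M_sun, 1e9 * M_sun):
    rt, rs = tidal_radius(M_bh), schwarzschild_radius(M_bh)
    verdict = "disruption visible" if rt > rs else "star swallowed whole"
    print(f"M_bh = {M_bh/M_sun:.0e} M_sun: r_t/r_s = {rt/rs:.2f} -> {verdict}")
```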

A better alternative is thought to be a vast cloud of gas, probably thousands of times more massive than the sun, falling into a black hole. The energy is simply gravitational potential energy being converted to heat as the gas falls towards the black hole, and it is estimated the temperature reached about 13,000 degrees C. The gas and dust were believed to be in a disk orbiting the black hole, and something must have dislodged it and made it start to fall into the black hole. However, so far nothing has been modelled.
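
As a crude illustration of how gravitational energy can reach this scale, here is a sketch that asks how much mass would have to be swallowed to release the quoted 1.5 x 10^46 J, assuming the commonly quoted ~10% radiative efficiency for accretion. The efficiency figure is my assumption; only a small fraction of a cloud thousands of solar masses in size would need to fall in.

```python
# How much mass must accrete to release ~1.5e46 J, assuming ~10% of the
# rest-mass energy E = m c**2 is radiated (a standard benchmark, assumed here).
c = 3.0e8
M_sun = 1.989e30

E_total = 1.5e46        # J, total energy output quoted for AT2021lwx
efficiency = 0.1        # assumed fraction of rest-mass energy radiated

m_accreted = E_total / (efficiency * c**2)
print(f"~{m_accreted:.1e} kg, i.e. about {m_accreted / M_sun:.1f} solar masses")
```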

So, at the end of the day, we don’t know what it is.

On a much, much smaller scale, astronomers have noticed an optical outburst named ZTF SLRN-2020 (Nature, 617: 38 – 39) that lasted roughly ten days, and then slowly decayed over six months. The start of the burst coincided with infrared emission that lasted long after the optical emission had decayed. The optical radiation was featureless continuum emission towards the red end of the spectrum, together with lines corresponding to molecular absorption.

The first thought was this was a classical nova. These appear to happen when a white dwarf accretes hydrogen from a close companion star. A white dwarf is effectively a dead star, what is left over after nuclear fusion has stopped. They have roughly the mass of the sun packed into the size of the Earth, so they are extremely dense. Hydrogen landing on one will trigger nuclear fusion. However, if this were the cause we would expect to see spectral lines from elements entrained in the gas, and we don’t.

Another possibility was a so-called red nova, which is caused by two stars merging. The nature of the light is quite similar, except that here the power output was far too small. Further, the star’s radius did not change appreciably. After some very detailed observing, they found the source was a sun-like star, and the power output was consistent with the other object in the merger being a giant planet. So it appears the star has swallowed a planet.

How would it do that? If planets are too massive for the spacing between their orbits, their mutual gravity drives them into elliptical orbits, exchanging energy, i.e. one moves closer to the star and the other moves further out. If the exchange of angular momentum leaves the inner one with a very high eccentricity, its periastron (closest approach to the star) can come close enough that frictional interactions cause the orbit to decay. Once that starts there is no escape for the planet.
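
A minimal sketch of the geometry: the periastron of an orbit with semi-major axis a and eccentricity e is a(1 − e), so even a planet that starts well out can graze the star once e gets close to 1. The numbers below are purely illustrative assumptions (a planet at 1 AU around a sun-like star).

```python
# Periastron distance r_peri = a * (1 - e) for semi-major axis a and
# eccentricity e. Illustrative: planet at 1 AU, sun-like stellar radius.
AU = 1.496e11
R_star = 6.96e8            # metres, assumed sun-like stellar radius

a = 1.0 * AU
for e in (0.0, 0.9, 0.99, 0.999):
    r_peri = a * (1 - e)
    print(f"e = {e:5.3f}: periastron = {r_peri / R_star:7.1f} stellar radii")
```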


The Biggest Problem of Them All

From Physics World, something unexpected. The “problem” is, er, the Universe. Problems don’t come much bigger! To put this in perspective, the major theory used to describe it is Einstein’s General Relativity. There is only one problem with that theory: everything should collapse into a singular “point”, and it doesn’t. To get around that problem, Einstein introduced something he called “the Cosmological Constant”, effectively something pulled out of thin air in an ad hoc way to accommodate the obvious fact that the Universe remained large. Worse, this steady universe could not be extrapolated to the infinite past. Somewhere along the line it had to be different from now. It was then that Georges Lemaître proposed that space was expanding, to which Einstein replied that his maths were correct, but his physics abominable. However, Einstein had to backtrack because Hubble showed it was expanding. (Actually, Lemaître had provided evidence, but Hubble’s was better.) The idea was that all matter started from a point, and space expanded to let the energy collapse into matter. Fred Hoyle jokingly referred to this as “The Big Bang”.

All was well. Space was expanding uniformly, so on a very large scale the galaxies were moving away from each other uniformly, with localized variation in between. Why space was expanding was left unanswered. Thus on a very large scale, the gravitational interactions between distant galaxies were diminishing, which appears to violate the law of conservation of energy. (Energy does not have to be conserved, though. Energy is a tricky topic in general relativity.) Hoyle was keen on a steady state universe, and while he accepted the Universe was expanding, he proposed that if everything was moving apart, the loss of gravitational energy was made up for by the creation of new matter. This was not generally accepted, and the question of this energy imbalance remained.

There is worse. To maintain a constant expansion rate, general relativity requires a constant energy density, and how can that arise when space is expanding? The only way would seem to be that this energy density is a property of the vacuum. The vacuum is, therefore, not nothing. There is nothing like an opportunity in physics to get speculation going, and “not nothing” is such an opportunity. Quantum field theory announced that the vacuum is full of extraordinarily tiny harmonic oscillators, which convey a zero point energy to space. We have our energy accounted for. All was good, until it wasn’t. The obvious step was to take the quantum field theory value and see whether it fitted with observations of galaxies. It did not. In fact the error was a factor of ten multiplied by itself 120 times, i.e. 10^120 (Adler, R. J., Casey, B., Jacob, O. C. 1995. Vacuum catastrophe: an elementary exposition of the cosmological constant problem. Am. J. Phys. 63: 620 – 626). This was the most horrendous disagreement between calculation and observation ever.

Then came a shock: not only were the galaxies moving apart, but the motion was accelerating. The cause was labelled dark energy, although it should be noted that at this point dark energy is merely a placeholder term acknowledging that something must be causing the acceleration. So, what is it? The good news is that whatever it is can also account for the general expansion. All we need is something that is getting bigger as the universe expands.

Now, we have a proposition, which is called “cosmological coupling”. The concept starts with the observation that the masses of black holes at the hearts of distant galaxies have been growing about ten times faster than simply accreting mass or merging with other black holes would allow. The coupling means that the growth of the black holes matched the accelerating expansion of the universe. The concept seems to be that the singularity of black holes is replaced by additional “vacuum energy”. The coupling means that if the volume of the universe doubles, so does the mass of the black holes, while the number of black holes remains constant. The logic is that “something” must give rise to the expansion of the universe, and since no other object exhibits similar behaviour, black holes must be the “something”. Is that valid? There is a problem for me with that explanation, apart from the fact that correlation does not mean causation. The evidence is the Universe took on massive expansion initially, then calmed down, and then the expansion accelerated again. During the initial expansion there would be few if any black holes. Then, suddenly, there was a massive growth of them, but that happened at a time when the expansion of the universe was at its slowest; then, when the final acceleration started, the black holes had settled down to minimal growth. The required correlation for the hypothesis seems to be, if anything, an anti-correlation.

Climate Change: A Space Solution?

By now, many around the world will have realized we are experiencing climate change, thanks to our predilection for burning fossil fuels. The politicians have made their usual platitudinous statements that this problem will be solved, say, twenty years out. It is now thirty years since these statements started being made, and we find ourselves worse off than when the politicians started. Their basic idea seems to be that the crisis gets unmanageable in, say, sixty years, so we can leave it for now. What actually happens is, er, nothing in the immediate future. It can be left for the politicians thirty years from now. Then, when the thirty years have passed, it is suddenly discovered that it is all a little harder than expected, but they can introduce things like carbon trading, which employs people like themselves, and they can exhort people to buy electric cars. (If you live somewhere like Calgary and want to go skiing at Banff, it appears you need to prepare your car four hours before using it, or maintain battery warmers, because the batteries do not like the cold one bit.)

Bromley et al. in PLOS Climate (https://doi.org/10.1371/journal.pclm.0000133) have a solution. According to this article, to overcome the forcing of the greenhouse gases currently in the atmosphere, all you have to do is reduce the solar input by 1.8%. What could be simpler? This might be easier than increasing the albedo.

The question then is, how to do this? The proposed answer is to take fine fluffy dust from the Moon and propel it to the Earth-Sun L1 position. This will provide several days of shading, while the solar wind and radiation slowly clear the dust away. How much dust? About ten billion kg, which is about a thousand times more mass than humans have ever sent into space. Over a ten year period, this corresponds to a sphere of radius roughly 200 m, comparable to the annual excavation from many open pit mines on Earth. The advantage of using the Moon, of course, is that its gravity is about 17% that of Earth, so you need much less energy to eject the dust. The difficulty is that you have to put sufficient equipment on the Moon’s surface to gather and eject the dust. One difficulty I see here is that while there is plenty of dust on the Moon, it is not in a particularly deep layer, which means the equipment has to keep moving. Larger fluffy particles are apparently preferred, but fluffy particles would probably be formed in a fluid eruption, and as far as we know, that is less likely on the Moon.
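
As a sanity check on the quantity of dust, here is a sketch that back-calculates the radius of a sphere containing 10^10 kg. The bulk density of loose lunar regolith is my assumption (somewhere around 300 – 1500 kg/m³ depending on how fluffy it is), and the fluffier end lands close to the quoted ~200 m.

```python
# Radius of a sphere holding 1e10 kg of dust, for a few assumed bulk
# densities of loose ("fluffy") lunar regolith. Densities are assumptions.
from math import pi

mass = 1e10                        # kg of dust quoted in the proposal
for density in (300, 1000, 1500):  # kg/m^3, assumed range for loose regolith
    volume = mass / density
    radius = (3 * volume / (4 * pi)) ** (1 / 3)
    print(f"density {density:4d} kg/m^3 -> sphere radius ~{radius:.0f} m")
```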

Then there are problems. The most obvious one, apart from the cost of the whole exercise, is the need for accuracy. If the dust is outside the lines drawn from the edges of the Sun to the Earth, then the scattering can increase the solar radiation reaching Earth. Oops. Then there is another problem. Unlike L4 and L5, which are regions, L1 really is a point where an object will corotate. If a particle is even 1 km off the point, it could drift away by up to 1000 km in a year, and if it does that, perforce it will drift out of the Sun-Earth line, in which case the dust will be enhancing the illumination of Earth. Again, oops. Added to this are a small number of further effects, the most obvious being solar wind and radiation pressure, which will push objects away from L1.

The proposed approach is to launch dust at 4.7 km/s towards L1, and to do it from the Moon when the Moon is close to being in line, so that the dust, as it streams towards L1, continues to provide shielding while it is in flight. The launching would require 10^17 J, which is roughly the energy generated by a few square km of solar panels. One of the claimed advantages is that the dust could be sent in pulses, timed to cool places with major heat problems. It is probably unsurprising that bigger particles are less efficient at shading sunlight, at least on a per-mass basis, simply because there is mass behind the front surface doing nothing. Particles that are too small neither last very long in the required position nor offer as much shielding. As it happens, somewhat fortuitously, the best size is 0.2 μm, and that happens to be the average size of lunar regolith dust.
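
The quoted 10^17 J follows straight from the kinetic energy needed to throw 10^10 kg off the Moon at 4.7 km/s; here is a minimal sketch of that arithmetic.

```python
# Kinetic energy to launch the quoted mass of dust at the quoted speed.
mass = 1e10            # kg of dust
v = 4.7e3              # launch speed, m/s (quoted value)
E = 0.5 * mass * v**2
print(f"Launch energy ~{E:.1e} J")   # ~1.1e17 J, consistent with the quoted 1e17 J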

One of the advantages claimed for this method is that once the week or so is over, there are no long-term consequences from the dust. One of the disadvantages is one that goes with any planetary engineering proposal: what is the minimum agreement required from the world population, how do you get it, and what happens if someone does it anyway? Would you vote for it?

Why is there no Disruptive Science Being Published?

One paper (Park et al. Nature 613: 138) that caught my attention over the post-Christmas period made the proposition that scientific papers have been getting less disruptive over time, to the point where, in the physical sciences, there is now essentially very little disruption at all. First, what do I mean by disruption? To me, this is a publication that is at cross-purposes with established thought. Thus the recent claim there is no sterile neutrino is at best barely disruptive, because its existence was merely a “maybe” solution to another problem. So why has this happened? One answer might be that we know everything, so there is neither room nor need for disruption. I can’t accept that. I feel that scientists do not wish to change: they wish to keep the current supply of funds coming their way. Disruptive papers keep getting rejected because what reviewer who has spent decades on research wants to let through a paper that essentially says he is wrong? Who is the peer reviewer for a disruptive paper?

Let me give a personal example. I made a few efforts to publish my theory of planetary formation in scientific journals. The standard theory is that the accretion disk dust formed planetesimals by some totally unknown mechanism, and these eventually collided to form planets. There is a small industry in running computer simulations of such collisions. My paper was usually rejected, the only stated reason being that it did not have computer simulations. However, the proposition was that the growth was caused chemically, and it used the approximation that there were no collisions. There was no evidence the reviewer read the paper past the absence of any mention of simulations in the abstract. No comment on the fact that here was the first stated mechanism for how accretion started, together with a testable mathematical relationship for planetary spacing.

If that is bad, there is worse. The American Physical Society has published a report of a survey relating to ethics (Houle, F. H., Kirby, K. P. & Marder, M. P. Physics Today 76, 28 (2023)). In a 2003 survey, 3.9% of early-career physicists admitted that they had been required to falsify data, or did it anyway, to get to publication faster and to get more papers. By 2020, that number had risen to 7.3%. Now, falsifying data will only occur to get the result that fits in with standard thinking, because if it doesn’t, someone will check it.

There is an even worse problem: that of assertion. The correct data are obtained, any reasonable interpretation will say they contradict the standard thinking, but the results are reported in a way that makes them appear to comply. This will be a bit obscure for some, but I shall try to make it understandable. The paper is: Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759. At the time there was a debate whether cyclopropane could delocalize electrons. Strange effects were observed and there were two possible explanations: (1) it did delocalize electrons; (2) there were electric field effects. The difference was that both would stabilize positive charge on an adjacent centre, but the electric field effects would reverse if the charge were opposite. So while it was known that putting a cyclopropyl ring adjacent to a cationic centre stabilized it, what happened to an anionic centre? The short answer is that most efforts to make R – (C-) – Δ, where Δ means cyclopropyl, failed, whereas R – (C-) – H is easy to make. Does that look as if we are seeing stabilization? Nevertheless, if we put the cyclopropyl group on a benzylic carbon by changing R to a phenyl group φ, so we have φ – (C-) – Δ, an anion was just able to be made if potassium was the counter-ion. Accordingly, the fact that the anion could be made was attributed to the stabilizing effect of cyclopropyl. No thought was given to the fact that any chemist who cannot make the benzyl anion φ – (C-) – H should be sent home in disgrace. One might at least compare like with like, but not, apparently, if you would get the answer you don’t want. What is even more interesting is that this rather bizarre conclusion has gone unremarked (apart from by me) since then.

This issue was once the source of strong debate, but a review came out and “settled” it. How? By ignoring every paper that disagreed with it, and citing the authority of “quantum mechanics”. I would not disagree that quantum mechanics is correct, but computations can be wrong. In this case, they used the same computer programmes that “proved” the exceptional stability of polywater. Oops. As for the overlooked papers, I later wrote a review with a logic analysis. Chemistry journals do not publish logic analyses. So in my view, the reason there are no disruptive papers in the physical sciences is quite clear: nobody really wants them. Not enough to ask for them.

Finally, some examples of papers that in my opinion really should have done better. Weihs et al. (1998) arXiv:quant-ph/9810080 v1 claimed to demonstrate clear violations of Bell’s inequality, but the analysis involved only 5% of the photons. What happened to the other 95% is not disclosed. The formation of life is critically dependent on reduced chemicals being available. A large proportion of ammonia was found in ancient seawater trapped in rocks at Barberton (de Ronde et al. Geochim. Cosmochim. Acta 61: 4025-4042). Thus information critical for an understanding of biogenesis was obtained, but it was not even mentioned in the abstract or in the keywords, so it is not searchable by computer. This would have disrupted the standard thinking on the ancient atmosphere, but nobody knew about it. In another paper, spectroscopy coupled with the standard theory predicted strong bathochromic shifts (to longer wavelengths) for a limited number of carbenium ions, but strong hypsochromic shifts were observed, without comment (Schmitze, L. R.; Sorensen, T. S. J. Am. Chem. Soc. 1982, 104, 2600-2604). So why was no fuss made about these things by the discoverers? Quite simply, they wanted to be in with the crowd. Be good, get papers, get funded. Don’t rock the boat! After all, nature does not care whether we understand or not.

Fusion Energy on the Horizon? Again?

The big news recently, as reported in Nature (13 December), was that researchers at the US National Ignition Facility carried out a reaction that made more energy than was put in. That looks great, right? The energy crisis is solved. Not so fast. What actually happened was that 192 lasers delivered 2.05 MJ of energy onto a pea-sized gold cylinder containing a frozen pellet of deuterium and tritium. The reason for all the lasers was to ensure that the input energy was symmetrical, and that caused the capsule to collapse under pressures and temperatures only seen in stars and thermonuclear weapons. The hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory claimed the reaction released 3.15 MJ of energy, roughly 54% more than was delivered by the lasers, and more than double the previous record of 1.3 MJ.

Unfortunately, the situation is a little less rosy than that might appear. While the actual reaction was a net energy producer based on the energy input to the hydrogen, the lasers were consuming power even when not firing at the hydrogen, and between start-up and shut-down they consumed 322 MJ of energy. So while more energy came out of the target than went in to compress it, if we count the additional energy consumed elsewhere but necessary to do the experiment, then slightly less than 1% of what went in came out. That is not such a roaring success. However, before we get critical, the setup was not designed to produce power. Rather, it was designed to produce data to better understand what is required to achieve fusion. That is the platitudinous answer. The real reason was to help nuclear weapons scientists understand what happens with the intense heat and pressure of a fusion reaction. So the first question is, “What next?” Weapons research, or a contribution towards fusion energy for peaceful purposes?

The next question is, will this approach contribute to an energy program? If we stop and think, the gold capsule with its frozen deuterium-tritium pellet had to be inserted, then everything lined up for a concentrated burst. You get a burst of heat, but still only about 3 MJ of it. You may be quite fortunate to convert that to 1 MJ of electricity. Now, if it takes, say, a thousand seconds before you can fire up the next capsule, you have 1 kW of power. Would what you sell that for pay for the gold capsules consumed?
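
To make the bookkeeping explicit, here is a sketch of the gain figures and the back-of-envelope power estimate above; the one-third heat-to-electricity conversion and the thousand-second repetition time are assumptions used purely for illustration.

```python
# Gain and average-power bookkeeping for the NIF shot described above.
E_laser_on_target = 2.05e6     # J delivered by the lasers to the capsule
E_fusion_out = 3.15e6          # J of fusion energy released
E_facility = 322e6             # J consumed by the whole facility for the shot

target_gain = E_fusion_out / E_laser_on_target    # ~1.54, the "54% more"
wall_plug_gain = E_fusion_out / E_facility         # just under 1%

# Assumed for illustration: ~1/3 of the heat becomes electricity,
# and one shot every 1000 seconds.
E_electric = E_fusion_out / 3                      # ~1 MJ
average_power = E_electric / 1000                  # ~1 kW

print(f"target gain {target_gain:.2f}, wall-plug {wall_plug_gain:.2%}, "
      f"average power ~{average_power / 1e3:.1f} kW")
```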

That raises the question, how do you convert the heat to electricity? The most common answer offered appears to be to use it to boil water and use the steam to drive a turbine. A smarter way might be to use magnetohydrodynamics. The concept is that the hot gas is made to generate a high-velocity plasma, and as that is slowed down, the kinetic energy of the plasma is converted to electricity. The Russians tried to make electricity this way by burning coal in oxygen to make a plasma at about 4000 degrees K. The theoretical maximum fraction of the energy that can be extracted, U, is given by

    U  =  (T – T*)/T

where T is the maximum temperature and T* is the temperature at which the plasma degrades and the extraction of further electricity becomes impossible. As it turned out, it was possible to get approximately 60% energy conversion. Ultimately, this power source failed, mainly because the coal produced a slag which damaged the electrodes. In theory, the energy could be drawn at almost 100% efficiency.
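
A minimal numeric sketch of that expression: the 4000 K plasma temperature comes from the paragraph above, while the cut-off temperature T* at which the plasma stops conducting usefully is my assumed illustrative value, chosen to show how a ~60% figure can arise.

```python
# Maximum fraction of the thermal energy extractable from an MHD generator:
# U = (T - T_star) / T, where T is the plasma temperature and T_star the
# temperature below which the plasma no longer conducts usefully.
def mhd_max_fraction(T, T_star):
    return (T - T_star) / T

T = 4000.0         # K, coal-oxygen plasma temperature quoted above
T_star = 1600.0    # K, assumed cut-off temperature (illustrative)
print(f"maximum extractable fraction ~{mhd_max_fraction(T, T_star):.0%}")  # ~60%
```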

Once the recovery of energy is solved, there remains the problem of increasing the burn rate. Waiting for everything to cool down and then adding another pellet cannot work, but expecting a pellet of hydrogen to remain in condensed form when inserted into a temperature of, say, a million degrees is asking a lot.

This will be my last post for the year, so let me wish you all a Very Merry Christmas, and a prosperous and successful New Year. I shall post again in mid-January, after a summer vacation.

Meanwhile, for any who feel they have an interest in physics, in the Facebook Theoretical Physics group I am posting a series that demonstrates why this year’s Nobel Prize was wrongly assigned, as Alain Aspect did not demonstrate violations of Bell’s inequality. Your challenge, for the Christmas period, is to prove me wrong and stand up for the Swedish Academy. If it is too difficult to find, I may post the sequence here if there is interest.

Solar Cycles

As you may know, our sun has a solar cycle of about 11 years, during which the sun’s magnetic field changes, oscillating from a strong maximum down to a minimum and back again for the next cycle. During the minimum there are far fewer sunspots, and the power output is also at a minimum. The last minimum started about 2017, so now we can expect increased activity. It may come as something of a disappointment that some of our peak temperatures happened during a solar minimum, because we can expect the next few years to be even hotter and the effects of climate change more dramatic, but that is not what this post is about. The question is, is our sun a typical star, or is it unusual?

That raises the question, if it were unusual, how can we tell?

The power output may vary, but not extremely. The output generally is reasonably constant. The variation in the solar output we receive over a cycle is about 0.1%, which translates to no more than roughly 0.1 of a degree Kelvin (or Celsius) here. There may appear to be greater changes because the frequency and strength of aurorae vary far more noticeably. So how do we tell whether other stars have similar cycles? As you might guess, the power input from other stars is trivial compared even with that small variation. Any variation in total power output would be extremely difficult to detect, especially over time, since instrument calibration could easily vary by more. A non-scientist may have trouble with this statement, but it would be extremely difficult to make a sensitive instrument that would record a dead flat line for a tiny constant power source over an eleven-year period. Over shorter time periods the power input from a star does vary in a clearly detectable way, and this has been the basis of the Kepler telescope’s planet detections.

However, as outlined in Physics World (April 5) there is a way to detect changes in magnetic fields. Stars are so hot they ionize elements, and some absorption lines in the spectrum due to ionized calcium happen to be sensitive to the stellar magnetic field. One survey showed that about half the stars surveyed appeared to have such starspot cycles, and the periodic time could be measured for half of those with the cycles. It should be noted that the inability to detect the lines does not mean the star does not have such a cycle; it may mean that, working at the limits of detection anyway, the signals were too weak to be certain of their presence.

The average length of such starspot cycles was about ten years, which is similar to our sun’s eleven-year cycle, although one star had a cycle lasting only four years. Another, HD 166620, had a cycle seventeen years long, although “had” is the operative tense. Somewhere between 1995 and 2004, HD 166620’s starspot cycle simply turned off. (The uncertainty in the timing arose because the study was discontinued due to a change of observatories, and the new one was receiving an upgrade that was not completed until 2004.) We now await it starting up again.

Maybe that could be a long wait. In 1645 the Sun entered what we call the Maunder minimum. During the bottom of a solar cycle we would expect at least a dozen or so sunspots per year, and at the maximum, over 100. Between 1672 and 1699 fewer than 50 sunspots were observed. It appeared that for about 70 years the sun’s magnetic field was mostly turned off. So maybe HD 166620 is sustaining a similar minimum. Maybe there is a planet with citizens complaining about the cold.

What causes that? Interestingly, Metcalfe et al. (Astrophys. J. Lett. 826: L2, 2016), by correlating stellar rotation with age for stars older than the sun, showed that while stars start out spinning rapidly, magnetic braking gradually slows them down; as they slow, it is argued, Maunder Minimum events become more common, and eventually the star slows sufficiently that the dynamo effect fails and it enters a grand minimum. So eventually the Sun’s magnetic dynamo may shut down completely. Apparently, some stars display somewhat chaotic activity, and some have spells of lethargy: HD 101501 shut down between 1980 and 1990 before reactivating, a rather short Maunder Minimum.

So when you hear people say the sun is just an average sort of star, they are fairly close to the truth. But when you hear them say the power output will steadily increase, that may not be exactly correct.

Is Science Sometimes Settled Wrongly?

In a post two weeks ago I raised the issue of “settled science”. The concept was that there have to be things you are not persistently re-checking. Obviously, you do not want everyone wasting time rechecking the melting point of benzoic acid, but fundamental theory is different. Who settles that, and how? What is sufficient to say, “We know it must be that!”? In my opinion, admittedly biased, there really is something rotten in the state of science. Once upon a time, namely in the 19th century, there were many mistakes made by scientists, but they were sorted out by vigorous debate. The Solvay conferences continued that tradition in the 1920s for quantum mechanics, but something went wrong in the second half of the twentieth century. A prime example occurred in 1952, when David Bohm decided the mysticism inherent in the Copenhagen Interpretation of quantum mechanics required a causal interpretation, and he published a paper in the Physical Review. He expected a storm of controversy and he received – silence. What had happened was that J. Robert Oppenheimer, previously Bohm’s mentor, had called together a group of leading physicists to find an error in the paper. When they failed, Oppenheimer told the group, “If we cannot disprove Bohm we can all agree to ignore him”. Some physicists are quite happy to say Bohm is wrong; they don’t actually know what Bohm said, but they know he is wrong. (https://www.infinitepotential.com/david-bohm-silenced-by-his-colleagues/) If that were one isolated example, it would be wrong, but not exactly a crisis. Unfortunately, it is not an isolated case. We cannot know how bad the problem is because we cannot sample it properly.

A complicating issue is how science works. There are millions of scientific papers produced every year. Thanks to time constraints, very few are read by more than a handful of people. The answer to that would be to publish in-depth reviews, but nobody appears to publish logic analysis reviews. I believe science can be “settled” by quite unreasonable means. As an example, my career went “off the standard rails” with my PhD thesis.

My supervisor’s projects would not work, so I selected my own. There was a serious debate at the time whether strained systems could delocalize their electrons into adjacent unsaturation in the same way double bonds did. My results showed they did not, but it became obvious that cyclopropane stabilized adjacent positive charge. Since olefins do this by delocalizing electrons, it was decided that cyclopropane did that too. When the time came for my supervisor to publish, he refused to publish the most compelling results, despite suggesting this sequence of experiments was his only contribution, because the debate was settling down to the other side. An important part of logic must now be considered. Suppose we can say, if theory A is correct, then we shall see Q. If we see Q we can say that the result is consistent with A, but we cannot say that theory B would not predict Q also. So the question is, is there an alternative?

The answer is yes. The strain arises from orbitals containing electrons being bent inwards towards the centre of the system, hence coming closer to each other. Electrons repel each other. But it also becomes obvious that if you put positive charge adjacent to the ring, that charge will attract the electrons, overriding the repulsion, and the movement of electrons towards the positive charge lowers the energy and hence stabilizes the system. I actually used an alternative way of looking at it: if you move charge, by bending the orbitals, you should generate a polarization field, and that stabilizes the positive charge. So why look at it like that? Because if the cause of a changing electric field is behind a wall, say, you cannot tell the difference between charge moving and charge being added. Since the field contains the energy, the two approaches give the same strain energy, but by considering an added pseudocharge it was easy to put numbers on the effects.

However, the other side won, by “proving” delocalization through molecular orbital theory, which, as an aside, assumes the very delocalization it sets out to prove. Aristotle had harsh words for people who prove what they assume after a tortuous path. As another aside, the same quantum methodology proved the stability of “polywater” – where your water could turn into a toffee-like consistency. A review came out and confirmed the “other side” by showing numerous examples of where the strained ring stabilized positive charge. It also ignored everything that contradicted it.

Much later I wrote a review that showed this first one had ignored up to sixty different types of experimental result that contradicted the conclusion. That was never published by a journal – the three reasons for rejection, in order, were: not enough pictures and too much mathematics; this is settled; and, from some other journals, “We do not publish logic analyses”.

I most certainly do not want this post to simply turn into a general whinge, so I shall stop there, other than to say I could make five other similar cases that I know about. If that much happens to, or comes to the attention of, one person, how general is it? Perhaps a final comment might be of interest. As those who have followed my earlier posts may know, I concluded that the argument that the present Nobel prize winners in physics found violations of Bell’s Inequality is incorrect in logic. (Their procedure violates Noether’s theorem.) When the prize was announced, I sent a polite communication to the Swedish Academy of Science, stating one reason why the logic was wrong, and asking them, if I had missed something, to inform me where I was wrong. So far, over five weeks later, no response. It appears no response might be the new standard way of dealing with those who question the standard approach.

Success! Defence Against Asteroids

Most people will know that about 66 million years ago an asteroid with a diameter of about 10 km struck the Yucatán peninsula and exterminated the dinosaurs, or at least did great damage to them from which they never recovered. The shock-wave probably also initiated the formation of the Deccan Traps, and the unpleasant emission of poisonous gases would have finished off any remaining dinosaurs. The crater is 180 km wide and 20 km deep. That was a very sizeable excavation. Rather wisely, we would like to avoid a similar fate, and the question is, can we do anything about it? NASA thinks so, and they carried out an experiment.

I would be extremely surprised if, five years ago, anyone reading this had heard of Dimorphos. Dimorphos is a small asteroid with dimensions about those of the original Colosseum, i.e. before vandals, like the Catholic Church, took stones away to make their own buildings. By now you will be aware that Dimorphos orbits a larger asteroid called Didymos. What NASA did was to send a metallic object of dimensions 1.8 x 1.9 x 2.6 meters, of mass 570 kg, and velocity 22,530 km/hr to crash into Dimorphos to slightly slow its orbital speed, which would change its orbital parameters. It would also change the orbital characteristics of the pair around the sun. Dimorphos has a “diameter” of about 160 m, Didymos about 780 m. Neither is spherical, hence the quotation marks.

This explains why NASA selected Dimorphos for the collision. First, it is not that far from Earth, while the two on their current orbits will not collide with Earth. Being close to Earth, at least when their orbits bring them close, lowers the energy requirement to send an object there. It is also easier to observe what happens, and hence to determine the consequences more accurately. The second reason is that Dimorphos is reasonably small, so if a collision changes its dynamics, we shall be able to see by how much. At first sight you might say that conservation of momentum makes that obvious, but it is actually more difficult to know because it depends on what takes the momentum away after the collision. If the collision is perfectly inelastic, the object gets “absorbed” by the target, which stays intact, and we simply add its relative momentum to that of the target. However, real collisions are seldom perfectly inelastic, and it would have been considered important to determine how inelastic this one was. A further possibility is that the asteroid could fragment and send bits off in different directions. Think of Newton’s cradle: you hit one end and the ball stops, but another flies off from the other end, and the total stationary mass is the same. NASA would wish to know how well the asteroid held together. A final reason for selecting Dimorphos is that by being tethered gravitationally to Didymos, it could not go flying off in some unfortunate direction and eventually collide with Earth. It is interesting that the change of momentum is shared between the two bodies through their gravitational interaction.
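
To see the scale of the effect, here is a sketch of the simplest possible momentum estimate, a perfectly inelastic hit with no ejecta. The spacecraft mass and speed are from the paragraph above; the mass of Dimorphos is not given here, so the ~5 x 10^9 kg figure below is an illustrative assumption, and ejecta flying off can make the real velocity change several times larger.

```python
# Simplest momentum estimate for the impact: perfectly inelastic, no ejecta.
# Spacecraft numbers are from the post; the Dimorphos mass (~5e9 kg) is an
# assumed illustrative value, not a quoted figure.
m_craft = 570.0                      # kg
v_craft = 22_530 / 3.6               # km/hr -> m/s (~6.3 km/s)
M_dimorphos = 5e9                    # kg, assumed

dv = m_craft * v_craft / (M_dimorphos + m_craft)
print(f"velocity change ~{dv * 1000:.1f} mm/s")   # a fraction of a cm/s
```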

So, what happened, apart from the collision? There was another spacecraft trailing behind: the Italian LICIACube (don’t you like these names? It is an acronym for “Light Italian Cubesat for Imaging Asteroids”, and I guess they were so proud of the shape they had to have “cube” twice!). Anyway, this took photographs before and after impact, and after impact Dimorphos was surrounded by a shower of material flung up from the asteroid. You could no longer see the asteroid for the cloud of debris. Of course Dimorphos survived, and the good news is we now know that the periodic time of Dimorphos around Didymos has been shortened by 32 minutes. That is a genuine success. (Apparently, initially a change by as little as 73 seconds would have been considered a success!) Also, very importantly, Dimorphos held together. It is not a loosely bound rubble pile, which would be no surprise to anyone who has read my ebook “Planetary Formation and Biogenesis”.

This raises another interesting point. The impact slowed Dimorphos down relative to Didymos, so Dimorphos fell closer to Didymos and sped up. That is why the periodic time was shortened. The speeding up is because bringing the objects closer together lowers the (negative) potential energy and hence the total energy, and since for an orbit the total energy equals the kinetic energy with the opposite sign, the kinetic energy increases. (Falling closer also shortens the path length, which further lowers the periodic time.)
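
For anyone who wants the orbital mechanics step spelled out, Kepler’s third law (P² ∝ a³) links the shortened period to a smaller orbit. The pre-impact period of roughly 11.9 hours used below is an assumption for illustration, not a figure from this post.

```python
# Kepler's third law: P**2 proportional to a**3, so a scales as P**(2/3).
# Estimate the fractional shrinkage of the orbit from the shortened period.
P_before_hr = 11.9                 # hours, assumed pre-impact period (illustrative)
P_after_hr = P_before_hr - 32 / 60 # shortened by 32 minutes (from the post)

a_ratio = (P_after_hr / P_before_hr) ** (2 / 3)
print(f"semi-major axis shrinks to ~{a_ratio:.3f} of its original value "
      f"(~{(1 - a_ratio) * 100:.1f}% closer)")
```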

The reason for all this is to develop a planetary protection system. If you know that an asteroid is going to strike Earth, what do you do? The obvious answer is to divert it, but how? The answer NASA has tested is to strike it with a fast-moving small object. But, you might protest, an object like that would not make much of a change in the orbit of a dinosaur killer. The point is, it doesn’t have to. Take a laser pointer and aim it at a screen. Now, give it a gentle nudge so it changes where it impacts. If the screen is a few centimeters away, the lateral shift is trivial, but if the screen is a kilometer away, the lateral shift is now significant; in fact the lateral shift is proportional to the distance. The idea is that if you can catch the asteroid far enough away, it won’t strike Earth because the lateral shift will be sufficient.

You might protest that asteroids do not travel in a straight line. No, they don’t, and in fact have trajectories that are part of an ellipse. However, this is still a line, and will still shift laterally. The mathematics are a bit more complicated because the asteroid will return to somewhere fairly close to where it was impacted, but if you can nudge it sufficiently far away from Earth it will miss. How big a nudge? That is the question about which this collision was designed to provide us with clues.
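
The “small nudge, long lever arm” point can be put in numbers with the crudest possible straight-line estimate: the lateral miss distance is roughly the velocity change multiplied by the warning time, and for a clean miss it needs to exceed about one Earth radius. Everything below is illustrative; the real elliptical geometry changes the details.

```python
# Crude straight-line estimate: lateral shift ~ velocity change * warning time.
# For a clean miss the shift should exceed roughly one Earth radius (~6400 km).
# The real geometry is elliptical, so treat this as an order of magnitude only.
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6400

for dv in (1e-3, 1e-2):                      # m/s, illustrative nudges
    for years in (1, 10, 30):
        shift_km = dv * years * SECONDS_PER_YEAR / 1000
        flag = "miss" if shift_km > EARTH_RADIUS_KM else "too small"
        print(f"dv={dv * 1000:4.0f} mm/s, {years:2d} yr -> {shift_km:8,.0f} km ({flag})")
```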

If something like Dimorphos struck Earth it would produce a crater about 1.6 km wide and 370 m deep, while the pressure wave would knock buildings over tens of km away. If it struck the centre of London, windows would break all over South-East England. There would be no survivors in central London, but maybe some on the outskirts. This small asteroid would be the equivalent of a good-sized hydrogen bomb, and, as you should realize, a much larger asteroid would do far more damage. If you are interested in further information, I have some data and a discussion of such collisions in my ebook noted above.

2022 Nobel Prize in Physics

When I woke up on Wednesday, I heard the Physics prize being announced: it was given for unravelling quantum entanglement, and specifically to Alain Aspect for the first “convincing demonstration of violations of Bell’s Inequalities”. In the unlikely event you recall my posts https://wordpress.com/post/ianmillerblog.wordpress.com/542 and https://wordpress.com/post/ianmillerblog.wordpress.com/547, you will realize that I argue he did no such thing. In my ebook “Guidance Waves” I made this point ten years ago.

So, how do these Inequalities work? Basically, to use them you need measurements on a sequence of equivalent things (in Aspect’s case, photons), where measurements that can take one of two values (such as pass/fail) are made on the items under three DIFFERENT conditions. Bell illustrated this need for different conditions by washing socks at 25 degrees, 35 degrees and 45 degrees C. What Bell did was to derive a relationship that mixed up the passes and fails from the different conditions.

The issue I dispute involves rotational invariance. Each photon is polarized, which means it will go through a filter aligned with its polarization, and not one at right angles to it. In between, there is a probabilistic relation called the Malus Law. So what Aspect did was to pass photons through polarizing filters oriented at different angles; for example, a + result was obtained with a filter that started vertical and a fail with a filter that started horizontal. Those configurations were set A. Set B involved rotating a filter clockwise by 22.5 degrees, set C by rotating a filter by 45 degrees. Three joint sets were collected: A+.B-; B+.C-, and A+.C-. These sets have to be different for the inequality to be used.
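
For readers unfamiliar with it, the Malus Law referred to above simply says that a photon polarized at angle φ passes a filter set at angle θ with probability cos²(φ − θ). Here is a minimal sketch using the relative angles mentioned in the text; nothing in it takes a side in the argument that follows.

```python
# Malus law: probability that a photon polarized at angle phi passes a
# polarizing filter oriented at angle theta is cos^2(phi - theta).
import math

def malus_pass_probability(phi_deg, theta_deg):
    return math.cos(math.radians(phi_deg - theta_deg)) ** 2

for rel_angle in (0, 22.5, 45, 90):   # relative angles used in the text
    print(f"relative angle {rel_angle:4.1f} deg -> "
          f"pass probability {malus_pass_probability(rel_angle, 0):.3f}")
```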

My objection is simple. Aspect showed that rotating one filter always gave a constant count, so the source was giving off photons of all polarizations equally. If so, B+.C- is exactly the same as A+.B- except that the key parts of the apparatus have been rotated by 22.5 degrees. If the background has no angular preference, it should not matter at what angle the action is carried out. Thus, if you are playing billiards, you do not have to worry about whether the table is pointing north or east. (If you use that as an excuse for a poor shot, quite rightly you will be put in your place.) This fact, that the orientation of how you measure something does not affect the answer unless there is an angular asymmetry, follows from Noether’s theorem, on which all the conservation laws of physics depend. Angular symmetry leads to the conservation of angular momentum, a physical law on which, ironically enough, Aspect’s experiment depends to get the entanglement, and he then proceeded to ignore it when evaluating the results of his experiment.

Interestingly, if the background has a directional preference, the above argument does not apply. Thus, a compass needle always tries to point north because, unlike the billiards example, there is a force that interacts with the needle, coming from the planet’s magnetic field. Now, suppose you employ a polarized source. Then there will be a preferred background direction, and the Aspect experiment does generate sufficient independent variables. However, calculations show that if you properly count the number of photons and apply the Malus Law, the results should comply with Bell’s inequality.

Accordingly, I am in the weird position of having published an argument that shows something in the scientific literature is wrong, nobody has ever found a flaw in my argument, yet the something has been awarded a Nobel Prize. Not many people get into that position. Does that mean that entanglement does not occur? No, it does not. The entanglement is simply a consequence of the law of conservation of angular momentum. There are further consequences of my argument, but they go beyond this post.

Nuclear War is Not Good

Yes, well that is sort of obvious, but how not good? Ukraine has brought the scenario of a nuclear war to the forefront, which raises the question, what would the outcome be? You may have heard estimates from military hawks that, apart from those killed by the blasts and those who got excessively irradiated, all would be well. Americans tend to be more hawkish because the blasts would be “over there”, although if the enemy were Russia, Russia should be able to bring it to America. There is an article in Nature (579: 485 – 487) that paints a worse picture. In the worst case, they estimate deaths of up to 5 billion, and none of these are due to the actual blasts or the radiation; they are additional extras. The problem lies in the food supply.

Suppose there was a war between India and Pakistan. Each fires nuclear weapons, first against military targets, then against cities. Tens of millions die in the blasts. However, a band of soot rises into the air, and temperatures drop. Crop yields drop dramatically from California to China, affecting dozens of countries. Because of the limited food yields, more than a billion people would suffer from food shortages. The question then is, how valid are these sorts of predictions?

Nuclear winter was first studied during the cold war. The first efforts described how such smoke would drop the planet into a deep freeze, lasting for months, even in summer. Later studies argued this effect was overdone and it would not end up with such a horrific chill, and unfortunately that has encouraged some politicians who are less mathematically inclined and cannot realize that “less than a horrific chill” can still be bad.

India and Pakistan each have around 150 nuclear warheads, so a study in the US looked into what would happen if the two countries set off 100 Hiroshima-sized bombs. The direct casualties would be about 21 million people. But if we look at how volcanic eruptions cool the planet, and how soot goes into the atmosphere following major forest fires, modelling can predict the outcome. An India-Pakistan war would put 5 million tonnes of soot into the atmosphere, while a US-Russia war would loft 150 million tonnes. The first war would lower global temperatures by a little more than 1 degree C, but the second would lower them by 10 degrees C, temperatures not seen since the last Ice Age. One problem that may not be appreciated is that sunlight would heat the soot, which heats the adjacent air, causing the soot to rise and therefore persist longer.

The oceans tell a different story. Global cooling would affect the oceans’ acidity, and the pH would soar upwards (making them more alkaline). The model also suggested that would make it harder to form aragonite, making life difficult for shellfish. Currently, the shellfish are in the same danger from too much acidity; depending on aragonite is a bad option! The biggest danger would come to regions that are home to coral reefs. There are some places that cannot win. However, there is worse to come: possibly a “Nuclear Niño”, which is described as a “turbo-charged El Niño”. In the case of a Russia/US war, the trade winds would reverse direction and water would pool in the eastern Pacific Ocean. Droughts and heavy rain would plague different parts of the world for up to seven years.

One unfortunate aspect is that all this is modelled. Immediately, another group, from Los Alamos, carried out different calculations and came to a less disastrous result. The difference depends in part on how they simulate the amount of fuel, and how that is converted to smoke. Soot comes from partial combustion, and what happens where in a nuclear blast is difficult to calculate.

The effects on food production could be dramatic. Even following the smaller India-Pakistan war, grain production could drop by approximately 12% and soya bean production by 17%. The worst effects would be in the mid-latitudes, such as the US Midwest and Ukraine. The trade in food would dry up because each country would be struggling to feed itself. A major war would be devastating for other reasons as well. It is all very well to say your region might survive the climate change, and somewhere like Australia might grow more grain if it gets adequate water, as at present heat is its biggest problem. But if the war also took out industrial production and oil production and distribution, what then? Tractors are not very helpful if you cannot purchase diesel. A return to the old ways of harvesting? Even if you could find one, how many people know how to use a scythe? How do you plough? Leaving aside the problem of knowing where to find a plough that a horse could pull, and the problem of how to set it up, where do you find the horses? It really would not be easy.