The Universe is Shrinking

Dark energy is one of the mysteries of modern science. It is supposed to amount to about 68% of the Universe, yet we have no idea what it is. Its discovery led to Nobel prizes, yet it is now considered possible that it does not even exist. To add or subtract 68% of the Universe seems a little excessive.

One of the early papers (Astrophys. J., 517, pp565-586) supported the concept. The assumption was that Type Ia supernovae always give off the same amount of light, so by measuring the intensity of the light received and comparing it with the red shift, which indicates how fast the source is receding, the authors could assess whether the rate of expansion of the Universe had been even over time. The standard theory at the time was that it had been, expanding at a rate given by the Hubble constant (named after Edwin Hubble, who first proposed the relationship). They examined 42 Type Ia supernovae with red shifts between 0.18 and 0.83, and compared their results on a graph with the line drawn using the Hubble constant, which is what you expect with zero acceleration, i.e. uniform expansion. Their results at a distance were uniformly above the line, and while there were significant error bars, because instruments were being operated at their extremes, the result looked unambiguous. The far distant supernovae were going away faster than expected from the nearer ones, and that could only arise if the rate of expansion were accelerating.
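
To make the logic concrete, here is a minimal sketch, not taken from the paper, using the standard low-redshift expansion of the luminosity distance; the deceleration parameter values below are illustrative choices, not fitted results.

```python
# Sketch (not from the 1999 paper): low-redshift expansion of the luminosity
# distance, d_L ~ (c*z/H0) * (1 + (1 - q0)*z/2), where q0 is the deceleration
# parameter.  q0 = 0 is uniform (coasting) expansion; q0 < 0 means the
# expansion is accelerating.  H0 here is the 63 km/s/Mpc the paper used.
c = 299_792.458          # speed of light, km/s
H0 = 63.0                # Hubble constant, km/s/Mpc

def d_L(z, q0):
    """Approximate luminosity distance in Mpc (valid only for modest z)."""
    return (c * z / H0) * (1 + 0.5 * (1 - q0) * z)

for z in (0.18, 0.5, 0.83):
    uniform = d_L(z, q0=0.0)      # zero acceleration, the reference line
    accel   = d_L(z, q0=-0.55)    # an accelerating universe (illustrative q0)
    print(f"z={z}: d_L uniform={uniform:7.0f} Mpc, accelerating={accel:7.0f} Mpc")

# Larger d_L at the same redshift means dimmer supernovae, i.e. points that
# sit "above the line" on a magnitude-redshift plot.
```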

For me, there was one fly in the ointment, so to speak. The value of the Hubble constant they used was 63 km/s/Mpc. The modern value is more like 68 or 72; there are two values, depending on how you measure them, but both are somewhat larger than 63. If the assumed speed is wrong when you predict how far something has travelled, then the further away it is, the bigger the error, which can make it look as though the expansion has speeded up.

Over the last few years there have been questions as to exactly how accurate this determination of acceleration really is. One suggestion (arXiv:1912.04903) is that the luminosity of these supernovae evolves as the Universe ages, which would mean that measuring distance this way leads to overestimating it. Different work (Milne et al. 2015, Astrophys. J. 803: 20) showed that there are at least two classes of Type Ia supernovae, blue and red, with different ejecta velocities, and if the usual techniques are used the light intensity of the red ones will be underestimated, which makes them seem further away than they are.

My personal view is there could be a further problem. A Type Ia supernova occurs when a white dwarf strips mass from a companion star until it reaches the critical mass that ignites the explosion. That is why they are believed to have the same brightness: they ignite at the same mass, so the conditions should be the same, and hence so should the brightness. However, this is not necessarily the case, because the outer layer, which generates the light we see, comes from the non-exploding star and will absorb and re-emit energy from the explosion. Hydrogen and helium are poor radiators, but they will absorb energy. Nevertheless, the brightest light might be expected to come from the heavier elements, and the amount of those increases as the Universe ages and atoms are recycled. That too might make the more distant supernovae appear further away than they really are, which in turn suggests the Universe is accelerating its expansion when it isn’t.

Now, to throw the spanner further into the works, Subir Sarkar has added his voice. He is unusual in that he is both an experimentalist and a theoretician, and he has noted that the Type Ia supernovae, while taken to be “standard candles”, do not all emit the same amount of light; according to Sarkar, they vary by up to a factor of ten. Further, the fundamental data was previously unavailable, but in 2015 it became public. He did a statistical analysis and found that the data supported a cosmic acceleration, but only with a statistical significance of three standard deviations, which, according to him, “is not worth getting out of bed for”.

There is a further problem. Apparently the Milky Way is heading off in some direction at 600 km/s, this rather peculiar flow extends out to about a billion light years, and unfortunately most of the supernovae studied so far lie within this region. That drops the statistical significance for cosmic acceleration to two standard deviations. He then accuses the previous supporters of acceleration of confirmation bias: the initial workers chose an unfortunate direction to examine, and the subsequent ones “looked under the same lamppost”.

So, a little under 70% of what some claim is out there might not be. That is ugly. Worse, about 27% is supposed to be dark matter. Suppose that did not exist either, and the only reason we think it is there is that our understanding of gravity is wrong on a large scale? The Universe then shrinks to about 5% of what it was. That must be something of a record for the size of a loss.

Asteroid (16) Psyche – Again! Or Riches Evaporate, Again

Thanks to my latest novel “Spoliation”, I have had to take an interest in asteroid mining. I discussed this in a previous post (https://ianmillerblog.wordpress.com/2020/10/28/asteroid-mining/) in which I mentioned the asteroid (16) Psyche. As I wrote, there were statements saying the asteroid had almost unlimited mineral resources. Initially, it was estimated to have a density (g/cc) of about 7, which would make it more or less solid iron. It should be noted this might well be a consequence of extreme confirmation bias. The standard theory has it that certain asteroids differentiated and had iron cores, then collided and the rock was shattered off, leaving the iron cores. Iron meteorites are allegedly the result of collisions between such cores. If so, it has been estimated there have to be about 75 iron cores floating around out there, and since Psyche had a density so close to that of iron (about 7.87) it must be essentially solid iron. As I wrote in that post, “other papers have published values as low as 1.4 g/cm cubed, and the average value is about 3.5 g/cm cubed”. The latest value is 3.78 ± 0.34.

These varied numbers show how difficult it is to make these observations. Density is mass per volume. We determine the volume from the size, and while we can measure the “diameter”, the target is a very long way away and it is small, so it is difficult to get an accurate “diameter”. The next point is that it is not a true sphere, so there are extra “bits” of volume from hills, or “bits missing” from craters. Further, the volume depends on the diameter cubed, so a ten per cent error in the “diameter” gives roughly a 30% error in the volume. The mass has to be estimated from the asteroid’s gravitational effects on something else. That means you have to measure the distance to the asteroid, the distance to the other asteroid, and determine the difference from the expected paths as they pass each other. This difference may be quite tiny. Astronomers are working at the very limit of their equipment.
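
To make the diameter-to-density point concrete, here is a minimal sketch of my own, with roughly Psyche-like but purely illustrative inputs:

```python
import math

def density(mass_kg, diameter_km):
    """Bulk density in g/cm^3 for a sphere of the given mass and diameter."""
    radius_m = diameter_km * 1000 / 2
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return (mass_kg / volume_m3) / 1000.0   # kg/m^3 -> g/cm^3

# Illustrative numbers only (roughly Psyche-like, not measured values):
mass = 2.3e19                      # kg, assumed
print(density(mass, 226))          # nominal diameter
print(density(mass, 226 * 1.10))   # diameter over-estimated by 10%

# A 10% error in the "diameter" changes the volume by a factor 1.1^3 = 1.331,
# so the derived density moves by roughly 30% in the other direction.
```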

A quick pause for some silicate chemistry. Apart from granitic/felsic rocks, which are aluminosilicates, most silicates come in two classes of general formula: A – olivines X2SiO4, or B – pyroxenes XSiO3, where X is some mix of divalent metals, usually mainly magnesium or iron (hence their name, mafic, the iron being ferrous), although calcium is often present. Basically, these elements are the most common metals in the output of a supernova, with magnesium the most common of all. If X is only magnesium, the density for A (forsterite) is 3.27 and for B (enstatite) 3.2. If X is only iron, the density for A (fayalite) is 4.39 and for B (ferrosilite) 4.00. Now we come to further confirmation bias: to maintain the iron content of Psyche, the density is compared to that of enstatite chondrites, and the difference is made up with iron. Another way to maintain the concept of “free iron” is the proposition that the asteroid is made of “porous metal”. How do you make that? A porous rock, like pumice, is made by a volcano spitting out magma with water dissolved in it, and as the pressure drops the water turns to steam. However, you do not get any volatile to dissolve in molten iron.
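
Here is a minimal sketch of the mass-balance logic behind “make up the difference with iron”. The densities are the ones quoted above; the porosity-free two-component mix is my own simplification, purely for illustration:

```python
# Two-component, porosity-free mix: what fraction of metallic iron would you
# have to add to enstatite-like silicate to reach the measured bulk density?
rho_silicate = 3.2    # enstatite, g/cm^3 (quoted above)
rho_iron     = 7.87   # metallic iron, g/cm^3
rho_bulk     = 3.78   # latest Psyche estimate, g/cm^3

# Volume fraction f of iron satisfies: rho_bulk = f*rho_iron + (1-f)*rho_silicate
f_volume = (rho_bulk - rho_silicate) / (rho_iron - rho_silicate)
f_mass   = f_volume * rho_iron / rho_bulk

print(f"iron volume fraction ~ {f_volume:.2f}, mass fraction ~ {f_mass:.2f}")
# ~0.12 by volume, ~0.26 by mass.  The same bulk density can equally be matched
# by iron-bearing silicates (fayalite/ferrosilite above are ~4.0-4.4), which is
# the point: the density alone does not demand free metal.
```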

Another reason to support the iron concept was that the reflectance spectrum was “essentially featureless”. The required features come from specific vibrations, and a metal does not have any. Neither does a rough surface that scatters light. The radar albedo (how bright it is with reflected light) is 0.34, which implies a surface density of 3.5, argued to indicate either metal with 50% porosity or solid silicates (rock). It also means no core is predicted. The “featureless spectrum” was claimed to have an absorption at 3 μm, indicating hydroxyl, which indicates silicate. There is also a signal corresponding to an orthopyroxene. The emissivity indicates a metal content greater than 20% at the surface, but if this were metal there should be a polarised emission, and that is completely absent. At this point, we should look more closely at what “metal” means. In many cases, while the word is used to convey what we would consider a metal, the actual usage includes chemical compounds of a metallic element. The iron may be present as iron sulphide, as the oxide, or, as I believe, as the silicate. I think we are looking at the iron content of average rock. Fortune does not await us there.

In short, the evidence is somewhat contradictory, in part because we are using spectroscopy at the limits of its usefulness. NASA intends to send a mission to evaluate the asteroid and we should wait for that data.

But what about iron cored asteroids? We know there are metallic iron meteorites so where did they come from? In my ebook “Planetary Formation and Biogenesis”, I note that the iron meteorites, from isotope dating, are amongst the oldest objects in the solar system, so I argue they were made before the planets, and there were a large number of them, most of which ended up in planetary cores. The meteorites we see, if that is correct, never got accreted, and finally struck a major body for the first time.

Food on Mars

Settlers on Mars will have needs, but the most obvious ones are breathing and eating, and both of these are likely to involve plants. Anyone thinking of going to Mars should think about these, and if you look at science fiction the answers vary. Most stories simply assume everything is taken care of, which is fair enough for a story. Then there is the occasional story with slightly more detail. Andy Weir’s “The Martian” is simple: he grows potatoes. Living on such a diet would be a little spartan, but his hero had no option, being essentially a Robinson Crusoe without a Man Friday. The oxygen seemed to be a given. The potatoes were grown in what seemed to be a pressurised plastic tent, and to get water he catalytically decomposed hydrazine to make hydrogen and then burnt that. A plastic tent would not work: the UV radiation would first make the tent opaque, so the necessary light would not get in very well, and then the plastic would degrade. As for making water, burning the hydrazine directly would have been sufficient, but better still, would they not put their base where there was ice?

I also have a novel (“Red Gold”) in which a settlement tries to get started. Its premise is that there is a main settlement with fusion reactors, and hence the energy to make anything, but the main hero is “off on his own” and has to make do with less, although he can bring things from the main settlement. He builds giant “glass houses” made with layers of zinc-rich glass that shield the inside from UV radiation. Stellar plasma ejections are diverted by a superconducting magnet at the L1 position between Mars and the sun (proposed years before NASA suggested it) and the hero lives in a cave. That would work well for everything except cosmic radiation, but is that going to be that bad? Initially everyone lives on hydroponically grown microalgae, but the domes permit ordinary crops. The plants grow in treated soil, but as another option a roof is put over a minor crater and water provided (with solar heating from space) in which macroalgae and marine microalgae grow, as well as fish and other species, like prawns. The atmosphere is nitrogen, separated from the Martian atmosphere, plus some carbon dioxide, and the plants make the oxygen. (There would have to be some oxygen to get started, but plants on Earth grew without oxygen initially.)

Since then there have been other quite dramatic proposals from more official sources that assume a lot of automation to begin with. One of the proposals involves constructing huge greenhouses by covering a crater or valley. (Hey, I suggested that!) But here the roof is flat and made of plastic, the plastic being polyethylene 2,5-furandicarboxylate, a polyester made from carbohydrates grown by the plants. This is also used as a bonding agent to make a concrete from Martian rock. (In my novel, I explained why a cement is very necessary, but there are limited uses.) The big greenhouse model has some limitations. In this design the roof is flat and in essentially two layers, and in between are vertical stacks of algae growing in water. The extra value here is that water filters out the effect of cosmic rays, although you need several meters of it. Now we have a problem. The idea is that underneath this there is a huge habitat, and for every cubic meter of water we have one tonne of mass, and on Mars about 0.4 tonne of force on the lower flat deck. If this bottom deck is the opaque concrete, then something bound by plastic adhesion will slip. (Our concrete on bridges is purely inorganic, the binding is chemical, not physical, and further there is steel reinforcing.) Below this there would need to be many weight-bearing pillars. And there would need to be light generation between the decks (to get the algae to grow) and down below. Nuclear power would make this easy. Food can be grown as algae between the decks, or in the ground down below.
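
Rough numbers behind the “one tonne per cubic meter, about 0.4 tonne of force on Mars” point, as a back-of-envelope sketch. The several meters of water needed for cosmic-ray shielding is from the text; the 3 m figure below is just an assumed example:

```python
# Load on the lower deck from the water layer, per square metre of roof.
g_mars = 3.71          # m/s^2, Mars surface gravity
rho_water = 1000.0     # kg/m^3
depth = 3.0            # m of water, an assumed example depth for shielding

mass_per_m2 = rho_water * depth            # kg per square metre
force_per_m2 = mass_per_m2 * g_mars        # newtons per square metre (pascals)

print(f"{mass_per_m2:.0f} kg/m^2  ->  {force_per_m2/1000:.1f} kPa "
      f"(~{force_per_m2/9.81/1000:.2f} tonne-force per m^2)")

# Each metre of water adds ~1 tonne of mass and ~0.38 tonne-force per square
# metre on Mars; several metres of shielding quickly becomes a structural
# problem for a deck held together by a polymer binder.
```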

As I see it, construction of this would take quite an effort and a huge amount of materials. The concept is that the plants could be grown to make the cement to make the habitat, but hold on: where are the initial plants going to grow, and who or what does all the chemical processing? The plan is to have that in place from robots before anyone gets there, but I think that is greatly overambitious. In “Red Gold” I had the glass made from regolith processed with the fusion energy. The advantage of glass over this new suggestion is weight; even on Mars, with its lower gravity, millions of tonnes remains a serious load. The first people there will have to live somewhat more simply.

Another plan that I have seen involves finding a frozen lake in a crater and excavating an “under-ice” habitat. No shortage of water, or of screening from cosmic rays, but a problem as I see it is that said ice will melt from the heat, erode the bottom of the sheet, and eventually the sheet will collapse. Undesirable, that is.

All of these “official” options use artificial lighting. Assuming a nuclear reactor, that is not a problem in itself, although it would be for the settlement under the ice because heat control would be a problem. However, there is more to getting light than generating energy. What gives off the light, and what happens when its lifetime expires? Do you have to have a huge number of spares? Can they be made on Mars?

There is also the problem of heat. In my novel I solved this with mirrors in space focussing more sunlight on selected spots, which of course also provides light to help plants grow, but if you are going to heat with fission power a whole lot more electrical equipment is needed. That means many more things to go wrong, and when it could take two years to get a replacement delivered, complicated is what you do not want. It is not going to be that easy.

Climate Change: Are We Baked?

The official position from the IPCC’s latest report is that the problem of climate change is getting worse. The fires and record high temperatures in the western United States and British Columbia, Greece and Turkey may be portents of what is coming. There have been terrible floods in Germany, and New Zealand has had its bad time with floods as well. While Germany was being flooded, the town of Westport was inundated, and the Buller River had flows about 30% higher than in its previous record flood. There is the usual hand-wringing from politicians. Unfortunately, at least two serious threats have been largely ignored.

The first is the release of additional methane. Methane is a greenhouse gas that is about 35 times more efficient at retaining heat than carbon dioxide. The reason is that the absorption of infrared depends on the change of dipole moment during the vibration. CO2 is a linear molecule and has three vibrational modes. One has the two oxygen atoms moving out and in together, so the two bond-dipole changes cancel each other and there is no net change of dipole moment. Another is as if the oxygen atoms are stationary and the carbon atom wobbles between them. The two dipoles now do not cancel, so that mode absorbs, but the partial cancellation reduces the strength. The third involves molecular bending, but the very strong bond means the bend does not move that much, so again the absorption is weak. That is badly oversimplified, but I hope you get the picture.

Methane has four vibrations, and rather than describe them, try this link: http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html

Worse, its vibrations are in regions of the spectrum totally different from those of carbon dioxide, which means it absorbs radiation that would otherwise escape directly to space.

This summer, average temperatures in parts of Siberia were 6 degrees Centigrade above the 1980 – 2000 average, and methane is starting to be released from the permafrost. Methane forms a clathrate with ice, that is, under pressure it rearranges the ice structure and inserts itself, but the clathrate decomposes on warming to near the ice melting point. This methane formed from the anaerobic digestion of plant material and has been trapped by the cold, so if it is released now we suddenly get all the methane that would otherwise have been released and destroyed gradually over several million years. There are an estimated eleven billion tonnes of methane in clathrates that could be subject to decomposition, roughly the effect of over 35 years of all our carbon dioxide emissions, except that, as I noted, this works in a totally fresh part of the spectrum. So methane is a problem; we all knew that.

What we did not know is that a new source has been identified, as published recently in the Proceedings of the National Academy of Sciences. Apparently significantly increased methane concentrations were found in two areas of northern Siberia: the Taymyr fold belt and the rim of the Siberian Platform. These are limestone formations from the Paleozoic era. In both cases the methane increased significantly during heat waves. The soil there is very thin, so there is very little vegetation to decay, and it was claimed the methane was stored in, and is now being emitted from, fractures in the limestone.

The second major problem concerns the Atlantic Meridional Overturning Circulation (AMOC), also known as the Atlantic conveyor. What it does is take warm water up the east coast of the US, then via the Gulf Stream across to warm Europe (and provide moisture for the floods). As it loses water it gets increasingly salty, and with its increased density it dives to the ocean floor and flows back towards the equator. Why this is a problem is that the melting northern polar and Greenland ice is providing a flood of fresh water that dilutes the salty water. When the density of the water is insufficient to make it sink, this conveyor simply stops. At that point the whole Atlantic circulation as it is now stops. Europe chills, but the ice continues to melt. Because this is a “stopped” circulation, it cannot simply be restarted, because the ocean will go and do something else. So, what to do? The first thing to note is that simply burning a little less coal won’t be enough. If we stopped emitting CO2 now, the northern ice would keep melting at its current rate; all we would do is stop it melting faster.

Scientists Behaving Badly

You may think that science is a noble activity carried out by dedicated souls thinking only of the search for understanding and of improving the lot of society. Wrong! According to an item published in Nature (https://doi.org/10.1038/d41586-021-02035-2) there is rot in the core. A survey was sent to around 64,000 researchers at 22 universities in the Netherlands; 6,813 actually filled out the form and returned it, and about 8% of those who did confessed to falsifying or fabricating data at least once between 2017 and 2020. Given that a fraudster is less likely to confess, even in an anonymous survey, that figure is probably a clear underestimate.

There is worse. More than half of the respondents also reported frequently engaging in “questionable research practices”. These include using inadequate research designs, which can be due to poor funding and hence is more understandable, and frankly could be a matter of opinion; on the other hand, if you confess to doing it you are at best slothful. Much worse, in my opinion, was deliberately judging manuscripts or funding applications unfairly while peer reviewing. Questionable research practices are “considered lesser evils” than outright research misconduct, which includes plagiarism and data fabrication. I am not so sure of that. Dismissing someone else’s work or funding application hurts their career.

There was then the question of “sloppy work”, which included failing to “preregister experimental protocols (43%), make underlying data available (47%) or keep comprehensive research records (56%)”. I might be in danger here. I had never heard of “preregistering protocols”; I suspect that is more for medical research than for the physical sciences. My research has always been of the sort where you plan the next step based on the last step you have taken. As for “comprehensive records”, I must admit my lab books have always been cryptic. My plan was to write things down, and as long as I could understand them, that was fine. Of course, I have worked independently, and the records were there so I could report more fully, and to some extent for legal reasons.

If you think that is bad, there is worse in medicine. On July 5 an item appeared in the British Medical Journal with the title “Time to assume that health research is fraudulent until proven otherwise?” One example: a professor of epidemiology apparently published a review paper that included a paper showing mannitol halved the death rate from comparable injuries. It was pointed out to him that the paper he reviewed was based on clinical trials that never happened! All the trials came from a lead author who “came from an institution” that never existed! There were a number of co-authors, but none had ever contributed patients, and many did not even know they were co-authors. Interestingly, none of the trial reports had been retracted, so the fake stuff is still out there.

Another person who carried out systematic reviews eventually realised that only too many of them related to “zombie trials”. This is serious, because it is only by reviewing a lot of different work that more important over-arching conclusions can be drawn, and if a reasonable percentage of the data is just plain rubbish, everyone can jump to the wrong conclusions. Another medical expert attached to the journal Anaesthesia found that of 526 trials, 14% had false data and 8% were categorised as zombie trials. Remember, if you are ever operated on, anaesthetics are your first hurdle! One expert has guessed that 20% of clinical trials as reported are false.

So why doesn’t peer review catch this? The problem for a reviewer such as myself is that when someone reports numbers representing measurements, you naturally assume they were the results of measurement. I look to see that they “make sense”, and if they do, there is no reason to suspect them. Further, to reject a paper by accusing it of fraud is very serious for the other person’s career, so who will do that without some sort of evidence?

And why do they do it? That is easier to understand: money and reputation. You need papers to get research funding and to keep your position as a scientist. Fraud is very hard to detect unless someone repeats your work, and even then there is the question: did they truly repeat it? We tend to trust each other, as we should be able to. Published results bring rewards, publishers make money, universities get glamour (unless they get caught out). Proving fraud (as opposed to suspecting it) is a skilled, complicated and time-consuming process, and since it reflects badly on institutions and publishers, they are hardly enthusiastic. Evil peer review, i.e. dumping someone’s work to promote your own, is simply strategic, and nobody will do anything about it.

It is, apparently, not a case of “bad apples”, but as the BMJ article states, a case of rotten forests and orchards. As usual, as to why, follow the money.

Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment that Aristotle performed was to throw a stone high into the air so that it reached its maximum height directly above. When it dropped, it landed directly underneath, its path vertical. Aristotle reasoned that if the Earth were rotating, the stone, released at that height, should be carried eastwards by the angular momentum it had from that height, but it was not. Aristotle was a clever reasoner, but he was a poor experimenter. He also failed to consider the consequences of some of his other reasoning. He knew that the Earth was a sphere, and he knew its size, which thanks to Eratosthenes was a fairly accurate value. He had reasoned correctly why it was a sphere: matter falls towards the centre. Accordingly, he should have realised his stone should also fall slightly to the south. (He lived in Greece; if he had lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was put numbers to his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don’t believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!

He also concluded that the Earth could not orbit the sun, and there was plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it: the background appears to move, and it moves more the bigger the circle you walk, and less the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example: Aristotle reasoned that if the Earth were rotating or orbiting the sun, then because air rises, the Universe must be full of air, and therefore we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.

But if he had, he would have had a further problem, because he had shown that the Earth could not orbit the sun through another line of reasoning. As was “well known”, heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn’t, therefore the Earth does not move. Further, you could see the tails of comets. They were moving, which supposedly proved the reasoning. Of course it does not, because the tail always points away from the sun, and not behind the motion, at least half the time. This was a simple thing to check, and far easier to check than the other failed assumptions. Unfortunately, who bothers to check things that are “well known”? This shows a further aspect: a true proposition has everything that is relevant to it in accord with it. This is the basis of Popper’s falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going more slowly. That is obvious because while the Earth takes a year to complete an orbit, it takes Mars over two years to complete a cycle. So we had two theories that both give the correct answer, but one has two assignable constants to explain each observation, while the other relies on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook “Athene’s Prophecy”, where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, “Legatus Legionis”) is to prove the Earth moves. We can do this with the tides. The part of the ocean closest to the external source of gravity has its water fall sideways a little towards it; the part furthest away experiences more centrifugal force, which tries to throw the water away. They may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook my Roman went with the British invasion and hence had to study the tides to know when to sail, and there you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it while the Earth stays still, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
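
The two-bulge argument can be put in numbers. Here is a minimal sketch, with standard modern values, of the differential (tidal) acceleration from the Moon at the near and far sides of the Earth; obviously nothing the Romans could have calculated:

```python
# Differential acceleration from the Moon at the near side, centre and far side
# of the Earth.  Relative to the centre, the near side is pulled ahead and the
# far side is left behind by almost the same amount: hence two tidal bulges,
# and two tides a day as the Earth rotates under them.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_moon = 7.35e22       # kg
d = 3.844e8            # Earth-Moon distance, m
R = 6.371e6            # Earth radius, m

a_centre = G * M_moon / d**2
a_near   = G * M_moon / (d - R)**2
a_far    = G * M_moon / (d + R)**2

print(f"near side - centre: {a_near - a_centre:+.2e} m/s^2")
print(f"far side  - centre: {a_far  - a_centre:+.2e} m/s^2")
# Both come out at about +/- 1.1e-6 m/s^2: equal-sized bulges on opposite sides.
```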

Some Lesser Achievements of Science

Most people probably think that science is a rather dull quest for the truth, best left to the experts, who are all out to find it. Well, not exactly. Here is a video link in which Sean Carroll points out that most physicists are really uninterested in understanding what quantum mechanics is about: https://youtu.be/ZacggH9wB7Y

This is rather awkward, because quantum mechanics is one of the two greatest scientific advances of the twentieth century, and here we find that all but a few of its exponents neither understand what is going on nor care. What happens is that they have a procedure by which they can get answers, so that is all that matters, is it not? Not in my opinion. Many of them are university teachers, and when they don’t care, that attitude gets passed on to the students, so they don’t care either. The system is degenerating.

But, you protest, we still get the right answers. That leaves open the question: do we really? From my experience in chemistry, the only theories required to explain chemical observations (apart from maybe what atoms are made of) are electromagnetic theory and quantum mechanics. Those in the know will point out that there are floods of computational papers published, so surely we must understand? Not at all. Almost all the papers calculate something that is already known, and because integrating the differential equations requires a number of constants, and because it is impossible to solve the equations analytically, the constants can be assigned so that the correct answers are obtained. Fortunately, for very similar problems the same constants will suffice. If you find that hard to believe, the process is called validation, and you can read about it in John Pople’s Nobel Prize lecture. Actually, I believe all the computations are wrong except for the hydrogen molecule, because everybody uses the wrong wave functions, but that is another matter.

That scientists do not care about their most important theory is bad, but there is worse, as published in Nature (https://doi.org/10.1038/d41586-021-01436-7). Apparently, in 2005 three PhD students wrote a computer program called SCIgen for amusement. What this program does is write “scientific papers”. The research for them? Who needs that? It cobbles together words with random titles, text and charts, and is essentially nonsense. Anyone can write them. (Declaration: I did not use this software for this or any other post!) While the original purpose was “maximum amusement” and the papers were generated for conferences, because the software is freely available various people have sent such papers to scientific journals, the peer review process failed to spot the gibberish, and the journals published them. There are apparently hundreds of these nonsensical papers floating around. Further, they can appear under relatively “big names”, because apparently articles can get through under someone’s name without that someone knowing anything about it. Why give someone else an additional paper? A big name is more likely to get through peer review, and the point of getting the paper out there is that it can carry genuine references, although of course with no relevance to the submission. The reason for doing this is simple: it pads the number of citations for the cited authors, which makes their CVs look better and improves their chances when applying for funds. With money at stake, it is hardly surprising that this sort of fraud has crept in.

Another unsettling aspect of scientific funding has been uncovered (Nature 593: 490-491). Funding panels are more likely to give EU early-career grants to applicants connected to the panellists’ institutions; in other words, the panellists have a tendency to give the money to “themselves”. Oops. A study of the grants showed that “applicants who shared both a home and a host organization with one panellist or more received a grant 40% more often than average” and “the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but there seemed to be no discernible effect in physics and engineering.” Here, physics is clean! One explanation might be that the best applicants want to go to the most prestigious institutions. Maybe, but would that not apply to physics too? An evaluation to test such bias in the life sciences showed that “successful and connected applicants scored worse on these performance indicators than did funded applicants without such links, and even some unsuccessful applicants.” You can draw your own conclusions, but they do not look good.

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us expect. Thus the outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the dynamics expected. Worse, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations. One is that there is additional matter there we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each, although modified gravity does have the let-out that we might simply not yet know how it should be modified.
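
To show what “faster than they should” means in practice, here is a minimal sketch comparing the Keplerian fall-off expected if essentially all the visible mass sits towards the centre with the roughly flat curves actually observed; the galaxy mass and the “observed” speed quoted in the comments are illustrative numbers, not data:

```python
import math

# Circular orbital speed if most of the visible mass M sits well inside radius r:
# v = sqrt(G*M/r), which falls off as 1/sqrt(r) at large r (Keplerian).
G = 6.674e-11                  # m^3 kg^-1 s^-2
M_visible = 1.0e41             # kg, illustrative "visible" galaxy mass (~5e10 suns)
kpc = 3.086e19                 # metres per kiloparsec

for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M_visible / (r_kpc * kpc)) / 1000.0   # km/s
    print(f"r = {r_kpc:2d} kpc: Keplerian v ~ {v:5.0f} km/s")

# Observed disc speeds typically stay roughly flat (around 200 km/s) out to
# large radii instead of declining, which is the anomaly that dark matter (or
# modified gravity) is invoked to explain.
```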

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. However, so far all attempts to detect it have failed. That is not damning in itself, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies. It is rather hard to do an experiment on a galaxy, so that is not exactly helpful. So what physicists have done is make a guess as to what it might be and, not surprisingly, make the guess in a form they can do something about if they are correct. The problem now is that we know it has to have mass, because it exerts a gravitational effect, and we know it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will not be dark matter stars. That is not exactly much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, the target can get rid of the energy by giving off a photon. One such detector had huge tanks containing 370 kg of liquid xenon. It was buried deep underground, and in theory massive particles of dark matter could be separated from occasional neutron events because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.
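
To give a feel for the energies such a detector is looking for, here is a rough sketch. The particle mass and speed are purely assumed (nothing above fixes them), and the formula is just non-relativistic elastic scattering:

```python
# Rough scale of the energy a hypothetical heavy dark-matter particle could
# hand to a xenon nucleus in one elastic collision.  The particle mass and
# speed below are assumptions for illustration only.
m_dm_gev = 100.0                 # assumed dark-matter particle mass, GeV/c^2
m_xe_gev = 122.0                 # xenon nucleus, roughly 131 atomic mass units
v_over_c = 230e3 / 3.0e8         # a typical galactic-orbit speed, ~230 km/s

kinetic_kev = 0.5 * m_dm_gev * 1e6 * v_over_c**2                  # keV
max_fraction = 4 * m_dm_gev * m_xe_gev / (m_dm_gev + m_xe_gev)**2  # elastic limit

print(f"kinetic energy ~ {kinetic_kev:.0f} keV, "
      f"max recoil ~ {kinetic_kev * max_fraction:.0f} keV")
# Tens of keV per event at most, which is why the detectors have to be so
# quiet, so deeply buried, and so carefully screened against stray neutrons.
```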

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet, they will deposit energy as they scatter, and with further scattering eventually annihilate within the planet. This additional energy will be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 times that of Jupiter; temperatures above those otherwise expected could be attributed to dark matter. Further, since the dark matter density should be higher near the galactic centre, and collisional velocities higher there, the difference in surface temperatures between comparable planets may signal the detection of dark matter.

Can you see problems? To me, the flaw lies in “what is expected?” One issue is getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat, and once a planet gets to about 16 Jupiter masses it starts fusing deuterium. Another lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if the planet had accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, since without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), then this suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. However, no doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, I would suggest one plausible approach, since so much else has failed by assuming large particles, is to look for small ones. Are there any unexplained momenta in collisions from the Large Hadron Collider? What most people overlook is that about 99% of the data generated there is discarded (because there is so much of it), but would it hurt to spend just a little effort examining the fine detail for things you do not expect to see?

How Many Tyrannosaurs Were There?

Suppose you were transported back to the late Cretaceous: what is the probability that you would see a Tyrannosaurus? That depends on a large number of factors, and to simplify I shall limit myself to T. rex. There were various tyrannosaurs, but probably at different times and in different places. As far as we know, T. rex was limited to what was effectively an island land mass known as Laramidia, which now survives as part of western North America. In a recent edition of Science, a calculation was made, and it starts with the premise, known as “Damuth’s Law”, that population density is negatively correlated with body mass through a power law that involves two assignable constants plus the body mass. What does that mean? It is an empirical relationship that says the bigger the animal, the fewer will be found in a given area. The reason is obvious: the bigger the animal, the more it will eat, and a given area has only so much food. Apparently one of the empirical constants, the exponent, has been assigned a value of 0.75, more or less, so now we are down to one assignable constant.
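
In symbols (my paraphrase of the relation; D is the population density, M the body mass, and a the one remaining assignable constant):

```latex
% Damuth's law as used in the estimate: population density D versus body mass M,
% with a the single remaining assignable constant (the exponent is the 0.75 above).
D = a\,M^{-0.75}
```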

If we concentrate on the food requirement, then it depends on what the animal eats and what it does with it. To explain the last point, carnivores kill prey, so there has to be enough prey there to supply the food AND to be able to reproduce; there has to be a stable population of prey, otherwise the food runs out and everyone dies. The bigger the animal, the more food it needs to generate body mass and to provide the energy to move, and mammals have a further requirement over animals like snakes: they burn food to provide body heat, so mammals need more food per unit mass. It also depends on how specialized the food is. Thus pandas, specializing in eating bamboo, depend on bamboo growth rates (which happen to be fast) and on something else not destroying the bamboo. Tyrannosaurs presumably concentrated on eating large animals. Anything that was a few centimeters high would probably be safe, apart from being accidentally stood on, because the tyrannosaur could not get its head down low enough and keep it there long enough to catch it. The smaller raptors were also probably safe because they could run faster. So now the problem is: how many large animals were there, and was there a restriction? My guess is a tyrannosaur would take on any large herbivore. In terms of the probability of meeting one, it also depends on how they hunted. If they hunted in packs, which is sometimes postulated, you are less likely to meet them, but you are in more trouble if you do.

That now gets back to how many large herbivores would be in a given area, and that in turn depends on the amount of vegetation and its food value. We have to make guesses about that. We also have to decide whether the tyrannosaur generated its own heat. We cannot tell exactly, but the evidence does seem to suggest it was concerned about heat, as it probably had feathers. The article assumed that the dinosaur was about half-way between mammals and large lizards as far as heat generation goes. Provided the temperatures were warm, something as large as a tyrannosaur would probably retain much of its own heat, as surface area is a smaller fraction of volume than it is for small animals.

The next problem is assigning body mass, which is reasonably straightforward for a given skeleton, but each animal starts out as an egg. How many juveniles were there? This is important because juveniles have different food requirements; they eat smaller herbivores. The authors took a distribution somewhat similar to that for tigers. If so, an area the size of California could support about 3,800 T. rex. We now need the area over which they roamed, and with a considerable possible error range, and limiting ourselves to land that is above sea level now, they settled on 2.3 ± 0.88 million square kilometers, which at any one time would support about 20,000 individuals. If we take a mid-estimate of how long the species lasted, which is 2.4 million years, we get, with a very large error range, that the total number of T. rex that ever lived was about 2.5 billion individuals. Currently there are 32 individual fossils (essentially all partial), which shows how difficult fossilization really is. Part of this, of course, arises because fossilization depends on appropriate geology and conditions. So there we are: more useless information, almost certainly erroneous, but fun to speculate on.
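
The arithmetic chain, reconstructed as a sketch. The California area and the roughly 19-year generation time are my assumptions, chosen to reproduce the quoted figures rather than taken from the post:

```python
# Reconstructing the published chain of estimates (all inputs are mid-range
# values; the real calculation carries very large error bars).
area_california_km2 = 424_000          # assumed area of California, km^2
n_per_california    = 3_800            # standing population for that area (from the text)
density_per_km2     = n_per_california / area_california_km2

range_km2 = 2.3e6                      # mid-estimate of the T. rex range (from the text)
standing  = density_per_km2 * range_km2

duration_yr   = 2.4e6                  # mid-estimate of how long the species lasted
generation_yr = 19.0                   # assumed generation time, chosen to match the result

total_ever = standing * duration_yr / generation_yr
print(f"standing population ~ {standing:,.0f}")
print(f"total individuals ever ~ {total_ever:.2e}")

# Close to the quoted ~20,000 alive at any one time and ~2.5 billion in total,
# against 32 known partial skeletons: roughly one fossil per 80 million animals.
```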

How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question: how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won’t be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments, and what you get is a series of meter readings or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on the speedometer in your car represents miles per hour or kilometers per hour, depending on where you live. That is understandable, but it is not what is measured. What is usually measured is something like the frequency of wheel revolutions. The revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.
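
As a trivial illustration of that built-in theory (the numbers and the tyre radius are mine, purely for illustration):

```python
import math

def indicated_speed_kmh(wheel_revs_per_s, tyre_radius_m=0.31):
    """Speed derived from wheel revolutions: circumference x revolution rate.
    The tyre radius is the built-in assumption.  An under-inflated tyre has a
    smaller effective radius, so for the same revolution rate the true speed
    is lower than a dial calibrated for the nominal radius would show."""
    return 2 * math.pi * tyre_radius_m * wheel_revs_per_s * 3.6   # m/s -> km/h

print(indicated_speed_kmh(12.8))                       # nominal tyre
print(indicated_speed_kmh(12.8, tyre_radius_m=0.30))   # slightly "flatter" tyre
```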

Measuring the rate of expansion of the Universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged because they have random motion superimposed, and in some cases regular motion if they are in clusters. The velocity at which they are moving apart is simply the change of distance divided by the change of time. Measuring time is fine, but measuring distance is a little more difficult. You cannot use a ruler. So some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Using different techniques to estimate cosmic distances, such as the average brightness of stars in giant elliptical galaxies, Type Ia supernovae and one or two others, it can be asserted that the Universe is expanding at 73.5 ± 1.4 kilometers per second for every megaparsec. A megaparsec is about 3.3 million light years, or roughly thirty million trillion kilometers.
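
A small sketch of what those units amount to; the 100 Mpc example distance is an arbitrary choice of mine:

```python
# Units behind "73.5 km/s per megaparsec".
ly_km  = 9.461e12            # kilometres in a light year
Mpc_km = 3.262e6 * ly_km     # one megaparsec ~ 3.26 million light years
print(f"1 Mpc ~ {Mpc_km:.2e} km")          # ~3.1e19 km

H0 = 73.5                    # km/s/Mpc (the supernova-ladder value quoted)
d_Mpc = 100.0                # an arbitrary example distance
print(f"a galaxy {d_Mpc:.0f} Mpc away recedes at ~{H0 * d_Mpc:,.0f} km/s")

# 1/H0 also sets the characteristic expansion timescale (the "Hubble time"):
seconds_per_year = 3.156e7
hubble_time_gyr = (Mpc_km / H0) / seconds_per_year / 1e9
print(f"Hubble time ~ {hubble_time_gyr:.1f} billion years")
```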

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and know what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/s/Mpc. Oops. Two routes, both giving highly precise answers, but well outside any overlap, and hence we have two disjoint sets of answers.
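
The size of the disagreement in the units physicists usually quote, from a simple combination of the two stated uncertainties (assuming they are independent):

```python
import math

# Hubble-constant "tension": difference between the two routes in units of the
# combined standard uncertainty (treating the two quoted errors as independent).
h_ladder, err_ladder = 73.5, 1.4     # supernovae / distance-ladder value
h_cmb,    err_cmb    = 67.4, 0.5     # CMB / early-Universe value

tension_sigma = (h_ladder - h_cmb) / math.sqrt(err_ladder**2 + err_cmb**2)
print(f"difference = {h_ladder - h_cmb:.1f} km/s/Mpc, ~{tension_sigma:.1f} sigma")

# A roughly 4-sigma gap is why "well outside any overlap" matters: it is
# unlikely to be a statistical fluke if the quoted error bars are honest.
```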

So what is the answer? The simplest approach is to use an entirely different method again and hope this resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb space telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances. Do you see a possible problem? All such results also depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distances to the nearby elliptical galaxies accurately, but much more problematic is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. In earlier times, since the metals in stars came from supernovae, the very earliest stars will have had much less of them, so the “colour” from their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, maybe the distribution of sizes of the red dwarfs will be different. There are many traps. Accordingly, the most likely reason for the discrepancy is that the theory used is slightly wrong somewhere along the chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message from this is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.