My Introduction to the Scientific Method – as actually practised

It is often said that if you can’t explain something to a six-year-old, you don’t understand it.

I am not convinced, but maybe I don’t understand. Anyway, I thought I would follow on from my previous post with an account of my PhD thesis. It started dramatically. My young supervisor gave me a choice of projects, but only one looked sensible. I started that, then found the answer had just been published. Fortunately, only a month was wasted, but now I had no project and my supervisor was off on vacation. The Head of Department suggested I find myself a project, so I did. There was a great debate going on over whether the electrons in cyclopropane could delocalize into other parts of a molecule. To explain: carbon forms bonds at an angle of 109.5 degrees, but the three carbons of cyclopropane are formally held at 60 degrees. In bending the bonds around, the electrons come closer together, and the resultant electric repulsions mean the overall energy is higher. That energy difference is called strain energy. One theory was that the strain energy could be relieved if the electrons could get out and spread themselves over more space. Against that, there was no evidence that single bonds could do this.

My proposal was to put a substituted benzene ring on one corner of the cyclopropane ring, and an amine on the other. The idea was that amines are bases and react with acid, and when they do, the electrons on the amine are trapped. There was one substituent I could put on the benzene ring that would have different effects on that basicity depending on whether the cyclopropane ring delocalized electrons or not. The test came through something called the Hammett equation. My supervisor had published on this, but this would be the first time the equation might be used to do something of real significance. Someone had tried the same scheme with carboxylic acids, but with an extra carbon atom they were not very responsive, and there were two reports with conflicting answers. My supervisor, when he came back, was not very thrilled with this proposal, but his best alternative was to measure the rates of a sequence of reactions for which I had found a report saying the reaction did not go. So he agreed. Maybe I should have been warned. Anyway, I had some long-winded syntheses to do.
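Since the Hammett equation does the heavy lifting in that argument, it is worth writing down its usual textbook form (general background only, not the specific treatment in my thesis):

$$\log\left(\frac{K}{K_0}\right) = \sigma\rho$$

Here $K_0$ is the ionisation constant of the unsubstituted compound, $\sigma$ measures the electron-donating or electron-withdrawing character of the substituent, and $\rho$ measures how sensitive the reaction centre is to that character. The point of the experiment was to see how the amine basicities responded to the substituents.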

When it came to the final stages, my supervisor went to North America on sabbatical, and then went sequentially looking for a new position in North America, so I was on my own. The amine work did not yield the desired result because the key substituent, a nitro group, reacted with the amine very quickly. That was a complete surprise. I could make the salt, but a solution containing free amine quickly discoloured. However, on a fleeting visit my supervisor made a useful suggestion: react the acids in toluene with a diazo compound. While the acids had previously been too similar in properties in water, it turned out that toluene greatly amplified the differences. The results were clear: the cyclopropane ring did not delocalize electrons.

However, all did not go well. The quantum mechanical people who had shown the extreme stability of polywater through electron delocalization turned their techniques to this problem and asserted that delocalization did occur. In support, they showed that the cyclopropane ring stabilized an adjacent positive charge. However, if the strain energy arose through increased electron repulsion, a positive charge would reduce that repulsion. There would be extra stability with a positive charge adjacent, BUT an adjacent negative charge would be destabilized. So there were two possible explanations, and a clear means of telling the difference.

Anions on a carbon atom are common in organic chemistry. All attempts at making such an anion adjacent to a cyclopropane ring failed. A single carbon atom with two hydrogen atoms, and a benzene ring attached forms a very stable anion (called a benzyl anion). A big name replaced one of the hydrogen atoms of a benzyl anion with a cyclopropane ring, and finally made something that existed, although only barely. He published a paper and stated it was stabilized by delocalization. Yes, it was, and the stabilization would have come from the benzene ring. Compared with any other benzyl anion it was remarkably unstable. But the big names had spoken.

Interestingly, there is another test from certain spectra. In what is called an n->π* transition (don’t worry if that means nothing to you) there is a change of dipole moment with the negative end becoming stronger close to a substituent. I calculated the change based on the polarization theory, and came up with almost the correct answer. The standard theory using delocalization has the spectral shift due to the substituent in the opposite direction.

My supervisor, who never spoke to me again and was not present during the thesis write-up, wrote up a paper on the amines, which was safe because it never showed anything that would annoy the masses, but he never published the data that came from his only contribution!

So, what happened? Delocalization won. A review came out that ignored every paper that disagreed with its interpretation, including my papers. Another review dismissed the unexpected spectral shift I mentioned by saying “it is unimportant”. I ended up writing an analysis to show that there were approximately 60 different sorts of observation that were not in accord with the delocalization proposition. It was rejected by review journals as “This is settled” (that it was settled wrongly was irrelevant) and “We do not publish logic analyses.” Well, no, it seems they do not, and do not care that much.

The point I am trying to make here is that while this could be regarded as not exceptionally important, if this sort of wrong behaviour happens to one person, how much happens across the board? I believe I now know why science has stopped making big advances. None of those who are established want to hear anyone question their own work. The sad part is, that is not the only example I have.

Polymerised Water

In my opinion, probably the least distinguished moment in science in the last sixty years occurred in the late 1960s, and not for the seemingly obvious reason. It all started when Nikolai Fedyakin condensed steam in quartz capillaries and found it had unusual properties, including a viscosity approaching that of a syrup. Boris Deryagin improved production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1–1.2. Deryagin decided there were only two possible reasons for this anomalous behaviour:

(a) the water had dissolved quartz,

(b) the water had polymerised.

Since recently fused quartz is insoluble in water at atmospheric pressures, he concluded that the water must have polymerised. There was no other option. An infrared spectrum of the material was produced by a leading spectroscopist from which force constants were obtained, and a significant number of papers were published on the chemical theory of polywater. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life. Then there was the inevitable wake-up call: the IR spectrum of the alleged material bore a remarkable resemblance to that of sweat. Oops. (Given what we know now, whatever they were measuring could not have been what everyone called polywater, and probably was sweat, and how that happened from a very respected scientist remains unknown.)

This material brought out some of the worst in logic. A large number of people wanted to work with it, because theory validated its existence. I gather the US Navy even conducted or supported research into it. The mind boggles here: did they want to encase enemy vessels in toffee-like water, or were they concerned someone might do it to them? Or, even worse, turn the oceans into toffee and thus end all life on Earth? The fact that the military got interested, though, shows it was taken very seriously. I recall one paper that argued Venus is like it is because all its water polymerised!

Unfortunately, I think the theory validated the existence because, well, the experimentalists said it did exist, so the theoreticians could not restrain themselves from “proving” why it existed. The key to this “proof” is that they showed through molecular orbital theory that the electrons in water had to be delocalized. Most readers won’t see the immediate problem because we are getting a little technical here, but to put it in perspective, molecular orbital theory assumes the electrons are delocalized over the whole molecule. If you further assume the water molecules come together as one system, the first assumption requires the electrons to be delocalised over it, which in turn forces the system to become one molecule. If all you can end up with is what you assumed in the first place, your theoretical work is not exactly competent, let alone inspired.

Unfortunately, these calculations involve quantum mechanics. Quantum mechanics is one of the most predictive theories ever, and almost all your electronic devices have parts that would not have been developed but for knowledge of quantum mechanics. The trouble is that for any meaningful problem there is usually no analytical solution from the formal quantum theory generally used; any actual answer requires some rather complicated mathematics and, in chemistry, because of the number of particles, some approximations. Not everyone got the same answers, either. The same computer code in different hands sometimes produced opposite results with no explanation of why the results differed. If there were no differences in the implied physics between methods that gave opposing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. The average computational paper gives very little insight into what was done, and these papers were actually somewhat worse than usual. It was, “Trust me, I know what I’m doing.” In general, they did not.

So, what was it? Essentially, ordinary water with a lot of dissolved silica, i.e. option (a) above. Deryagin was unfortunate in falling victim to the logical fallacy of the accident. Water at 100 degrees C does not dissolve quartz. If you don’t believe me, try boiling water in a pot with a piece of silica. Water does dissolve quartz at supercritical temperatures, but these were not involved. So what happened? Seemingly, water condensing in quartz capillaries does dissolve it. However, now I come to the worst part. Here we had an effect that was totally unexpected, so what happened? After the debacle, nobody was prepared to touch the area. We still do not know why silica in capillaries is so eroded, yet perhaps there is some important information here; after all, water flows through capillaries in your body.

One of the last papers written on “anomalous water” was in 1973, and one of the authors was John Pople, who went on to win a Nobel Prize for his work in computational chemistry. I doubt that paper is one he is most proud of. The good news is that the co-author, who I assume was a post-doc and can remain anonymous because she almost certainly had little control over what was published, had a good career following this.

The bad news was for me. My PhD project involved whether electrons were delocalized from cyclopropane rings. My work showed they were not; however, computations from the same type of computational code said they were. Accordingly, everybody ignored my efforts to show what was really going on. More on this later.

Ossified Science

There was an interesting paper in the Proceedings of the National Academy of Sciences (118, e2021636118, https://doi.org/10.1073/pnas.2021636118) which argued that science is becoming ossified, and new ideas simply cannot emerge. My question is, why has it taken them this long to recognize it? That may seem a strange thing to say, but over the life of my career I have seen no radically new ideas get acknowledgement.

The argument in the paper basically boiled down to one simple fact: over this period there has been a huge expansion in the number of scientists, the research funding, and the number of publications. Progress in the career of a scientist depends on the number of papers produced. However, the more papers produced, the more likely the science is to stagnate, because nobody has the time to read everything. People pick and choose what to read, the selection biased by the need not to omit people who may read your funding application. Reading is thus focused on established thinking. As the number of papers increases, citations flow increasingly towards the already well-cited papers. Lesser-known authors are unlikely ever to become highly cited, and if they do it is not through a cumulative process of analysis. New material is extremely unlikely to disrupt existing work, with the result that progress in large established scientific fields may be trapped in the existing canon. That is fairly stern stuff.

It is important to note there are at least three major objectives relating to science. The first is developing methods to gain information, or, if you prefer, developing new experimental or observational techniques. The second is using those techniques to record more facts. The more scientists there are, the more successful these activities are, and over this period we have most certainly been successful in both objectives. The rapid provision of new vaccines for SARS-CoV-2 shows that when pushed, we find ways to do it. When I started my career, a very large, clunky computer that was incredibly slow and had internal memory measured in bytes occupied a room. Now we have memory that stores terabytes in something you can hold in your hand. So yes, we have learned how to do it, and we have acquired a huge amount of knowledge. There is a glut of facts available.

The third objective is to analyse those facts and derive theories so we can understand nature, and do not have to examine that mountain of data for any reason other than to verify that we are on the right track. That is where little has happened.

As the PNAS paper points out, policy reflects the “more is better” approach. Rewards are for the number of articles, with citations supposedly reflecting their quality. The number of publications is easily counted, but citations are more problematic. To get the numbers up, people carry out the work most likely to reach a fast result. The papers cited are the ones most easily found, which means those that get a good start gather citations like crazy. There are also “citation games”: you cite mine, I’ll cite yours. Such citations may have nothing in particular to add in terms of the science or logic, but they do add to the career prospects.

What happens when a paper is published? As the PNAS paper says, “cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars”. If a new paper does not fit the existing dynamic, it will be ignored. If the young researcher wants to advance, he or she must avoid trying to rock the boat. You may feel that the authors of this are overplaying a non-problem. Not so. One example shows how the scientific hierarchy thinks. One of the two major advances in theoretical physics in the twentieth century was quantum mechanics. Basically, all our advanced electronic technology depends on that theory, and in turn the theory is based on one equation published by Erwin Schrödinger. This equation is effectively a statement that energy is conserved, and that the energy is determined by a wave function ψ. It is too much to go into here, but the immediate consequence was the problem: what exactly does ψ represent?

Louis de Broglie was the first to propose that quantum motion was represented by a wave, and he came up with a different equation which stated the product of the momentum and wavelength was Planck’s constant, or the quantum of action. De Broglie then proposed that ψ was a physical wave, which he called the pilot wave. This was promptly ignored in favour of a far more complicated mathematical procedure that we can ignore for the present. Then, in the early 1950s David Bohm more or less came up with the same idea as de Broglie, which was quite different from the standard paradigm. So how was that received? I found a 1953 quote from J. R. Oppenheimer: “We consider it juvenile deviationism .. we don’t waste our time … [by] actually read[ing] the paper. If we cannot disprove Bohm, then we must agree to ignore him.” So much for rational analysis.
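For reference, the two relations being contrasted here, in their standard textbook forms (nothing specific to the pilot-wave papers), are

$$i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi \qquad \text{and} \qquad p\,\lambda = h .$$

The first is Schrödinger’s equation, in which the energy operator $\hat{H}$ acting on $\psi$ determines how $\psi$ evolves; the second is de Broglie’s relation, the product of momentum and wavelength being Planck’s constant, the quantum of action. The whole dispute is over what $\psi$ itself actually is.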

The standard theory states that if an electron is fired at two slits it goes through BOTH of them then gives an interference pattern. The pilot wave says the electron has a trajectory, goes through one slit only, and while it forms the same interference pattern, an electron going through the left slit never ends up in the right hand pattern. Observations have proved this to be correct (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer Science 332: 1170 – 1173.) Does that change anyone’s mind? Actually, no. The pilot wave is totally ignored, except for the odd character like me, although my version is a little different (called a guidance wave) and it is ignored more.

Nanotech Antivirals

Most by now will have heard of nanotechnology, although probably rather few actually understand its full implications. There are strict limits to what can be done, and the so-called nanotech that was supposedly to be put into vaccines to allow Bill Gates to know where you were is simply not on. (Why anyone would think Bill Gates would care where you are also eludes me.) However, nanotechnology has some interesting uses in the fight against viruses. The Pfizer and Moderna vaccines use messenger RNA to develop cell resistance, and the RNA is delivered to the cells by being encased in lipid nanoparticles. Lipids are technically any substance from living organisms that is soluble in organic solvents and insoluble in water, but they are often just fats. The lipid breaks open the cell wall, allowing the messenger RNA to get in, and of course that is the method of the virus as well: it is RNA encased in lipid. This technique can be used in other ways, and such nanoparticles are showing promise as delivery vehicles for other drugs and vaccines.

However, there may be an even more interesting use, as outlined in Nature Biotech. 39: 1172 – 4. The idea is that such nanomaterials could engage with viruses directly, either disrupting them or binding them. A route to disruption may involve nothing more than breaking apart the virus’ outer membrane. The binding approach works because many viruses rely on glycoproteins on their surface to bind to host cells. Anything that can mimic these cellular attachment points can bind the virus, acting effectively as a “nanosponge” for mopping them up. One way to make such “sponges” is to take something like red blood cells, remove their contents, then break the remaining membrane into thousands of tiny vesicles about 100 nanometers wide. These vesicles are then made to encase a biocompatible and biodegradable polymer, with the result that each such piece of polymer is coated with genuine cell membrane. Viruses recognize the cell membrane, attach and try to enter the cell, but for them the contents are something of a disappointment, and they can’t get back out.

Such membranes, obtained from human lung epithelial type II cells or from human macrophages, have angiotensin-converting enzyme 2 (ACE 2) and CD147, both of which SARS-CoV-2 binds to. Potentially we have a treatment that will clean up a Covid-19 infection. According to a study with mice, it “showed efficacy” against the virus and showed no evidence of toxicity. Of course, there remains a way to go.

A different approach that shows promise is to construct nano-sized particles coated with something that will bind the virus. One example was a nasal spray for mice that led to a 99% reduction in viral load when the mice were exposed to SARS-CoV-2-laden air. It is claimed the particles are not absorbed by the body, although so far the clinical study has not been peer reviewed. The advantage of this approach is that it can in principle be applied to a reasonably wide range of viruses. A further approach was to make “shells” out of DNA and coat the inner side of these with something that will bind viruses. With several attachment sites, the virus cannot get out, and because of the bulk of the shell it cannot bind to a cell and hence cannot infect. In this context, it is not clear whether, in the other approaches that merely bind the virus, a bound virus can still infect by attacking from its free side.

Many viruses have an outer membrane that is a phospholipid bilayer, and this is essential for the virus to be able to fuse with cell membranes. A further approach is to disrupt the viral membrane, and thus stop the fusing. One example is to form a nano-sized spherical surfactant particle and coat it with entities such as peptides that bind to viral glycoproteins. The virus attaches, then the surfactant simply destroys the viral membrane. As can be seen, there is a wide range of possible approaches. Unfortunately, as yet they are still at the preliminary stages, and while efficacy has been shown in vitro and in mice, it is unclear what the long-term effects will be. Of course, if the patient is dying of a viral attack, long-term problems are not on his/her mind. One of the side-effects of SARS-CoV-2 may be that it has stimulated genuine research into the topic. Thus the Biden administration is firing $3 billion at research. It is a pity it takes a pandemic to get us into action, though.

Unexpected Astronomical Discoveries

This week, three unexpected astronomical discoveries. The first relates to white dwarfs. A star like our sun is argued to eventually run out of hydrogen, at which point its core collapses somewhat and it starts to burn helium, which it converts to carbon and oxygen, and gives off a lot more energy. This is a much more energetic process than burning hydrogen to helium, so although the core contracts, the star itself expands and becomes a red giant. When it runs out of that, it has two choices. If it is big enough, the core contracts further and it burns carbon and oxygen, rather rapidly, and we get a supernova. If it does not have enough mass, it tends to shed its outer matter and the rest collapses to a white dwarf, which glows mainly due to residual heat. It is extremely dense, and if it had the mass of the sun, it would have a volume roughly that of Earth.

Because it does not run fusion reactions, it cannot generate heat, so it will gradually cool, getting dimmer and dimmer, until eventually it becomes a black dwarf. It gets old and it dies. Or at least that was the theory up until very recently. Notice anything wrong with what I have written above?

The key is “runs out”. The problem is that all these fusion reactions occur in the core, but what is going on outside it? It takes light formed in the core about 100,000 years to get to the surface. Strictly speaking, that is calculated, because nobody has gone to the core of a star to measure it, but the point is made. It takes that long because the light keeps running into atoms on the way out, getting absorbed and re-emitted. But if light runs into that many obstacles getting out, why do you think all the hydrogen would work its way to the core? Hydrogen is light, and it would prefer to stay right where it is. So even when a star goes supernova, there is still hydrogen in it. Similarly, when a red giant sheds outer matter and collapses, it does not necessarily shed all its hydrogen.

The relevance? The Hubble Space Telescope has made another discovery, namely that it has found white dwarfs burning hydrogen on their surfaces. A slightly different version of “forever young”. They need not run out at all, because interstellar space, and even intergalactic space, still has vast masses of hydrogen that, while thinly dispersed, can still be gravitationally acquired. The surface of the dwarf, having so much mass and so little size, will have an intense gravity to make up for the lack of exterior pressure. It would be interesting to know if they could determine the mechanism of the fusion. I would suspect it mainly involves the CNO cycle. What happens there is that protons (hydrogen nuclei) are captured in sequence by a nucleus that starts out as ordinary carbon-12; each capture makes the element with one more proton and gives off a gamma photon, while the unstable intermediates decay to give a positron and a neutrino, until the chain reaches nitrogen-15 (having passed through oxygen-15), after which absorbing one more proton makes it spit out helium-4 and return to carbon-12. The gamma spectrum (if it is there) should give us a clue.
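For completeness, the CNO-I chain I have in mind runs as follows (standard textbook nuclear physics, not anything inferred from the Hubble observations):

$$
\begin{aligned}
{}^{12}\mathrm{C} + \mathrm{p} &\rightarrow {}^{13}\mathrm{N} + \gamma \\
{}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu \\
{}^{13}\mathrm{C} + \mathrm{p} &\rightarrow {}^{14}\mathrm{N} + \gamma \\
{}^{14}\mathrm{N} + \mathrm{p} &\rightarrow {}^{15}\mathrm{O} + \gamma \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu \\
{}^{15}\mathrm{N} + \mathrm{p} &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}
\end{aligned}
$$

The gamma photons come from the proton captures and the positrons and neutrinos from the decays of nitrogen-13 and oxygen-15, which is why a gamma spectrum, if one can be obtained, would be diagnostic.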

The second is the discovery of a new Atira asteroid, which orbits the sun every 115 days and has a semi-major axis of 0.46 A.U. The only known object in the solar system with a smaller semi-major axis is Mercury, which orbits the sun in 88 days. Another peculiarity of its orbit is that it can only be seen when it is away from the line of the sun, and as it happens, at those times it is very difficult to see from the Northern Hemisphere. It would be interesting to know its composition. Standard theory has it that all the asteroids we see have been dislodged from the asteroid belt, because the planets would have cleaned out any such bodies that were there from the time of the accretion disk. And, of course, we can show that many asteroids were so dislodged, but many does not mean all. The question then is, how reliable is that proposed clean-out? I suspect, not very. The idea is that numerous collisions would give the asteroids an eccentricity that would lead them to eventually collide with a planet, so the fact they are there means they have to be resupplied, and the asteroid belt is the only source. However, I see no reason why some could not have avoided this fate. In my ebook “Planetary Formation and Biogenesis” I argue that the two possibilities would have clear compositional differences, hence my interest. Of course, getting compositional information is easier said than done.
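As a quick plausibility check on those orbital figures (my own back-of-envelope arithmetic, not taken from the discovery announcement), Kepler's third law for an object orbiting the sun gives a period very close to the reported 115 days:

```python
# Kepler's third law for a body orbiting the Sun: P[years] = a[AU]**1.5
def period_days(a_au: float) -> float:
    return (a_au ** 1.5) * 365.25

print(f"a = 0.46 AU  -> P = {period_days(0.46):.0f} days")   # ~114 days, close to the reported 115
print(f"a = 0.387 AU -> P = {period_days(0.387):.0f} days")  # Mercury for comparison, ~88 days
```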

The third “discovery” is awkward. Two posts ago I wrote how the question of the nature of dark energy might not be a question because it may not exist. Well, no sooner had I posted, than someone came up with a claim for a second type of dark energy. The problem is, if the standard model is correct, the Universe should be expanding 5 – 10% faster than it appears to be doing. (Now, some would say that indicates the standard model is not quite right, but that is apparently not an option when we can add in a new type of “dark energy”.) This only applied for the first 300 million years or so, and if true, the Universe has suddenly got younger. While it is usually thought to be 13.8 billion years old, this model has it at 12.4 billion years old. So while the model has “invented” a new dark energy, it has also lost 1.4 billion years in age. I tend to be suspicious of this, especially when even the proposers are not confident of their findings. I shall try to keep you posted.

The Universe is Shrinking

Dark energy is one of the mysteries of modern science. It is supposed to amount to about 68% of the Universe, yet we have no idea what it is. Its discovery led to Nobel prizes, yet it is now considered possible that it does not even exist. To add or subtract 68% of the Universe seems a little excessive.

One of the early papers (Astrophys. J., 517, pp565-586) supported the concept. What they did was to assume that type 1A supernovae always give out the same amount of light, so by measuring the intensity of that light and comparing it with the red shift of the light, which indicates how fast the source is going away, they could assess whether the rate of expansion of the universe had been even over time. The standard theory at the time was that it had been, expanding at a rate given by the Hubble constant (named after Edwin Hubble, who first proposed it). They examined 42 type 1A supernovae with red shifts between 0.18 and 0.83, and compared their results on a graph with what they expected from the line drawn using the Hubble constant, which is what you expect with zero acceleration, i.e. uniform expansion. Their results at a distance were uniformly above the line, and while there were significant error bars, because instruments were being operated at their extremes, the result looked unambiguous. The far distant ones were going away faster than expected from the nearer ones, and that could only arise if the rate of expansion were accelerating.

For me, there was one fly in the ointment, so to speak. The value of the Hubble constant they used was 63 km/s/Mpc. The modern value is more like 68 or 72; there are two values, and they depend on how you measure them, but both are somewhat larger than this. Now, if you have the speed wrong when you predict how far the light has travelled, the further away the object is, the bigger the error, which means you think it has speeded up.
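To put a number on that (a minimal sketch of my own, using nothing more than the simple linear Hubble law, distance = velocity/H0, which is all the argument needs):

```python
# If the true Hubble constant is nearer 70 km/s/Mpc but you calibrate with 63,
# every predicted distance is inflated by the same factor (~70/63), so the
# absolute discrepancy between prediction and reality grows with distance.
H_USED, H_TRUE = 63.0, 70.0   # km/s/Mpc; 70 is illustrative, between the modern 68 and 72

for v in (5_000, 20_000, 80_000):            # recession velocities in km/s
    d_used = v / H_USED                       # distance inferred with the older value
    d_true = v / H_TRUE                       # distance with a larger, modern-style value
    print(f"v = {v:>6} km/s: d(63) = {d_used:7.1f} Mpc, d(70) = {d_true:7.1f} Mpc, "
          f"difference = {d_used - d_true:6.1f} Mpc")
```

The fractional offset is the same everywhere, but in absolute terms it is the most distant supernovae that look most out of place.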

Over the last few years there have been questions as to exactly how accurate this determination of acceleration really is. There has been a suggestion (arXiv:1912.04903) that the luminosity of these supernovae evolves as the Universe ages, which has the effect that measuring the distance this way leads to overestimation of the distance. Different work (Milne et al. 2015. Astrophys. J. 803: 20) showed that there are at least two classes of 1A supernovae, blue and red, and they have different ejecta velocities; if the usual techniques are used, the light intensity of the red ones will be underestimated, which makes them seem further away than they are.

My personal view is there could be a further problem. A type 1A supernova occurs when a white dwarf strips mass from a companion star until it gets big enough to ignite the explosion. That is why they are believed to have the same brightness: they ignite at the same mass, so the conditions should be the same, and hence the brightness should be the same. However, this is not necessarily the case, because the outer layer, which generates the light we see, comes from the non-exploding star, and will absorb and re-emit energy from the explosion. Hydrogen and helium are poor radiators, but they will absorb energy. Nevertheless, the brightest light might be expected to come from the heavier elements, and the amount of them increases as the Universe ages and atoms are recycled. That too might lead to the appearance that the more distant ones are further away than expected, which in turn suggests the Universe is accelerating its expansion when it isn’t.

Now, to throw the spanner further into the works, Subir Sarkar has added his voice. He is unusual in that he is both an experimentalist and a theoretician, and he has noted that the 1A supernovae, while taken to be “standard candles”, do not all emit the same amount of light; according to Sarkar, they vary by up to a factor of ten. Further, the fundamental data was previously not available, but in 2015 it became public. He did a statistical analysis and found that the data supported a cosmic acceleration, but only with a statistical significance of three standard deviations, which, according to him, “is not worth getting out of bed for”.
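For what those standard deviations mean in rough probability terms (my own conversion, assuming the usual Gaussian convention):

```python
import math

# Two-sided probability of a fluctuation at least this many standard deviations
# from the mean, assuming a Gaussian distribution.
def two_sided_p(n_sigma: float) -> float:
    return math.erfc(n_sigma / math.sqrt(2.0))

print(f"3 sigma: p ~ {two_sided_p(3):.4f}")   # ~0.0027, suggestive but well short of the 5-sigma discovery standard
print(f"2 sigma: p ~ {two_sided_p(2):.4f}")   # ~0.0455, roughly a one-in-twenty fluke
```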

There is a further problem. Apparently the Milky Way is heading off in some direction at 600 km/s, and this rather peculiar flow extends out to about a billion light years, and unfortunately most of the supernovae studied so far are in this region. This drops the statistical significance for cosmic acceleration to two standard deviations. He then accuses the previous supporters of cosmic acceleration of confirmation bias: the initial workers chose an unfortunate direction to examine, but the subsequent ones “looked under the same lamppost”.

So, a little under 70% of what some claim is out there might not be. That is ugly. Worse, about 27% is supposed to be dark matter, and suppose that did not exist either, and the only reason we think it is there is because our understanding of gravity is wrong on a large scale? The Universe now shrinks to about 5% of what it was. That must be something of a record for the size of a loss.

Asteroid (16) Psyche – Again! Or Riches Evaporate, Again

Thanks to my latest novel “Spoliation”, I have had to take an interest in asteroid mining. I discussed this in a previous post (https://ianmillerblog.wordpress.com/2020/10/28/asteroid-mining/) in which I mentioned the asteroid (16) Psyche. As I wrote, there were statements saying the asteroid had almost unlimited mineral resources. Initially, it was estimated to have a density (g/cc) of about 7, which would make it more or less solid iron. It should be noted this might well be a consequence of extreme confirmation bias. The standard theory has it that certain asteroids differentiated and had iron cores, then collided and the rock was shattered off, leaving the iron cores. Iron meteorites are allegedly the result of collisions between such cores. If so, it has been estimated there have to be about 75 iron cores floating around out there, and since Psyche had a density so close to that of iron (about 7.87) it must be essentially solid iron. As I wrote in that post, “other papers have published values as low as 1.4 g/cm cubed, and the average value is about 3.5 g/cm cubed”. The latest value is 3.78 ± 0.34.

These varied numbers show how difficult it is to make these observations. Density is mass per volume. We determine the volume from the size, and we can measure the “diameter”, but the target is a very long way away and it is small, so it is difficult to get an accurate “diameter”. The next point is that it is not a true sphere, so there are extra “bits” of volume from hills, or “bits missing” from craters. Further, the volume depends on the diameter cubed, so if you make a ten percent error in the “diameter” you have roughly a 30% error overall. The mass has to be estimated from its gravitational effects on something else. That means you have to measure the distance to the asteroid, the distance to the other asteroid, and determine the difference from what is expected as they pass each other. This difference may be quite tiny. Astronomers are working at the very limit of their equipment.
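A quick numerical illustration of that error propagation (my own arithmetic, not from any of the Psyche papers):

```python
# Volume scales as the diameter cubed, so a modest error in the measured
# diameter becomes a much larger error in volume, and hence in density.
for err in (0.05, 0.10):
    vol_ratio = (1 + err) ** 3
    print(f"{err:.0%} diameter error -> {vol_ratio - 1:.0%} volume (and density) error")
# 5%  -> ~16%
# 10% -> ~33%
```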

A quick pause for some silicate chemistry. Apart from granitic/felsic rocks, which are aluminosilicates, most silicates come in two classes of general formula: A – olivines, X2SiO4, or B – pyroxenes, XSiO3, where X is some mix of divalent metals, usually mainly magnesium or iron (hence their name, mafic, the iron being ferrous). However, calcium is often present. Basically, these elements are the most common metals in the output of a supernova, with magnesium being the most common. If X is only magnesium, the density for A (forsterite) is 3.27 and for B (enstatite) 3.2. If X is only iron, the density for A (fayalite) is 4.39 and for B (ferrosilite) 4.00. Now we come to further confirmation bias: to maintain the iron content of Psyche, the density is compared to enstatite chondrites, and the difference is made up with iron. Another way to maintain the concept of “free iron” is the proposition that the asteroid is made of “porous metal”. How do you make that? A porous rock, like pumice, is made by a volcano spitting out magma with water dissolved in it, and as the pressure drops the water turns to steam. However, you do not get any volatiles dissolving in molten iron.
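To see how sensitive the enstatite-plus-iron bookkeeping is, here is a hypothetical two-component mixing calculation using the densities quoted above (pure enstatite at 3.2 and metallic iron at 7.87; real enstatite chondrites are not pure enstatite and porosity is ignored, so this is purely illustrative):

```python
# Hypothetical volume-fraction mix: how much metallic iron would have to be
# added to pure enstatite to reach a given bulk density?
RHO_ENSTATITE = 3.2    # g/cc, quoted above
RHO_IRON = 7.87        # g/cc, quoted above

def iron_volume_fraction(rho_bulk: float) -> float:
    return (rho_bulk - RHO_ENSTATITE) / (RHO_IRON - RHO_ENSTATITE)

for rho in (3.44, 3.78, 4.12):   # the latest value, 3.78, and the ends of its +/- 0.34 error bar
    print(f"bulk density {rho:.2f} g/cc -> ~{iron_volume_fraction(rho):.0%} iron by volume")
# Roughly 5%, 12% and 20%: nothing like a solid iron body, and the answer
# swings strongly within the quoted uncertainty.
```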

Another reason to support the iron concept was that the reflectance spectrum was “essentially featureless”. The required features come from specific vibrations, and a metal does not have any. Neither does a rough surface that scatters light. The radar albedo (how bright it is with reflected radar) is 0.34, which implies a surface density of 3.5, which is argued to indicate either metal with 50% porosity, or solid silicates (rock). It also means no core is predicted. The “featureless spectrum” was claimed to have an absorption at 3 μm, indicating hydroxyl, which in turn indicates silicate. There is also a signal corresponding to an orthopyroxene. The emissivity indicates a metal content greater than 20% at the surface, but if this were metal, there should be a polarised emission, and that is completely absent. At this point, we should look more closely at what “metal” means. In many cases, while it is used to convey what we would consider a metal, the actual usage includes chemical compounds of a metallic element. The iron may be present as iron sulphide, the oxide, or, as I believe is the answer, the silicate. I think we are looking at the iron content of average rock. Fortune does not await us there.

In short, the evidence is somewhat contradictory, in part because we are using spectroscopy at the limits of its usefulness. NASA intends to send a mission to evaluate the asteroid and we should wait for that data.

But what about iron cored asteroids? We know there are metallic iron meteorites so where did they come from? In my ebook “Planetary Formation and Biogenesis”, I note that the iron meteorites, from isotope dating, are amongst the oldest objects in the solar system, so I argue they were made before the planets, and there were a large number of them, most of which ended up in planetary cores. The meteorites we see, if that is correct, never got accreted, and finally struck a major body for the first time.

Food on Mars

Settlers on Mars will have needs, but the most obvious ones are breathing and eating, and both of these are likely to involve plants. Anyone thinking of going to Mars should think about these, and if you look at science fiction the answers vary. Most simply assume everything is taken care of, which is fair enough for a story. Then there is the occasional story with slightly more detail. Andy Weir’s “The Martian” is simple. He grows potatoes. Living on such a diet would be a little spartan, but his hero had no option, being essentially a Robinson Crusoe without a Man Friday. The oxygen seemed to be a given. The potatoes were grown in what seemed to be a pressurised plastic tent, and to get water he catalytically decomposed hydrazine to make hydrogen and then burnt that. A plastic tent would not work. The UV radiation would first make the tent opaque, so the necessary light would not get in very well, then the plastic would degrade. As for making water, burning the hydrazine directly would have been sufficient, but better still, why not put the base where there was ice?

I also have a novel (“Red Gold”) where a settlement tries to get started. Its premise is that there is a main settlement with fusion reactors, and hence the energy to make anything, but the main hero is “off on his own” and has to make do with less, although he can bring things from the main settlement. He builds giant “glass houses” made with layers of zinc-rich glass that shield the inside from UV radiation. Stellar plasma ejections are diverted by a superconducting magnet at the L1 position between Mars and the sun (proposed years before NASA suggested it) and the hero lives in a cave. That would work well for everything except cosmic radiation, but is that going to be that bad? Initially everyone lives on hydroponically grown microalgae, but the domes permit ordinary crops. The plants grow in treated soil, but as another option a roof is put over a minor crater and water provided (with solar heating from space) in which macroalgae and marine microalgae grow, as well as fish and other species, like prawns. The atmosphere is nitrogen, separated from the Martian atmosphere, plus some carbon dioxide, and the plants make oxygen. (There would have to be some oxygen to get started, but plants on Earth grew without oxygen initially.)

Since then there have been other quite dramatic proposals from more official sources that assume a lot of automation to begin with. One of the proposals involves constructing huge greenhouses by covering a crater or valley. (Hey, I suggested that!) But the roof is flat and made of plastic, the plastic being made from polyethylene 2,5-furandicarboxylate, a polyester made from carbohydrates grown by the plants. This is used as a bonding agent to make a concrete from Martian rock. (In my novel, I explained why a cement is very necessary, but there are limited uses.) The big greenhouse model has some limitations. In this, the roof is flat and in essentially two layers, and in between are vertical stacks of algae growing in water. The extra value here is that water filters out the effect of cosmic rays, although you need several meters of it. Now we have a problem. The idea is that underneath this there is a huge habitat, and every cubic meter of water is one tonne of mass, which on Mars is about 0.4 tonne of force on the lower flat deck. If this bottom deck is the opaque concrete, then something bound by plastic adhesion will slip. (Our concrete on bridges is only inorganic, the binding is chemical, not physical, and further there is steel reinforcing.) Below this there would need to be many weight-bearing pillars. And there would need to be light generation between the decks (to get the algae to grow) and down below. Nuclear power would make this easy. Food can be grown as algae between the decks, or in the ground down below.
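The load figure is easy to check (my own arithmetic, using the standard value for Mars surface gravity):

```python
# Weight of one cubic metre of water on Mars, expressed as an Earth tonne-force.
MASS_KG = 1000.0     # one cubic metre of water
G_MARS = 3.71        # m/s^2, Mars surface gravity
G_EARTH = 9.81       # m/s^2

weight_n = MASS_KG * G_MARS
tonne_force = weight_n / (1000.0 * G_EARTH)
print(f"{weight_n:.0f} N, i.e. about {tonne_force:.2f} tonne-force")   # ~0.38, the ~0.4 tonne in the text
```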

As I see it, construction of this would take quite an effort and a huge amount of materials. The concept is the plants could be grown to make the cement to make the habitat, but hold on, where are the initial plants going to grow, and who/what does all the chemical processing? The plan is to have that in place from robots before anyone gets there but I think that is greatly overambitious. In “Red Gold” I had the glass made from regolith processed with the fusion energy. The advantage of glass over this new suggestion is weight; even on Mars with its lower gravity millions of tonnes remains a serious weight. The first people there will have to live somewhat more simply.

Another plan that I have seen involves finding a frozen lake in a crater, and excavating an “under-ice” habitat. No shortage of water, or screening from cosmic rays, but a problem as I see it is said ice will melt from the heat, erode the bottom of the sheet, and eventually it will collapse. Undesirable, that is.

All of these “official” options use artificial lighting. Assuming a nuclear reactor, that is not a problem in itself, although it would be for the settlement under the ice because heat control would be a problem. However, there is more to getting light than generating energy. What gives off the light, and what happens when its lifetime expires? Do you have to have a huge number of spares? Can they be made on Mars?

There is also the problem with heat. In my novel I solved this with mirrors in space focussing more sunlight on selected spots, and of course this provides light to help plants grow, but if you are going to heat from fission power a whole lot more electrical equipment is needed. Many more things to go wrong, and when it could take two years to get a replacement delivered, complicated is what you do not want. It is not going to be that easy.

Climate Change: Are We Baked?

The official position from the IPCC’s latest report is that the problem of climate change is getting worse. The fires and record high temperatures in the Western United States and British Columbia, Greece and Turkey may be portents of what is coming. There have been terrible floods in Germany, and New Zealand has had its bad time with floods as well. While Germany was getting flooded, the town of Westport was inundated, and the Buller river had flows about 30% higher than its previous record flood. There is the usual hand-wringing from politicians. Unfortunately, at least two serious threats have been ignored.

The first is the release of additional methane. Methane is a greenhouse gas that is about 35 times more efficient at retaining heat than carbon dioxide. The reason is that infrared absorption depends on the change of dipole moment during the vibration. CO2 is a linear molecule with three types of vibrational mode. One involves the two oxygen atoms moving out and in together, so the two bond dipole changes cancel each other and there is no net change of dipole moment. Another is as if the oxygen atoms were stationary and the carbon atom wobbled between them. The two dipoles now do not cancel, so that mode absorbs, but the partial cancellation reduces the strength. The third involves molecular bending, but the very strong bonds mean the bend does not move that much, so again the absorption is weak. That is badly oversimplified, but I hope you get the picture.

Methane has four vibrations, and rather than describe them, try this link: http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html

Worse, its vibrations are in regions totally different from carbon dioxide, which means it is different radiation that cannot escape directly to space.

This summer, average temperatures in parts of Siberia were 6 degrees Centigrade above the 1980 – 2000 average, and methane is starting to be released from the permafrost. Methane forms a clathrate with ice; that is, under pressure it rearranges the ice structure and inserts itself, but the clathrate decomposes on warming to near the ice melting point. This methane formed from the anaerobic digestion of plant material and has been trapped by the cold, so if released we suddenly get delivered all the methane that otherwise would have been released and destroyed over several million years. There are an estimated eleven billion tonnes of methane in clathrates that could be subject to decomposition, about the effect of over 35 years of all our carbon dioxide emissions, except that, as I noted, this works in a totally fresh part of the spectrum. So methane is a problem; we all knew that.

What we did not know is that a new source has been identified, as published recently in the Proceedings of the National Academy of Sciences. Apparently significantly increased methane concentrations were found in two areas of northern Siberia: the Taymyr fold belt and the rim of the Siberian Platform. These are limestone formations from the Paleozoic era. In both cases the methane increased significantly during heat waves. The soil there is very thin, so there is very little vegetation to decay, and it was claimed the methane was stored in, and is now emitted from, fractures in the limestone.

The second major problem concerns the Atlantic Meridional Overturning Circulation (AMOC), also known as the Atlantic conveyor. What it does is take warm water up the East Coast of the US, where it gets increasingly salty, then switch to the Gulf Stream and warm Europe (and provide moisture for the floods). As it loses water it gets increasingly salty, and with its increased density it dives to the ocean floor and flows back towards the equator. Why this is a problem is that the melting northern polar and Greenland ice is providing a flood of fresh water that dilutes the salty water. When the density of the water is insufficient to cause it to sink, this conveyor will simply stop. At that point the whole Atlantic circulation as it is now stops. Europe chills, but the ice continues to melt. Because this is a “stopped” circulation, it cannot simply be restarted, because the ocean will go and do something else. So, what to do? The first thing is that simply stopping burning a little coal won’t be enough. If we stopped emitting CO2 now, the northern ice would keep melting at its current rate. All we would do is stop it melting faster.

Scientists Behaving Badly

You may think that science is a noble activity carried out by dedicated souls thinking only of the search for understanding and of improving the lot of society. Wrong! According to an item published in Nature (https://doi.org/10.1038/d41586-021-02035-2) there is rot in the core. A survey was sent to 64,000 researchers at 22 universities in the Netherlands; 6,813 actually filled out the form and returned it, and an estimated 8% of those who responded to the anonymous survey confessed to falsifying or fabricating data at least once between 2017 and 2020. Given that a fraudster is less likely to confess, that figure is probably a clear underestimate.

There is worse. More than half of respondents also reported frequently engaging in “questionable research practices”. These include using inadequate research designs, which can be due to poor funding and hence is more understandable, and frankly could be a matter of opinion. On the other hand, if you confess to doing it you are at best slothful. Much worse, in my opinion, was deliberately judging manuscripts or funding applications unfairly while peer reviewing. Questionable research practices are “considered lesser evils” than outright research misconduct, which includes plagiarism and data fabrication. I am not so sure of that. Dismissing someone else’s work or funding application hurts their career.

There was then the question of “sloppy work”, which included failing to “preregister experimental protocols (43%), make underlying data available (47%) or keep comprehensive research records (56%)”. I might be in danger here. I had never heard about “preregistering protocols”. I suspect that is more for medical research than for the physical sciences. My research has always been of the sort where you plan the next step based on the last step you have taken. As for “comprehensive records”, I must admit my lab books have always been cryptic. My plan was to write it down, and as long as I could understand it, that was fine. Of course, I have worked independently, and records were kept so I could report more fully and, to some extent, for legal reasons.

If you think that is bad, there is worse in medicine. On July 5 an item appeared in the British Medical Journal with the title “Time to assume that health research is fraudulent until proven otherwise?” One example: a professor of epidemiology apparently published a review paper that included a paper showing mannitol halved the death rate from comparable injuries. It was pointed out to him that the paper he reviewed was based on clinical trials that never happened! All the trials came from a lead author who “came from an institution” that never existed! There were a number of co-authors, but none had ever contributed patients, and many did not even know they were co-authors. Interestingly, none of the trials have been retracted, so the fake stuff is still out there.

Another person who carried out systematic reviews eventually realized that only too many related to “zombie trials”. This is serious because it is only by reviewing a lot of different work that some more important over-arching conclusions can be drawn, and if a reasonable percentage of the data is just plain rubbish, everyone can jump to the wrong conclusions. Another medical expert attached to the journal Anaesthesia found that of 526 trials, 14% had false data and 8% were categorised as zombie trials. Remember, if you are ever operated on, anaesthetics are your first hurdle! One expert has guessed that 20% of clinical trials as reported are false.

So why doesn’t peer review catch this? The problem for a reviewer such as myself is that when someone reports numbers representing measurements, you naturally assume they were the results of measurement. I look to see that they “make sense” and if they do, there is no reason to suspect them. Further, to reject a paper because you accuse it of fraud is very serious to the other person’s career, so who will do this without some sort of evidence?

And why do they do it? That is easier to understand: money and reputation. You need papers to get research funding and to keep your position as a scientist. It is very hard to detect, unless someone repeats your work, and even then there is the question, did they truly repeat it? We tend to trust each other, as we should be able to. Published results get rewards, publishers make money, Universities get glamour (unless they get caught out). Proving fraud (as opposed to suspecting it) is a skilled, complicated and time-consuming process, and since it shows badly on institutions and publishers, they are hardly enthusiastic. Evil peer review, i.e. dumping someone’s work to promote your own is simply strategic, and nobody will do anything about it.

It is, apparently, not a case of “bad apples”, but as the BMJ article states, a case of rotten forests and orchards. As usual, as to why, follow the money.