Natural Hydrogen

Smoking is a hazard to life, but there was an exceptional demonstration of this in Mali in 1987. According to an article in Science (vol 379, 631), since much of Mali is short of water, some people were digging a well. They got down to 108 meters with no water, so they gave up. Then, to their surprise, a wind started coming out of the hole. How could that be? Someone stuck his head over the hole to look, and he was smoking. The wind exploded in his face. The well then caught fire, and continued burning with a colourless flame and no soot. What they had discovered was a deposit of hydrogen. At first this was regarded as an oddity, but according to Science there is now growing interest in “natural hydrogen”. As for the Malian hole, a local team installed an engine designed to burn hydrogen and hooked it up to a 300 kW generator. For the first time, the local village had electricity. The suggestion now is that hydrogen deposits may be more common than generally thought. So why has it taken this long to find them? Mainly because the hydrogen does not originate from natural gas, so it is not found in the same places; indeed, it is found in the very places where natural gas and oil are not. The Malian find was an accident; they were looking for water.

So where does it come from? Interestingly enough, the way it is made was a critical component of how I argued that the precursors to life originated. In my opinion, the planet originally accreted its water attached to rock, most of it to aluminosilicates, which later under heat and pressure lost their water and were extruded to the surface as granite. The reason Earth has far more granite than any other rocky planet is that it formed at a distance from the star where the accretion disk temperatures allowed aluminosilicates to phase separate early in the disk, and subsequently attract water and act as a cement to help form Earth. (Indeed, so far Earth is the only planet with significant amounts of granite, although I expect there will be some on the Venusian highlands. Granite floats on basalt, which is why we have continents.) The question then is, what happened to the water? Obviously, some was emitted and now comprises our oceans and fresh-water reserves, but was all of it? Was some retained deeper down? There is one estimate, in the Handbook of Chemistry and Physics, that suggests there is just as much down there as up here.

Suppose there is, and suppose it is deep enough to be hot. Near the surface, basaltic rock, which comprises olivines and pyroxenes, has its iron content as ferrous. Thus an olivine has the formula {Fe,Mg}SiO4. The brackets mean it can have any combination of these, plus any other divalent element, such that the valencies of the bracketed part sum to four. Pyroxenes have the formula {Fe,Mg}SiO3, where the valencies of the bracketed part sum to two. (Ferrous iron and magnesium both have a valency of 2.) However, there are two routes by which such rock can make hydrogen. The first is that water, ferrous iron, and heat make ferric iron and hydrogen, so water on much basaltic rock will make hydrogen if the pressure and heat are high enough. The second is that given enough pressure, olivine at least converts three ferrous ions to two ferric ions plus one iron atom. That iron atom will react with hot water to make ferric oxide and hydrogen. The silicate mantle makes up over 80% of Earth’s volume, so there is no shortage of basaltic-type rock. The question then is, is the water there? It almost certainly was, once. There is a further point: if there is a source of carbon down there, including carbonates, the hydrogen makes methane.
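The two routes can be written as balanced reactions and checked for atom balance. A minimal sketch, taking fayalite (Fe2SiO4) as the ferrous end-member of olivine; the specific balanced equations are my reconstruction of the chemistry described above, not taken from the Science article:

```python
import re
from collections import Counter

def atoms(formula: str, coeff: int = 1) -> Counter:
    """Count atoms in a simple formula like 'Fe2SiO4' (no parentheses)."""
    counts = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += coeff * (int(n) if n else 1)
    return counts

def balanced(lhs, rhs) -> bool:
    """lhs/rhs are lists of (coefficient, formula) pairs."""
    total = lambda side: sum((atoms(f, c) for c, f in side), Counter())
    return total(lhs) == total(rhs)

# Route 1: hot water oxidises the ferrous iron in fayalite directly
route1 = balanced([(3, "Fe2SiO4"), (2, "H2O")],
                  [(2, "Fe3O4"), (3, "SiO2"), (2, "H2")])

# Route 2: pressure disproportionates ferrous iron; the metallic iron
# then reacts with hot water to give hydrogen
step_a = balanced([(3, "FeO")], [(1, "Fe2O3"), (1, "Fe")])
step_b = balanced([(3, "Fe"), (4, "H2O")], [(1, "Fe3O4"), (4, "H2")])

print(route1, step_a, step_b)  # True True True
```

Route 1 is essentially serpentinization-style oxidation of ferrous iron; route 2 is the pressure-driven disproportionation followed by the iron–water reaction.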

One of the further interesting things about the Mali “hole” is that the flow has so far not depleted. Oil takes millions of years for the conversion; hydrogen is made in seconds, as long as the water can reach fresh ferrous or metallic iron. What we don’t know is whether it accumulates in large volumes. It is one thing to make hydrogen and provide power for a small village; it is a totally different matter to make a serious change to the nature of our economy.

Hydrogen is not without its problems. One is storage; another is the question of pipelines. However, there is a large part of the economy that is unsuitable for electricity as a source of power: large vehicles, aeroplanes, and places where high temperature under reducing conditions is needed. An example is steel-making. Carbon is needed to reduce iron oxide to iron, but hydrogen works just as well. Further, a lot of hydrogen is made today, but in making it we emit about 900 million tonnes of CO2. If we tried to make that hydrogen with electricity instead, we would need on the order of a terawatt of new “green” generating capacity. Getting it from the ground would be very attractive, except, of course, the hydrogen may not be anywhere near the demand. There are a number of seeps throughout the world, but most are too feeble even to consider, although again, they may conceal far larger reserves deeper down. One problem with hydrogen is that the small size of the molecule means it leaks. That makes it hard to transport, but it also makes it hard for it to accumulate naturally. Overall, it is difficult to assess whether this is an answer to anything, or merely a curiosity.
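A back-of-envelope check of those numbers; all the inputs here are round-number assumptions of mine, not figures from any cited source:

```python
# Rough cost of replacing today's hydrogen production with electrolysis.
h2_production_mt = 90          # ~90 million tonnes of H2 made per year (assumed)
co2_per_h2 = 10                # ~10 t CO2 emitted per t H2 via steam reforming (assumed)
electrolysis_kwh_per_kg = 50   # ~50 kWh of electricity per kg H2 (assumed)

co2_mt = h2_production_mt * co2_per_h2                                # ~900 Mt CO2/yr
energy_twh = h2_production_mt * 1e9 * electrolysis_kwh_per_kg / 1e9   # kWh -> TWh
avg_power_gw = energy_twh * 1e3 / 8760                                # continuous GW

print(f"{co2_mt:.0f} Mt CO2/yr, {energy_twh:.0f} TWh/yr, ~{avg_power_gw:.0f} GW average")
# -> 900 Mt CO2/yr, 4500 TWh/yr, ~514 GW average
```

With realistic capacity factors, roughly 500 GW of continuous demand implies on the order of a terawatt of installed generating capacity.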


To the Centre of the Earth

One of the interesting comments in a recent Physics World is that we do not know much about what the middle of our planet is like. Needless to say, that is hardly surprising – we cannot exactly go down there and fossick around. Not that you would want to. The centre of the Earth is about 6,000 km down. About half-way down (roughly 3,000 km) we run into a zone that is believed to be molten iron. That, perforce, is hot. Then, further down, we find a solid iron core. You might wonder why it would be solid underneath the liquid. I shall come to that, but here is a small test if you are interested: try to find an answer to that question before you get to it below.

In the meantime, how do we get any information about what is going on down there? Basically, from earthquakes. Earthquakes send extremely powerful shockwaves through the planet. These are effectively sound waves, although the frequency may not be in the hearing range. What we get are two wave velocities, compressional and shear, and from these we can estimate the density of the materials and isolate where there is a division between layers. That works because if we have a boundary with a different composition on each side, waves will travel at different velocities through the two materials. If there is a reasonably sharp boundary, the waves striking it are partly transmitted and partly reflected, according to the velocities of sound in each of the media, while the velocity of a shear wave is the square root of the shear modulus divided by the density (the compressional velocity also involves the bulk modulus). Now, as you can see, by obtaining shear and compressional velocities we are able to sort out what is going on, again assuming a sharp boundary. Boundaries between different phases, such as solid and liquid, are usually sufficiently sharp. However, because of the number of phases, and the fact we get reflections and transmission at each boundary, there is more than a little work required to sort out what is going on from the wave patterns. To add to the problem, the waves take multiple routes, and therefore take multiple times to arrive, while earthquakes are notorious for going on for some time.
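The relations just described can be put into a few lines. A sketch with illustrative round numbers for mantle rock; the moduli and density here are my assumptions, not values from the article:

```python
from math import sqrt

def wave_speeds(bulk_gpa, shear_gpa, density):
    """Seismic wave speeds (m/s) from elastic moduli (GPa) and density (kg/m^3)."""
    K, G = bulk_gpa * 1e9, shear_gpa * 1e9
    v_p = sqrt((K + 4 * G / 3) / density)   # compressional wave
    v_s = sqrt(G / density)                 # shear wave
    return v_p, v_s

# Illustrative round numbers for mantle rock (assumed):
vp, vs = wave_speeds(bulk_gpa=130, shear_gpa=80, density=4500)
print(f"v_p ≈ {vp/1000:.1f} km/s, v_s ≈ {vs/1000:.1f} km/s")

# A liquid transmits no shear wave (shear modulus is zero) — this is
# how a molten layer betrays itself in the earthquake records:
vp_liq, vs_liq = wave_speeds(bulk_gpa=130, shear_gpa=0, density=4500)
print(vs_liq)  # 0.0
```

The vanishing shear velocity in a liquid is why the molten outer core shows up so clearly: shear waves simply do not cross it.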

Anyway, what has happened is that the physicists have worked out what these wave patterns should be like, and what we see is not quite what we expected from a nickel/iron core. Basically, the core is not quite as dense as expected. That means there must be something else there. That raises the question, what is it? It also raises the question, are the expectations realistic?

This question arises from the fact that the temperatures and pressures at the centre of the Earth give materials properties we cannot easily measure. We can make a good estimate of the pressure, because that is the weight of the rock etc. above a point, and we know the mass of Earth. The temperature we can only really guess. The pressure on the surface of Earth is about 100,000 pascals. The pressure at the centre of Earth is about 364 GPa, or over 3.5 million times greater. If you did go there, you would be squashed. To give you an idea, the density of iron at the surface is a little over 7.87 times that of water. The density of iron at that pressure is 13.87 times that of water, or about 57% of the volume for the same mass. When iron was squeezed in a diamond anvil to a similar density, it was found that the compressional sound velocity in the Earth’s core was about 4% slower than in the compressed laboratory iron, and the shear velocity about 36% slower. They therefore concluded that the inner core contains lighter elements, such as about 3% silicon and 3% sulphur.
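The arithmetic in that paragraph is easy to verify:

```python
# Checking the compression figures quoted above.
surface_pressure = 1.0e5      # Pa, about one atmosphere
core_pressure = 364e9         # Pa, estimated pressure at Earth's centre

ratio = core_pressure / surface_pressure
print(f"{ratio:.2e}")          # ~3.64 million times greater

rho_iron_surface = 7.87       # density relative to water, at the surface
rho_iron_core = 13.87         # density relative to water, at core pressure
volume_fraction = rho_iron_surface / rho_iron_core
print(f"{volume_fraction:.0%}")  # ~57% of the surface volume for the same mass
```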

Which raises the question, why those elements? The authors say these elements came through the growth of the inner core from the outer core. There is no real way of knowing, but for those who follow the mechanism of planetary formation outlined in my ebook “Planetary Formation and Biogenesis” other possible elements might be nitrogen and carbon. The reason lies in the problem of how the metal core separated out under the huge pressures, which slows separation greatly. My answer is that the metals separated out in the accretion disk, and the iron-cored meteorites we see now are residues of that process. The nickel-iron arrived pre-separated, and so was easier to separate out. At the same time, the temperatures were ideal for making iron nitride and iron carbide contaminants.

Now, why is the core a solid? The answer comes from how a liquid works. To be a liquid, it has to flow. Heat is simply random kinetic energy, and in a liquid, when a molecule strikes another it slips past it, so there is no structure. When you cool a liquid at atmospheric pressure, the molecules form interactions that hold them in a configuration where they do not slip past each other, hence they form a crystal. However, at the extreme pressures of the Earth’s centre, the reason for a solid is quite different: the atoms do not slip past each other because there is simply not enough room. They cannot push anything out of the way because there is nowhere for it to go.

More on Disruptive Science

No sooner do I publish a blog on disruptive science than what does Nature do? It publishes an editorial questioning whether science really is getting less disruptive (which is fair enough) and then, rather bizarrely, questioning whether it matters. Does the journal not actually want science to advance, but merely to restrict itself to the comfortable? Interestingly, the disruptive examples they cite are Crick (1953, structure of DNA) and the first planet found orbiting another star. In my opinion, these are not disruptive. In Crick’s case, he merely used Rosalind Franklin’s data, and in the second case, this had been expected for years; indeed, I had seen a claim about twenty years earlier for a Jupiter-style planet around Epsilon Eridani. (Unfortunately, I did not write down the reference because I was not involved in that field yet.) That result was rubbished because it was claimed the data were too inaccurate, yet the result I wrote down complied quite well with what we now accept. I am always suspicious of discounting a result when it not only got a good value for the semimajor axis but also proposed a significantly eccentric orbit. For me, these two papers are merely obvious advances on previous theory or logic.

The proposed test by Nature for a disruptive paper is based on citations, the idea being that if a disruptive paper is cited, it is less likely for its predecessors to be cited. If the paper is consolidating, the previously disruptive papers continue to be cited. If this were the criterion, probably one of the most disruptive papers would be that on the EPR paradox (Einstein, A., Podolsky, B., Rosen, N. 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47: 777-780.) Yet the remarkable thing about this paper is that people fall over themselves to point out that Einstein “got it wrong”. (That they do not actually answer Einstein’s point seems to be irrelevant to them.)

Nature spoke to a number of scholars who study science and innovation. Some were worried by Park’s paper, one of the worries being that declining disruptiveness could be linked to sluggish productivity and economic growth being seen in many parts of the world. Sorry, but I find that quite strange. It is true that an absence of discoveries is not exactly helpful, but economic use of a scientific discovery usually takes decades after the discovery. There is prolonged engineering, and if it is novel, a market for the product has to be developed. Then they usually displace something else. Very little economic growth follows quickly from scientific discovery. No need to go down this rabbit hole.

Information overload was considered a reason, and it was suggested that artificial intelligence could sift and sort useful information, to identify projects with potential for a breakthrough. I completely disagree with this as regards disruption. Anyone who has done a computer search of scientific papers will know that unless you have a very clear idea of what you are looking for, you get a bewildering amount of irrelevant stuff. Thus, if I want to know the specific value of some measurement, the computer search will give me in seconds what previously could have taken days. But if the search constraints are abstract, almost anything can come out, including erroneous material, examples being in my previous post. The computer, so far, cannot make value judgments because it has no criteria for doing so. What it will do is comply with established thinking, because that will set the constraints for the search. Disruption is something that you did not expect. How can a computer search for what is neither expected nor known? Particularly when that which is unexpected is usually mentioned as an uncomfortable aside in papers, and not mentioned in abstracts or keywords. The computer would have to thoroughly understand the entire subject to appreciate the anomaly, and artificial intelligence is still a long way from that.

In a similar vein, Nature published a news item dated January 18. Apparently, people have been analysing advertisements and have come across something both remarkable and depressing: there are hundreds of advertisements offering, for sale, authorship of a scientific paper in a reputable journal. Prices range from hundreds to thousands of USD depending on the research area and the journal’s prestige, and the advertisement often cites the title of the paper, the journal, when it will be published (how do they know that?) and the position of the authorship slots. This is apparently a multimillion-dollar industry. Interestingly, such advertising that specifies a title in a given journal immediately raises suspicion, and a number of papers have been retracted. Another flag is when further authors are added after peer review; if the authors actually contributed to the paper, they should have been known at the start. The question then is, why would anyone pay good coin for that? Unfortunately, the reason is depressingly simple: you need more publications to get more money, promotion, prizes, tenure, etc. It is a scheme to make money from those whose desire for position exceeds their skill level. And it works because nobody ever reads these papers anyway. The chances of being asked by anyone for details are so low it would be extremely unlucky to be caught out that way. Such an industry, of course, will be anything but disruptive. It only works as long as nobody with enough skill to recognize an anomaly actually reads the papers, because then the paper would become famous, and thoroughly examined. This industry works because counting citations without understanding the content is the method of evaluating science. In short, evaluation by ignorant committee.

Why is there no Disruptive Science Being Published?

One paper (Park et al. Nature 613: 138) that caught my attention over the post-Christmas period made the proposition that scientific papers are getting less disruptive over time, to the point that in the physical sciences there is now essentially very little disruption. First, what do I mean by disruption? To me, it is a publication that is at cross-purposes with established thought. Thus the recent claim that there is no sterile neutrino is at best barely disruptive, because its existence was merely a “maybe” solution to another problem. So why has this happened? One answer might be that we know everything, so there is neither room nor need for disruption. I can’t accept that. I feel that scientists do not wish to change: they wish to keep the current supply of funds coming their way. Disruptive papers keep getting rejected, because what reviewer who has spent decades on research wants to let through a paper that essentially says he is wrong? Who is the peer reviewer for a disruptive paper?

Let me give a personal example. I made a few efforts to publish my theory of planetary formation in scientific journals. The standard theory is that the accretion disk dust formed planetesimals by some totally unknown mechanism, and these eventually collided to form planets. There is a small industry in running computer simulations of such collisions. My paper was usually rejected, the only stated reason being that it did not have computer simulations. However, the proposition was that the growth was driven chemically, and it used the approximation that there were no collisions. There was no evidence the reviewer read the paper past noticing the absence of any mention of simulations in the abstract. There was no comment on the fact that here was the very first stated mechanism for how accretion started, together with a testable mathematical relationship for planetary spacing.

If that is bad, there is worse. The American Physical Society has published a report of a survey relating to ethics (Houle, F. H., Kirby, K. P. & Marder, M. P. Physics Today 76, 28 (2023)). In a 2003 survey, 3.9% of early-career physicists admitted that they had been required to falsify data, or did it anyway, to get to publication faster and get more papers. By 2020, that number had risen to 7.3%. Now, falsifying data will only occur to get a result that fits in with standard thinking, because if it doesn’t, someone will check it.

There is an even worse problem: that of assertion. The correct data are obtained, any reasonable interpretation will say they contradict the standard thinking, but they are reported in a way that makes them appear to comply. This will be a bit obscure for some, but I shall try to make it understandable. The paper is: Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759. At the time there was a debate whether cyclopropane could delocalize electrons. Strange effects were observed, and there were two possible explanations: (1) it did delocalize electrons; (2) there were electric field effects. The difference was that both would stabilize positive charge on an adjacent centre, but the electric field effect would be reversed if the charge were opposite. So while it was known that putting a cyclopropyl ring adjacent to a cationic centre stabilized it, what happened with an anionic centre? The short answer is that most efforts to make R – (C-) – Δ, where Δ means cyclopropyl, failed, whereas R – (C-) – H is easy to make. Does that look as if we are seeing stabilization? Nevertheless, if we put the cyclopropyl group on a benzylic carbon, by changing R to a phenyl group φ so we have φ – (C-) – Δ, an anion was just able to be made if potassium was the counter-ion. Accordingly, the fact that the anion could be made was attributed to the stabilizing effect of cyclopropyl. No thought was given to the fact that any chemist who cannot make the benzyl anion φ – (C-) – H should be sent home in disgrace. One might at least compare like with like, but not, apparently, if you would get the answer you don’t want. What is even more interesting is that this rather bizarre conclusion has gone unremarked (apart from by me) ever since.

This issue was once the source of strong debate, but a review came out and “settled” the issue. How? By ignoring every paper that disagreed with it, and citing the authority of “quantum mechanics”. I would not disagree that quantum mechanics is correct, but computations can be wrong. In this case, they used the same computer programmes that “proved” the exceptional stability of polywater. Oops. As for the overlooked papers, I later wrote a review with a logic analysis. Chemistry journals do not publish logic analyses. So in my view, the reason there are no disruptive papers in the physical sciences is quite clear: nobody really wants them. Not enough to ask for them.

Finally, some examples of papers that in my opinion really should have done better. Weihs et al. (1998) arXiv:quant-ph/9810080 v1 claimed to demonstrate clear violations of Bell’s inequality, but the analysis involved only 5% of the photons; what happened to the other 95% is not disclosed. The formation of life is critically dependent on reduced chemicals being available. A large proportion of ammonia was found in ancient seawater trapped in rocks at Barberton (de Ronde et al. Geochim. Cosmochim. Acta 61: 4025-4042). Thus information critical for an understanding of biogenesis was obtained, but it was not even mentioned in the abstract or in the keywords, so it is not searchable by computer. This would have disrupted the standard thinking on the ancient atmosphere, but nobody knew about it. In another paper, spectroscopy coupled with the standard theory predicted strong bathochromic shifts (to longer wavelengths) for a limited number of carbenium ions, but strong hypsochromic shifts were observed, without comment (Schmitze, L. R.; Sorensen, T. S. J. Am. Chem. Soc. 1982, 104, 2600-2604). So why was no fuss made about these things by the discoverers? Quite simply, they wanted to be in with the crowd. Be good, get papers, get funded. Don’t rock the boat! After all, nature does not care whether we understand or not.

Ancient Birds

You probably have never heard of Janavis finalidens. It was a bird with webbed feet and a body vaguely reminiscent of a wild hen, approximately the size of a grey heron, with a mass estimated at 1.5 kg. I use the past tense because it lived in the late Cretaceous, so birds had evolved away from the theropods well before the extinction. This bird was, in shape, very similar to modern birds, which is hardly surprising because flight puts limits on them, but there is one notable difference: the beak was lined with teeth. We do not know a lot about such birds, in part because their bones are rather fragile and more difficult to fossilize, but maybe also because they are not as spectacular as the monster dinosaurs of the time. This particular example was found a couple of decades ago in a quarry, and when it was worked out what it was, it was filed.

However, the samples have now been re-examined with more modern equipment, specifically micro-computed tomography. Originally, it was thought they comprised a handful of bones from the spine, wings, shoulders and legs. However, one of the bones thought to be a shoulder bone was in fact a pterygoid, a bone from the bony palate of the skull.

Most current birds belong to a group called neognaths, which means “new jaws”. The key bones here are mobile, and they allow the birds to move the upper beak independently of the skull. There is a small group of birds (the emu, cassowary, ostrich, kiwi and the tinamous, 47 ground-dwelling species, some of which can fly) that have the bones in the upper palate fused together. These are called paleognaths, or “ancient jaws”. You will probably suspect from this naming that it was believed that birds originally came with these fused jaws, and most subsequently evolved the ability to move the upper beak. In this context, non-avian dinosaurs also have fused palates, and the last common ancestor of all modern birds lived some 80 million years ago, so it would be reasonable to assume that it had a fixed palate like the other dinosaurs. Unfortunately, this is one of those theories that is hard to test, because the small, delicate pterygoid is usually missing from the fossils.

However, a recent article in Nature (Benito et al., vol 612, pp 100 – 105) indicated that Janavis’ pterygoid “probably formed part of an unfused bony palate”. That means the upper beak was probably mobile. Note the uncomfortable “probably”. The resemblance of the pterygoid to that of modern chickens now suggests that the mobile upper beak evolved first, and the fused palates arose later. That, of course, raises the questions: how did it evolve, and why did some birds revert to the fused palate?

How the beak functions is crucially dependent on the bones of the upper palate. Unfusing them increases the flexibility of the beak and improves its usefulness. However, fused palates are not necessarily a drawback, and might give the beaks of larger birds additional support. For the kiwi, the beak is extremely long compared with the bird, and fusing the upper beak to the skull might give it more strength as it probes for food (often grubs in decaying logs). It might also be of interest that these birds, being flightless, tend to get most of their food from the ground, including plant material, but then again so do hens.

Accordingly, if you are concerned with the evolution of birds, admittedly not a common concern, you now have a problem, and the question is, how do you solve it? One way is to find plenty of fossils, but the difficulties are, first, that they are rare, and secondly, that we have many samples from times when both palate types were present, including now. How do you know you are not being misled? An important aspect of science is that once you have a reasonably well-defined problem and a possible solution, you can devise ways of testing it. One of the peculiarities of evolution is that as an animal develops from egg to adult, it often gives clues as to the evolutionary path it took. The most obvious example is the frog, which first goes through the tadpole stage. In the case of modern paleognaths, one approach being considered is to look at their developmental stages. If there are differences between species, this would be a clue that the trait arose independently more than once.

Is Science Sometimes Settled Wrongly?

In a post two weeks ago I raised the issue of “settled science”. The concept was that there have to be things you are not persistently rechecking. Obviously, you do not want everyone wasting time rechecking the melting point of benzoic acid, but fundamental theory is different. Who settles that, and how? What is sufficient to say, “We know it must be that!”? In my opinion, admittedly biased, there really is something rotten in the state of science. Once upon a time, namely in the 19th century, there were many mistakes made by scientists, but they were sorted out by vigorous debate. The Solvay conferences continued that tradition in the 1920s for quantum mechanics, but something went wrong in the second half of the twentieth century. A prime example occurred in 1952, when David Bohm decided the mysticism inherent in the Copenhagen Interpretation of quantum mechanics required a causal interpretation, and he published a paper in the Physical Review. He expected a storm of controversy, and he received – silence. What had happened was that J. Robert Oppenheimer, previously Bohm’s mentor, had called together a group of leading physicists to find an error in the paper. When they failed, Oppenheimer told the group, “If we cannot disprove Bohm, we can all agree to ignore him”. Some physicists are quite happy to say Bohm is wrong; they don’t actually know what Bohm said, but they know he is wrong. If that were one isolated example, that would be wrong, but not exactly a crisis. Unfortunately, it is not an isolated case. We cannot know how bad the problem is because we cannot sample it properly.

A complicating issue is how science works. There are millions of scientific papers produced every year. Thanks to time constraints, very few are read carefully by more than a handful of people. The answer to that would be to publish in-depth reviews, but nobody appears to publish logic-analysis reviews. I believe science can be “settled” by quite unreasonable means. As an example, my career went “off the standard rails” with my PhD thesis.

My supervisor’s projects would not work, so I selected my own. There was a serious debate at the time over whether strained systems could delocalize their electrons into adjacent unsaturation in the same way double bonds do. My results showed they did not, but it became obvious that cyclopropane stabilized adjacent positive charge. Since olefins do this by delocalizing electrons, it was decided that cyclopropane did so too. When the time came for my supervisor to publish, he refused to publish the most compelling results, despite the fact that suggesting this sequence of experiments was his only contribution, because the debate was settling down on the other side. An important part of logic must now be considered. Suppose we can say: if theory A is correct, then we shall see Q. If we see Q, we can say that the result is consistent with A, but we cannot say that theory B would not predict Q also. So the question is, is there an alternative?

The answer is yes. The strain arises from orbitals containing electrons being bent inwards towards the centre of the system, hence coming closer to each other. Electrons repel each other. But it also becomes obvious that if you put positive charge adjacent to the ring, that charge will attract the electrons and override the repulsion, the electrons moving towards the positive charge. That lowers the energy, and hence stabilizes the system. I actually used an alternative way of looking at it: if you move charge by bending the orbitals, you should generate a polarization field, and that stabilizes the positive charge. Why look at it like that? Because if the cause of a changing electric field is behind a wall, say, you cannot tell the difference between charge moving and charge being added. Since the field contains the energy, the two approaches give the same strain energy, but by considering an added pseudocharge it was easy to put numbers on the effects.

However, the other side won, by “proving” delocalization through molecular orbital theory, which, as an aside, assumes delocalization in the first place. Aristotle had harsh words for people who prove what they assume after a tortuous path. As another aside, the same quantum methodology “proved” the stability of polywater – where your water could turn into a toffee-like consistency. A review came out and confirmed the “other side” by showing numerous examples where the strained ring stabilized positive charge. It also ignored everything that contradicted it.

Much later, I wrote a review that showed this first one had ignored up to sixty different types of experimental result that contradicted its conclusion. That was never published by a journal – the three reasons for rejection, in order, were: not enough pictures and too much mathematics; this is settled; and, from some other journals, “We do not publish logic analyses”.

I most certainly do not want this post simply to turn into a general whinge, so I shall stop there, other than to say I could make five other similar cases from my own knowledge. If that much happens to, or comes to the attention of, one person, how general is it? Perhaps a final comment might be of interest. As those who have followed my earlier posts may know, I concluded that the argument that the present Nobel Prize winners in physics found violations of Bell’s Inequality is incorrect in logic. (Their procedure violates Noether’s theorem.) When the prize was announced, I sent a polite communication to the Royal Swedish Academy of Sciences, stating one reason why the logic was wrong, and asking them, if I had missed something, to inform me where I was wrong. So far, over five weeks later, no response. It appears no response may be the new standard way of dealing with those who question the standard approach.

Fire in Space

Most of us have heard of the dangers of space flight, such as solar storms, cosmic rays, leaks in the spacecraft, and so on, but there are some more ordinary problems too. TV programs like to show space ships in battles, whereupon “shields fail” (there is no such thing as a shield, other than the skin of the craft, but let that pass) and we then have fire. When you stop and think about it, fire on a space ship would be a nasty problem. It burns material that presumably had some use, and it overheats things like electronics, which stops them working; then we come to the real problem: if you don’t have spares, you cannot fix it. You often see scenes where engineers run around “beating the clock”, but what do they use for parts? If they are going to make parts, out of what? If you say, recycle, then at the very least they should be assiduously collecting the smashed stuff.

Accordingly, it would make sense for astronauts to prevent fires from starting in the first place. You may recall Apollo 1. The three astronauts were inside the command module practising a countdown. The module used pressurised oxygen, and somehow a fire broke out. Pure oxygen and flammable material are a bad mix, and the astronauts died from carbon monoxide poisoning. The hatch opened inwards, and the rapid increase in pressure from the fire made it impossible to open. The fire was presumed to have started when some loose wiring arced and ignited something. We now know better than to use pure oxygen, but the problem remains: fire in space would not be good.

One obvious defence is to reduce the amount of combustible material. If there is nothing to burn, there will not be a fire, but that is not entirely practical, so the next question is: how do fires burn in space? At first sight the answer is obvious: the organic matter gets hot and oxygen reacts with it to make a flame. However, there is more to it.

First, how does a fire burn on Earth? For a simple look, light a candle. Heat melts the wax, the molten wax runs up the wick and vaporises. The combustion involves breaking the wax down into a number of smaller molecules (which can be seen as smoke if combustion is incomplete) and free radical fragments, which react with oxygen. Some of the fragments combine to form carbon (soot, if it doesn’t burn further). The carbon is important because it glows, giving the flame its orange colour, but it also radiates a lot of heat, and the heat radiated downwards melts more wax. What you will notice is that the flame moves upwards. That is because the gas is hot, hence it has expanded and weighs less than an equal volume of air; going up is simply a consequence of Archimedes’ Principle. As the hot gas rises, it sucks in air from below, so there is a flow of gas entering the flame from below and exiting above. If you can get hold of some methanol, you could light that. Its formula is CH3OH, which means there are no carbon-carbon bonds, so it cannot form soot. Therefore it burns with a light blue flame and does not radiate much heat. Methanol burning on your skin will not burn you as long as the skin is below the flame.
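The buoyancy argument above can be put in rough numbers with the ideal gas law: at constant pressure a gas’s density scales as 1/T, so flame gas is far less dense than ambient air. This is a minimal sketch; the flame temperature (~1800 K) and ambient temperature are assumed typical values, not measurements.

```python
# Why hot flame gas rises: ideal-gas density falls as 1/T at constant
# pressure, so Archimedes' principle pushes the hot parcel upward.

G = 9.81          # m/s^2, gravitational acceleration
T_AIR = 293.0     # K, assumed ambient air temperature
T_FLAME = 1800.0  # K, assumed typical candle-flame gas temperature

def density_ratio(t_hot, t_cold=T_AIR):
    """Density of hot gas relative to ambient air (ideal gas, same pressure)."""
    return t_cold / t_hot

def buoyant_acceleration(t_hot, t_cold=T_AIR):
    """Net initial upward acceleration of a parcel of hot gas, ignoring drag."""
    rho_rel = density_ratio(t_hot, t_cold)
    return G * (1.0 - rho_rel) / rho_rel

if __name__ == "__main__":
    print(f"hot gas density ~{density_ratio(T_FLAME):.2f} of ambient")
    print(f"initial buoyant acceleration ~{buoyant_acceleration(T_FLAME):.0f} m/s^2")
```

With these figures the flame gas is roughly a sixth the density of the surrounding air, which is why the updraft, and the air drawn in below to replace it, is so vigorous; remove gravity and that entire circulation disappears.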

Which brings us to space. Since fire is possible on a space ship, NASA has done some experiments, partly to learn more about fire, but also to learn how to put fires out on a space ship. The first difference is that in the absence of gravity, flames do not go up; after all, there is no “up”. Instead, they form little spheres. Further, with no gravity Archimedes’ Principle no longer applies, so there is nothing to suck fresh air in. Oxygen has to enter by diffusion, and oxygen and fuel combine in a narrow zone on the surface of the sphere. The “fire” continues with no significant flame, and further, while a normal fire burns at about 1500 – 2000 degrees K, these experimental fires, using droplets of heptane, eventually formed cool fires, reaching temperatures of only 500 – 800 degrees K. The chemistry is also different. On Earth, flames usually produce water, carbon dioxide and soot. In microgravity they were producing water, carbon monoxide and formaldehyde: in short, a rather poisonous mix. Cool fires in space are no less dangerous; just differently dangerous. Dealing with them may be different too. Extinguishers that squirt gas may simply move the fire, or supply extra air to it. I doubt we have yet worked out our final methods for fighting fires in space, but I am sure the general principle will be to have the fewest possible combustible materials in the space ship.

How Science Has Operated

The October 10 edition of the magazine Palladium published an item called The Transformations of Science. The following includes some of my thoughts on what the article stated, relating to the issue of trust in science. In 1660 the Royal Society was formed, and it adopted the motto Nullius in verba, which means “take no one’s word for it”. This was its version of how science should be carried out: check everything. The article notes that Thomas Hobbes objected, maybe because he was not a member of the Royal Society. Hobbes pointed out that not everyone could make such observations, and stated that claims should be derived mathematically from axioms. This raised a problem: who was supposed to make the relevant observations, and who was supposed to rely on whom? Unstated was another issue: do we require procedures derived from axioms that allow calculations to get the results we need (the epistemic approach), or do we try to understand what is going on (the ontological approach)? This question hounds modern quantum mechanics.

None of this produced benefit, however. Back then, science was a curiosity. There was public interest, particularly when the Leyden jar was developed. Apparently a large number of people would join hands and one would touch the jar, whereupon they would all get an electric shock. Michael Faraday gave public demonstrations that filled halls and showed phenomena that must have seemed like magic. However, it was not long before science got too complicated. People might come to see Faraday do amazing things with electricity, but they would hardly come to watch the manipulation of Maxwell’s partial differential equations.

Nullius in verba implies everything should be re-examined and re-verified. That makes little sense. How many times do you have to check the melting point of benzoic acid? Accordingly, what we have now is settled science. This is the authoritative version, but that brings its own problems. Authority is a powerful resource, and before long politicians saw the point of using it to justify their decisions, and to maintain this, the state supplied money to keep it going. Science made an impact in WW I, and played a hugely more important role in WW II. The provision of electrical appliances following Faraday and Maxwell, the more startling appliances that depend on quantum mechanics, and the development of modern medicine have made us dependent on science. The problem is: how does science become “settled”? Giordano Bruno was burnt at the stake for daring to go against the “settled” science of his day. He supported Copernicus’ heliocentric theory, and how could anyone reject the settled conclusion that everything went around Earth? Fortunately for heretics like me, we have now developed a different approach: ignore the heresy.

Sometimes science is either not settled or does not give clean answers. The recent issue of mask-wearing is indicative. Politicians had to make decisions based on limited information. From the scientific view, if we restrict our thoughts solely to the virus, mask-wearing cannot do harm (i.e. make viruses more likely to infect) as long as people handle the masks properly, whereas it might do good. Masks were, however, very unpopular, and many people objected because the state made them do something. The reputation of science was also damaged.

We have a similar problem with climate change and the effect of greenhouse gases. The problem here is the epistemic approach. You hear comments that the climate always varies and that the variations are due to “natural causes”. Worse, the population has expanded dramatically on the back of cheap energy, and in doing so it has locked in the need for that energy, at least in the short term. To stop burning fossil fuels today would lead to serious economic problems tomorrow; failure to stop today will lead to catastrophic economic problems for our great grandchildren. But politicians never think past the next election, or at least not sufficiently well to act on those thoughts. The scientists lose face because they cannot predict exactly what will happen. The population, however, cannot follow partial differential equations, and cannot grasp the consequences of a number of different effects that sometimes reinforce and sometimes cancel each other. The so-called Southern Oscillation is an example. The scientists know full well what the causes are, but putting numbers to them and combining them well into the future is a probabilistic effort.

So, what does the article recommend? The first recommendation is a reconciliation between exploratory and authoritative elements. That requires changes in both scientific practice and public comprehension. It argues that some fields should disclaim authority, partly or completely. It even suggests some scientific journals should ban authoritative articles. It suggests some parts of science should be split off and rarely interact with other parts, thereby preventing premature consensus. It also suggests funding should be restructured, with exploratory science removed from central funding, where authority and settled science reside.

So, what do I think this means? Feel free to offer your thoughts. I shall add more of mine in a later post, mainly on the issue of “settled science”.

Success! Defence Against Asteroids

Most people will know that about 66 million years ago an asteroid with a diameter of about 10 km struck the Yucatán peninsula and exterminated the dinosaurs, or at least did great damage to them from which they never recovered. The shock-wave probably also initiated the formation of the Deccan Traps, whose emission of poisonous gases would finish off any remaining dinosaurs. The crater is 180 km wide and 20 km deep: a very sizeable excavation. Rather wisely, we would like to avoid a similar fate, and the question is, can we do anything about it? NASA thinks so, and has carried out an experiment.

I would be extremely surprised if, five years ago, anyone reading this had heard of Dimorphos. Dimorphos is a small asteroid with dimensions about those of the original Colosseum, i.e. before vandals, like the Catholic Church, took stones away to make their own buildings. By now you will be aware that Dimorphos orbits a larger asteroid called Didymos. What NASA did was send a metallic object with dimensions of 1.8 x 1.9 x 2.6 meters and a mass of 570 kg to crash into Dimorphos at 22,530 km/hr, to slightly slow its orbital speed and hence change its orbital parameters. It would also slightly change the orbital characteristics of the pair around the sun. Dimorphos has a “diameter” of about 160 m, Didymos about 780 m; neither is spherical, hence the quotation marks.

This explains why NASA selected Dimorphos for the collision. First, it is not that far from Earth, while on their current orbits the pair will not collide with Earth. Being close to Earth, at least when their orbits bring them close, lowers the energy requirement to send an object there, and it is also easier to observe what happens, hence more accurately determine the consequences. The second reason is that Dimorphos is reasonably small, so if a collision changes its dynamics, we shall be able to see by how much. At first sight you might say that conservation of momentum makes that obvious, but it is actually harder to know because it depends on what carries the momentum away after the collision. If the collision is perfectly inelastic, so the object is “absorbed” by a target that stays intact, then we simply add its relative momentum to that of the target. However, real collisions are seldom perfectly inelastic, and it would have been considered important to determine how far this one departed from that ideal. A further possibility is that the asteroid could fragment, sending bits off in different directions. Think of Newton’s cradle: you hit one end and that ball stops, but another flies off from the other end, and the total stationary mass is unchanged. NASA would wish to know how well the asteroid held together. A final reason for selecting Dimorphos is that, being tethered gravitationally to Didymos, it could not go flying off in some unfortunate direction and eventually collide with Earth. It is interesting that the change of momentum is shared between the two bodies through their gravitational interaction.
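The momentum bookkeeping above can be sketched in a few lines. The impactor mass and speed are from the text; the mass of Dimorphos (~4 × 10⁹ kg) is an assumed order-of-magnitude estimate, and beta is the “momentum-enhancement” factor: ejecta thrown backwards pushes the asteroid harder than a perfectly inelastic hit (beta = 1) would.

```python
# Velocity change of the target asteroid from conservation of momentum,
# with an optional ejecta momentum-enhancement factor beta.

M_IMPACTOR = 570.0          # kg, from the text
V_IMPACTOR = 22_530 / 3.6   # m/s (22,530 km/h, from the text)
M_ASTEROID = 4.0e9          # kg, assumed rough estimate for Dimorphos

def delta_v(beta=1.0):
    """Change in asteroid speed for a given momentum-enhancement factor."""
    return beta * M_IMPACTOR * V_IMPACTOR / (M_ASTEROID + M_IMPACTOR)

if __name__ == "__main__":
    print(f"perfectly inelastic (beta=1): dv = {delta_v(1.0) * 1000:.2f} mm/s")
    print(f"with ejecta (beta=3):         dv = {delta_v(3.0) * 1000:.2f} mm/s")
```

The perfectly inelastic case gives a change of under a millimetre per second; measuring how much larger the real change was is precisely how the experiment reveals how inelastic the collision was and how much momentum the ejecta carried away.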

So, what happened, apart from the collision? There was another spacecraft trailing behind: the Italian LICIACube (don’t you like these names? It is an acronym for “Light Italian Cubesat for Imaging Asteroids”, and I guess they were so proud of the shape they had to have “cube” twice!). This took photographs before and after impact, and after impact Dimorphos was surrounded by a shower of material flung up from the asteroid; you could no longer see the asteroid for the cloud of debris. Dimorphos survived, of course, and the good news is we now know that the periodic time of Dimorphos around Didymos has been shortened by 32 minutes. That is a genuine success. (Apparently, a change of as little as 73 seconds would initially have been considered a success!) Also, very importantly, Dimorphos held together. It is not a loosely bound rubble pile, which would be no surprise to anyone who has read my ebook “Planetary Formation and Biogenesis”.

This raises another interesting point. The impact slowed Dimorphos relative to Didymos, so Dimorphos fell closer to Didymos, and sped up. That is why the periodic time was shortened. The speeding up happens because bringing the objects closer together lowers the potential energy and hence the total energy, and for an orbit the total energy equals the kinetic energy with the opposite sign, so the kinetic energy increases. (Falling closer also shortens the path length, which further lowers the periodic time.)
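To see how a few millimetres per second of slow-down becomes half an hour of period change, here is a minimal sketch using the vis-viva equation and Kepler’s third law. The pre-impact period (~11.92 hours) and separation (~1.19 km) are approximate published figures for the system, and the 2.7 mm/s slow-down is an assumed illustrative value, not taken from the text.

```python
# Period change of a (nearly circular) orbit after a small slow-down.

import math

A0 = 1190.0          # m, assumed pre-impact separation
T0 = 11.92 * 3600.0  # s, assumed pre-impact orbital period
DV = 2.7e-3          # m/s, assumed slow-down along the orbit

# Kepler's third law fixes the system's gravitational parameter GM:
GM = 4.0 * math.pi**2 * A0**3 / T0**2

def new_period(dv):
    """Period after reducing the circular orbital speed by dv."""
    v0 = math.sqrt(GM / A0)                 # circular orbital speed
    v1 = v0 - dv
    a1 = 1.0 / (2.0 / A0 - v1**2 / GM)      # vis-viva: new semi-major axis
    return T0 * (a1 / A0) ** 1.5            # Kepler III: T scales as a^(3/2)

if __name__ == "__main__":
    dt_min = (T0 - new_period(DV)) / 60.0
    print(f"orbital speed ~{math.sqrt(GM / A0) * 100:.0f} cm/s")
    print(f"period shortened by ~{dt_min:.0f} minutes")
```

Two things fall out of this sketch: the mutual orbital speed is only about 17 cm/s, a leisurely walking pace for an ant, and with these assumed inputs the period shortening comes out close to the observed 32 minutes.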

The reason for all this is to develop a planetary protection system. If you know that an asteroid is going to strike Earth, what do you do? The obvious answer is to divert it, but how? The answer NASA has tested is to strike it with a fast-moving small object. But, you might protest, an object like that would not make much of a change in the orbit of a dinosaur killer. The point is, it doesn’t have to. Take a laser pointer and aim it at a screen. Now give it a gentle nudge so it changes where it impacts. If the screen is a few centimeters away, the lateral shift is trivial, but if the screen is a kilometer away, the shift is significant; in fact the lateral shift is proportional to the distance. The idea is that if you can catch the asteroid far enough away, the lateral shift will be sufficient for it to miss Earth.

You might protest that asteroids do not travel in a straight line. No, they don’t; their trajectories are parts of ellipses. However, a trajectory is still a line, and it will still shift laterally. The mathematics is a bit more complicated, because the asteroid will return to somewhere fairly close to where it was impacted, but if you can nudge it sufficiently far from Earth it will miss. How big a nudge? That is the question this collision was designed to provide clues about.

If something like Dimorphos struck Earth it would produce a crater about 1.6 km wide and 370 m deep, while the pressure wave would knock buildings over tens of km away. If it struck the centre of London, windows would break all over South-East England. There would be no survivors in central London, but maybe some on the outskirts. This small asteroid would be the equivalent of a good-sized hydrogen bomb, and, as you will realize, a much larger asteroid would do far more damage. If you are interested in further information, I have some data and a discussion of such collisions in my ebook noted above.
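The hydrogen-bomb comparison is easy to check with kinetic energy. The 160 m diameter is from the text; the density (~2400 kg/m³, rocky) and entry speed (~17 km/s, a typical asteroid impact speed) are assumed illustrative values.

```python
# Rough kinetic-energy estimate for a Dimorphos-sized impactor.

import math

DIAMETER_M = 160.0        # m, from the text
DENSITY = 2400.0          # kg/m^3, assumed rocky asteroid
SPEED = 17_000.0          # m/s, assumed typical impact speed
J_PER_MEGATON = 4.184e15  # joules in one megaton of TNT

def impact_energy_megatons(d=DIAMETER_M, rho=DENSITY, v=SPEED):
    """Kinetic energy, in megatons of TNT, of a sphere of diameter d."""
    mass = rho * (math.pi / 6.0) * d**3  # volume of a sphere times density
    return 0.5 * mass * v**2 / J_PER_MEGATON

if __name__ == "__main__":
    print(f"~{impact_energy_megatons():.0f} megatons of TNT equivalent")
```

With these assumptions the energy comes out on the order of a hundred megatons or more, i.e. the scale of the very largest hydrogen bombs, from an object that would fit inside a sports stadium.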

Did Mars Have an Ocean?

It is now generally recognized that Mars has had fluid flows, and a number of riverbeds, lake beds, etc. have been identified, but there are also maps on the web of a proposed Northern Ocean. It has also been proposed that there has been polar wander, and that this Northern Ocean was nearer the equator when it existed, about 3.6 billion years ago. The following is a partial summary from my ebook “Planetary Formation and Biogenesis”, where references to the scientific papers citing this information can be found.

Various options have been proposed (with volumes of water in cubic kilometres in brackets): a northern lake (54,000); this lake plus the Utopia basin (if interconnected, each with 1,000,000); an ocean filled to a possibly identified ‘shoreline’ (14,000,000); and a massive northern hemisphere ocean (96,000,000). Of particular interest is that the massive channels (apart from two that run into Hellas) all terminate within 60 m elevation of this putative shoreline.

A Northern Ocean would seem to require an average temperature greater than 273 degrees K, but the faint sun (the sun is slowly heating, and three and a half billion years ago, when it is assumed water flowed, it had only about two-thirds of its current output) and an atmosphere restricted to CO2/H2O lead in most simulations to mean global temperatures of approximately 225 degrees K. There is the possibility of local variations, however, and one calculation claimed that if global temperatures were thirty degrees higher, local conditions could permit Hellas to pond if the subsurface contained sufficient water, and with sufficient water the northern ocean would be possible, and might even be ice-free for a few hundred years. A different model, assuming a 1 bar CO2 atmosphere with a further 0.1 bar of hydrogen, considered that a northern ocean would be stable for up to about three billion years. There is quite an industry of such calculations and it is hard to make out how valid they are, but this one seems implausible. If we had one bar pressure of carbon dioxide for such a long time there would be massive carbonate deposits, such as limestone or iron carbonates, and these are not found in the required volumes. Also, even the gravity of Earth is insufficient to hold that amount of hydrogen for long, and Mars has only about 40% of Earth’s gravity. This model cannot be correct.
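The faint-sun arithmetic behind figures like 225 degrees K can be sketched with the Stefan-Boltzmann law. The present-day solar flux at Mars (~589 W/m²) and the albedo (~0.25) are assumed representative values, not taken from the text, and the result is the bare equilibrium temperature before any greenhouse warming.

```python
# No-greenhouse equilibrium temperature of Mars, today and under a
# faint young Sun at two-thirds of the present output.

SIGMA = 5.67e-8     # W m^-2 K^-4, Stefan-Boltzmann constant
S_MARS_NOW = 589.0  # W/m^2, assumed present-day solar flux at Mars
ALBEDO = 0.25       # assumed mean albedo

def equilibrium_temp_k(flux, albedo=ALBEDO):
    """Blackbody equilibrium temperature of a rapidly rotating planet."""
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

if __name__ == "__main__":
    print(f"Mars today:      ~{equilibrium_temp_k(S_MARS_NOW):.0f} K")
    print(f"faint young Sun: ~{equilibrium_temp_k(S_MARS_NOW * 2 / 3):.0f} K")
```

Under these assumptions the bare equilibrium value sits near 190 degrees K for early Mars, so simulated means around 225 degrees K imply only a modest CO2/H2O greenhouse, well short of the 273 degrees K an open ocean would need.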

This northern ocean has been criticized on the basis that the proposed shoreline is not at a constant gravitational potential; variations of as much as 1.8 km in altitude are found. This should falsify the concept, except that because the proposed ocean is close to the Tharsis volcanic area, the deformation involved in forming those massive volcanoes could account for the differences. The magma that was ejected had to come from somewhere: where it migrated from would lead to an overall lowering of the surface, while where it migrated to would rise.

Support for a northern sea comes from the Acidalia region, where resurfacing appears to have occurred in pulses, finishing somewhere around 3.65 Gy BP. Accumulation of bright material from subsequent impacts, and flow-like mantling, are consistent with a water/mud northern ocean. If water flows through rock to end in a sea, certain water-soluble elements become concentrated in the sea, and gamma ray spectra indicate that this northern ocean region has enhanced levels of potassium, and possibly thorium and iron. There may, however, be other reasons for this. While none of this is conclusive, a problem with such data is that we only see the top few centimeters, and better evidence could be buried in dust.

Further possible support comes from the Zhurong rover that landed in Utopia Planitia (Liu, Y., and 11 others. 2022. Zhurong reveals recent aqueous activities in Utopia Planitia, Mars. Science Adv., 8: eabn8555). Duricrusts formed cliffs perched in loose soil, which requires a substantial amount of water, and this evidence also avoids the “buried in dust” problem. The authors considered the duricrusts were formed by regolith undergoing cementation through rising or infiltrating briny groundwater: the salt cements precipitate from groundwater in a zone where active evaporation and accumulation can occur. Further, it is suggested this occurred relatively recently. On the other hand, groundwater seepage might also do it, although the water has to be salty.

All of which is interesting, but questions remain. First, why was the water liquid? 225 degrees K is about fifty degrees below water’s freezing point. Second, since the sun has been putting out ever more heat, why is the water not flowing now? Or, alternatively, as generally believed, why did it flow for a brief period then stop? My answer, somewhat unsurprisingly since I am a chemist, is that it depends on chemistry. The gases had to be emitted from below the surface, such as from volcanoes or fumaroles. They could not simply have been adsorbed as the planet accreted, otherwise the rocky planets would have amounts of neon comparable to their nitrogen, and they do not. That implies the gases were accreted as chemical compounds; neon was not, because it has no chemistry. When the accreted compounds are broken down with water, ammonia forms. Ammonia dissolves very rapidly in water, or ice, and keeps water liquid down to about 195 degrees K, which covers the temperature range proposed above. Ammonia is decomposed slowly by sunlight to form nitrogen, but it is protected when dissolved in water. The one sample of seawater we have from about 3.2 billion years ago is consistent with Earth then having about 10% of its nitrogen still present as ammonia. On Mars, however, ammonia would slowly react with the carbon dioxide being emitted, and end up as solids buried under the dust.

Does this help the case for a northern sea? If this is correct, there should be substantial deposits of nitrogen-rich solids below the dust. If we went there to dig, we would find out.