Belief in Science

I recently read something from a libertarian who accused people who “believe in science” of being followers of a religion. The first point was: how can you believe in science when you can find well-credentialled scientists on both sides of an issue? The writer claimed that certain people may prefer that this not be widely known, but dissenting experts exist on many scientific questions that some pronounce settled by consensus. He also claimed that credentialled mavericks are often maligned as having been corrupted by industry, while scientists who hold the established position are treated as pure and incorruptible. So what do you think of that?

One comment concerns “well-credentialled” and “both sides of the issue”. By the time I started science in earnest, so to speak, Sir Richard Doll had produced a study showing that heavy smoking greatly increased the risk of lung cancer. At the time, epidemiology was a “fringe” part of medicine, but the statistics showed the truth. Naturally, there were people who questioned it, but I recall a review showing that cigarette smoke contained a large number of chemicals that could be called carcinogens. One, 3,4-benzopyrene, was so active it essentially guaranteed a cancer if a smear was applied to a certain strain of mice. Yes, those mice were particularly susceptible, but the point was obvious: cigarette smoke contains a number of carcinogenic chemicals. Yet for years papers were produced claiming cigarette smoke was “harmless” and something else must be doing it. These were published in fringe journals, and while the authors had credentials, they were being paid by the tobacco industry. So much for “both sides of the issue”.

A major criticism that comes to my mind is that the author does not understand what he is talking about. Science is not a collection of facts; it is a methodology for attempting to obtain the truth. Like any other activity, it does not always work well, but in principle you do not accept a statement based on the reputation of the speaker; that is the logical fallacy of argumentum ad verecundiam, the appeal to authority. Unfortunately, what actually happens is that many people are simply too lazy to check, so they accept what they are told. Of course, if the issue is not important to you, accepting the consensus usually makes sense, because you have not examined the details and are only marginally interested. If someone says that a vaccine has passed a clinical trial in which x thousand people took part, there were no adverse effects, and the vaccine worked, I take their word for it. The alternative is to check everything yourself, but since I know a procedure is in place in which independent people have checked the statements, I accept that they are true. Science works by recording what we know and building on it. If there were a huge conspiracy to hide real problems with a vaccine, the conspirators would eventually be found out, and an extended length of time would be spent in a rather uncomfortable cell.

Another criticism was that the “truth” comes down from a set of anointed scientists, and dissenters can be ignored because they are outside the group that matters. There is an element of truth in this. The anointed always get funding, and they get to peer review other funding applications. Dissenting from the anointed makes it far more difficult to get funding. Further, the more citations and publications you have, the more funding you get. This leads to gaming the system, and such gaming is not available to a dissenter. Sometimes up to fifty scientists may agree to be authors on a set of papers (if you have fifty, they should produce fifty times the output of one), but nobody divides the credit by each author’s share of the work, and worse, every time one paper in the set is written the authors can cite all the others, so the citation counts jump automatically. Nobody notices that these are self-citations, or looks to see whether they are relevant. That may seem unfair to others, but with money at stake, scientists do what they can. This funding anomaly does lead to a group consensus.

Another example lies in climate change. Whether there is a consensus is irrelevant; the question is, is there a definitive observation? I concede that initially I was suspicious, largely because there was a lot of modelling but not many facts. The theory was known, but the models ignored too many variables, and nothing much seemed to have happened to the climate. The theory suggested there was an effect, but at first there was not much evidence for it. Then the evidence of warmer times started to come in, but against that, climate has always changed. What was needed was a definitive set of measurements, and eventually they came (Lyman, J. M. et al., 2010. Nature 465: 334-337). What this showed was that between 1993 and 2008 there was a net heating of the oceans of 0.64 W/m². That may not seem much, but multiply it across the area of the oceans and you will see the planet has been receiving a very substantial net power input over a long period. We are cooking ourselves, but like the proverbial frog, we seem not to notice enough to do much about it.
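
To get a feel for the size of that number, here is a minimal back-of-the-envelope sketch in Python. The ocean area and the comparison with world energy use are my own rough round figures, not from the paper, and whether the 0.64 W/m² is referenced to the ocean surface or the whole globe does not change the order of magnitude.

```python
# Back-of-the-envelope: total heat delivered to the oceans, 1993-2008.
OCEAN_AREA_M2 = 3.6e14        # approximate global ocean surface area (assumed round figure)
FLUX_W_PER_M2 = 0.64          # net heating rate reported by Lyman et al. (2010)
YEARS = 15
SECONDS_PER_YEAR = 3.156e7

total_power_w = FLUX_W_PER_M2 * OCEAN_AREA_M2              # ~2.3e14 W
total_energy_j = total_power_w * YEARS * SECONDS_PER_YEAR  # ~1.1e23 J

print(f"Net power into the oceans: {total_power_w:.2e} W")
print(f"Heat accumulated over {YEARS} years: {total_energy_j:.2e} J")
# For scale, world annual primary energy consumption is of order 6e20 J,
# so this is very roughly a couple of hundred years of humanity's energy use.
```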

One final comment. I wrote a chapter on climate change in my first ebook, which was about how to form theories. It not only included the reasons why we should recognize the effect is real, but also listed some earlier technologies that could go some way towards reducing our dependence on fossil fuels. These were all published or recorded in various archives, and one of the interesting things is that none of those technologies has since been proposed for use. It is almost as if the work done in the 1970s and 1980s might as well not have been carried out; what seemed to be “state of the art” then is now forgotten. There are problems in dealing with scientific issues and getting value from them, but group consensus is only temporary, and anything that can be forgotten probably will be. You don’t get science funding for resurrecting the wheel, yet the wheel would at least get us somewhere. The question is, do we really want to get somewhere?


How Did We Escape the RNA World?

In my ebook “Planetary Formation and Biogenesis” I argue that life had to start with nucleic acids because only nucleic acids provide a plausible mechanism for reproduction, and, of course, that is exactly what they do now – they reproduce. The RNA world may not qualify as life, as more is required, but if this step is not achieved there can be no life. The first reproducing agent had to be RNA because ribose is the only sugar that occurs at least partially in a furanose form. (The furanose is a five-membered ring; the pyranose is a six-membered ring and is generally more stable.) Why do we need the furanose? In my ebook I give various reasons, but the main one is that the only plausible experiment so far showing that phosphate esters could have formed naturally led to AMP and ATP. While ribose is only about 20% furanose, NO pyranose formed phosphate esters.

Later, DNA came to be used primarily for reproduction for a very simple reason: it uses 2-deoxyribose. The removal of the 2-hydroxyl from ribose makes the polymer several orders of magnitude more stable. So why was this not part of the starting mix? Leaving aside the fact that we do not really know how to make 2-deoxyribose in any synthesis that could reasonably have happened in some sort of pond without help (complicated laboratory chemical syntheses are out!), there is a more important reason: at the beginning, high accuracy in reproduction is undesirable. The first such life forms (i.e. things that reproduce) would not have been very useful; they were selected at random and would have had all sorts of defects. What is needed is rapid evolution, and we are more likely to get that from something that mutates more often. Further, RNA can act as a catalyst, which speeds up everything.

Bonfio (Nature 605: 231-232) raises two questions. The first borders on silly: why did proteins as enzymes replace most of RNA’s catalytic activity? The short answer is that they are immensely better. They speed things up by factors of billions, and they are stable, so they can be reused over and over again. So why did they not arise immediately? Consider the enzyme that degrades protein; it has 315 properly sequenced amino acids. If we limit ourselves to 20 different amino acids, and allow for each residue being either left- or right-handed (except glycine, which is achiral), there are 39 options per position, so the probability of assembling the sequence by random selection is 2 in 39^315; that is, 39 multiplied by itself 315 times. To put that in perspective, there are only about 10^85 elementary particles in the visible universe. Random assembly was simply impossible. But that raises the second, and extremely interesting, question: how could ordered protein synthesis emerge with such horrendous odds against it?
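
Just to make the scale concrete, here is a quick Python sketch of how large 39^315 actually is, using only the numbers quoted above.

```python
import math

# 19 chiral amino acids x 2 handedness options + achiral glycine = 39 choices per position
choices_per_position = 39
positions = 315

# Work in logarithms rather than writing out an absurdly large integer
log10_combinations = positions * math.log10(choices_per_position)

print(f"39^315 is about 10^{log10_combinations:.0f}")   # roughly 10^501
print("Elementary particles in the visible universe: ~10^85 (figure quoted above)")
```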

What happens now is that messenger RNA has its nucleotides “read” three at a time, and this information is “transferred” to transfer RNA, which selects an amino acid and attaches it to the growing chain, then goes back to the messenger RNA for the next selection. That is grossly oversimplified, but you get the picture. The question is, how could this emerge? The answer appears to involve non-canonical nucleotides. RNA comprises mainly adenine, guanine, cytosine and uracil, and these are the “information” holders, but some additional entities are present. One is adenosine with a threonylcarbamoyl group attached. The details are not important at this level – merely that something additional is there. The important point is that there is no phosphate linkage, so this is not in the chain. At first sight, such nucleotides look bad, because they block chain formation: every time one hydrogen-bonded to a uracil, say, it would block chain synthesis and stop reproduction. However, it turns out that they may assist peptide synthesis. A non-canonical nucleotide at the terminal point of an RNA strand attracts amino acids. This becomes a donor strand, it transfers to a similar RNA carrying a nascent peptide, and we have ordered synthesis. It is claimed that this can be made to happen under conditions that could plausibly occur on Earth. The peptide synthesis involves the generation of a chimeric peptide-RNA intermediate, perhaps the precursor of the modern ribosome.

Of course, we are still a long way from an enzyme. However, we have (maybe) located how peptides could be synthesised in a non-random way, and from the RNA we can reproduce a useful sequence, but we are still a very long way from the RNA knowing which sequences will work. The assumption is that they will eventually self-select, on Darwinian principles, but that would be a slow and very inefficient process. However, as I note in the ebook, early peptides with no catalytic properties are not necessarily wasted. The most obvious first use would be to incorporate them in the cell wall, which would permit the formation of channels able to bring in fresh nutrients and get rid of excess water pressure. The evolution of life probably took a very long time, during which much stewing and testing went on until something sufficiently robust evolved.

Betelgeuse Dimmed

First, I apologize for the initial bizarre appearance of my last post. For some reason, some computer decided to slice and dice. I have no idea why, or for that matter, how. Hopefully, this post will have better luck.

Some will recall that around October 2019 the red supergiant Betelgeuse dimmed, specifically from magnitude +0.5 down to +1.64. As a variable star, its brightness oscillates, but it had never dimmed like this before, at least within our records. This generated a certain degree of nervousness or excitement, because a significant dimming is probably what happens initially before a supernova. There has been no nearby supernova since the one that produced the Crab Nebula in 1054 AD.
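
For those not used to the magnitude scale, a drop from +0.5 to +1.64 is larger than it sounds. A quick sketch of the standard conversion (this relation is textbook astronomy, not something from the studies discussed below):

```python
# The astronomical magnitude scale is logarithmic: a difference of delta_m
# magnitudes corresponds to a brightness ratio of 10**(0.4 * delta_m).
m_normal = 0.5
m_dimmed = 1.64

ratio = 10 ** (0.4 * (m_dimmed - m_normal))
print(f"Betelgeuse dimmed by a factor of about {ratio:.1f}")
# ~2.9, i.e. down to roughly a third of its usual visible brightness
```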

To put a cool spot (one of the proposed explanations, below) into perspective, if Betelgeuse replaced the sun, its size is such that it would swallow Mars, and its photosphere might almost reach Saturn. Its mass is estimated at somewhere between ten and, possibly, twenty times the mass of the sun. That range sparks my interest. When I pointed out that my proposed dependence of characteristic planetary orbital semimajor axes on the cube of the mass of the star ran into trouble because stellar masses were not known that well, I was criticised by an astronomer: the masses were known to within a few percent. The difference between ten times the sun’s mass and twenty times is rather more than a few percent. This is a characteristic of science: stellar masses can be measured fairly accurately in double star systems, and the results are then “carried over” to stars that are not in such systems.
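
To show why that mass uncertainty matters to me, here is a tiny sketch of the sensitivity of the relation I proposed (characteristic semimajor axis proportional to the cube of the stellar mass); the numbers are purely illustrative.

```python
# Sensitivity of the proposed relation a ~ M^3 (characteristic semimajor axis
# proportional to the cube of the stellar mass) to uncertainty in the mass M.

def relative_semimajor_axis(mass_solar: float) -> float:
    """Predicted characteristic semimajor axis relative to a 1-solar-mass star (arbitrary units)."""
    return mass_solar ** 3

low_mass, high_mass = 10.0, 20.0   # the quoted range for Betelgeuse
ratio = relative_semimajor_axis(high_mass) / relative_semimajor_axis(low_mass)
print(f"Prediction for 20 solar masses vs 10 solar masses: {ratio:.0f}x")  # a factor of 8
# A factor-of-two mass uncertainty becomes a factor-of-eight uncertainty in the
# prediction - very different from "known to within a few percent".
```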

But back to Betelgeuse. Our best guess at its distance is between 500 and 600 light years. Interestingly, we have observed its photosphere, the outer “shell” of the star that is at least partially transparent to photons, and it is non-spherical, presumably due to stellar pulsations that send matter out from the star. The star may seem “stable”, but its surface (whatever that means) is actually extremely turbulent. It is also surrounded by something we could call an atmosphere, an envelope of matter about 250 times the size of the star. We don’t really know its size exactly, because these asymmetric pulsations can add several astronomical units (the Earth-sun distance) in selected directions.

Anyway, back to the dimming. Two rival theories were produced: one involved the development of a large, cooler cell that came to the surface and was dimmer than the rest of Betelgeuse’s surface; the other was partial obscuring of the star by a dust cloud. Neither proposition on its own really explained the dimming, nor why Betelgeuse was back to normal by the end of February 2020. Rather unsurprisingly, the next proposition was that the dimming was caused by both effects together.

Perhaps the biggest problem was that telescopes could only look at the star some of the time; however, a Japanese weather satellite ended up providing just the data needed, somewhat inadvertently. The satellite was in geostationary orbit 35,786 km above the Western Pacific. It was always looking at the same half of Earth, so the background was also always the same, and in that background was Betelgeuse. The satellite revealed that the star overall cooled by 140 degrees C. This was sufficient to reduce the heating of a nearby gas cloud, and as that cloud cooled, dust condensed and obscured part of the star. So both theories were right, and even more strangely, both contributed roughly equally to what was called “the Great Dimming”.
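
As a crude sanity check on what cooling alone can do, here is a sketch using the black-body relation (luminosity proportional to the fourth power of temperature). The 3,600 K effective temperature is my own assumed round figure, not a number from the satellite study.

```python
# Crude sanity check: how much would a 140-degree cooling alone change the star's
# total (bolometric) output, treating it as a black body (L ~ T^4, constant radius)?
T_NORMAL_K = 3600.0   # assumed round figure for Betelgeuse's effective temperature
COOLING_K = 140.0     # cooling reported by the weather-satellite study

fraction_remaining = ((T_NORMAL_K - COOLING_K) / T_NORMAL_K) ** 4
print(f"Cooling alone leaves about {fraction_remaining:.0%} of the usual total output")  # ~85%
# The visible-light dimming was much larger than this, partly because the visible
# band is far more temperature-sensitive for such a cool star and partly because
# of the dust, which is why both effects were needed to explain the event.
```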

It also appears that more was happening to the atmospheric structure of the star before the dimming. Looking at the infrared lines, it became apparent that water molecules in the upper atmosphere, which would normally create absorption lines in the star’s spectrum, suddenly switched to producing emission lines. Something had made them unexpectedly hotter. The current thinking is that a shock wave from the interior propelled a lot of gas outwards from the star, leading to a cooler surface while heating the outer atmosphere. That is regarded as the best current explanation. It is possible there was a similar dimming event in the 1940s, but otherwise we have not noticed anything comparable; earlier events could have occurred, but our detection methods may not have been accurate enough to catch them, and people do not want to get carried away with “I think it might be dimmer.” Anyway, for the present, no supernova. But one will occur, probably within the next 100,000 years. Keep looking upwards!

Energy Sustainability

Sustainability is the buzzword. Our society must use solar energy, lithium-ion batteries, etc., to save the planet, or at least that is what they say. But have they done their sums? Lost in this debate is the fact that many of these technologies use elements that are relatively difficult to obtain. In a previous post I argued that battery technology was in trouble because there is a shortage of cobalt, required to make the cathode work for a reasonable number of cycles. Others argue that we could obtain sufficient elements. But if we are going to be sustainable, we have to be sustainable for an indefinite length of time, and mining is not sustainable; you can only dig up the ore once. Of course, there are plenty of elements left. There is more gold in the sea than has ever been mined; the problem is that it is too dilute. Similarly, most elements are present in a lump of basalt; there is just not much of anything useful in it, and it is extremely difficult to get out. The original copper mines of Cyprus, where even lumps of copper could occasionally be found, are all worked out, at least to the extent that mining is no longer profitable there.

The answer is to recycle, right? Well, according to a recent article [Charpentier Poncelet, A. et al. Nature Sustain. https://doi.org/10.1038/s41893-022-00895-8 (2022)], there are problems. Even if we recycle, every time we do something with a metal we lose some of it. Losses here refer to material emitted into the environment, stored in waste-disposal facilities, or diluted in material where the specific characteristics of the element are no longer required. The authors define a lifetime as the average duration of a metal’s use, from mining through to being entirely lost. As with any such definition-dependent study, there will be some points where you disagree.

The first loss for many elements occurs at the production stage. Quite often it is only economical to recover one or two elements, and the remaining minor components of the ore disappear into slag. These losses are mainly important for specialty elements: production losses account for 30% of rare earth metals, 50% of cobalt, 70% of indium, and greater than 95% of arsenic, gallium, germanium, hafnium, selenium and tellurium. So most of the elements critical for certain electronic and photoelectric applications are simply thrown out. We are a wasteful lot.

Manufacturing and use incur very few losses. There are some, but because materials are purified ready for use, pieces that are not immediately used can be remelted and reused. There are exceptions: 80% of barium is lost through use, because it goes into drilling muds.

The largest losses come from the waste-management and recycling stage. Metals undergoing multiple life cycles are still lost this way; it just takes longer to lose them. Recycling losses occur when the metal accumulates in a dust (zinc) or a slag (e.g. chromium or vanadium), or gets lost in another stream; thus copper is prone to dissolve into an iron stream. The longest lifetimes occur for non-ferrous metals (8 to 76 years), precious metals (4 to 192 years), and ferrous metals (8 to 154 years). The longest of all are for gold and iron.
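
The point that multiple life cycles only delay the loss can be put in numbers. A minimal sketch, with an illustrative per-cycle loss rate of my own choosing rather than a figure from the paper:

```python
# Illustration: even modest per-cycle losses compound quickly. The 20% loss per
# cycle is an illustrative figure of my own, not a number from the paper.
loss_per_cycle = 0.20
remaining = 1.0
for cycle in range(1, 11):
    remaining *= (1.0 - loss_per_cycle)
    print(f"after cycle {cycle:2d}: {remaining:.1%} of the original metal still in use")

# After ten cycles only about 11% remains; recycling stretches the lifetime,
# but it does not make a mined resource indefinitely sustainable.
```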

Now for the problem areas. Lithium has a life-cycle use of 7 years, and then it is all gone. But lithium-ion batteries last about that long, which suggests that as yet (if these data are correct) there is very little real recycling of lithium. Elements like gallium and tellurium last less than a year, while indium manages a year. Before you protest that most of the indium goes into swipeable mobile phone screens, and mobile phones last longer than a year, at least for some of us, remember that losses also occur through material being discarded at the mining stage, where the miner/processor can’t be bothered recovering it. Of the fifteen metals most lost during mining and processing, thirteen are critical for sustainable energy, among them cobalt (lithium-ion batteries), neodymium (permanent magnets), indium, gallium, germanium, selenium and tellurium (solar cells) and scandium (solid oxide fuel cells). If we look at the recycled content of “new material”, lithium is less than 1%, as is indium; gallium and tellurium are seemingly not recycled at all. Why not? The metals that are recycled tend to be iron, aluminium, the precious metals and copper, for which it is reasonably easy to find uses where purity is not critical. Recycling and purifying most of the others requires technical skill and significant investment. Think of lithium-ion batteries: the lithium reacts with water, and if a battery starts burning it is unlikely to be put out. Some items may contain over a dozen elements, some highly toxic, and they should not be in the hands of amateurs. What we see happening is that the “easy” metals are recycled by what are really low-technology organizations. The rest is not an area attractive to the highly skilled, because the economic risk/return is just not worth it, while the less skilled simply cannot do it safely.

The First Atmosphere

I have now published the second edition of my ebook “Planetary Formation and Biogenesis”. It has just under 1290 references, each about a different aspect of the issue, although there is almost certainly a little double counting because references follow chapters, and some scientific papers are important enough to be mentioned in two chapters. Nevertheless, there is plenty of material there. The reason for a second edition is that quite a lot of additional information has appeared over the past decade. And, of course, no sooner did I publish than something else came out, so I am going to mention that in this post, in part because it exemplifies some of what I think is wrong with modern science. The paper, for those interested, is Wilcoski et al., Planet. Sci. J. 3: 99. It is open access, so you can read it.

First, the problem it attempts to address: the standard paradigm is that Earth’s atmosphere was initially oxidised and comprised carbon dioxide and nitrogen. The question then is, when did this eventuate? What we know is that Earth was big enough that, if it was still in the accretion disk, it would have had an atmosphere of hydrogen and helium. If it did not finish accreting until after the disk was expelled, it would have had no atmosphere initially, and an atmosphere had to come from some other process. The ebook reviews the evidence, and in my opinion Earth probably had the atmosphere of hydrogen. Either way, the accretion disk gets expelled, and assuming our star behaved like others, for the first few hundred million years it gave off a lot of extremely energetic UV radiation, sufficient to effectively blow any atmosphere away. So under that scenario, for some hundreds of millions of years there would be no atmosphere.

There is an opposing option. Shortly after the Moon-forming event, there would have been a “Great Bombardment” of massive impactors. Various theories hold that this would form a magma ocean and a huge steam atmosphere, but there is surprisingly little evidence for this, although many hold onto it no matter what. The one piece of definite evidence is some zircons from the Jack Hills in Australia, which are about 4.2 to 4.3 billion years old – the oldest rock material we have. Some of these zircons show clear evidence that they formed at temperatures not that different from today’s. In particular, there was water with the oxygen isotope ratios expected of water that had come from rain.

So, let me revisit this paper. The basic concept is that the Earth was bombarded with massive asteroids whose iron cores hit the magma ocean; about half of the iron was sent into the atmosphere (iron boils at 2861 degrees C), where it reacted with water to form hydrogen and ferrous oxide, and the hydrogen then reacted with nitrogen to form ammonia.
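
Schematically, the chemistry being invoked amounts to the following two reactions (my shorthand summary, not the paper’s full model):

```latex
\mathrm{Fe} + \mathrm{H_2O} \;\rightarrow\; \mathrm{FeO} + \mathrm{H_2}
\qquad\qquad
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
```

The second is an equilibrium that, as discussed below, lies well away from ammonia at high temperature and low pressure.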

So, what is wrong with that? First, others argue that iron in a magma ocean settles to the core; that, according to them, is why we have a core. Alternatively, others argue that if the iron comes from an asteroid, it emulsifies in the magma. Now we have the iron doing three different kinds of things, depending on what answer you want. It can do one of them, but not all of them. Should iron vapour get into the atmosphere, it would certainly reduce steam and make hydrogen, but then the hydrogen would not do very much; rather, it would be lost to space because of the sun’s UV. The reaction of hydrogen with nitrogen only proceeds to give much ammonia under intense pressure, which could happen deep underground; at atmospheric pressure and temperatures above the boiling point of iron, ammonia would immediately dissociate back to nitrogen and hydrogen.

The next thing that is wrong is that very few asteroids have an iron core. If one did, what would happen when it hit magma? As an experiment, throw ice into water and watch what happens before it tries to reverse its momentum and float (which an asteroid would not do). Basically, it is the liquid that gets splashed away. Rock is a very poor conductor of heat, so the asteroid would sink quite deeply into the liquid and would have to melt off its silicates before the iron even started to melt, and then, being denser, the iron would sink to the core. On top of that, the paper assumed the atmosphere contained 100 bars of carbon dioxide and two bars of nitrogen, in other words an atmosphere somewhat similar to that of Venus today. Assuming what was there in order to get the answer you want is, I suppose, one way of going about things, in a circular sort of way. However, with tidal heating from a very close Moon, such an atmosphere with that much water would never rain, which contradicts the zircon data. What we have is a story that contradicts the very limited physical evidence we have, has no evidence in its favour, and was made up to get the answer wanted, so the authors could explain where the chemicals that formed life might have come from. Needless to say, my ebook has a much better account, and it has the advantage that no observations contradict it.