Geoengineering: Shade the World

As you may have noticed when not concerned about a certain virus, global warming has not gone away. The virus did some good. I live on a hill and can look down on some roads, and during our lockdown the roads were strangely empty. Some people seemed to think we had found the answer to global warming, since much less petrol was being burnt, but the fact is, even if nobody drove we would still be producing net amounts of CO2 and other greenhouse gases, and even if we were not, the amounts currently in the air are still out of equilibrium and would continue to melt ice and raise temperatures. Those of you in the northern hemisphere are having a summer right now, so maybe you have noticed.

So, what can we do? One proposal is to shade the Earth’s surface. The idea is that if you can reflect more incoming solar radiation back to space there is less energy at the surface and . . .  Yes, it is in the ‘and’ that the difficulties lie. We get less radiation striking the surface, so we cool the surface, but then what? According to one paper recently published in Geophysical Research Letters ( ) the answer is not good news. The authors ran model simulations, and focused on what are called storm tracks, relatively narrow zones over the oceans through which storms such as tropical cyclones and mid-latitude cyclones travel, steered by the prevailing winds. Such geoengineering, according to the models, would weaken these storms. Exactly why this is bad eludes me. I would have thought lower-energy storms would be good; why do we want hundreds of thousands of citizens to have their properties levelled by hurricanes, typhoons, or simply tropical cyclones, as they are known in the Southern Hemisphere? The weakening happens through a smaller pole-to-equator temperature difference, because most of the reflected light is over the tropics. Storms are heat engines at work, and the greater the temperature difference, the more work can be extracted: the second law of thermodynamics in action. Fine. We are cooling the surface, and while it may seem we are ignoring the melting of polar ice, we are not, because most of the heat that melts it comes from ocean currents, and those are heated in the tropics.
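The heat-engine point can be made quantitative with the Carnot limit. A minimal sketch, with reservoir temperatures I have assumed for illustration (they are not figures from the paper):

```python
# Illustrative sketch: a storm as a Carnot heat engine running between a
# warm (tropical) and a cold (high-latitude) reservoir. Temperatures are
# rough assumed values in kelvin, purely for illustration.

def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work (second-law limit)."""
    return 1.0 - t_cold / t_hot

# A present-day-like gradient: warm tropics, cold high latitudes.
eta_now = carnot_efficiency(300.0, 260.0)

# Shading concentrated over the tropics cools the hot reservoir,
# shrinking the pole-to-equator difference.
eta_shaded = carnot_efficiency(295.0, 260.0)

print(f"efficiency now:    {eta_now:.3f}")
print(f"efficiency shaded: {eta_shaded:.3f}")  # smaller: weaker storms
```

The exact numbers do not matter; the point is only that any cooling concentrated on the hot reservoir lowers the efficiency of the engine, hence the weaker storms in the simulations.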

More examples: we would reduce wind extremes in midlatitudes, possibly lead to less efficient ventilation of air pollution, possibly decrease low cloud cover in the storm-track regions, and weaken poleward energy transport. In short, a reasonable amount of that is what we want to do anyway. It is also claimed we would get increased heat waves. I find that suspicious, given that less heat is available. It is claimed that such activities would alter the climate. Yes, but that is what we would be trying to do, namely alter it from what it might otherwise have been. It is also claimed the models show there could possibly be regional reductions in rainfall. Perhaps, but that sort of thing is happening anyway. Australia had dreadful bushfires this year, and I gather forest fires were raging in North America also.

One aspect of this type of study that bothers me is that it is all based on models. Words like ‘may’, ‘could’ and ‘possibly’ turn up frequently. That, to me, indicates the modellers do not actually have a lot of confidence in their models. The second thing that bothers me is that they have not looked at nature. Consider the data from Travis et al. (2002) Nature 418, 601. For the three days 11-14 Sept. 2001, the diurnal temperature range, averaged over about 4,000 weather stations across the US, increased on average 1.1 degrees C above the 1971-2000 average, with the highest temperatures on the 14th. It was on average 1.8 degrees C greater than the average for the two adjacent three-day periods. The three days with the increase were, of course, the days when all US aircraft were grounded and there were no jet contrails. Notice that this is the difference between day and night: at night the contrails retain heat, while in daytime they reflect sunlight. Unfortunately, what was not stated in the paper was what the temperatures actually were. One argument is that models show that while the contrails reflect more light during the day, they keep in more heat during the night. Instead of calculations, why not show the actual data?

The second piece of information is that the eruption of Mount Pinatubo sent aerosols into the atmosphere, and for about a year the average global temperature dropped 1 degree C. Most of that ash was at low latitudes in the northern hemisphere. There are weather reports from this period, so these should give clues as to what would happen if we tried this geoengineering. The overall cooling was real and the world’s economies did not come to an end. The data from that event could contribute to addressing the unknowns.

So, what is the answer? In my opinion, the only real answer is to try it out for a short period and see what happens. Once the outcomes are evaluated we can decide what to do. The advantage of sending dust into the stratosphere is that it does not stay there; if it does not turn out well, it will be no worse than what volcanoes do anyway. The disadvantage is that, to be effective, we have to keep doing it. Maybe from various points of view it is a bad idea, but let us make up our minds by evaluating proper information, and not rely on models that are no better than the assumptions behind them. Which choice we make should be based on data, not on emotion.

Scientific Discoveries, How to Make Them, and COVID 19

An interesting problem for a scientist is how to discover something. The mediocre, of course, never even try, and it is probably only a small percentage that gets there. Basically, it is done by observing clues, then using logic to interpret them. The method is called induction, and it can lead to erroneous conclusions. Aristotle worked out how to do it, and then dropped the ball at least twice in his two biggest blunders, when he forgot to follow his own advice. (In fairness, he probably made his blunders before he worked out his methodology, and lost interest in correcting them. The Physica was one of his earliest works.)

The clues come from nature, and picking them up relies on keeping the eyes open and, more importantly, the mind open. The first step is to seek patterns in what you observe, and to try to correlate your observations. The key here is Aristotle’s comment that the whole is more than the sum of the parts. That looks like New Age nonsense, but look at it from the mathematics of set theory. A set is simply a collection of data, usually expressed as numbers, but not just anything should go into it. As an example, I could list all the green things I can see, but that would be pointless. I could list all plants, and now I am making progress into botany. The point is, the set comprises all the elements inside it, together with the rule that confers set membership. It is the rule that we seek if we wish to make a discovery, and in effect we have to guess it by examining the data. This process is called induction, and if we get some true statements, we can move on to deduction.
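The “rule confers membership” idea is easy to make concrete. A small sketch (the observations and the rule are mine, purely for illustration): the same collection can be described either by listing its elements or by the rule that picks them out, and in induction the rule is what we must guess from the data.

```python
# A set defined by a membership rule, not by a list of elements.
# The observed items and the rule are illustrative inventions.

observed = ["moss", "fern", "frog", "oak", "emerald"]

def is_plant(thing):
    # The membership rule. In real induction this is the thing we must
    # guess from the data, and our guess may turn out to be wrong.
    return thing in {"moss", "fern", "oak", "grass"}

# The set is the elements together with the rule that selected them.
plants = {x for x in observed if is_plant(x)}
print(plants)  # the subset picked out by the rule, not by colour
```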

There are, of course, problems. Thus we could say:

All plants have chlorophyll

Chlorophyll is green

Therefore all plants are green.

That is untrue. The chlorophyll will be green, but the plant may have additional dyes/pigments. An obvious case is red seaweed. The problem here is the lazy “therefore”. Usually it is somewhat more difficult, especially in medicine.
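The faulty syllogism can be recast as data. In this sketch (the pigment lists are illustrative, not botanical data) every plant contains chlorophyll, so the premises hold, yet the conclusion fails because another pigment can mask the green:

```python
# Every plant here has chlorophyll, yet not every plant looks green.
# Pigment assignments are invented for illustration.

plants = {
    "grass":       {"chlorophyll"},
    "oak":         {"chlorophyll"},
    "red seaweed": {"chlorophyll", "phycoerythrin"},  # red pigment masks the green
}

all_have_chlorophyll = all("chlorophyll" in p for p in plants.values())

# Naive rule induced from grass and oak alone: "plants look green",
# i.e. chlorophyll is the only pigment present.
looks_green = {name for name, p in plants.items() if p == {"chlorophyll"}}

print(all_have_chlorophyll)                 # True: the premises hold
print(set(plants) == looks_green)           # False: the "therefore" fails
```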

Which, naturally in these times, brings me to COVID-19. What we find is that the very young, especially girls, are more or less untroubled. The old have a lot more trouble, and, it turns out, old men more so. Part of the trouble will be that the old have weaker immune systems, and often other weaknesses in their bodies. Unlike wine, age does not improve the body. That is probably a confusing observation, because it leads nowhere and is somewhat obvious.

Anyway, we have a new observation: if we restrict ourselves to severe cases in hospitals, there is a serious excess of bald men. Now, a correlation is not causative, and trying to work out the cause can be fraught with difficulty. In this case, we can immediately dismiss the idea that hair has anything to do with it. However, baldness is also correlated with higher levels of androgens, which are male sex hormones. It was also found that the severe male cases usually had high levels of androgens. By itself, we can show this is not a cause either.

So this leads to a deeper investigation, and it is found that the virus uses an enzyme called TMPRSS2 to cleave the SARS-CoV-2 spike protein, and this permits the cleaved spike to attack the ACE2 receptors on the patient’s cells, and thus permits the viral RNA to enter the cell and begin replicating. What the androgens do is activate the gene in the patient’s cells that expresses TMPRSS2, so the androgens increase the amount of the enzyme necessary to attack a cell. This suggests as a treatment something that will inhibit that gene so no TMPRSS2 is expressed. We await developments. (Suppressing androgens in men is not a good idea – they start to grow breasts. However, it also suggests that ACE inhibitors, used to reduce hypertension, might offer some assistance.) Now, the value of a theory can be shown by whether it helps explain something else. In this case, it argues that pre-puberty children should be more resistant, and that girls keep this benefit longer. That is what is found. It does not prove we are correct, but it is comforting. That is an example of induced science. Induction does not necessarily produce the truth, and conclusions can be wrong. We find out by pursuing the consequences, and either we find we have discovered something, or we go back to the drawing board.

The Virus, and How Science Works, or Doesn’t

It may come as no particular surprise to hear that COVID-19 has become a source of fake news, conspiracy theories, whatever. Bill Gates was one victim. In various assertions, he created the virus, patented it, and was going to develop a vaccine through which he would monitor people using quantum-dot spy software. Various forms of this got more likes, shares or comments on Facebook than most news items. Leaving aside the stupidity on view, what about facts? Nobody seems to have asked: if he patented it, what is the patent number? Mike Pompeo alleged, without a shred of evidence, that the virus originated in a Chinese laboratory. Political gain and nationalism sure beat truth as objectives there. According to Nature (581, 371-4) an academic subdiscipline has sprung up, tracking the false information and studying how it is spread. The interesting thing about this is the observation that social media are run to maximise user engagement, and evidence-based information is way down the priorities.

Also missing was an answer to the question: how does science work? If you watch certain TV shows, someone carries out some weird mathematics on a blackboard, and hey, we have it. It is not like that. Leaving aside the few academics who generate papers merely to keep up their publication counts, and those applying standard theory (for example, NASA sending a rocket to a site on Mars – not a trivial task, but not one for a genius at a blackboard either), the usual situation for a new problem where the answer is not known is that we sift through the evidence, try to find relationships, use such a relationship to form a hypothesis, then design some method to test it in new situations.

COVID-19 became a problem because genuine information was scarce, in turn because nobody knew, but look what happened as shreds came to light. President Trump advocated an “unproven cure”. But who says it is unproven? The general feeling seems to be to trust the experts with “good credentials” (the logical fallacy ad verecundiam). Since about 1970 there have been hardly any debates, and the funding models of science have forced only too many to “get in behind”. As an example of where the wheels fell off, think of chloroquine and its hydroxy derivative.

First, two quotes from Gao et al., Bioscience Trends 14: 72-3: “results from more than 100 patients have demonstrated that chloroquine phosphate is superior to the control treatment in inhibiting the exacerbation of pneumonia, improving lung imaging findings, promoting a virus-negative conversion, and shortening the disease course according to the news briefing. Severe adverse reactions to chloroquine phosphate were not noted.” and “The drug is recommended for inclusion in the next version of the Guidelines for the Prevention, Diagnosis, and Treatment of Pneumonia Caused by COVID-19 issued by the National Health Commission of the People’s Republic of China.” The Chinese issued a handbook that indicates how and when to use it.

Then, from Gautret et al., DOI: 10.1016/j.ijantimicag.2020.105949. Twenty cases were treated with hydroxychloroquine; those who refused, plus the cases at another centre, were used as a control. Those treated “showed a significant reduction of the viral carriage at D6-post inclusion compared to controls, and much lower average carrying duration than reported of untreated patients in the literature. Azithromycin added to hydroxychloroquine was significantly more efficient for virus elimination.” Yes, a small sample, and patients known to have an allergic reaction to the drug, or other strong contraindications, were excluded from the study. There was a third French report of about 80 patients that showed similar good results. The two papers cited are fairly clear. That does not mean an iron-clad conclusion should be drawn, but it does suggest potential effectiveness.

However, a paper was published in The Lancet, one of the most respected medical journals, that used statistical analysis of data from 96,032 patients, some of whom were treated with these drugs, and concluded the drugs were not helpful and were more likely to cause death. So that should settle it, right? When I read this, my initial reaction was: not so fast. Of those treated, approximately 15% had coronary heart disease, 6% other heart problems, about 14% diabetes, 30% hypertension, 31% hyperlipidaemia, 10% smoked, and 17% formerly smoked. Thus, counting naively, 96% had something wrong with them before treatment and 27% smoked or had smoked; of course, some would have none of these problems, while some would qualify in two or three categories. The control group had 81,144 patients. Overall, 11.1% of the treated died in hospital, against 9.3% in the control group. So treatment made things worse. Convinced?

Do you see a problem? First, the control group may well have contained a large number of young people with mild symptoms, which lowers its death rate – a death rate which, as an aside, is remarkably high: New Zealand had a death rate of 1.46%. Second, we have no data on how treatment was selected and carried out. But, you say, statistics do not lie. Actually, that is not true, at least if care is not taken. My first reaction was to think of Simpson’s paradox, which shows it is possible to reach the opposite conclusion when there are confounding variables, and this is particularly troublesome in medical reports, where such variables are all over the place. I had had discussions with friends previously in which I expressed optimism for hydroxychloroquine, based on the two papers cited above; when I then expressed the “not so fast” view of The Lancet paper, needless to say, friends thought I was simply refusing to accept the truth.
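Simpson’s paradox is easy to demonstrate with made-up numbers (these are mine, purely illustrative, not The Lancet’s data). A drug can look better in every subgroup yet worse in aggregate, if treatment is concentrated among the sicker patients:

```python
# Hypothetical counts: (deaths, patients), split by disease severity.
# Treated patients are concentrated in the severe group - the confounder.
data = {
    "mild":   {"treated": (1, 100),   "untreated": (18, 900)},
    "severe": {"treated": (180, 900), "untreated": (25, 100)},
}

def rate(deaths, patients):
    return deaths / patients

for group, arms in data.items():
    t = rate(*arms["treated"])
    u = rate(*arms["untreated"])
    # Treated fares better in BOTH subgroups (1% vs 2%, 20% vs 25%).
    print(f"{group}: treated {t:.1%} vs untreated {u:.1%}")

# Pool the subgroups and the conclusion reverses: treated looks worse.
agg = {}
for arm in ("treated", "untreated"):
    deaths = sum(data[g][arm][0] for g in data)
    patients = sum(data[g][arm][1] for g in data)
    agg[arm] = deaths / patients
print(f"aggregate: treated {agg['treated']:.1%} vs untreated {agg['untreated']:.1%}")
```

Here the aggregate shows 18.1% mortality for treated versus 4.3% untreated, even though treatment helps in each severity class, which is exactly the kind of trap an observational study with self-selected treatment can fall into.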

However, there have been further developments. The editors of The Lancet published a brief comment stating that “Important scientific questions have been raised about data reported in the paper…” Shortly after came a bombshell (…): the data appeared to come from a small US company called Surgisphere, “whose handful of employees appear to include a science fiction writer and an adult-content model”. They refuse to explain their data or methodology. The Australian data came from hospitals that say they have never heard of Surgisphere, and worse, the casualties in the trials exceeded the total Australian casualties. A case can be made that Surgisphere generated fake news, and it was published in two of the most respected medical journals (the other being the New England Journal of Medicine).

Following these papers based on Surgisphere results, the WHO attempted to end the use of chloroquine and hydroxychloroquine for COVID-19, and a number of hospitals have complied and stopped using it. 

However, to add to the confusion, the University of Oxford published this: “A total of 1542 patients were randomised to hydroxychloroquine and compared with 3132 patients randomised to usual care alone. There was no significant difference in the primary endpoint of 28-day mortality (25.7% hydroxychloroquine vs. 23.5% usual care)”. Now, the University of Oxford should be a reliable source, and this clearly shows no benefit in this set of patients, but my question remains: how was this set selected? The trial will have been randomised, but the overall death rate of 23.5% in “usual care” seems to signal that this is a selected set. (Recall the NZ death rate of 1.46%; our doctors are good, but I would not expect them to be that superior to the University of Oxford, so is something else going on?)

So what is going on? I have no idea. My guess is that chloroquine and its hydroxy derivative do convey benefit to some patients, but not all, and/or they convey benefit only if some other variable is present. In this context, there is one proposal that chloroquine plus zinc has an effect (although on checking the link before posting, it has a problem; who knows what is real?). That apparently came partly from Turkey, and Turkey claims to have been successful with HCQ. If so, the effectiveness in other trials might depend on diet. Why would zinc have any chance? The chloroquine structure has three nitrogen atoms more or less focused in one direction. Zinc has an affinity for nitrogen, and tends to form octahedral complexes, taking up to six ligands. What that means is, if chloroquine or its derivative can carry zinc up to the virus, the zinc retains a strong affinity for further amine functions, and could well bind to a nucleobase. If so, the RNA could not reproduce. This produces a hypothesis that has a causal basis and may comply with the data, but we could only test it if we had a zinc analysis of all nutrients taken by the patients. Further, it will not work once the virus takes a certain hold, because it would be unsafe to put enough zinc into the patient to have a chance.

This example shows in part how difficult science can be, not helped by the likes of The Lancet item. The short answer, in my opinion, is that we cannot be sure what works; hydroxychloroquine is probably at best a means of reducing the viral load and letting the body recover if it can, but is that not desirable? It would also be helpful if people would stop presenting false or grossly incomplete information. Maybe one of these days we shall know what works and what doesn’t, but probably not very quickly.

Materials that Remember their Original Design

Recall in the movie Terminator 2 there was a robot that could turn into a liquid, then return to its original shape and act as if it were solid metal. Well, according to Pu Zhang at Binghamton University in the US, something like that has been made, although not quite like the evil robot. What he has made is a solid that acts like a metal in that, with sufficient force, it can be crushed or variously deformed, then brought back to its original shape spontaneously by warming.

The metal part is a collection of small pieces of Field’s alloy, an alloy of bismuth, indium and tin. This has the rather unusual property of melting at 62 degrees Centigrade, the temperature of fairly warm water. The pieces have to be made with flat faces of the desired shape so that they effectively lock themselves together, and it is this locking that at least partially gives the body its strength. The alloy pieces are then coated with a silicone shell using a process called conformal coating, a technique used to coat circuit boards to protect them from the environment, and the whole is put together with 3D printing. How the system works (assuming it does) is that when force is applied that would crush or variously deform the fabricated object, as the metal pieces get deformed, the silicone coating gets stretched. The silicone is an elastomer, so as it gets stretched, just like a rubber band, it stores energy. Now, if the object is warmed, the metal melts and can flow. At this point, like a rubber band let go, the silicone restores everything to the original shape; then, when it cools, the metal crystallises and we are back where we started.

According to Physics World, Zhang and his colleagues made several demonstration structures, such as a honeycomb, a spider’s-web-like structure and a hand; these were all crushed, and when warmed they sprang back to their original form. At first sight this might seem designed to put panel beaters out of business: you have a minor prang, but do not worry, just get out the hair drier and all will be well. That, of course, is unlikely. As you may have noticed, one of the components is indium. There is not a lot of indium around, and for its currently very restricted uses it costs about $US800/kg, which would make for a rather expensive bumper. Large-scale usage would make the cost astronomical, and the cost of manufacturing would in any case limit its use to rather specialist objects, irrespective of availability.

One of the uses advocated by Zhang is in space missions. While weight has to be limited on space missions, volume is also a problem, especially for objects with awkward shapes, such as antennae or oddly shaped superstructures. The idea is they could be crushed down to a flat, compact load for easy storage, then reassembled. The car bumper might be out of bounds because of cost and the limited indium supply, but the cushioning effect arising from the material’s ability to absorb a considerable amount of energy might be useful in space missions. Engineers usually use aluminium or steel for cushioning parts, but those are single use: a spacecraft with such landing cushions can land once, whereas landing cushions made of this material could be restored simply by heating them. Zhang seems to favour the use in space engineering. He says he is contemplating building a liquid robot, but there is one thing, apart from behaviour, that such a robot could not do that the Terminator robot did: if the robot has bits knocked off and the bits melt, they cannot reassemble into a whole.
Leaving aside the fact there is no force to rejoin the bits, the individual bits will merely reassemble into whatever parts they were and cannot rejoin with the other bits. Think of it as held together by millions of rubber bands. Breaking into bits breaks a fraction of the rubber bands, which leaves no force to restore the original shape at the break.

Lockdown! Now What?

By now everyone should be aware there is a virus out there, and it has been generally agreed that action is needed to protect citizens. So far there is no vaccine, and in some cases the treatment required to preserve life is restricted. In New Zealand, thanks to various travellers bringing it here, we are starting to feel the effects. It is easy to flash around figures, but with a population of about 5 million, one estimate is that if nothing were done, about 70% of the population would get it, and about 80,000 would die. The reason is that if they all got it at about the same time, say over a two-month period, there are insufficient ventilators, etc., for them; if they got it one at a time, most of those 80,000 would not die. Our hospitals did not have 20,000 ventilators sitting around waiting for this event. So what we have done (as have many other countries) is initiate a lockdown, the idea being that by breaking the possible chains of transmission the virus will die out. The associated problem is, so will many businesses that cannot earn during this period. So the question is, what will emerge from this? Or, perhaps a more reasonable question: what is most probable to arise from this?
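The arithmetic behind spreading the cases out can be sketched in a few lines. Apart from the 5 million population and the 70% attack rate quoted above, every number here (severe-case fraction, ventilator days, national stock) is an assumption of mine, for illustration only:

```python
# Back-of-envelope sketch of why timing matters more than totals.
# Only the population and attack rate come from the post; the rest
# are assumed round numbers for illustration.

population = 5_000_000
attack_rate = 0.70
infected = population * attack_rate       # 3.5 million eventual cases

severe_fraction = 0.02       # assumed: ~2% need a ventilator
days_on_ventilator = 14      # assumed average stay
ventilators = 500            # assumed national stock

def peak_demand(outbreak_days):
    """Average number of ventilators needed simultaneously if the
    cases are spread evenly over outbreak_days."""
    severe = infected * severe_fraction
    return severe * days_on_ventilator / outbreak_days

print(f"eventual cases: {infected:,.0f}")
print(f"two-month outbreak: {peak_demand(60):,.0f} ventilators needed at once")
print(f"two-year outbreak:  {peak_demand(730):,.0f} ventilators needed at once")
```

The same number of severe cases needs an order of magnitude fewer machines at any one moment if the outbreak is stretched out, which is the whole point of breaking the transmission chains.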

The average estimate here is that unemployment will rise to about 9%, and many small businesses will go under. Life will be particularly difficult for restaurants, etc., because many of them operate on slim margins, and they are designed more to offer the owners a lifestyle than to be developing businesses. Our airline will shrink to 10% of what it was, because international travel will almost disappear. One slight bright sign for them lies in the domestic market: their major competitor has already decided to call it quits here. Such competitors restricted themselves to the major intercity services and left the minor spots alone. Ticket prices will now rise, but with far lower ticket sales there would have been blood on the floor had that many aircraft continued flying at the cheaper fares. There will be a great reduction in the number of tourists for some time, because even if our lockdown works, what happens if other countries have not gone as hard? Do we want to succeed, at great cost, then let in fresh infection?

One of the other things that has happened is we have discovered the “just in time” purchasing ethic has a cost. One slightly ironic fact: there was a claim we were running low on hospital gowns, and the biggest manufacturer of hospital gowns anywhere is in Wuhan, except it closed because of the virus. Apparently a couple of small local manufacturers are switching to make some of this necessary equipment, including ventilators, but that will not continue, because they cannot compete on price with China, and in any case the hospitals will not need more when this dies down.

On the issue of more general manufacturing, I heard one small manufacturer say that, in response to the difficulties some are having in getting certain things, he has ordered a major robotic machine. The capital cost is higher, but the wage bill is much lower, and if the equipment is sufficiently flexible, the major expenditure, apart from raw materials and capital cost, will be in paying designers. This suggests the pandemic may well be the straw that breaks the back of the current way of making goods. Strategic niche manufacturing, manufacturing close to raw materials, and the use of brains may be the key factors in future prosperity.

That raises the question of what happens to current workers. If half the small businesses go to the wall, there will be a lot of workers who have few resources and only limited skills. There will also be a number of highly skilled people who are unemployed. Think of the airlines: where do the pilots and cabin crew of the big jets find jobs? Nobody else will want them, because all the other airlines are in the same boat, and it has nothing to do with management or mistakes. It is going to require a lot of imagination and investment to get out of this, and both may be in rather short supply. Also, new businesses need customers, and who is going to have spare money when this shakes out?

Can you Think like a Scientist?

Ever wondered how science works? Feel you know? If so, read this slowly. There is a puzzle to solve, so don’t cheat by reading past the question before trying to answer it.

WASP 76b is a planet circling the star known as WASP 76. (WASP stands for “wide angle search for planets” and is an international consortium searching for exoplanets using robotic telescopes in both hemispheres, hence wide angle. It searches by looking for transits, i.e. a planet passing in front of the star and dimming it. The 76 presumably means the 76th star of interest, and the b means the first planet to be discovered around that particular star.) The star is F7 class, with a mass of about 1.46 times that of the sun, and an effective temperature of about 6,000 °C. So it is bigger, brighter and hotter than the sun.

This planet is weird by any standards. It is about 0.92 times the size of Jupiter, which means it is a gas giant, and it orbits 0.033 AU from the centre of the star. (The Earth-Sun distance is defined as 1 AU.) That is close, especially since the star is bigger than the sun. The time taken to go around the star is 1.809886 days. That means a birthday every second day our time, not that anyone will be having birthdays there. The news media have got hold of this because, being so close, the planet is expected to be tidally locked. That means, like the Moon going around the Earth, one side always faces the star and the other side always faces away. If that is correct and it is tidally locked, the side facing the star will have a temperature of about 2,400 °C, while the side facing away would be about 1,000 degrees cooler.
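The quoted distance, stellar mass and period are mutually consistent, which is easy to check with Kepler’s third law (in solar units, P² = a³/M):

```python
import math

# Kepler's third law in solar units: P[years]^2 = a[AU]^3 / M[solar masses].
# Figures from the post: a = 0.033 AU, stellar mass 1.46 solar masses.
a_au = 0.033
m_star = 1.46

period_years = math.sqrt(a_au**3 / m_star)
period_days = period_years * 365.25
print(f"predicted period: {period_days:.2f} days")  # close to the quoted 1.809886
```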

When the planet transits in front of the star, material in its atmosphere absorbs starlight, which gives slightly darker spectral lines, and these give clues as to what is in the atmosphere. In this case, lines corresponding to iron were seen in the gas. At first sight that is not surprising at 2,400 °C. The melting point of iron is 1538 °C, while its boiling point, at our atmospheric pressure, is 2862 °C. It is not hot enough to boil iron, but then again, Earth is nowhere near hot enough to boil water, yet plenty of water gets into the atmosphere as clouds, and comes down as rain.
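The water analogy can be made semi-quantitative with the Clausius-Clapeyron relation. The enthalpy of vaporisation (~340 kJ/mol) is a handbook-style value for iron; treat the result as an order-of-magnitude sketch, not a model of the planet:

```python
import math

# Rough Clausius-Clapeyron estimate of iron's vapour pressure below
# its boiling point. dH_vap is an approximate handbook value.
R = 8.314            # J/(mol K), gas constant
dH_vap = 340_000     # J/mol, approximate enthalpy of vaporisation of iron
T_boil = 2862 + 273  # K, boiling point, where p = 1 atm by definition
T_day = 2400 + 273   # K, the quoted dayside temperature

# ln(p / 1 atm) = -(dH_vap / R) * (1/T - 1/T_boil)
p_atm = math.exp(-(dH_vap / R) * (1 / T_day - 1 / T_boil))
print(f"iron vapour pressure at {T_day} K: ~{p_atm:.2f} atm")
```

Even a few hundred degrees below its boiling point, iron has an appreciable vapour pressure, so seeing it in the gas phase on the dayside is no mystery at all.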

This is where the media have sat up and taken notice: it appears it might be raining iron on that planet. That is weird. More evidence cited for the rain was that the iron signal was unevenly distributed. Recall the light has to go through the atmosphere, so what we see is the signal from the edges. That signal was not evenly distributed: it was apparently present on the evening side, but not on the transition edge from night to day. This was interpreted as the iron condensing out as it entered the cold side, falling as liquid iron rain during the night. Now, here is your test as a potential scientist. Stop reading and think, then answer this question: do you see anything inconsistent in the above description? This is a test for potential theorists. Theories are developed not so much by brilliant insights as by noticing that something we think is right contains an inconsistency.

Anyway, what struck me is that the planet allegedly has a morning and an evening. It cannot have those if it is tidally locked, because the same parts always face the star the same way. The planet must be rotating. As an aside, it is hard to see how it could be tidally locked, because the gas in the atmosphere will be travelling extremely fast: the winds and storms will be ferocious if there is a thousand-degree difference between night and day. But if it is rotating, maybe the difference is not that much; we cannot measure the dayside from transits. Also, if it were tidally locked, we might expect the iron to rain out on the dark side, but then what? How would it get back to the dayside? After a while it would all be on the night side. There has to be some rotation somewhere.

Another interesting point: how do you tidally lock gas? And what does rotation of a gas giant even mean? In the case of Jupiter we know it rotates because characteristic storms mark the rotation, but Jupiter is far enough from the star that the temperature differences between night and day are trivial. The hot gas around WASP 76b must move. If it is always going the same way, is the planet rotating, or does the gas merely have a uniform wind?

A New Coronavirus

2019-nCoV is having an effect most will have heard of. It is apparently milder than some related viruses, such as SARS, which had a mortality rate of 10%, but that judgment might be premature, because the new virus has left a very large number of people seriously ill, and nobody knows what will happen to them. So far the probability of death appears to be around 3%, although a number of those deaths were in people who were in poor health anyway. Unfortunately, it appears to spread at a dizzying rate; so far the number of patients appears to double every six days. It appears to have a period of about 12 days when it is asymptomatic but remains contagious. Most people will know about the effects of mild contagious coronaviruses: the common cold is caused by over 90 different viruses, the majority of which belong to the rhinovirus family, but coronaviruses account for a good percentage.
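Doubling every six days compounds alarmingly fast; a short sketch (the starting count of 100 is arbitrary):

```python
# Exponential growth with a fixed doubling time.
def cases(initial, days, doubling_time=6.0):
    """Case count after `days`, doubling every `doubling_time` days."""
    return initial * 2 ** (days / doubling_time)

print(f"after 30 days: {cases(100, 30):,.0f}")  # 100 -> 3,200
print(f"after 60 days: {cases(100, 60):,.0f}")  # 100 -> 102,400
```

Five doublings in a month turns 100 cases into 3,200; two months turns them into over 100,000, which is why containment has to work early or not at all.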

This virus almost certainly came from animals, probably a bat, but when and how are uncertain. The genomic sequence of 2019-nCoV is 96.2% identical to that of a bat coronavirus, and 79.5% identical to that of SARS. The Huanan Seafood Wholesale Market in Wuhan, which sells live animals as well as seafood, may be the origin of the outbreak: the earliest patients had visited it, and 33 environmental samples from the western end of the market, where the animals were sold, contained the coronavirus. However, the first patient apparently had no contact with this market, so it is possible the virus started elsewhere and was brought to the market. Molecular clock analysis, which involves counting the mutations accumulated since the virus entered the human population, suggests it began spreading in mid-November 2019.

So, what can be done? At present, the best approach is containment, but whether that is possible when it takes two weeks for symptoms to appear is another matter. If it works, in a few months everybody will wonder what the fuss was all about. If containment fails, it appears to be as contagious as the common cold, and who hasn’t had one of those? One calculation has suggested there could be up to fifty million deaths. Most would say that is unduly pessimistic, but is it? If there is any good news, it is that the number of reported cases in Wuhan has now fallen for about three days. We hope the decline is real and not a consequence of poor reporting.

For current patients, and those over the next year, we need something ready to go and fully approved for use. That suggests trying drugs with antiviral properties. At this point we do not know whether any will work, but the argument is that, for patients who already have the virus, it is preferable to attempt to do good. In Wuhan, they are already running a randomized controlled trial of two drugs that target the protease enzyme HIV uses to copy itself. These drugs apparently gave beneficial results against SARS, which is promising. The drug remdesivir, made by Gilead Pharmaceuticals, is another possibility. It interferes with the viral polymerase enzyme, it has shown activity against every coronavirus tested so far, and combined with interferon it slowed viral replication in MERS-infected mice. (MERS is another coronavirus.) Another US biotech, Regeneron, is trying to develop monoclonal antibodies; it previously developed antibodies that were effective against Ebola and MERS.

The next most obvious approach is to develop a vaccine, but historically no vaccine has ever been developed fast enough to have a significant impact on an emerging virus. Vaccines were traditionally based on injecting dead virus into the body to stimulate the immune system, but that is not the current approach. The Chinese got proceedings started by publishing the genetic code of the virus, truly impressive work given how quickly they did it. One approach is to convert viral sequences into messenger RNA, which causes the body to produce a viral protein that triggers immune responses. Another approach, at the University of Queensland, is to develop a vaccine made of viral proteins grown in cell cultures. Yet another is to make a string of RNA that corresponds to a section of the coronavirus. So there are a variety of approaches, and the question then is, will they work?

There is also the question, will they work fast enough? Suppose we developed one. It is inconceivable this could be done in less than three months, at which time there would need to be clinical trials. These would take several weeks, followed by a period of six months to determine whether there were any adverse effects, and then an extended period to examine whether the vaccine actually works; the net result is that it would take over a year at the very least to decide whether we had a working vaccine. Then it has to be manufactured. Yet a vaccine is our only defence if we cannot contain the virus and it becomes endemic. In the meantime, the scientific community is working: apparently at least 77 scientific papers have been made public since the outbreak was declared.

Forests versus Fossil Fuels – a Debate on Effectiveness

The use of biomass for fuel has been advocated as a means of reducing carbon dioxide emissions, but some have argued it does nothing of the sort. There was a recent article in Physics World that discusses this issue, and here is a summary. First, the logic behind the case is simple. The carbon in trees all comes from the air. When the plant dies, it rots, releasing energy to the rotting agents, and much of the carbon is released back into the air. Burning it merely intercepts that cycle and gives the use of the energy to us as opposed to the microbes. A thermal power station in North Yorkshire is now burning enough biomass to generate 12% of the UK’s renewable energy. The power station claims it has changed from being one of the largest CO2 emitters in Europe to supporting the largest decarbonization project in Europe. So what could be wrong? 

My first response is that, as anything other than a short-term fix, burning biomass in a thermal power station is wrong, because the biomass is more valuable for generating liquid fuels, for which there is no alternative. There are many alternative ways of generating electricity, and electricity demand is so high that alternatives are going to be needed. There is no obvious replacement for liquid fuels in air transport, although the technology to make such fuels is yet to be developed properly.

So, what can the critics carp about? There were two criticisms, both aimed at the assumptions behind the calculated savings: (a) that the CO2 released is immediately captured by growing plants, and (b) that the biomass would have rotted and put its carbon back into the atmosphere anyway. The first is patently wrong, but so what? The critics claim it takes time for the CO2 to be reabsorbed, and that depends on fresh forest, or regrowth of the current forest. So replanting is obviously important, but equally, reabsorption takes quite some time. According to the critics, it takes between 40 and 100 years, and because biomass is less energy-dense than coal, burning it actually increases CO2 emissions in the short term. The reabsorption requires new forest to replace the old.

The next counter-argument was that we should not count just the harvested block, but the landscape as a whole – if you only harvest 1% of the forest, the remaining 99% is busily absorbing carbon dioxide. The counter to that is that it would have been doing so anyway. The next objection is that older forests absorb carbon over a much longer period, and sequester more carbon, than younger stands. Further, the wood that rots in the soil feeds microbes that would otherwise eat their way through stored carbon in the soil. The problem is not so much that regrowth does not absorb carbon dioxide, but that it does not reabsorb it fast enough to be meaningful for climate change.

Let us consider the two options: we either burn the biomass or we do not. If we do, assume we replant the same area, and that fresh vegetation is sufficient to maintain the soil carbon. In year 1 we release x t of CO2. By year 40, say, it has all been reabsorbed, and we burn again, releasing another x t. By year 80 that too is reabsorbed, so we burn again. At any point there is a net x t of CO2 in the air. Had we burnt coal instead, in each of years 1, 40 and 80 we would have released kx t of CO2, giving us 3kx t in the air by year 80, where k is some number less than 1 to allow for the greater efficiency of burning coal. Within this scenario, the biofuel must eventually save CO2. That we could burn coal and plant fresh forests is irrelevant, because in the above scenario we only replace what was there; we can always plant fresh forest.
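The bookkeeping in that scenario can be put into a toy tally. The value k = 0.8 and the 40-year cycle are made-up illustrative numbers, not measured ones.

```python
# Toy tally of the burn-and-regrow scenario: biomass releases x t of CO2
# at each burn but regrowth reabsorbs it a full cycle later; coal releases
# k*x t per burn (k < 1, as coal is more energy-dense) and it stays put.
def net_co2(year, x=1.0, k=0.8, cycle=40):
    """Net tonnes of CO2 in the air at `year` under each option."""
    burns = [0, cycle, 2 * cycle]                     # burn years 0, 40, 80
    released = sum(x for b in burns if b <= year)
    reabsorbed = sum(x for b in burns if b + cycle <= year)
    biomass = released - reabsorbed                   # regrowth pulls CO2 back
    coal = sum(k * x for b in burns if b <= year)     # coal CO2 accumulates
    return biomass, coal

# by year 100: biomass leaves about 1.0 t in the air, coal about 2.4 t
print(net_co2(100))
```

The crossover is the point of the exercise: early on the biomass option looks worse (x in the air versus kx), but once the first regrowth cycle completes, coal's emissions keep accumulating while the biomass option plateaus at x.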

Planting more works in both options. This is a bit oversimplified, but it is meant to show that you have to integrate what happens over sufficient time to eliminate the effect of non-smoothness in the functions, and to count everything. In my example above it could be argued I do not know whether there will be a reduction in soil carbon, but if that is troublesome, at least we have focused attention on what we need to know. It is putting numbers on a closed system, even an idealized one, that shows the key facts in their proper light.

Scientific Journals Accused of Behaving Badly

I discussed peer review in my previous post, and immediately came across an article on “Predatory Publishing” (Science 367, p 129). It reports that six out of ten articles published in a sample of what it calls “predatory” journals received no citations, i.e. nobody published a further paper referring to the work in them. The only reasonable inference from what followed was that this work was very much worse than that published in more established journals. So, first, what are “predatory” journals? Are they inherently bad? Is the work described there seriously worse than in the established ones, or is this criticism more a defence of the elite positions of some? I must immediately state I don’t know, because the article gave no specific examples I could analyse, of either the science or the journals, although the chances are I would not have been able to read such journals anyway. There are so many journals out there that libraries are generally restricted by finance in what they purchase.

Which gets to the first response: maybe there are no citations because nobody is reading the articles, because libraries do not buy the journals. There can, of course, be other good reasons why a paper is not cited: the subject may be of very narrow interest, and the paper published simply to archive a fact. I have some papers that fit that description. For a while I had a contract to establish the chemical structures of polysaccharides from some New Zealand seaweeds and publish the results. If the end result is clearly correct, and if the polysaccharide is unique to a seaweed found only in New Zealand and has no immediate use, why would anyone reference it? One can argue that the work ended up being not that interesting, but before starting I did not know that; after completion I did, and by publishing, so will everyone else. If the results never have any use, well, at least we know why. From my point of view, they were useful: I had a contract and I fulfilled it. When you are independent and do not have such things as a secure salary, contracts are valuable.

The article defined a “predatory” journal as one that (a) charges to publish (page charges were well established in the mainstream journals); (b) uses aggressive marketing tactics (so do the mainstream journals); and (c) offers “little or no peer review” (I have no idea how they reached this conclusion, because peer review is not open to examination). As an aside, the marketing tactics of the big conglomerates are not pretty either, but they have the advantage of being established, and libraries cannot usually bring themselves to stop using them, as subscriptions are “all or nothing” bundles covering a lot of journals, at least one or two of them essential for a university.

The next criticism was that these upstarts were getting too much attention. And, horrors, 40% of the articles drew at least one citation. You can’t win against this sort of critic: it is bad because articles are not cited, and bad because they are. I find citations a poor indication of importance. Many scientists in the West cite their friends frequently, irrespective of whether the article cited has any relevance, because they know nobody checks, and the number of citations is important in the West for getting grants. You cite them, they cite you, everybody wins, except those not in the loop. It is a self-help game.

The next criticism is that there are too many of these journals. Actually, the same could be said of mainstream journals; take a look at the number of journals from Elsevier. Even worse, many of the upstarts come from Africa and Asia. How dare they challenge our established superiority! Another criticism: the articles are not cited in Wikipedia. As if citations in Wikipedia were important. So why do scientists in Africa and Asia publish in such journals? The article suggests an answer: publication is faster. Hmm, fancy going for better performance! If that is a problem, the answer would surely be to fix the “approved” journals, but that is not going to happen any time soon. Also, from the Africans’ perspective, their papers may well be more likely to be rejected in peer review by Western journals because they are not using the most modern equipment, in part because they cannot afford it. The work may be less interesting to Western eyes, but is that relevant if it is interesting in Africa? I can’t help but think this article was more a sign of “protecting their turf” than of trying to improve the situation.

Peer Review – a Flawed Process for Science

Back from a Christmas break, and I hope all my readers had a happy and satisfying break. 2020 has arrived, more with a bang than a whimper, but while languishing in what has started off as a dreadful summer here, thanks to Australia (the heat over Central Australia has stirred up the Southern Ocean to give us cold air, while their bush fires have given us smoky air, even though we are about 2/3 the width of the Atlantic away) I have been thinking of how science progresses, or doesn’t. One of the thoughts that crossed my mind was the assertion that we must believe climate change is real because the data are published in peer-reviewed journals. Climate change is certainly real, Australia is certainly on fire, but what do you make of the reference to peer-reviewed journals? Does such publication mean it is right, and that peer review is some sort of gold standard?

Unfortunately, that is not necessarily so, and while the process filters out some abysmal rubbish it also lets through some fairly mediocre stuff, although we can live with that. If the work reports experimental observations we should have more faith in it, right? After all, it will have been looked at by experts in the field who use the same techniques, and they will filter out errors. There are two reasons why that is not so. The first is that the modern scientific paper, written to save space, usually gives insufficient evidence to tell. The second is illustrated by climate change: there are a few outlets populated solely by deniers, in which one denier reviews another’s work; in other words, prejudice rules.

Chemistry World reported a study carried out by the Royal Society of Chemistry that reviewed the performance of peer review and came to the conclusion that peer review is sexist. Female corresponding authors made up 23.9% of submissions, but 25.6% of the rejections without peer review, and only 22.9% of the papers accepted after peer review. Female corresponding authors are less likely to receive an immediate “accept” or “accept with minor revisions”, but interestingly, if the reviewer is female, males are less likely to receive that. These figures come from 700,000 submissions, so although the percentage differences are not very big, the question remains: are they meaningful, and if so, what do they mean?
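With samples that large, even small percentage differences are unlikely to be statistical noise. Here is a back-of-envelope check; since the article gives only shares, the rejection count used is a made-up assumption (200,000 desk rejections out of the 700,000 submissions).

```python
# Rough z-statistic: is the female share of desk rejections (25.6%)
# consistent with the female share of submissions (23.9%)?
# The rejection count n is a hypothetical figure for illustration.
from math import sqrt

def z_score(observed_share, expected_share, n):
    """Z-statistic for an observed proportion against an expected one."""
    se = sqrt(expected_share * (1 - expected_share) / n)
    return (observed_share - expected_share) / se

z = z_score(0.256, 0.239, 200_000)  # comes out well above 3
```

Even if the true rejection count were a tenth of that guess, the z-statistic would stay far above conventional significance thresholds, so the interesting question is what the difference means, not whether it exists.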

There is a danger in drawing conclusions from statistics, because correlation does not imply causation. It may be nothing more than that women are more likely to be younger, and hence, being early in their careers, more likely to need help, or more likely to have sent the paper to a less than appropriate journal, since journals tend to publish only in very narrow fields. It could also indicate that style is more important than substance, because the only conceivable difference with a gender bias is the style of presentation. It would be of greater interest to check how status affects the decision. Is a paper from Harvard, say, more likely to be accepted than a paper from a minor college, or from somewhere non-academic, such as a patent office?

One of my post-doc supervisors once advised me that a new idea will not get published, but publication is straightforward if you report the melting point of a new compound. Maybe he was a little bitter, but it raises the question: does peer review filter out critical material because it does not conform to the required degree of comfort and compliance with standard thinking? Is important material rejected simply because of the prejudices or incompetence of the reviewer? What happens if the reviewer is not a true peer? Irrespective of what the editor tells the author, is a paper that criticizes the current paradigm rejected on that ground alone? I have had some rather pathetic experiences, and I expect a number of others have too, but the counter to that is, maybe the papers had insufficient merit. That is the simple out; after all, who am I?

Accordingly, I shall end by citing someone else. This relates to an essay about spacetime, which at minimum is a useful trick for solving the equations of General Relativity. However, for some people spacetime is actually a “thing”: you hear about the “fabric of spacetime”, and in an attempt to quantize it, scientists have postulated that it exists in tiny lumps. In 1952 an essay was written against the prevailing view that spacetime is filled with fields that froth with “virtual particles”. I don’t know whether it was right or not, because nobody would publish it, so it is not to be discussed in polite scientific society. It was probably rejected because it went totally against the prevailing view, and we must not challenge that. And no, it was not written by an ignorant fool, although it should have been judged on content and not on the status of its author: the author was Albert Einstein, who could legitimately argue that he knew a thing or two about General Relativity. Nobody is immune to such rejection. If you want to see such prejudice in action, try arguing that quantum field theory is flawed in front of an advocate. You would be sent to the corner wearing a conical hat. The advocate will argue that the theory has calculated the magnetic moment of the electron, the most accurate calculation in physics. The counter is yes, but only through some rather questionable mathematics (like cancelling out infinities), while the same theory’s estimate of the vacuum energy gives an error in the cosmological constant of about 120 orders of magnitude (a factor of one followed by 120 zeros), the worst error in all of physics. Oops!