Liar Liar!

Do you think you can detect liars? If so, according to a Nature article (https://doi.org/10.1038/s41562-023-01556-2) you are most probably mistaken. A meta-analysis covering 24,483 people who attempted to pick truth from lie found discrimination only about 4% better than random guessing. Not exactly a stunning achievement. You will read about cues such as liars avoiding eye contact; in practice they do not. Liars have heard about that cue! Worse, truthful people sometimes avoid eye contact too. If you rely on behavioural cues, you find there is about a 96% overlap in these behavioural variables between truth-tellers and liars.

A current approach is to combine many cues, and the article reports that airport security personnel were trained to handle 92 cues. One problem is that most of such a large number will be weak cues, but more significantly, who can do a rapid analysis of 92 variables in their head? Back to guessing, except those involved will have a very fancy name for the procedure that leads to the guessing.

The problem is that we need to end up with a binary judgement, and this happens in many situations. A jury must decide on guilt or innocence; in a job interview someone must hire or reject. The action is discrete: twenty-seven ifs and buts have to be set aside, and your decision should be better than the toss of a coin. So how are such decisions reached? The usual way of dealing with too much information is to ignore most of it, that is, to select a very few cues. If we want to do that for lie detection, we need to know the best possible cues.

In some experimental tests, when subjects could use any cue they liked, the accuracy was about 50%, which is what they would get from random guessing. However, when the same subjects were asked to make their decision based on richness of detail, success rose to about 66%; in other words, two-thirds were correct. A second cue was whether the statement could in principle be verified; accuracy then rose to about 70%, but note that no verification actually took place. The key question was: could it be verified? Verification was not actually carried out because in the tests decisions had to be made more or less on the spot, but one could still ask what an observer would see if someone went to inspect the situation.

An experiment was done in which participants were asked to base their decision on a single cue (detailedness) or on multiple cues (detailedness, affect, unexpected complications, admissions of lack of memory). The success rate with a single cue was 59%, while with multiple cues it was 54%. More information in the decision-making process led to worse decisions. Of course, you might well think this difference is not very great, and there may have been something in the material that biased the results, so the conclusion may be only marginally significant. Another point is that some of the additional cues were not very relevant. A memory lapse might be an excuse for not having thought up the lie fully, but it can also be genuine. Who recalls all the fine details of having seen something when there was no reason at the time to think it was particularly important?
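For what it is worth, a toy simulation of mine suggests why piling on cues can hurt: one moderately diagnostic cue gets diluted by several near-useless ones. The cue validities and thresholds below are invented purely for illustration and are not taken from the study.

```python
import random

random.seed(1)

def trial():
    # One statement: True means truthful. "detail" is a moderately diagnostic cue;
    # the other three cues are essentially noise. All validities are invented
    # for illustration; they are NOT the study's data.
    truthful = random.random() < 0.5
    detail = random.gauss(0.6 if truthful else 0.0, 1.0)
    noise_cues = [random.gauss(0.0, 1.0) for _ in range(3)]
    return truthful, detail, noise_cues

def accuracy(rule, n=100_000):
    hits = 0
    for _ in range(n):
        truthful, detail, noise = trial()
        hits += (rule(detail, noise) == truthful)
    return hits / n

single_cue = lambda detail, noise: detail > 0.3                      # use the one good cue
many_cues = lambda detail, noise: (detail + sum(noise)) / 4 > 0.075  # equal-weight average

print("single cue accuracy:", accuracy(single_cue))   # roughly 0.62
print("many cues accuracy: ", accuracy(many_cues))    # roughly 0.56
```

The numbers are made up, but the pattern, a single reasonable cue beating an equal-weight average that includes junk, mirrors the 59% versus 54% result.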

There is another problem with this sort of study, which was acknowledged by the authors. Subjects were instructed to lie or to tell the truth. This may have led to deliberate “false cues”, and even if it did not, there was nothing at stake for the liar. Knowing there is a serious price to pay if caught out, real liars may be more prepared to embellish their lies and give additional details. Second, the success of this approach was demonstrated only where statements were about episodic memory and truth-tellers were both willing and able to provide specific details. In real situations, a truth-teller may not remember many details, while a liar may produce more.

So, to summarize, those TV shows where there is someone who can pick a lie with infallible accuracy are, well, telling the audience porkies.


Science and Society

An interesting question is the extent to which a person’s loyalties should lie with society. Suppose you own a business in a mall, and you happen to know that right now business is going well and lots of electronic transactions are going into your account. You are making money hand over fist, but by some means you learn there is a bomb somewhere in the mall, although you are reasonably confident that nobody on your premises will be seriously injured when it goes off. Do you immediately clear the mall, or do you keep quiet for a while and let the money keep flowing? You might think the answer to that is obvious, but is it?

Now let us shift the problem down the chain of responsibility. An employee of the business believes there is a gas leak at the other end of the mall. Again, the business he works for will be safe, but he suspects that if he stops the flow of money he may well be fired. What does he do? Now let us shift the imminence. You have some engineering knowledge and you are employed by the owner of the Christchurch TV building after the first earthquake, and you notice some of the floors are not level. You look up the architectural plans and you see the floors are arguably not properly anchored to the walls. Do you blow the whistle and get everyone out, at the risk of being fired, or let things stay as they are, on the basis that the building is safe enough now; after all, it withstood a serious earthquake? Finally, in each case, suppose you are the owner and a worker brings the information to you. What do you do?

With your answers to those questions safely established, now consider that, according to Chemistry World (Feb. 2023), in 1977 the scientists at ExxonMobil had completed some extremely capable climate modelling and had reached conclusions that few others reached until the start of the new millennium. A study of ExxonMobil’s internal reports showed clearly the effects of mankind’s burning of fossil fuels, and these could easily be separated from natural effects. Publicly, ExxonMobil kept insisting the science was too uncertain to know when, or if, human-caused global warming might be measurable. Its own scientists predicted 2000 ± 5 years. They just made it: the IPCC declared the warming measurable in 1995. Thus armed with excellent scientific evidence, Chief Executives Raymond and Tillerson insisted there was a “high degree of uncertainty” in climate models. Tillerson was CEO from 2006 to 2017, by which time the evidence was in that the predictions were accurate and, if anything, under-estimated the adverse effects. Further, ExxonMobil had a long history of funding third parties to make misleading claims on climate change. Armed with that experience, Tillerson subsequently became Secretary of State under Donald Trump. Had ExxonMobil spent such funds on promoting research to find answers to the problems of climate change, we would be much better off. Interestingly, ExxonMobil’s defence regarding the misleading statements is that free speech is protected under the Constitution. Of course, “free speech” that is demonstrably wrong and made to gain money is also known as fraud, and that is less well protected.

What particularly annoys me is that part of the solution may lie with biofuels. After Cyclone Gabrielle, a huge amount of forestry waste was washed down into rivers and has become a major problem. That waste could be turned into liquid fuel by hydrothermal liquefaction, as could the organic fraction of municipal waste. It is possible to do this in a variety of ways; I have done it in the lab, and a demonstration plant was once proposed in the US but was abandoned after the price of oil slumped. And we know the Bergius process converted lignite to liquid fuels, with Germany making over a million tonnes this way through 1944, despite the plants being bombed. ExxonMobil had the internal skills to address the engineering issues and could have made this a practical option by now. It would not solve everything, but it would have used their experience to provide a partial solution.

Switching slightly, some readers may recall my fascination with aluminium/chlorine batteries. The reason is that aluminium is easily available and in principle offers a large capacity, because while one atom of lithium provides work from one electron, aluminium provides three. I suggested fuel cells based on this chemistry in my e-novel Red Gold because aluminium and chlorine would be readily available on Mars. Now, a novel battery has been claimed (Angew. Chem. Int. Ed. 2023, 62, e202216797) with a phenoxazine cathode (no cobalt!). A gram is claimed to have a capacity of 133 mAh at a current of 0.2 A, and to have run for 50,000 cycles with no loss. If these can be manufactured with such performance, reasonably priced electric vehicles for general use become a possibility.
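To put the one-electron-versus-three point in rough numbers, here is a back-of-envelope Faraday’s-law calculation of my own (not from the paper, and it measures something different from the 133 mAh per gram claim above, which refers to the cell material rather than the bare metal):

```python
# Theoretical specific capacity of the bare metal electrode,
# Q = n * F / (3.6 * M) in mAh per gram, where n = electrons per atom,
# F = Faraday constant (C/mol) and M = molar mass (g/mol).
# My own sketch; these are not figures from the cited paper.
F = 96485.0

def capacity_mAh_per_g(n_electrons, molar_mass_g_mol):
    return n_electrons * F / (3.6 * molar_mass_g_mol)

print(f"Li (1 electron):  {capacity_mAh_per_g(1, 6.94):.0f} mAh/g")   # ~3860 mAh/g
print(f"Al (3 electrons): {capacity_mAh_per_g(3, 26.98):.0f} mAh/g")  # ~2980 mAh/g
# Per gram the two metals are comparable, but per unit volume aluminium wins
# heavily (density 2.70 g/cm3 versus 0.53 g/cm3), and it is far cheaper.
```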

Economic Consequences of the Ukraine War

My last post mentioned the collapse of the USSR. One of the longer-term consequences has been this Ukraine war. Currently there have been problems with shelling of the Zaporizhzhia nuclear plant, and shelling has clearly happened: our TV news has shown some of the smashed concrete, etc. The net result is the plant has shut down. Each side accuses the other of doing the shelling, but it seems to me that it had to be the Ukrainians. Russia has troops there, and no military command is going to put up with its own side shelling its own troops. However, that is far from the total bad news. So far Ukraine has been terribly lucky, but such luck cannot last indefinitely. There are consequences outside the actual war itself. The following is a summary of some of what was listed in the August edition of Chemistry World.

The Donbas area is Ukraine’s heavy industry area, and this includes the chemical industry. Recently, Russian air strikes at Sievierodonetsk hit a nitric acid plant, and we saw images of nitrogen dioxide gas spewing into the atmosphere.

Apparently, in 2017 Ukrainian shelling landed around a chemical plant that contained 7 tonnes of chlorine. Had a shell hit a critical tank, that would have been rather awkward. Right now, in the eastern Donbas there is a pipeline almost 700 km long that carries ammonia. Approximately 1.5 million people are in danger from that pipeline should it burst; exactly how many depends on where it is broken. There are also just under 200,000 t of hazardous waste stored in various places. The question now is, with all this mess generated, in addition to demolished buildings and infrastructure, who will pay what to clean it up? It may or may not be fine for Western countries to use their taxes to produce weapons to give to Ukraine, but cleaning up the mess requires the money to go to Ukraine, not to armament-making corporations at home.

The separation of the Donbas has led to many mines being closed, and these have filled with water. This has allowed mercury and sulphuric acid to be leached out and enter the water table. During 2019 a survey of industrial waste was made, and Ukraine apparently stores over 5.4 billion t of industrial waste, about half of which is in the Donbas. Ukraine has presumably inherited a certain amount of this, together with some of the attitudes, from the old Soviet Union, and from my experience environmental protection was not the Soviet Union’s strong point. I recall one very warm sunny morning going for a walk around Tashkent. I turned a corner and saw rather a lot of rusty buildings and also, unbelievably, a cloud. How could water droplets form in such a warm, dry climate? The answer was fairly clear when I got closer. One slight whiff and I knew what it was: the building was emitting hydrogen chloride into the atmosphere, and the hydrochloric acid droplets were the cloud, and the reason for the rust.

Meanwhile, some more glum news. We all know that the sanctions in response to the Ukraine war have led to a gas shortage. What most people will not realize is what this is doing to the chemical industry. The problem is that, unlike most other industries, the chemical industry is extremely entangled and interlinked. A given company may make a very large amount of chemical A, which is then sold as a raw material to a number of other companies, who in turn may do the same thing. Many different factories depend on the same raw chemical, and the material in a given product available to the public may have gone through several different steps in several different factories.

An important raw mixture is synthesis gas, which is a mix of carbon monoxide and hydrogen. The hydrogen may be separated and used in steps to make a variety of chemicals, such as ammonia, the base chemical for just about all nitrogen fertilizer, as well as having a number of other uses. The synthesis gas is made by heating a mixture of methane gas and steam (the basic reactions are sketched below). Further, almost all chemical processing requires heat, and by far the bulk of that heat is produced by burning gas. In Europe, the German government is asking people to cut back on gas usage. Domestic heating can cope simply by lowering the temperature, although how far down one is prepared to go during winter is another question. However, the chemical industry is not so easily handled. Many factories run multiple streams, and it is a simple matter to shut down one such stream, but you cannot easily reduce the amount going through a stream because the reactions are highly dependent on pressure, and the plant is in a delicate balance between the amount processed and the heat generated. A production unit is really only designed to operate one way: continuously, at a specific production rate. If you close it down, it may take a week to get it started again, to get the temperature gradients right. One possibility is the complete shutdown of the BASF plant at Ludwigshafen, the biggest chemical complex in the world. The German chemical industry uses about 135 TWh of gas, or about 15% of the total used in the country. The price of such gas has risen by up to a factor of eight since Russia was sanctioned, and more price rises are likely. That means companies have to try to pass on costs, but if they face international competition, that may not be possible. This war has consequences far beyond Ukraine.
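For anyone wanting the chemistry spelled out, the underlying steps are textbook steam reforming followed by ammonia synthesis (standard stoichiometry, not taken from the Chemistry World piece):

```latex
\mathrm{CH_4 + H_2O \;\rightarrow\; CO + 3\,H_2}
  \qquad \text{(steam reforming: strongly endothermic, nickel catalyst, roughly } 800\text{ to }900\,^{\circ}\mathrm{C}\text{)}
\\[4pt]
\mathrm{N_2 + 3\,H_2 \;\rightarrow\; 2\,NH_3}
  \qquad \text{(ammonia synthesis: iron catalyst, high pressure)}
```

The point for the gas crisis is that methane appears twice, as the feedstock that supplies the hydrogen and as the fuel that supplies the reforming heat, which is why ammonia, and hence fertilizer, prices track the gas price so closely.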

Betelgeuse Dimmed

First, I apologize for the initial bizarre appearance of my last post. For some reason, some computer decided to slice and dice. I have no idea why, or for that matter, how. Hopefully, this post will have better luck.

Some will recall that around October 2019 the red supergiant Betelgeuse dimmed, specifically from magnitude +0.5 down to +1.64. As a variable star its brightness oscillates, but it had never dimmed like this before, at least within our records. This generated a certain degree of nervousness, or excitement, because significant dimming is probably part of what happens before a supernova. There has been no nearby supernova since the one that produced the Crab Nebula in 1054 AD.

To put the star into perspective, if Betelgeuse replaced the sun it would swallow Mars, and its photosphere might approach the orbit of Jupiter. Its mass is estimated at somewhere between ten and twenty times the mass of the sun. Such a range sparks my interest because when I pointed out that my proposed dependence of characteristic planetary orbital semimajor axes on the cube of the mass of the star ran into trouble because stellar masses were not that well known, I was criticised by an astronomer: the masses, I was told, were known to within a few percent. The difference between ten times the sun’s mass and twenty times is rather more than a few percent. This is a characteristic of science: stellar masses can be measured fairly accurately in double-star systems, and the results are then “carried over” to stars that cannot be checked so directly.

But back to Betelgeuse. Our best guess as to its distance is between 500 and 600 light years. Interestingly, we have observed its photosphere, the outer “shell” of the star that is, at least to a degree, transparent to photons, and it is non-spherical, presumably due to stellar pulsations that send matter out from the star. The star may seem “stable” but its surface (whatever that means) is extremely turbulent. It is also surrounded by something we could call an atmosphere, an envelope of matter about 250 times the size of the star. We don’t really know its size because these asymmetric pulsations can add several astronomical units (the Earth-sun distance) in selected directions.

Anyway, back to the dimming. Two rival theories were produced: one involved the development of a large cooler cell that came to the surface and was dimmer than the rest of Betelgeuse’s surface. The other was the partial obscuring of the star by a dust cloud. Neither proposition really explained the dimming, nor did they explain why Betelgeuse was back to normal by the end of February, 2020. Rather unsurprisingly, the next proposition was that the dimming was caused by both of those effects.

Perhaps the biggest problem was that telescopes could only look at the star some of the time; however, a Japanese weather satellite ended up providing just the data that was needed, somewhat inadvertently. The satellite was in geostationary orbit 35,786 km above the western Pacific. It was always looking at half of the Earth, and always the same half, so the background was also always the same, and in that background was Betelgeuse. The satellite revealed that the star overall cooled by about 140 degrees Celsius. This was sufficient to reduce the heating of a nearby gas cloud, and when that cloud cooled, dust condensed out and obscured part of the star. So both theories were right, and even more strangely, both contributed roughly equally to what was called “the Great Dimming”.
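As an aside, that altitude is not arbitrary: it is the one at which the orbital period equals one sidereal day, so the satellite stays over the same longitude. A quick back-of-envelope check of my own, using standard constants:

```python
import math

# Geostationary altitude from Kepler's third law: a^3 = GM * T^2 / (4*pi^2).
GM_EARTH = 3.986004418e14   # m^3/s^2
T_SIDEREAL_DAY = 86164.1    # s, one rotation of the Earth
R_EARTH_EQ = 6378.1e3       # m, equatorial radius

a = (GM_EARTH * T_SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"geostationary altitude: {(a - R_EARTH_EQ) / 1e3:,.0f} km")   # ~35,786 km
```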

It also turned out that more was happening to the atmospheric structure of the star before the dimming. By looking at the infrared lines, it became apparent that water molecules in the upper atmosphere, which would normally produce absorption lines in the star’s spectrum, suddenly switched to producing emission lines. Something had made them unexpectedly hotter. The current thinking is that a shock-wave from the interior propelled a lot of gas outwards from the star, leading to a cooler surface while heating the outer atmosphere. That is regarded as the best current explanation. It is possible there was a similar dimming event in the 1940s, but otherwise we have not noticed one, although earlier events may simply have escaped our less accurate measurements. People may not want to get carried away with “I think it might be dimmer.” Anyway, for the present, no supernova. But one will occur, probably within the next 100,000 years. Keep looking upwards!

Scientists Behaving Badly

You may think that science is a noble activity carried out by dedicated souls thinking only of the search for understanding and of improving the lot of society. Wrong! According to an item published in Nature (https://doi.org/10.1038/d41586-021-02035-2) there is rot in the core. A survey was sent to about 64,000 researchers at 22 universities in the Netherlands; 6,813 actually filled out the form and returned it, and an estimated 8% of those who responded to the anonymous survey confessed to falsifying or fabricating data at least once between 2017 and 2020. Given that a fraudster is less likely to confess, that figure is probably a clear underestimate.

There is worse. More than half of respondents also reported frequently engaging in “questionable research practices”. These include using inadequate research designs, which can be due to poor funding and hence is more understandable, and frankly could be a matter of opinion. On the other hand, if you confess to doing it you are at best slothful. Much worse, in my opinion, was deliberately judging manuscripts or funding applications unfairly while peer reviewing. Questionable research practices are “considered lesser evils” than outright research misconduct, which includes plagiarism and data fabrication. I am not so sure of that. Dismissing someone else’s work or funding application hurts their career.

There was then the question of “sloppy work”, which included failing to “preregister experimental protocols (43%), make underlying data available (47%) or keep comprehensive research records (56%)”. I might be in danger here. I had never heard of “preregistering protocols”. I suspect that is more for medical research than for the physical sciences. My research has always been of the sort where you plan the next step based on the last step you have taken. As for “comprehensive records”, I must admit my lab books have always been cryptic. My plan was to write things down, and as long as I could understand them, that was fine. Of course, I have worked independently, and my records existed so I could report more fully, and to some extent for legal reasons.

If you think that is bad, there is worse in medicine. On July 5 an item appeared in the British Medical Journal with the title “Time to assume that health research is fraudulent until proven otherwise?” One example: a professor of epidemiology apparently published a review paper that included a paper showing that mannitol halved the death rate from comparable injuries. It was pointed out to him that the paper he reviewed was based on clinical trials that never happened! All the trials came from a lead author who “came from an institution” that never existed! There were a number of co-authors, but none had ever contributed patients, and many did not even know they were co-authors. Interestingly, none of the papers had been retracted, so the fake results are still out there.

Another person who carried out systematic reviews eventually realized that only too many related to “zombie trials”. This is serious because it is only by reviewing a lot of different work that more important over-arching conclusions can be drawn, and if a reasonable percentage of the data is just plain rubbish, everyone can jump to the wrong conclusions. Another medical expert, attached to the journal Anaesthesia, found that of 526 trials, 14% had false data and 8% were categorised as zombie trials. Remember, if you are ever operated on, anaesthetics are your first hurdle! One expert has guessed that 20% of clinical trials as reported are false.

So why doesn’t peer review catch this? The problem for a reviewer such as myself is that when someone reports numbers representing measurements, you naturally assume they were the results of measurements. I look to see that they “make sense”, and if they do, there is no reason to suspect them. Further, rejecting a paper by accusing it of fraud is very serious for the author’s career, so who will do that without some sort of evidence?

And why do they do it? That is easier to understand: money and reputation. You need papers to get research funding and to keep your position as a scientist. Fraud is very hard to detect unless someone repeats your work, and even then there is the question, did they truly repeat it? We tend to trust each other, as we should be able to. Published results get rewards, publishers make money, universities get glamour (unless they get caught out). Proving fraud (as opposed to suspecting it) is a skilled, complicated and time-consuming process, and since it reflects badly on institutions and publishers, they are hardly enthusiastic. Evil peer review, i.e. dumping someone’s work to promote your own, is simply strategic, and nobody will do anything about it.

It is, apparently, not a case of “bad apples”, but as the BMJ article states, a case of rotten forests and orchards. As usual, as to why, follow the money.

Neanderthals: skilled or unskilled?

Recently, you may have seen images of a rather odd-looking bone carving, made 50,000 years ago by Neanderthals. One of the curious things about Neanderthals is that they have been portrayed as brutes, a sort of dead-end in the line of human evolution, probably wiped out by our ancestors. However, this is somewhat unfair, for several reasons, one of which is this bone carving. It involved technology: apparently the bone was scraped, then boiled or given some equivalent heat treatment. Then two sets of three parallel lines, the sets normal to each other, were carved on it. What does this tell us? First, it appears they had abstract art, but a more interesting question is, did it mean anything more? We shall probably never know.

One thing that has led to the “brute” concept is that they did not leave many artifacts, and those they did leave were stone tools that, compared with those of our later ancestors, appear rather crude. But is that assessment fair? The refinement of a stone tool probably depends on the type of stone available. The Neanderthals lived more or less during an ice age, and while not everything was covered with glaciers, the glaciers would have inhibited trade. People had to use what was available. How many of you live in a place where high quality flint for knapping is available? Where I live, the most common rocks available are greywacke, basalt, and maybe some diorite, granodiorite or gabbro. You try making fine stone tools with these raw materials.

Another point, of course, is that while they lived in the “stone age”, most of their tools would actually have been made of wood, with limited use of bone, antler and ivory. Stone tools were made because stone was the toughest material they could find and it could hold a sharp, useful cutting edge. Most of the wooden items will have long since rotted, which is unfortunate, but some isolated items remain, including roughly 40 pieces of modified boxwood, interpreted as digging sticks, preserved in mudstone in Central Italy. These are 170,000 years old. Even older are nine well-preserved wooden spears from a coal mine at Schöningen, dating to 300,000 years ago. Making these would involve selecting and cutting a useful piece of spruce, shaping a handle, removing the bark (assumed to be done with fire), smoothing the handle with an abrasive stone, and sharpening the point, again with an abrasive stone.

Even more technically advanced, stone objects were apparently attached to wooden handles with a binding agent. The wooden parts have long rotted, but the production can be inferred from the traces of hafting wear and of adhesive material on the stones. Thus Neanderthals made stone-tipped wooden spears and hafted cutting and scraping tools, and they employed a variety of adhesives. They made two different classes of artifacts, each comprising at least three components, which means they were making objects more complex than those of some recent hunter-gatherers. There is a further point. The items require a number of steps to make, and the steps require quite different skills. The better tools would be made more quickly if different people made the various components, but that would require organization, and ensuring each knew what the others were doing. That involves language. We have also found a pit containing many bones, together with tools for cutting meat from them, presumably a butchery where the results of a successful hunt were processed. That involves sharing the work, and presumably the yield.

We have found graves. They must have endured pain, because the skeletons almost invariably show signs of at least one fracture that healed, and to survive such injuries they must have had others care for them. Also found have been sharpened pieces of manganese dioxide, which is soft but very black. Presumably these were crayons, which implies decorating something, the somethings long rotted away. There are Neanderthal cave paintings in Spain. There was also jewellery, which largely involved shells and animals’ teeth with holes cut into them. Some shells were pigmented, which means decoration. Which raises the question: could you cut a hole in a tooth with the only available tools being what you could make from stone, bone, or whatever is locally available naturally? Finally, there are the “what were they?” artifacts. One is the so-called Neanderthal flute, a 43,000 to 60,000-year-old bear femur with four holes drilled in it. The spacings do not match any carnivore’s tooth spacing, but they do match a musical scale, and, as an aside, indicate the use of a minor scale. There is also one carving of a pregnant woman attributed to them. These guys were cleverer than we give them credit for.

New Ebook

Now on preorder, and available from July 15 at Amazon and most outlets that sell .epub, Spoliation.

When a trial to cover-up a corporate failure ends Captain Jonas Stryker’s career, he wants revenge against The Board, a ruthless, shadowy organization with limitless funds that employs space piracy, terrorism, and even weaponised asteroids. Posing as a space miner, Stryker learns that The Board wants him killed, while a young female SCIB police agent wants retribution against him for having her career spoiled at his trial. As Stryker avoids attempts to kill him, he becomes the only chance to prevent The Board from overturning the Federation Government and imposing a Fascist-style rule.

A story of greed, corruption and honour, combining science and visionary speculation that goes from the high frontier to outback Australia.

The complications involved in processing small asteroids mean they have to be moved to a central point. The background to this novel shows the science behind that, and also how to convert an asteroid into a weapon. You know what happened to the dinosaurs, so the weapon has punch.

Preorder at:

Amazon:        https://www.amazon.com/dp/B097M95LCJ

Smashwords:    https://www.smashwords.com/books/view/1090447

B&N:           https://www.barnesandnoble.com/s/2940164941673

Apple:         https://books.apple.com/us/book/x/id1574442266

Kobo:          https://store.kobobooks.com/en-us/Search?Query=9781005532796

How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question, how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won’t be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments and what you get are a series of meter readings, or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on your speedometer in your car represents miles per hour or kilometers per hour, depending on where you live. That is understandable, but that is not what is measured. What is usually measured is actually something like the frequency of wheel revolutions. So the revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.
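To make the “built-in theory” point concrete, here is a minimal sketch of that conversion, with a wheel size and revolution count invented purely for illustration:

```python
import math

# What a speedometer effectively does: count wheel revolutions over a time
# window, then apply built-in "theory" (the assumed rolling circumference)
# to turn counts into a speed. All numbers here are made up.
wheel_diameter_m = 0.65    # assumed; tyre wear and pressure change this
revs_counted = 500         # revolutions seen in the sampling window
window_s = 60.0            # sampling window length in seconds

speed_m_per_s = revs_counted * math.pi * wheel_diameter_m / window_s
print(f"displayed speed: {speed_m_per_s * 3.6:.1f} km/h")   # ~61 km/h here
```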

Measuring the rate of expansion of the universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged over many galaxies because they have random motions superimposed, and in some cases regular motions if they are in clusters. The velocity at which they are moving apart is simply the change of distance divided by the change of time. Measuring time is fine, but measuring distance is a little more difficult. You cannot use a ruler. So some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Thus, using different techniques to estimate cosmic distances, such as the average brightness of stars in giant elliptical galaxies, Type Ia supernovae, and one or two others, it can be asserted that the Universe is expanding at 73.5 ± 1.4 kilometers per second for every megaparsec. A megaparsec is about 3.3 million light years, or about thirty billion billion kilometers.
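As an example of the “brightness to measure distance” step: for a Type Ia supernova one assumes a standard peak absolute magnitude and applies the distance modulus. A minimal sketch, with the observed magnitude invented for illustration:

```python
# Distance from the distance modulus, m - M = 5*log10(d / 10 pc),
# i.e. d = 10**((m - M + 5) / 5) parsecs. The absolute magnitude is the
# "standard candle" assumption; the observed magnitude here is made up.
def distance_pc(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M_TYPE_IA = -19.3     # typical assumed peak absolute magnitude of a Type Ia
m_observed = 16.7     # hypothetical observed peak apparent magnitude

d_mpc = distance_pc(m_observed, M_TYPE_IA) / 1e6
print(f"distance: about {d_mpc:.0f} Mpc")   # ~158 Mpc for these numbers
```

If the assumed absolute magnitude is even slightly off, every distance built on it shifts, which is exactly the kind of buried theory the paragraph above is warning about.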

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and know what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/s/Mpc. Oops. Two routes, both claiming high accuracy, yet the error ranges do not overlap, and hence we have two disjoint sets of answers.
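One way to feel the size of the disagreement is to invert each value into a naive “Hubble time” (my own sketch; it ignores how the expansion rate has changed over cosmic history, so it is only indicative):

```python
# Turn each Hubble constant into a naive "Hubble time", 1/H0, the expansion
# age you would get if the rate had never changed. Indicative only.
KM_PER_MPC = 3.086e19
SECONDS_PER_GYR = 3.156e16

def hubble_time_gyr(H0_km_s_per_Mpc):
    return (KM_PER_MPC / H0_km_s_per_Mpc) / SECONDS_PER_GYR

for H0 in (73.5, 67.4):
    print(f"H0 = {H0} km/s/Mpc  ->  1/H0 = {hubble_time_gyr(H0):.1f} Gyr")
# Prints roughly 13.3 Gyr and 14.5 Gyr: more than a billion years apart.
```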

So what is the answer? The simplest approach is to use an entirely different method again and hope this resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb Space Telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances. Do you see a possible problem? All such results also depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distances to the nearby elliptical galaxies accurately, but much more problematic is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. In earlier times, since the metals in stars came from supernovae, the very earliest stars will have had much less metal, so the “colour” of their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, maybe the size distribution of the red dwarfs will be different. There are many traps. Accordingly, the most likely reason for the discrepancy is that the theory used is slightly wrong somewhere along the chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message from this is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.

2020 and all that

Since the year is almost over, I thought I would give a small review of the year from my point of view. The year started with nice warm weather and rather remarkable sunsets. Australia had some terrible bushfires. Still, all was well where I live. NZ had some fires as well, although nothing like the Australian ones.

My daughter-in-law is Chinese, and her parents live near the edge of Hunan province, but her father travels to work in a factory in adjacent Hubei province, and in February that got locked down. I am not quite sure exactly what happened, but her father could not return home for nearly a week. That is putting in overtime! When Tian announced that Wuhan would build a thousand-bed hospital in ten days, I did not believe her, but they did, in the middle of a lockdown. The Chinese lockdown was interesting. Soldiers from the PLA would put paper tape over everyone’s doors. If you wanted groceries, a soldier would take away the tape, you would go and collect them, then the tape would be replaced. Break the tape and be naughty, and it was an automatic six months in a Chinese jail, with no time off for good behaviour; good behaviour is simply required. If your naughtiness could reasonably, in the eyes of the Party, have led to someone else getting the virus, five years. The Chinese behaved, and by all accounts the virus was essentially eliminated and life returned to normal in China in a couple of months, other than the odd outbreak from Chinese returning from somewhere else.

Inevitably, the virus landed in New Zealand, and our government tried a strategy of elimination. It was fascinating: on day one I was out on the road that runs alongside the bank enclosing my property, cutting back vegetation to make things easier for road users. A pedestrian came down the road and immediately crossed to the other side when he saw me. I live on the side of a hill, and I can look down on the main highway going into Wellington. It was weird: almost no vehicles. How could this be? During the major lockdown, my daughter brought me groceries once a week; she, being a senior physician at Wellington Hospital, had a priority time for grocery shopping when all and sundry were not allowed in. On a personal level, I had one scary moment when the lockdown was eased off. On the first evening, I went to a scheduled meeting that we all thought would be cancelled, but wasn’t. I was driving down what is normally one of the busiest roads in the valley when a van flew out of a commercial building and shot across the road, presumably counting on empty roads. Fortunately, I still have very good reflexes, and, it seems, good brakes.

The good news is that while there was the odd example of leakage, the virus appears to have been eliminated here, and sports events, summer festivals, etc. are apparently going to proceed as usual. While the tourist/hospitality sector has been in trouble, and probably will continue to be, life in New Zealand has returned to normal.

At a personal level, I was invited by a major book publisher to write a chapter on hydrothermal processing of biomass. I agreed, and that was settled prior to the virus outbreak. I sent in the chapter but never heard any more about it. I suppose it gave me something to do over the period. I also finished, and started revising, my next novel, provisionally called “Spoliation”, so please go to https://www.inkshares.com/books/spoliation to read chapter one.

The election here returned the government with a record majority, while in the US the incumbent suffered a narrow defeat. What does this all mean? The most critical problems for 2021 will be how to fix the economies and how to deal with the virus. There are vaccines for the virus, but unless the virus is eliminated it will stay with us, and then it depends on how long the vaccines work. My guess is revaccination will probably need to be frequent unless we do eliminate it, and I can’t see that happening as only too many countries do not see elimination as an objective. Meanwhile, the virus is mutating. As for the economies, what happens will be critically dependent on what governments and central banks do. We may be cursed with more interesting times. This will be my last post for 2020. Since it is summer here, and Christmas is imminent, I shall be distracted, but I shall return in mid-January. In the meantime, I wish you all a very merry Christmas, and a healthy virus-free 2021.

Will Pump-priming the Economy Help Post-Covid?

Because I operated a company that had the primary objective of developing technology for new businesses in the chemical arena, economics interested me. We can all be smart looking back, but what about now? What should we do about the economy during virus times? So what are some options? This will take more than one post, but first, what is the best example in history of getting out of trouble? What tools are available?

In 1936, John Maynard Keynes published “The General Theory of Employment, Interest and Money”, and when he did so he should have known the approach worked for getting out of a depression. When Adolf Hitler took over in Germany, the economy was in a mess, with horrendous unemployment and terrible wages for most of those actually employed. Hitler promised to fix things, and he did, by implementing the policies that Keynes was later to publish. By 1936 German unemployment had essentially disappeared, and Keynes would have known that. Hitler was to provide the world with horrors, but early on his economic policy was exactly what Germany needed.

Keynes’ approach was essentially that in a depression the state should provide money to prime the economy, and when better times arrived, pay it back. In my opinion, therein lay one flaw: when better times arrive, do politicians want to pay it back? Er, no. Better (at least for re-election chances) to leave it as debt and inflate it away. (Hitler never had the opportunity to pay it back, because he had other interests.) It is usual to say that Keynes’ economics collapsed in the 1970s with persistently high inflation and high unemployment. One could argue that at least part of the inflation was because the governments refused to pay back, and instead kept borrowing. I have no doubt the counter to that will be, look at now – there is no inflation, and governments are borrowing heavily. Maybe.

If we follow Keynes, does it matter what the money is spent on? In the German example, the money went on infrastructure and on expanding the industries that made things. There was an unintended consequence after the war: once the West Germans started to run their own economy, they had another economic miracle. Thanks to Hitler’s apprentice schemes, there was a large pool of people with the skills required for manufacturing, and they had factories. The Allies bombed cities but largely left the factories alone; German manufacturing reached its highest point of the war in late 1944. As an example, they made ten times more fighters then than around the time of the Battle of Britain. (That they had run out of skilled pilots was a separate issue.)

Keynesian economics involved high taxes on the wealthy, and some claim such tax rates prevent innovation and general expansion. In the US, from 1953 to 1964 the top tax rate was about 90%, and it did not drop below 70% until about 1982. This period corresponded to the US being the most developed country in the world, so the tax rates did not stifle anything. Of course, there were tax exemptions for money being sent in the desired direction, and that may well be a desirable aspect of taxation policy. The death of Keynesian economics was probably a consequence of Milton Friedman as much as anything else. The stagflation of the late 1970s convinced politicians they could no longer spend their way out of a recession. An important observation of Friedman was that if policymakers stimulated without tackling the underlying structural deficiencies, they would fail. They did not tackle them, and fail they did, but that was partly because the politicians had ceased to look at structural deficiencies. Friedman was correct about the problem, but that was because Keynes’ obligations had been overlooked in the detail; no more than half of the Keynes prescription was generally implemented. So, where does that leave us? Is Keynes applicable now? In my opinion, the current attempts to spend our way out of virus difficulties won’t work because there are further problems that apply, but that is for a later post.