About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords and a number of other sites. The intention is to publish a sequence of novels, each of which stands alone, but which, taken together, tell a further story through their combined backgrounds. This blog will be largely about my views on science in fiction, and about the future, including what we should be doing about it but, in my opinion, are not. In the science area, I have been working on products from marine algae, and on biofuels. I also have an interest in scientific theory, which is usually alternative to what others think. This work is also being published as ebooks under the series "Elements of Theory".

A Discovery on Mars

Our space programs now seem to be focusing on increasingly low concentrations or ever more obscure events, as if this will tell us something special. Recall the earlier supposed finding of phosphine in the Venusian atmosphere. Nothing like stirring up controversy, because this was taken as a sign of life. As an aside, I wonder how many people have actually ever encountered phosphine? I have made it in the lab, but that hardly counts. It is not a very common material, and the signal in the Venusian atmosphere was almost certainly due to sulphur dioxide. That in itself is interesting when you ask how that would get there. The answer is surprisingly simple: sulphuric acid is known to be there, and being denser than the surrounding gas it might form a fog or even rain, but as it falls it hits the hotter regions near the surface and pyrolyses to form sulphur dioxide, oxygen and water. These rise, the oxygen reacts with sulphur dioxide to make sulphur trioxide (probably helped by solar radiation), which in turn reacts with water to re-form sulphuric acid, which is why the acid stays in the atmosphere. Things that have a stable level on a planet often have a cycle.
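For clarity, the cycle I am describing can be sketched as reactions (a sketch of the scheme above; the exact stoichiometry near the hot surface will depend on temperature):

% Pyrolysis near the hot surface:
\[ 2\,\mathrm{H_2SO_4} \;\xrightarrow{\ \Delta\ }\; 2\,\mathrm{SO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \]
% Reformation higher up, probably assisted by solar radiation:
\[ 2\,\mathrm{SO_2} + \mathrm{O_2} \;\xrightarrow{\ h\nu\ }\; 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{H_2SO_4} \]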

In February this year, as reported in Physics World, a Russian space probe detected hydrogen chloride in the atmosphere of Mars after a dust storm occurred. This was done with a spectrometer that looked at sunlight as it passed through the atmosphere; a material such as hydrogen chloride shows up as a darkened line at the frequency of its bond vibration in the infrared part of the spectrum. The single line, broadened into fine structure by the rotational options, is fairly conclusive. I found the article interesting for all sorts of reasons, one of which was its gift for stating the obvious. Thus it stated that dust density was amplified in the atmosphere during a global dust storm. Who would have guessed that?
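As a rough check on where that line sits, you can treat H–Cl as a harmonic oscillator. A minimal sketch follows; the force constant is a standard literature figure I am assuming, so this gives an estimate of the band position, not the measured one:

import math

# Harmonic-oscillator estimate of the H-Cl stretch: nu = sqrt(k/mu)/(2*pi),
# converted to wavenumbers (cm^-1) by dividing by the speed of light.
AMU = 1.66054e-27      # kg per atomic mass unit
C_CM = 2.99792e10      # speed of light, cm/s

k = 516.0              # N/m, assumed literature force constant for HCl
mu = (1.008 * 34.969) / (1.008 + 34.969) * AMU   # reduced mass of 1H-35Cl

nu_hz = math.sqrt(k / mu) / (2.0 * math.pi)
print(f"~{nu_hz / C_CM:.0f} cm^-1")   # ~3000 cm^-1, i.e. in the mid-infrared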

Then, with no further explanation, we are told the hydrogen chloride could be generated by water vapour interacting with the dust grains. Really? As a chemist, my guess would be that the dust had wet salt on it. UV radiation and atmospheric water vapour would oxidise that, making at first sodium hypochlorite (the active ingredient of domestic bleach) and hydrogen. From the general acidity we would then get hydrogen chloride, and probably sodium carbonate dust. The authors were then puzzled as to how the hydrogen chloride disappeared. The obvious answer is that hydrogen chloride strongly attracts water, which would form hydrochloric acid, and that would react with any oxide or carbonate in the dust to make chloride salts. If that sounds circular, yes it is, but there is a net degradation of water: oxygen or oxides would be formed, and hydrogen would be lost to space. The loss would not be very great, of course, because we are talking about parts per billion in a highly rarefied upper atmosphere, and only during a dust storm.
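The sink, at least, is straightforward acid-base chemistry. As a sketch (iron oxide and sodium carbonate are my illustrative choices; nobody has sampled these grains):

% Hydrogen chloride takes up water, then the acid attacks the dust:
\[ \mathrm{HCl(g)} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \mathrm{HCl(aq)} \]
\[ \mathrm{Fe_2O_3} + 6\,\mathrm{HCl} \;\longrightarrow\; 2\,\mathrm{FeCl_3} + 3\,\mathrm{H_2O} \]
\[ \mathrm{Na_2CO_3} + 2\,\mathrm{HCl} \;\longrightarrow\; 2\,\mathrm{NaCl} + \mathrm{H_2O} + \mathrm{CO_2} \]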

Hydrogen chloride would also be emitted during volcanic eruptions, but that can probably be eliminated here because Mars no longer has volcanic eruptions. Fumarole emissions would be too wet to get to the upper atmosphere, and if they occurred (there is no evidence they still do) any hydrochloric acid would be expected to react rather quickly with oxides, such as the iron oxide that makes Mars look red. So the unfortunate conclusion is that the space program is running up against the law of diminishing returns: we are getting more and more information of ever-decreasing importance. Rutherford once claimed that physics was the only science – the rest was stamp collecting. Well, he can turn in his grave, because to me this is rather expensive stamp collecting.

Our Financial Future

Interest rates should be the rental cost of money. The greater the opportunities to make profits, the more people will pay for the available money to invest in further profitable ventures, and interest rates go up. That is reinforced by the fact that if more people are trying to borrow the same limited supply of money, the rental price of it must increase to shake out the less determined borrowers. However, it does not quite work like that. If an economic boom comes along, who wants to kill the good times when you can print more money? Eventually, though, interest rates begin to rise, and then spike to restrict credit and suppress speculation. Recessions tend to follow this spike, and interest rates fall. Ideally, the interest rate reflects what the investor expects future value to be relative to present value. All of this assumes no external economic forces.

An obvious current problem is that we have too many objectives as central banks start to enter the domain of policy. Quantitative easing involved greatly increasing the supply of money so that there was plenty for profitable investment. Unfortunately, what has mainly happened, at least where I live, is that most of it has gone into pre-existing assets, especially housing. Had it gone into building new ones, that would be fine, but it hasn’t; it has simply led to an exasperating increase in prices.

In the last half of the twentieth century, interest rates correlated strongly and positively with inflation; investors add their expectation of inflation into the return they demand from bonds, for example. Interest rates and equity values tend to increase during a boom and fall during a recession. Now we find the value of equities and the interest rates on US Treasuries are both increasing, yet arguably there is no boom going on. One explanation is that inflation is increasing. However, the Chair of the US Federal Reserve has apparently stated that the US economy is a long way from its employment and inflation goals, and there will be no increase in interest rates in the immediate future. Perhaps this assumes inflation will not take off until unemployment falls, but the experience of stagflation, particularly in the 1970s, says you can have bad unemployment and high inflation together, and consequently a poorly performing economy. One of the problems with inflation is that expectations of it tend to be self-fulfilling.

As a consequence of low inflation, and of central banks printing money, governments tend to be spending vigorously. They could invest in new technology or infrastructure to stimulate the economy; well-chosen investment generates a lot of employment, with consequent benefits in economic growth, and that growth and profitability would eventually pay for the cost of the money. However, that does not seem to be happening. There are two other destinations: banks, which lend at low interest, and “helicopter money” to relieve those under strain because of the virus. The former, here at least, has ended up mainly in fixed and existing assets, which inflates their prices. The latter has saved many small companies, at least for a while, but there is a price.

The US has spent $5.3 trillion. The National Review looked at what would be needed to pay this back. If you assume the current pattern of taxation by income holds, Americans with incomes between $30,000 and $40,000 would pay ~$5,000 each; between $40,000 and $50,000, ~$9,000; between $50,000 and $75,000, ~$16,000; between $75,000 and $100,000, ~$27,000; and between $100,000 and $200,000, ~$51,000. For those on higher incomes the numbers get out of hand. If you roll it over and pay interest, the average American family will get $350 less in government services, multiplied by however much interest rates rise. If we assume that each dollar raised in tax costs the economy $1.50, to allow for the depressive effects of taxation, the average American owes $40,000 thanks to the stimulus. Other countries will have their own numbers. I know I seem to be on this issue perhaps too frequently, but those numbers scare me. The question I ask is: do those responsible for printing all this money have any idea what the downstream consequences will be? If they do, they seem very reluctant to tell us.
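To see where that $40,000 comes from, here is the arithmetic. The $1.50 factor is the one quoted above; the number of taxpayers is my own round-figure assumption, so treat this as a sketch:

# Rough stimulus-payback arithmetic. Assumptions: ~200 million taxpayers;
# $1.50 of economic cost per $1.00 raised in tax, as quoted above.
stimulus = 5.3e12          # dollars spent
cost_per_tax_dollar = 1.5  # allows for the depressive effect on the economy
taxpayers = 2.0e8          # assumed number of American taxpayers

burden = stimulus * cost_per_tax_dollar / taxpayers
print(f"~${burden:,.0f} per taxpayer")   # ~$40,000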

Why We Cannot Get Evidence of Alien Life Yet

We have a curiosity about whether there is life on exoplanets, but how could we tell? Obviously, we have to know that the planet is there, then we have to know something about it. We have discovered the presence of a number of planets through the Doppler effect, in which the star wobbles a bit due to the gravitational force from the planet. The problem, of course, is that all we see is the star, and that tells us nothing other than the mass of the planet and its distance from the star. A shade more is found from observing a transit, because the dip in light gives the size of the planet relative to the star, and in principle we get clues as to what is in an atmosphere, although in practice that information is extremely limited.
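To give a feel for how small that wobble is, there is a standard scaling for the radial velocity a planet on a circular orbit induces in its star; the sketch below uses it (the 28.4 m/s coefficient is the usual normalisation for a Jupiter mass in a one-year orbit around a Sun-like star):

# Radial-velocity semi-amplitude, circular orbit (standard scaling):
# K ~ 28.4 m/s * (m_p/M_Jup) * (M_star/M_sun)^(-2/3) * (P/yr)^(-1/3)
def rv_amplitude(m_planet_mjup, m_star_msun=1.0, period_yr=1.0):
    return 28.4 * m_planet_mjup * m_star_msun**(-2/3) * period_yr**(-1/3)

M_EARTH_IN_MJUP = 1.0 / 317.8
print(f"Jupiter: {rv_amplitude(1.0, period_yr=11.86):.1f} m/s")  # ~12.5 m/s
print(f"Earth:   {rv_amplitude(M_EARTH_IN_MJUP):.2f} m/s")       # ~0.09 m/s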

If you wish to find evidence of life, you first have to be able to see a planet that is in the habitable zone and presumably has Earth-like characteristics. The chances of finding evidence of life on a gas giant are negligible, because if there were such life it would be totally unlike anything we know. So what are the difficulties? If we have a star with the same mass as our Sun, the planet should be approximately 1 AU from the star. Now, take the Alpha Centauri system, the nearest stars, about 1.3 parsec, or about 4.24 light years, away. To see something 1 AU from the star requires resolving an angular separation of about one arc-second, which is achievable with an 8 metre telescope. (For a star x times further away, the required angular resolution becomes 1/x arc-seconds, which requires a correspondingly larger telescope. Accordingly, we need close stars.) However, no planets are known around Alpha Centauri A or B, although there are two around Proxima Centauri. Radial velocity studies show there is no planet in the habitable zone of A greater than about 53 Earth-masses, or about 8.4 Earth-masses for B. However, that does not rule out a habitable planet, because planets at these limits are almost certainly too big to hold life. Their absence, with that method of detection, actually improves the possibility of a habitable planet.
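The geometry here is easy to check: by the definition of the parsec, the angular separation in arc-seconds is just (separation in AU)/(distance in parsecs), and the diffraction limit of a telescope is roughly 1.22 λ/D. A sketch with illustrative numbers:

import math

ARCSEC_PER_RAD = 206265.0

def separation_arcsec(a_au, d_parsec):
    # By the parsec's definition, 1 AU at 1 pc subtends 1 arc-second.
    return a_au / d_parsec

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# A planet at 1 AU around Alpha Centauri A or B (~1.3 pc away):
print(separation_arcsec(1.0, 1.3))              # ~0.77 arcsec
# An 8 m telescope observing at 10 microns (mid-infrared):
print(diffraction_limit_arcsec(10e-6, 8.0))     # ~0.31 arcsec, so resolvable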

The first requirement for observing whether there is life would seem to be that we actually directly observe the planet. Some planets have been directly observed, but they are usually super-Jupiters on wide orbits (greater than 10 AU) that, being very young, have temperatures greater than 1000 degrees C. The problem with an Earth-like planet is that it is too dim in the visible. For temperate planets the peak emission intensity occurs in the mid-infrared, but there are further difficulties. One is that the background is brighter in the infrared; another is that at longer wavelengths the spatial resolution is 2 – 5 times coarser, because the diffraction limit scales with wavelength. Apparently the best telescopes now have the resolution to detect planets around roughly the ten nearest stars. Having the sensitivity is another question.
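Wien's displacement law shows why the mid-infrared is the place to look for temperate planets, and why the resolution penalty follows. A minimal sketch (the two temperatures are representative values, not measurements):

# Wien's displacement law: lambda_peak = b / T, with b = 2.898e-3 m.K
WIEN_B = 2.898e-3  # m.K

for name, temp_k in [("temperate planet", 288), ("young super-Jupiter", 1300)]:
    peak_um = WIEN_B / temp_k * 1e6
    print(f"{name}: peak emission ~{peak_um:.1f} microns")
# temperate planet: ~10.1 microns; young super-Jupiter: ~2.2 microns.
# Since the diffraction limit scales with wavelength, working at ~10 microns
# instead of ~2-5 microns costs roughly a factor of 2-5 in resolution.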

Anyway, this has been attempted, and a candidate exoplanet around A has been claimed (Nature Communications, 2021, 12:922) at about 1.1 AU from the star. It is claimed to be within about 7 times Earth's size, but this is based on relative light intensity. Coupled with that is the possibility that this may not be a planet at all. Essentially, more work is required.

Notwithstanding the uncertainty, it appears we are coming closer to being able to directly image rocky planets around the very closest stars. Other possible stars include Epsilon Eridani, Epsilon Indi, and Tau Ceti. But even if we see them, because it is at the limit of technology, we will still have no evidence one way or the other relating to life. However, it is a start to look where at least a planet of the right size is known to exist. My personal preference is Epsilon Eridani. The reason is that it is a rather young star, and if there are planets there, they will be roughly as old as Earth and Mars were when life started on Earth and the great river flows occurred on Mars. Infrared signals from such atmospheres would tell us what those atmospheres comprise. My prediction is a reduced atmosphere, with a good amount of methane, and ammonia dissolved in water. The reason is that these are the gases that could be formed through the original accretion, with no requirement for a bombardment by chondrites or comets, which, based on other evidence, seemingly did not happen here. Older planets will have more oxidized atmospheres that do not give clues, apart possibly from signals of ozone. Ozone implies oxygen, and that suggests plants.

What should we aim to detect? The overall signal should indicate the temperature, if we can resolve it. Water gives a good signal in the infrared, and seeing signals of water vapour in the atmosphere would show that that key material is present. For a young planet, methane and ammonia give good signals, although resolution may be difficult and ammonia will mainly be in water. The problems are obvious: getting sufficient signal intensity, subtracting out background noise from around the planet while realizing the planet will block background, actually resolving lines, and finally correcting for other factors, such as the Doppler effect, so the lines can be properly interpreted. Remember phosphine on Venus? Errors are easy to make.

How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question: how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won't be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments, and what they get is a series of meter readings, or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on your car's speedometer represents miles per hour or kilometres per hour, depending on where you live. That is understandable, but it is not what is measured. What is usually measured is something like the frequency of wheel revolutions. The revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.
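As a trivial illustration of that built-in theory, here is the sort of calculation hiding behind the needle (the tyre radius is the assumption; let the tyre go soft and the reading quietly drifts):

import math

# The speedometer's hidden theory: speed = wheel circumference * rev rate.
# The rolling radius is assumed; if tyre pressure drops, so does the radius,
# and the "measured" speed is wrong without the instrument knowing it.
tyre_radius_m = 0.31          # assumed rolling radius
revs_per_second = 8.0         # what the sensor actually measures

speed_kmh = 2 * math.pi * tyre_radius_m * revs_per_second * 3.6
print(f"{speed_kmh:.1f} km/h")   # ~56 km/h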

Measuring the rate of expansion of the Universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged because the galaxies have random motion superimposed, and in some cases regular motion if they are in clusters. The velocity at which they are moving apart is simply change of distance divided by change of time. Measuring time is fine, but measuring distance is a little more difficult. You cannot use a ruler, so some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Using different techniques to estimate cosmic distances, such as the average brightness of stars in giant elliptical galaxies, type 1a supernovae, and one or two others, it can be asserted that the Universe is expanding at 73.5 ± 1.4 kilometres per second for every megaparsec. A megaparsec is about 3.3 million light years, or about thirty billion billion kilometres.

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/sec/Mpc. Oops. Two routes, both giving highly accurate answers, but well outside any overlap, so we have two disjoint sets of answers.
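The size of the disagreement is easy to see from Hubble's law, v = H₀ × d. A sketch using the two quoted values and a galaxy at an illustrative 100 megaparsecs:

# Hubble's law: recession velocity v = H0 * d.
# The two quoted determinations, with their stated uncertainties:
H0_CANDLES = (73.5, 1.4)   # km/s/Mpc, from standard-candle distances
H0_CMB     = (67.4, 0.5)   # km/s/Mpc, from the cosmic microwave background

d_mpc = 100.0              # a galaxy 100 megaparsecs away (illustrative)
for label, (h0, err) in [("candles", H0_CANDLES), ("CMB", H0_CMB)]:
    print(f"{label}: v = {h0 * d_mpc:,.0f} +/- {err * d_mpc:,.0f} km/s")
# candles: 7,350 +/- 140 km/s; CMB: 6,740 +/- 50 km/s -- no overlap at all.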

So what is the answer? The simplest approach is to use an entirely different method again and hope this resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb space telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances.

Do you see a possible problem? All such results depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distances to the nearby elliptical galaxies accurately, but much more problematic is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. Since the metals in stars came from supernovae, the very earliest stars will have had much less of them, so the “colour” from their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, the size distribution of the red dwarfs may be different. There are many traps. Accordingly, the most likely reason for the discrepancy is that the theory used is slightly wrong somewhere along the chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message from this is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.

Ebook discount

From March 18 – 25, my thriller The Manganese Dilemma will be discounted to 99c/99p on Amazon. When the curvaceous Svetlana escapes to the West with clues that the Russians have developed super stealth, Charles Burrowes, a master hacker living under a cloud of suspicion, must find out what it is. Surveillance technology cannot show any evidence of such an invention, but Svetlana's father was shot dead as they made their escape. Burrowes must uncover what is going on before Russian counterintelligence or a local criminal conspiracy blow what is left of his freedom out of the water.

Dark Energy

Many people will have heard of dark energy, yet nobody knows what it is, apart from being something connected with the rate of expansion of the Universe. This is an interesting part of science. When Einstein formulated General Relativity, he found that if his equations were correct, the Universe should collapse due to gravity. It hasn't so far, so to avoid that he introduced a term Λ, the so-called cosmological constant, which was a straight-out fudge with no basis other than avoiding the obviously wrong prediction, since the Universe had not collapsed and did not look like doing so. Then, when observations showed that the Universe was actually expanding, he tore that up. In General Relativity, Λ represents the energy density of empty space.

We think the expansion of the Universe is accelerating because when we look back in time at ancient galaxies, we can measure the velocity of their motion relative to us through the so-called red shift of light; all the distant galaxies are going away from us, and seemingly faster the further away they are. We can also work out how far away they are from light sources whose brightness we measure: provided we know how bright they were at the source, the dimming gives us a measure of how far away they are. What two research groups found in 1998 is that the expansion of the Universe is accelerating, which won them the 2011 Nobel Prize for physics.
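The logic of a standard candle is just the inverse-square law: if the intrinsic luminosity L is known, the measured flux F gives the distance. A minimal sketch; the supernova luminosity below is a representative round number I am assuming, not a measured value:

import math

# Inverse-square law: F = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F)).
L_SN1A = 5e9 * 3.828e26   # assumed type 1a peak luminosity: ~5 billion Suns, W

def distance_m(flux_w_m2, luminosity_w=L_SN1A):
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

METRES_PER_MPC = 3.086e22
d = distance_m(1e-14)              # an illustrative faint measured flux, W/m^2
print(f"~{d / METRES_PER_MPC:.0f} Mpc")   # ~126 Mpc for this flux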

The next question is, how accurate are these measurements, and what assumptions are inherent in them? The red shift can be measured accurately because the light contains spectral lines, and as long as the physical constants have remained constant, we know exactly their original frequencies, and consequently the shift when we measure the current frequencies. The brightness relies on what are called standard candles. We know of a class of supernovae called type 1a, caused by one star gobbling the mass of another until it reaches the threshold to blow up. This mass is known to be fairly constant, so the energy output should be constant. Unfortunately, as often happens, the 1a supernovae are not quite as standard as you might think. They have been separated into three classes: standard 1a, dimmer 1a, and brighter 1a. We don't know why, and there is an inherent problem that the stars of a very long time ago would have had a lower fraction of elements from previous supernovae. They get very bright, then dim with time, and we cannot be certain they always dim at the same rate. Some have different colour distributions, which makes specific luminosity difficult to measure. Accordingly, some consider the evidence inadequate, and it is possible there is no acceleration at all. There is no way for anyone outside the specialist field to resolve this. Such measurements are made at the limits of our ability, and a number of assumptions tend to be involved.

The net result is that if the expansion really is accelerating, we need a value for Λ, because that will describe what is pushing everything apart. That energy of the vacuum is called dark energy, and if we consider the expansion and use relativity to compare this energy with the mass of the Universe we can see, dark energy makes up about 70% of the total Universe. That is, assuming the acceleration is real. If not, 70% of the Universe just disappears! So what is it, if real?

The only real theory that can explain why the vacuum has energy at all, and that has any independent value, is quantum field theory. By independent value, I mean it explains something else; if you have one observation and you require one assumption, you have effectively assumed the answer. However, quantum field theory is not much help here, because if you calculate Λ with it, the result differs from observation by 120 orders of magnitude, which means ten multiplied by itself 120 times. To put that in perspective, if you were to count all the protons, neutrons and electrons in the entire universe that we can see, you would multiply ten by itself about 83 times to express the answer. This is the most dramatic failed prediction in all theoretical physics, and it is so bad it tends to be put in the desk drawer and ignored.

So the short answer is, we haven't got a clue what dark energy is, and to make matters worse, it is possible there is no need for it at all. But it most certainly is a great excuse for scientific speculation.

Is There a Planet 9?

Before I start, I should remind everyone of the solar system yardstick: the unit of measurement called the Astronomical Unit, or AU, which is the distance from Earth to the Sun. I am also going to define a mass unit, the emu, which is the mass of the Earth, or Earth mass unit.

As you know, there are eight planets, the furthest out being Neptune, at 30 AU from the Sun. Now the odd thing is, Neptune is a giant of 17 emu while Uranus is only about 14.5 emu, so there is more to Neptune than Uranus, even though it is about 11 AU further out. So the obvious question is, why do the planets stop at Neptune? And that question can be coupled with, “Do they?” The first person to be convinced there had to be at least one more was Percival Lowell, he of Martian canal fame, who built himself a telescope and searched, but failed to find it. The justification was that Neptune's orbit appeared to be perturbed by something, which was quite reasonable, as Neptune itself had been found through perturbations in Uranus' orbit. So the search was on. Lowell calculated the approximate position of the ninth planet, and using Lowell's telescope, Clyde Tombaugh discovered what he thought was planet 9. The discovery was announced on the anniversary of Lowell's birthday, Lowell by then being dead. As it happened, finding a planet near the predicted position was pure coincidence: Pluto is far too small to affect Neptune, and it turns out Neptune's orbit did not have the errors everyone thought it did – another mistake. Further, while Neptune, like the other planets, has an almost circular orbit, Pluto's is highly elliptical, spending some time inside Neptune's orbit and sometimes lying as far as 49 AU from the Sun. Pluto is not the only modest object out there: besides a lot of smaller objects there is Quaoar (about half Pluto's size) and Eris (about Pluto's size). There is also Sedna (about 40% of Pluto's size), whose elliptical orbit takes it from 76 AU to 900 AU from the Sun.

This raises a number of questions. Why did the planets stop at 30 AU here? Why is there no planet between Uranus and Neptune? We know HR 8799 has four giants like ours, and its Neptune equivalent is about 68 AU from the star, with about 6 times the mass of Jupiter. The “Grand Tack” mechanism explains our system by arguing that cores can only grow by major bodies accreting what are called planetesimals, which are bodies about the size of asteroids, and that cores cannot grow further out than Saturn. In this mechanism, Neptune and Uranus formed near Saturn and were thrown outwards and lifted by throwing a mass of planetesimals inwards, the “throwing” being due to gravitational interactions. To do this there had to be a sufficient mass of planetesimals, which gets back to the question: why did they stop at 30 AU?

One of the advocates for Planet 9 argued that Planet 9, which was supposed to have a highly elliptical orbit itself, caused the elliptical orbits of Sedna and some other objects. However, this has also been argued to be due to an accidental selection of a small number of objects, and there are others that don’t fit. One possible cause of an elliptical orbit could be a close encounter with another star. This does happen. In 1.4 million years Gliese 710, which is about half the mass of the Sun, will be about 10,000 AU from the Sun, and being that close, it could well perturb orbits of bodies like Sedna.

Is there any reason to believe a planet 9 could be there? As it happens, the exoplanets encyclopaedia lists several planets at distances greater than 100 AU from their stars, in some cases several thousand AU. We see them because they are much larger than Jupiter, and they have either been in a good configuration for gravitational lensing or they are very young. If they are very young, the release of gravitational energy raises them to temperatures at which they emit yellow-white light. When they get older they will fade, and if there were such a planet in our system, by now it would have to be seen by reflected light. Since objects at such great distances move relatively slowly, they might be seen but not recognized as planets, and, of course, surveys that are looking for something else usually cover a wide sky, which is not suitable for planet searching.

For me, there is another reason why there might be such a planet. In my ebook “Planetary Formation and Biogenesis” I outline a mechanism by which the giants form, which is similar to forming a snowball: if you press ices or snow together suitably close to their melting point, they melt-fuse. So I predict the cores formed from ices known to be in space: Jupiter – water; Saturn – methanol/ammonia/water; Uranus – methane/argon; Neptune – carbon monoxide/nitrogen. If you assume Jupiter formed at the water-ice temperature, the other giants are in the correct places to within an AU or so. However, there is one further ice not mentioned: neon. If it accreted a core, that core would be somewhere beyond 100 AU. I cannot be more specific because the melting point of neon is so low that a number of effects that are elsewhere minor and ignorable become significant. So I am hoping there is such a planet there.

Ebook Discount

For the Smashwords sale March 7 – 13 the following ebooks will be discounted. 

Puppeteer: A technothriller where the ultimate piece of terrorism/blackmail threatens to kill tens to hundreds of millions of people and destroy billions of dollars' worth of infrastructure. One of the keys to stopping it lies with two young scientists on the sub-Antarctic island of Kerguelen.

http://www.smashwords.com/books/view/69696

‘Bot War: A technothriller set about 8 years later, in which a concerted series of terrorist attacks made with stolen drones, coupled with a government too deeply in debt to respond, leads to governance breaking down.

https://www.smashwords.com/books/view/677836

Troubles: The world is recovering from anarchy caused by government debt, gross inequality and factional fighting, but the invention of fusion power means growth and opportunities. Economic evolution, greed and the gun compete for the crumbs.

https://www.smashwords.com/books/view/174203

Also discounted is Biofuels, a non-fiction work based on my own research, which summarises the extent to which biofuels could be a solution to the carbon-neutrality problem (and not by making alcohol from corn), and what is and is not a good idea. Be knowledgeable: https://www.smashwords.com/books/view/454344

What Started Civilization?

I am fascinated by the question of when civilization started. That, of course, depends on what you mean by civilization. I assume the first step would involve a person becoming more skilled at just one thing, and trading that product for all the other things he or she wanted. That is necessary, but not sufficient. Perhaps rather arbitrarily, I am going to define it as when people started to specialize sufficiently that they had to stay in one place. Now food became a problem, because the same area had to sustain the tribe, which might lead to the weeding out of the undesirable and the planting and tending of the desirable. I suspect the first real such industry would be flint knapping. Someone who could make really sharp arrow-heads could trade them for meat, but the flint knapper would need to remain near the best supplies of flint.

Evidence for trade goes back at least 300,000 years: the remains of a tribe have been found that used ochre for decoration, and the nearest ochre deposits were over a hundred kilometres away. Trade, however, does not mean specialization. What presumably happened was that the very small tribes (which may have been little more than a few families) would go to an annual get-together, trade, socialize, and exchange young women (because small tribes need to keep up genetic diversity), then go back to where they could feed themselves. Neanderthals also lived in small groupings and probably maintained the same type of lifestyle. Is that civilization?

The stone-knapper would be the equivalent of a tradesperson, doing one job for one person at a time. Maybe that does not qualify. (Whether it does depends on whatever definition you choose. This problem persists in modern science where only too many silly ideas are conveyed by terms that become misinterpreted.) However, I feel that a processing plant really does qualify. For this you need a fixed site, a continual source of raw material, and to get scale, you need a number of customers. So what came first? It appears there are at least two contenders.

The first is bread. An archaeological site in Jordan occupied 14,000 years ago has yielded a bakery and the remains of bread. This was made by grinding wild wheat and wild barley to a flour, pounding tubers of wild plants that grow in water, mixing these together to make a dough, and then baking it on hot stones around a fire. Microscopic examination of the remains shows clear evidence of grinding, sieving and kneading. The people were hunter-gatherers who would eat meat from gazelles down to hares and birds, together with whatever plant foods they could forage. That the large stone oven remains there today shows this activity was carried out in a fixed place.

The second contender comes from a dig near Haifa, where stone mortars 60 cm deep were found, used for pounding various species of plants, including oats and legumes. The evidence was that besides preparing food there, the occupants also made beer. Grain was germinated to produce malt, then the resulting mash was heated, then fermented with wild yeast to produce a “beer”. This beer was probably more like an alcoholic porridge than what we think of as beer, but it was an industry.

It would be fascinating if it were beer that was the cause of civilization. The need for beer would require grain, and because you could not carry around these large mortars, you would prefer to have your grain close, and in regular supply. Regular supply means storing it, because grain is seasonal. Growing enough to keep a good beer supply means farming, and keeping the rats out of the grain. As it happens, cats were domesticated about 13,000 years ago. Your household cat is probably the clue.

An Infestation of Bacteria

One of those things you probably don't need to know (but I am going to tell you anyway) is that there are more bacterial cells in your gut than there are cells in your body. This may seem weird, but remember much of your body, such as its water, is not cellular. And, of course, there is more than just one type of bacterium; indeed, according to a Nature article from last year, there are more than one hundred times as many genes in the gut flora as there are in the human host. That, of course, gives a lot of scope for studying, er, colonic material. And yes, some people apparently do that, and there are some “interesting outputs”.

With such a range of “starting material” to study, the first step was to classify the bacteria into four enterotypes. One of those sets, labeled Bacteroides 2 (Bact-2), is associated with inflammation. Thus 75% of those with inflammatory bowel disease have this enterotype, while fewer than 15% of those who do not have the disease harbor it. This enterotype has another problem: it suppresses the manufacture of butyric acid, which is argued to preserve the barrier function of the epithelial cells lining the gut. In short, too little butyric acid and you get more inflammation. This suggests a corrective measure: eat butter, various fats, milk, parmesan cheese, and some rather unpleasant sources. The problem is that such foods still do not give enough. As an aside, butyric acid is quite foul-smelling, and is a significant component of vomit, which suggests supplements are unlikely to be chosen.

Gut bacteria can make trimethylamine oxide, which is claimed to accelerate atherosclerosis and lead to adverse cardiovascular outcomes, and, the article adds, “including death”. Yes, that could be described as an adverse effect. Apparently a research group studied 2000 individuals and sorted out something like 1400 variables. For me that is far too small a number of subjects for that number of variables, but nevertheless they came out with the claim that a higher prevalence of this Bact-2 enterotype led to a higher probability of cardiovascular disease, and that it also correlated with a higher body-mass index and with obesity. Note that correlation does not imply causation, and excess weight has been correlated with cardiovascular difficulties before.
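My unease about 1400 variables from 2000 subjects is easy to demonstrate: generate the same number of purely random variables and count how many “significant” correlations fall out of pure noise. A sketch (simulated data only, obviously no relation to the actual study):

import numpy as np

# 2000 subjects, 1400 variables of pure noise, and a random "outcome".
rng = np.random.default_rng(0)
n_subjects, n_vars = 2000, 1400
data = rng.standard_normal((n_subjects, n_vars))
outcome = rng.standard_normal(n_subjects)

# Correlate every variable with the outcome and count those passing the
# usual p < 0.05 threshold (|r| > ~1.96/sqrt(n) for large n).
r = np.array([np.corrcoef(data[:, j], outcome)[0, 1] for j in range(n_vars)])
threshold = 1.96 / np.sqrt(n_subjects)
print(f"{(np.abs(r) > threshold).sum()} 'significant' correlations from noise")
# Expect roughly 5% of 1400, i.e. ~70 false positives, from nothing at all.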

But there was more. If we consider only the obese, it was found that those taking statins have a pronounced reduction in this Bact-2 enterotype, in which case statins presumably help build up butyric acid. Statins also inhibit an enzyme on the route for making cholesterol, leading cells to boost low-density lipoprotein (LDL), which in turn captures more cholesterol, which is supposed to lower the risk of cardiovascular disease. Statins also have anti-inflammatory action.

This leads to a problem that in my opinion confounds medical research. We have an observation, from a study in which there were almost as many variables as subjects, that in one very small subset statins reduced the level of a gut bacteria group that can be correlated with cardiovascular problems, and they seemed particularly effective at doing this in obese patients. Do you notice some rather tenuous links? In this study there was a huge number of variables that were not separated. Could we argue that we have been on the wrong track and something else is the cause of this effect, assuming the effect is real and not an accidental outcome of a small subset? How can we be sure that those taking statins were not better treated or more health-conscious? On the other hand, if the effect is real, should not statin consumption, under proper medical prescription, be encouraged? What I hope this shows is how easy it is to find correlations, at the risk of misleading everybody, which is why so many articles on medical issues seem to contradict others. Analysing such data is not easy, but we all have an interest in delaying death and misery.