The Fermi Paradox: Are We Alone in the Universe?

The Fermi paradox goes something like this. The Universe is enormous, and there are an astronomical number of planets. Accordingly, the potential for intelligent life somewhere should be enormous, yet we find no evidence of it. The SETI program has been searching for decades and has found nothing. So where are these aliens?

What is fascinating about this is an argument from Daniel Whitmire, who teaches mathematics at the University of Arkansas and has published a paper in the International Journal of Astrobiology (doi:10.1017/S1473550417000271). In it, he concludes that technological societies rapidly exterminate themselves. So how does he come to this conclusion? The argument is a fascinating illustration of the power of mathematics, and particularly statistics, to demonstrate or to mislead.

He first resorts to a statistical concept called the Principle of Mediocrity, which states that, in the absence of any evidence to the contrary, any observation should be regarded as typical. The observation here is our own presence. If we assume we are typical, and we have been technological for 100 years (he defines being technological as using electricity, but you can change this), then our being average implies that after a further 200 years we will no longer be technological. We can extend this to about 500 years on the grounds that a bell curve over ages is skewed (you cannot have a negative age). To become non-technological we have to exterminate ourselves, therefore he concludes that technological societies exterminate themselves rather quickly. We may scoff at that, but then again, watching the antics over North Korea, can we be sure?
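To make this style of reasoning concrete, here is a minimal Monte Carlo sketch of the general "observer at a random moment" argument. This is my own illustration, not Whitmire's actual calculation; the lifetime distribution (exponential with an assumed mean of 1,000 years) and the 100-year observed age are assumptions chosen purely for the example.

```python
import random

# Toy illustration of mediocrity-style reasoning (not Whitmire's calculation).
# Assume each technological society has a total lifetime L drawn from some
# distribution, and that we observe ourselves at a uniformly random moment
# within that lifetime. Conditioning on an observed age of ~100 years then
# favours total lifetimes much shorter than the prior mean.

random.seed(1)

def simulate(n=200_000, observed_age=100.0, tolerance=10.0):
    """Average total lifetime of societies whose 'observed age' is near ours."""
    matches = []
    for _ in range(n):
        lifetime = random.expovariate(1.0 / 1000.0)      # assumed mean: 1000 years
        age_when_observed = random.uniform(0.0, lifetime)
        if abs(age_when_observed - observed_age) < tolerance:
            matches.append(lifetime)
    return sum(matches) / len(matches)

print(f"Mean total lifetime given an observed age of ~100 years: "
      f"{simulate():.0f} years")
```

The output is pulled well below the assumed prior mean, which is the essence of the argument; the actual number depends entirely on the assumed distribution.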

He makes a further conclusion: since we are the first on our planet, other civilizations should also be the first. I really don’t follow this because he has also calculated that there could be up to 23 opportunities for further species to develop technologies once we are gone, so surely that follows elsewhere. It seems to me to be a rather mediocre use of this principle of mediocrity.

Now, at this point, I shall diverge and consider the German tank problem, because it shows what you can do with statistics. The Allies wanted to know the production rate of German tanks, and they got it from a simple formula, applied to the serial numbers of captured or destroyed tanks. The formula is

N = m + m/n – 1

where N is the estimate you are seeking, m is the highest serial number sampled, and n is the sample size (the number of tanks sampled). Apparently this was highly successful, and the estimates were far superior to those from intelligence gathering, which always seriously overestimated.
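For readers who want to see the estimator in action, here is a minimal sketch with made-up numbers (my illustration, not the wartime data): simulate a production run, sample a few serial numbers, and apply the formula above.

```python
import random

# Minimal sketch of the German tank estimator N = m + m/n - 1,
# where m is the largest serial number seen and n is the sample size.
# The production figure and sample size below are invented for illustration.

random.seed(42)
true_N = 2000                                        # actual production (unknown to the analyst)
sample = random.sample(range(1, true_N + 1), 20)     # serial numbers of 20 captured tanks

m = max(sample)
n = len(sample)
estimate = m + m / n - 1

print(f"highest serial number seen: {m}")
print(f"estimated production:       {estimate:.0f}  (true value: {true_N})")
```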

That leaves the question of whether that success means anything for the current problem. The first thing we note is that the Germans conveniently numbered their tanks in sequence, the sample size was a tolerable fraction of the required answer (about 5%), and it was known that the Germans were making tanks and sending them to the front as regularly as they could manage. There were no causative effects that would bias the results. With Whitmire's analysis there is a very bad aspect of the reasoning: the question of whether we are alone is raised as soon as we have some capability to answer it. Thus we ask it within fifty years of having reasonable electronics; for all we know, it may still be being asked a million years in the future. The age of the technological society, on which the lifetime reasoning is based, therefore enters the equation as soon as the question is asked. That means it is not a random sample but a causally selected one. On top of that, we have a sample of one, which is not exactly a good statistical sample. Of course, if there were more samples than one, the question would answer itself and there would be no need for statistics. In this case, statistics are being used precisely where they should not be.

So what do I make of that? For me, there is a lack of logic. By definition, to publish original work you have to be the first to do it. So any statistical conclusion drawn from asking the question is ridiculous, because by definition it is not a random sample; it is the first. It is like trying to estimate German tank production from a sample of one tank that happened to carry the serial number 1. So, is there anything we can take from this?

In my opinion, the first thing we could argue from this Principle of Mediocrity is that the odds of finding aliens are strongest on Earth-sized planets around G-type stars at about this distance from the star, simply because we know it is at least possible there. Further, we can argue the star should be at least about 4.5 billion years old, to give evolution time to generate such technological life; we are reasonably sure it could not have happened much earlier on Earth. One of my science fiction novels is based on the concept that Cretaceous raptors could have managed it, given time, but that still only buys a few tens of millions of years, and we don't know how long they would have taken, had they been able. They would have had to evolve considerably larger brains, and who knows how long that would take? Possibly almost as long as mammals took.

Since there are older stars out there, why haven't we found evidence? That question should be rephrased as: how would we? The SETI program assumes that aliens would try to send us messages, but why would they? Sending meaningful signals over such huge distances would require immense energy expenditure unless the signals were tightly directed, and why would they direct signals here? They could have tried 2,000 years ago, persisted for a few hundred years, and given us up. Alternatively, it is cheaper to listen. As I noted in a different novel, the concept falls down on economic grounds because everyone is listening and nobody is sending. And, of course, for strategic reasons, why tell more powerful aliens where you live? For me, the so-called Fermi paradox is no paradox at all; if there are aliens out there, they will be following their own logical best interests, and those do not include us. The other thing it tells me is that you can indeed "prove" anything with statistics, if nobody is thinking.

What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, making propositions in accord with that evidence, and, most importantly, rejecting those that are clearly false. That may seem too hard for ordinary people, but at least the method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published "The Structure of Scientific Revolutions", in which he argued that science itself has a very high level of conservatism. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway and put in the bottom drawer of the mind. The problem seems to be that there is a roadblock to thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the "somethings" are removed. But is there "nothing" anywhere? The ancient Greek philosophers argued about the void, and the issue was "settled" by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance and hence would continue moving indefinitely. Having reasoned that well, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, so they could have been moving through a void, and if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question: are such prejudices still around?

The usual concept of "nothing" is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has 2.5 x 10^19 molecules in it. That's plenty. For those not used to "big figures", 10^19 means a 1 followed by 19 zeros, that is, nineteen 10s multiplied together. Our vacuum cleaner gets the concentration of molecules down to 10^19 per cubic centimetre, that is, the air pressure is about two and a half times lower inside the cleaner. The Moon's "atmosphere" has 4 x 10^5 molecules per cubic centimetre, so even the Moon is not in a perfect vacuum. Interplanetary space has about 11 molecules per cubic centimetre, interstellar space has about 1 molecule per cubic centimetre, and in the best vacuum, intergalactic space, you need a million cubic centimetres to find one molecule.
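As a quick check on the first of those figures, the number density of an ideal gas follows from n = P/(k_B T). The sketch below assumes ordinary room conditions (1 atmosphere, 298 K; these conditions are my assumption, not stated in the Wikipedia figure) and reproduces roughly 2.5 x 10^19 molecules per cubic centimetre.

```python
# Number density of air from the ideal gas law, n = P / (k_B * T).
k_B = 1.380649e-23      # Boltzmann constant, J/K
P = 101_325.0           # 1 atmosphere, Pa
T = 298.0               # assumed room temperature, K

n_per_m3 = P / (k_B * T)
n_per_cm3 = n_per_m3 * 1e-6
print(f"molecules per cubic centimetre: {n_per_cm3:.2e}")   # ~2.5e19
```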

The top of the Earth's atmosphere, the thermosphere, ranges from about 10^14 down to 10^7 molecules per cubic centimetre. The figure at the top is a little suspect, because you would expect the density to fall gradually to that of interplanetary space. The quoted boundary is not a sharp edge; rather, it is roughly where gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres because it exerts a small drag on satellites.

So intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite at "nothing". If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (very rapidly moving atomic nuclei, stripped of their electrons and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible neither actually exists. This is not the place to go into that, so I just note that our nothing is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to turn to quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. A field, at its most basic level, is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and what else is around you, you also know the force you will feel. However, in quantum field theory there are a number of additional fields: there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these "carry" the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero-point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero-point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero-point energy is already at the lowest possible level.

But there is a catch. Recall Einstein's E/c^2 = m? That means, according to Einstein, all this zero-point energy has the equivalent of mass in terms of its effect on gravity. If so, then the gravity from all the zero-point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is that if quantum field theory is correct, the Universe should have collapsed long ago. The difference between prediction and observation is a factor of about 10^120, that is, a 1 followed by 120 zeros, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and by doing it "properly" they justified manipulating the error down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by 10^80, which is roughly the number of atoms in the known Universe.
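For the curious, a rough order-of-magnitude version of that comparison runs as follows. This is a back-of-the-envelope sketch, not the full quantum field theory calculation: cutting the zero-point modes off at the Planck scale gives a vacuum energy density of order the Planck density, whereas the observed dark-energy density is tiny, and the quoted exponent depends on the conventions used.

```latex
% Back-of-the-envelope "vacuum catastrophe" (orders of magnitude only,
% with the zero-point modes cut off at the Planck scale)
\rho_{\mathrm{Planck}} \sim \frac{c^{7}}{\hbar G^{2}} \approx 5\times10^{113}\ \mathrm{J\,m^{-3}},
\qquad
\rho_{\mathrm{observed}} \approx 6\times10^{-10}\ \mathrm{J\,m^{-3}},
\qquad
\frac{\rho_{\mathrm{Planck}}}{\rho_{\mathrm{observed}}} \sim 10^{120}\!-\!10^{123}.
```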

Some might say that surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed on the bottom shelf of the mind. After all, the mathematics is so elegant, or difficult, depending on your point of view. Can't let observed facts get in the road of elegant mathematics!

A Further Example of Theory Development

In the previous post I discussed some of what is required to form a theory, and I proposed a theory at odds with everyone else as to how the Martian rivers flowed. One advantage of that theory is that provided the conditions hold, it at least explains what it set out to do. However, the real test of a theory is that it then either predicts something, or at least explains something else it was not designed to do.

Currently there is no real theory that explains Martian river flow if you accept the standard assumption that the initial atmosphere was full of carbon dioxide. To explore possible explanations, the obvious next step is to discard that assumption. The concept is that whenever forming theories, you should look at the premises and ask, if not, what?

The reason everyone thinks the original gases were mainly carbon dioxide appears to be that volcanoes on Earth largely give off carbon dioxide. There can be two reasons for that. The first is that most volcanoes actually reprocess subducted material, which includes carbonates such as lime. The few that do not may behave that way because the crust has used up its ability to turn CO2 into hydrocarbons. That reaction depends on Fe(II) converting to Fe(III), and it can only do that once. Further, there are many silicates containing Fe(II) that cannot do it because the structure is too tightly bound, and the water and CO2 cannot get at the iron atoms. Then, even if methane were made, would it be detected? Any methane mixed with red-hot lava would burn on contact with air, and samples are never taken that close to the origin. (As an aside, hydrocarbons have been found, especially where the eruptions are under water.)

Also, on the early planet, iron dust would have accreted, as would other reducing agents, but the point about such agents is that they, too, can only be used once. What happens now will be very different from what happened then. Finally, according to my theory, the materials were already reduced. In this context we know there are meteorite samples containing seriously reduced matter, such as phosphides, nitrides and carbides (all of which I argue should have been present), and even silicides.

There is also a practical point. We have one sample of Earth’s sea/ocean from over three billion years ago. There were quite high levels of ammonia in it. Interestingly, when that was found, the information ended up as an aside in a scientific paper. Because it was inexplicable to the authors, it appears they said the least they could.

Now, if this seems too much, bear with me, because I am shortly going to get to the point. But first, a little chemistry, looking at the mechanism by which these reduced gases are made. For simplicity, consider the single bond between a metal M and, say, a nitrogen atom N in a nitride; call that M–N. Now let it be attacked by water. (The diagram I tried to include refused to cooperate, so the scheme is written out as equations below.) The water attacks the metal and, because the number of bonds around the metal stays the same, a hydrogen atom has to get attached to N, so we get M–OH + N–H. Do this three times and we have ammonia, plus three hydroxide groups on metal ions. Eventually, two hydroxides convert to one oxide and one molecule of water is regenerated. The hydroxides do not have to be on the same metal to form water.
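Here is that scheme written out as equations, a purely schematic sketch of the steps just described (M stands for an unspecified metal; no particular mineral is intended):

```latex
% Schematic hydrolysis of a metal nitride (M = unspecified metal); one H
% is transferred per water attack, and the hydroxides later condense.
\mathrm{M{-}N + H_2O \;\rightarrow\; M{-}OH + N{-}H}
\mathrm{\quad(three\ such\ attacks)\;\rightarrow\; NH_3 + 3\,M{-}OH}
\mathrm{2\,M{-}OH \;\rightarrow\; M{-}O{-}M + H_2O}
```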

The important thing is that only one hydrogen is transferred per attacking water molecule. Suppose the water molecule carries one hydrogen atom and one deuterium atom. The one preferentially transferred is the one that is easier to transfer, and the ease of transfer depends on the bond strength, so the deuterium preferentially stays on the oxygen. While the strength of a chemical bond starts out depending only on the electromagnetic forces, which are the same for hydrogen and deuterium, that strength is reduced by the zero-point vibrational energy required by quantum mechanics. The Uncertainty Principle says that two objects at the quantum level cannot sit at an exact distance from each other, because then they would have exact position and exact (zero) momentum. Accordingly, the bonds have to vibrate, and the energy of that vibration depends on the masses of the atoms. The bond to hydrogen vibrates fastest, so less energy is subtracted from the bond to deuterium. That means deuterium is more likely to remain on the regenerated water molecule. This is an example of the chemical isotope effect.
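In symbols, this is the standard textbook treatment of the effect (nothing specific to these particular compounds): treat the bond as a harmonic oscillator, and the larger reduced mass of deuterium lowers the vibrational frequency, so less zero-point energy is subtracted from the bond to deuterium.

```latex
% Zero-point energy of an X–H (or X–D) bond treated as a harmonic oscillator
E_0 = \tfrac{1}{2}\hbar\omega , \qquad \omega = \sqrt{k/\mu} , \qquad
\mu_{\mathrm{XD}} > \mu_{\mathrm{XH}} \;\Rightarrow\; \omega_{\mathrm{XD}} < \omega_{\mathrm{XH}}
\;\Rightarrow\; E_0(\mathrm{X{-}D}) < E_0(\mathrm{X{-}H})
% Less zero-point energy is subtracted from the X–D bond, so it is
% effectively stronger and the deuterium is less readily transferred.
```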

There are other ways of enriching deuterium in water. The one usually considered for planetary bodies is that, as water vapour rises, the solar wind blows off some water, or UV radiation breaks an oxygen–hydrogen bond and knocks the hydrogen atom into space. Since deuterium is heavier, it is slightly less likely to get to the top. The problem is that the evidence does not back up the solar wind concept (it does happen, but not enough), and if UV splitting of water is the reason, then there should be an excess of oxygen left on the planet. That could work for Earth, but Earth has the least deuterium enrichment of the rocky planets. If it were the way Venus got its huge deuterium enhancement, there had to be a huge ocean initially, and if that is used to explain why there is so much deuterium, then where is the oxygen?

Suppose the deuterium level in a planet's hydrogen supply is primarily due to the chemical isotope effect; what would you expect? If the model of atmospheric formation noted in the previous post is correct, the enrichment would depend on the gas-to-water ratio. The planet with the lowest ratio, i.e. minimal gas per unit of water, would have the least enrichment, and vice versa. Earth has the least enrichment. The planet with the highest ratio, i.e. the least water left over after making gas, would have the greatest enrichment, and here we see that Venus has a huge deuterium enrichment and very little water (what little there is being bound up in sulphuric acid in the atmosphere). It is quite comforting when a theory predicts something it was not intended to. If this is correct, Venus never had much water on the surface, because what it accreted in this hotter zone was used to make its greater atmosphere.
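A toy way to see the direction of the trend is the textbook Rayleigh-distillation expression; this is my illustration with an assumed fractionation factor, not the detailed model above. If each attack transfers hydrogen in preference to deuterium, the gas produced is D-depleted, and the water left behind becomes progressively D-enriched as more of it is consumed, so the planet that converted the largest fraction of its water into gas ends up with the heaviest remaining water.

```python
# Toy Rayleigh-fractionation sketch: D/H of the water remaining after a
# fraction of it has been consumed making reduced gases (NH3, CH4, ...).
# alpha < 1 is an ASSUMED fractionation factor: the gas produced at any
# instant carries alpha times the D/H ratio of the water it came from.

def remaining_water_enrichment(fraction_consumed, alpha=0.9):
    f_remaining = 1.0 - fraction_consumed
    return f_remaining ** (alpha - 1.0)      # classic Rayleigh expression

for consumed in (0.10, 0.50, 0.90, 0.99):    # low -> high gas-to-water ratio
    factor = remaining_water_enrichment(consumed)
    print(f"{consumed:4.0%} of the water used for gas: "
          f"remaining water D/H enriched by x{factor:.2f}")
```

The size of the enrichment depends entirely on the assumed alpha and on how much water was consumed; the sketch only shows that more gas per unit of water means heavier remaining water.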

Liquid Fuels from Algae

In the previous post, I discussed biofuels in general. Now I shall get more specific, with one particular source that I have worked on. That is attempting to make liquid fuels from macro and microalgae. I was recently sent the following link:

https://www.fool.com/investing/2017/06/25/exxonmobil-to-climate-change-activists-chew-on-thi.aspx

In this it was reported that ExxonMobil, partnering with Synthetic Genomics Inc., has a $600 million collaboration to develop biofuels from microalgae. I think this was sent to make me green with envy, because I was steering the research efforts of a company in New Zealand trying to do the same, except that it had only about $4 million. I rather fancy we had found the way to go with this, albeit with a lot more work to do, but the company foundered when it had to refinance. It could have done this in June 2008, but it put it off until 2009. It was in September 2008 that Lehman Brothers did its nosedive, and the financial genii of Wall Street managed to find the optimal way to dislocate the world's economies without themselves going to jail or, for that matter, becoming poor; it was the lesser souls that paid the price.

The background: microalgae are unique among plants in that they devote most of their photochemical energy to making either protein or lipids, lipids being, in more common language, oily fats; together these make up about 75–80% of their mass. If for some reason, such as a shortage of nitrogen, they cannot keep making protein, they swell up and just make lipids, and when nitrogen-starved they can reach about 70% lipids before they die of starvation. When nitrogen is plentiful, they try to reproduce as fast as they can, and that is rapid: algae are the fastest-growing plants on the planet. One problem with microalgae: they are very small, and hence difficult to harvest.

So what is ExxonMobil doing? According to the article, they have trawled the world looking for samples of microalgae that give high yields of oil. They have used gene-editing techniques to produce a strain that doubles oil production without affecting growth rate, and they grow these in special tubes. To be relevant, they need a lot of tubes. According to the article, if they tried open tanks they would need an area about the size of Colorado to supply America's oil demand, and a correspondingly large amount of water. So, what is wrong here? In my opinion, just about everything.

First, you want to increase the oil yield? Take the microalgae from the rapidly growing stage and grow them under nitrogen-starved conditions; no need for special genetics. Second, if you are going to grow your microalgae in open tanks (to let in the necessary carbon dioxide and reduce containment costs), you also let in airborne algae, and eventually they will take over, because evolution has made them more competitive than your engineered strain. Third, there is no need to produce all of America's liquid fuels at once; electricity will take up some of the demand, and in any case there is no single fix. We need what we can get. Fourth, if you want area, where is the greatest area with sufficient water? Anyone vote for the ocean? It is also possible that microalgae are not the only option, because if you use the sea you could try macroalgae, some of which, such as Macrocystis pyrifera, grow almost as fast, although they do not make significant levels of lipids.

We do not know how ExxonMobil intended to process their algae. What many people advocate is to extract the lipids and convert them to biodiesel by reacting them with something like sodium methoxide. To avoid horrible emulsions while extracting, the microalgae need to be dried, and that uses energy. My approach was to use simple high-pressure processing in water, hence no need to dry the algae, from which both a high-octane petrol fraction and a high-cetane diesel fraction could be obtained. Conversion efficiencies are good, but there are many other byproducts, and some of the residue is very tarry.
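For reference, the biodiesel route mentioned there is base-catalysed transesterification. The sketch below is the generic textbook reaction, not necessarily the exact process anyone has in mind: the triglyceride lipids react with methanol, with sodium methoxide as catalyst, to give fatty-acid methyl esters (the biodiesel) plus glycerol.

```latex
% Generic base-catalysed transesterification (R = fatty-acid chains)
\mathrm{C_3H_5(OOCR)_3 \;+\; 3\,CH_3OH
\;\xrightarrow{\;NaOCH_3\;}\; 3\,RCOOCH_3 \;+\; C_3H_5(OH)_3}
% triglyceride + methanol -> methyl esters (biodiesel) + glycerol
```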

After asking where the best supply of microalgae could be found, we came up with sewage treatment ponds. There is no capital requirement for building the ponds, and the microalgae are already there. In the nutrient-rich water they grow like mad, and they soak up, like sponges, the nutrients that would otherwise be considered pollutants. The lipid level obtained by simple extraction is depressingly low, but the levels bound elsewhere in the algae are higher. There is then the question of costs. The big cost is in harvesting the microalgae, which is why macroalgae would be a better bet in the oceans.

The value of the high-pressure processing (an accelerated treatment that mimics how nature made our crude oil in the first place) is now apparent: while the bulk of the material is not necessarily a fuel, the value of the "byproducts" of your fuel process vastly exceeds the value of the fuel. It is far easier to make money while still working on the smaller scale. (The chemical industry is very scale-dependent: if you construct a similar processing plant that doubles production, the unit cost of product from the larger plant is about 60% of that from the smaller plant.)

So the approach I favour involves taking mainly algal biomass, including some microalgae from the ocean (and containing that might be a problem), and aiming initially to make most of your money from the chemical outputs. One of the products I like a lot is a suite of compounds with low antibacterial activity, which should be good for feeding to chickens and the like, which in turn would remove a breeding ground for antibiotic-resistant superbugs. There are plenty of opportunities, but unfortunately a lot of effort and money is required to make it work.

For more information on biofuels, my ebook, Biofuels: An Overview, is available at Smashwords through July for $0.99, with coupon code NY22C.

The future did not seem to work!

When I started my career in chemistry as an undergraduate, chemists were an optimistic bunch, and everyone supported this view. Eventually, so it was felt, chemists would provide a substance to do just about anything people wanted, provided the laws of physics and chemistry permitted it. Thus something that was repelled by gravity was out, but a surprising lot was in. There was a shortage of chemists, and good paying jobs were guaranteed.

By the time I had finished my PhD, governments and businesses everywhere had decided they had enough chemists for the time being, thank you. The exciting future could be put on hold; for the time being, let us all stick to what we have. Of course there were still jobs; they were just an awful lot harder to find. The golden days for jobs were over and, as it happened, that was not the only thing that was over. In some people's eyes, chemicals were about to become national villains.

There was an element of unthinking optimism from some. I recall one of my undergraduate lectures in which the structure of penicillin was discussed. Penicillin is a member of a class of chemicals called beta-lactams, and the question of bacterial tolerance came up. The structure of penicillin is shown at https://en.wikipedia.org/wiki/Penicillin, where R defines the carboxylic acid that forms the amide. The answer to bacterial tolerance was simple: there is an almost infinite number of possible carboxylic acids (the variation comes from changing R), so chemists could always stay a step ahead of the bugs. You might notice a flaw in that argument. Suppose the enzymes of the bug attacked the lactam end of the molecule and ignored the carboxylic acid amide? Yes, once bacteria learned to do that, the effectiveness of all such penicillins disappeared. Fortunately for us, this seems to be a more difficult achievement, and penicillins still have their uses.

The next question is, why did this happen? The answer is simple: stupidity. People stopped restricting their use to countering important infections. They became available "over the counter" in some places, and they were used intermittently by some, or as prophylactics by others. Not taking the full course meant that some bacteria were not eliminated, and since the survivors were the most resistant ones, thanks to evolution they conveyed some of that resistance when they re-entered the environment. This was made worse by agricultural use, where low levels were used to promote growth. If that was not a recipe for breeding resistance, what would be?

The next “disaster” to happen was the recognition of ozone depletion, caused by the presence of chlorofluorocarbons, which on photolysis in the upper atmosphere created free radicals that destroyed ozone. The chlorofluorocarbons arose from spray cans, essential for hair spray and graffiti. This problem appears to have been successfully solved, not by banning spray cans, not by requesting restraint from users, but rather by replacing the chlorofluorocarbons with hydrocarbon propellant.

One problem we have not addressed, despite the fact that everyone knows it is there, is rubbish in the environment. What inspired this post was the announcement that rubbish has been found at the bottom of the Mariana Trench. Hardly surprising; heavy things sink. But some also floats. The amount of waste plastic in the oceans is simply horrendous, and far too much of it is killing fish and sea mammals. What starts off as a useful idea can end up generating a nightmare if people do not treat it properly. One example of what might happen comes from a news report this week: a new type of plastic bottle has been developed that is extremely slippery, and hence you can more easily get out the last bit of ketchup. Now, how will this be recycled? I once developed a reasonably sophisticated process for recycling plastics, and the major nightmare is multi-layered plastics with hopelessly incompatible properties. This material has at least three different components, and at least one of them appears to be just about totally incompatible with everything else, which is where the special slipperiness comes from. So, what will happen to all these bottles?

The last problem to be listed here is climate change. The problem is that some of the more important people, such as some politicians, do not believe in it sufficiently to do anything about it. The last thing a politician wants to do is irritate those who fund his election campaign. Accordingly, that problem may be insoluble in practice.

The common problem here is that things tend to get used without thought for the consequences. Where things have gone wrong is with people. The potential failure of antibiotics is simply due to greed from the agricultural sector; there was no need to use them as growth promoters when the downside is the return of bacterial dominance. The filling of the oceans with plastic bags is just sloth. Yes, the bag is useful, but the bag does not have to end up in the sea. Climate change is a bit more difficult, but again people are the problem, this time in voting for politicians who announce they don't believe in it. If everybody agreed not to vote for anyone who refused to take action, I bet there would be action. But people don't want to do that, because action will involve increased taxes and a requirement to be better citizens.

Which raises the question: do we need more science? In the most recent edition of Nature there was an interesting comment: people pay taxes for one of two reasons, namely they feel extremely generous and want to do good in the world, or they pay them because they will go to jail if they don't. This was followed by a comment aimed at scientists: do you feel your work is so important that someone should be thrown into jail for not funding it? That puts things into perspective, doesn't it? What about adding: or for questioning who the discovery will benefit?

Why is the Moon so dry?

The Planetary Society puts out a magazine called The Planetary Report, and in the September issue they pose a question: why is there more water ice on Mercury than on the Moon? This is an interesting question because it goes to the heart of data analysis and how to form theories. First we check our data. While there is a degree of uncertainty, neutron scattering measured from orbiting satellites indicates Mercury has about 350 times as much water as the Moon, not counting whatever is in the deep interior. Further, some of that on the Moon will not be water in the sense we think of water: the neutron scattering picks out hydrogen, and that can also come from materials such as hydroxyapatite, which is present in some lunar rocks. The ice sits in cold traps, parts of the body where the temperature never rises above -175 °C. For the Moon, there are (according to the article) about 26,000 km2 cold enough, while Mercury has only 7,000 km2. So, why is the Moon so dry?

Before going any further, it might help some readers to understand what isotopes are and what effects they have. The nature of an element is defined by the number of protons, which in a neutral atom also equals the number of electrons. For a given number of protons, the number of neutrons can vary. Thus all chlorine atoms (found in common salt) have 17 protons, and either 18 or 20 neutrons; the two different types of atom are called isotopes. Of particular interest, hydrogen has two such isotopes: ordinary hydrogen and deuterium, with 0 and 1 neutron respectively. This has two effects. The first is that the heavier isotope usually boils or sublimes at a slightly higher temperature and, in a gravitational field, is less likely to reach the top of an atmosphere and escape, hence ice that spends a lot of time in space tends to end up richer in deuterium. The second effect is that the chemical bond is effectively stronger for the heavier isotope.

So where do the volatiles (water and gases) on the rocky planets come from? I raised some of the issues about where they did not come from earlier: https://wordpress.com/post/ianmillerblog.wordpress.com/564 To summarize, the first option is the accretion disk. That is where Jupiter's came from: if the body is big enough before the gases are removed, they remain as an atmosphere. We can reject that for the rocky planets. They will have had such atmospheres, but soon after the star forms it starts spitting out extreme UV/X-ray radiation and intense solar winds, and these strip the volatiles. The evidence that this removed most of the early atmosphere from Earth is that some very heavy inert gases, such as krypton and xenon, show enhancements of their heavier isotopes, suggesting they have been fractionally distilled. These gases are extremely rare, but they also cannot be accreted by any mechanism other than gravity and adsorption, and unless such stripping occurred there would be huge amounts of neon here, because the physical mechanisms of accretion apply equally to all the so-called rare gases, and neon is very abundant in the accretion disk. However the krypton was accreted, very large amounts of neon would have been accreted with it. Neon is rare, so most gases that arrived with the krypton must have been similarly removed, as would any water. So after a few hundred million years, the rocky planets were essentially rocks, with only very thin atmospheres. That, of course, assumes our star behaved the same way as other similar stars, and that the effects of the high-energy radiation have been correctly assessed.

So, where did our atmosphere and oceans come from? There are only two possibilities: from above and from below. Above means comets and asteroid-type bodies colliding with Earth; below means the elements were accreted with the solids and emitted through volcanoes. Which one? This is where the Mercury/Moon data becomes significant. However, as is often the case, there is a catch. Mathematical modelling suggests that the Moon might have changed its obliquity, and once upon a time was almost lying on its side. If so, and if this lasted long enough, there would have been no permanently cold points, and the Moon would have lost its ice. It has not lost all of it, so the questions are: did this tilting actually happen, did that period last long enough, and did the water arrive on the surface after the tilting?

Back to Earth's atmosphere. The idea that Earth's volatiles were delivered by comet strikes has been just about falsified. The problem lies in the fact that comets have enriched deuterium, and there is too much of it for Earth's water to have come from there, other than in minor amounts. A similar argument holds for chondrites, although there it is not the water that is the problem but the solid rock: the isotope ratios of chondrite rocks do not match Terran rocks. The same applies to the Moon, because the isotope ratios at the surface of the Moon are essentially the same as on Earth, and the Moon has not had resurfacing. That essentially requires the water on Earth to have come from below, volcanically. I gave an account of that at https://wordpress.com/post/ianmillerblog.wordpress.com/576

And now we see why the extreme dryness of the Moon is so important: it shows that very little water did land on the Moon from comets and chondrites. Yes, that was shown from isotopes, but it is very desirable that all information is consistent. When you have a set of different sorts of information that lead to the same place, we can have more confidence in that place being right.

The reason the Moon is so dry is now simple, if we accept the usual explanation of how it was formed. The Moon formed from silicates blasted out of the Earth when Theia collided with it. Exactly where Theia came from is another issue, but the net result was that a huge amount of silicates was thrown into orbit, and these stuck together to form the Moon. We now come to a problem that annoys me. I saw a review which said these silicates were at a temperature approaching 1100 °C. At that temperature, zinc oxide will start to boil off in a vacuum, and the lunar rocks are indeed depleted in zinc. According to the review, published in October, it could not have been hotter, because the Moon is not significantly depleted in potassium. However, in the latest edition of Nature (at the time of writing) an article argued there is a depletion of potassium. Who is right, and how does whoever it is know? Potassium is a particularly bad element to choose, because it separates out and gets concentrated in certain rocks, and we do not have that many samples. For water, however, it is clear: the silicates were very hot, and the water was largely boiled off.

So, we have a conclusion. I suppose we cannot know for sure that it is absolutely right because we cannot know there is not some other theory that might explain these facts, but at least this explanation is consistent with what we know.

To conclude, some personal stuff. There will be no post from me next week; I am having hip replacement surgery. Hopefully, I shall be back again in a fortnight. Second, for those interested in my economic thoughts, my newest novel, 'Bot War, will be available from December 2, and it will be available for preorder soon.

A Tale of Two Supervisors

Something scary happened to me the other day. It occurred to me that about now was the fiftieth anniversary of my submitting my PhD thesis, as well as that of my friend, who I shall call John, in part because that is his name. The two of us had started our PhDs at about the same time, and we finished about the same time. But the effects were not the same. PhD projects tend to involve somewhat unexciting work, largely because when you set out you have to nominate the title, and the final thesis work has to reflect that title. Accordingly, supervisors tend to come up with projects that cannot fail to end up somewhere, but somewhere is usually very unexciting. John elected to work on chemical transformations of steroids, which, at the time, were considered very important. A steroid is a hydrocarbon molecule with four rings fused in a specific configuration (See Wikipedia for a diagram) and steroids were becoming important because they involve the sex hormones, and the contraceptive pill was just making its appearance. The different steroids differ in substitution, so a lot of chemistry could be done to see what sort of new steroids could be made. (Much later, this seems to have continued with effort into anabolic steroids for athletes to cheat, but at the time this was more or less unheard of.)

So John worked in a "hot" topic, and he got lucky. Something quite unusual happened: the so-called "backbone rearrangement" was discovered. This led to a good number of papers, and John's supervisor worked it to the hilt, with good use of the dreaded "letter", a short paper that effectively lets the same work be reported twice, first in sketches and later in detail. John was well on the way to a sound scientific career, with lots of publications in a "hot" area.

At the time of thesis submission I was highly excited because, while I was not going to get many publications, in many ways against the odds I thought I was making a real contribution to science. My project had had a rocky start. I had selected my supervisor-to-be, he had given me a project, and I started a review of what had previously been done. Three and a half weeks later, and a few days before Christmas, I gave him the answer to the project: it had just turned up in the latest edition of the Journal of the American Chemical Society. It was a good project, but we had been scooped. (Actually, this was good. It would have been awful if it had turned up, say, two years into the project, particularly as it then turned out my supervisor had no fall-back position.) So, after a couple of days, he gave me the choice of two projects, and we all went off for Christmas, which falls in summer in New Zealand.

I came back reasonably promptly, and the literature review of these projects was anything but encouraging. One involved a nightmare synthesis that required heating stuff with about 2/3 of a kilogram of a substance that was highly explosive, the explosion being triggerable by scratches on glass. Then, very small amounts of the desired material could be separated, in principle, from a little under 4 kg of tar. And that only gave the starting material for some rather long and horrible syntheses to follow, which might or might not have worked. The second project was simpler: to measure the rates of chemical reaction of a class of compounds that were at least tolerably easy to make. That looked good until I found a reference saying the rate was zero; the reaction simply did not go. My supervisor was still on holiday, so when the Head of Department saw me in a somewhat depressed state, he suggested I find a project. That was probably to get rid of me, but I took up the challenge. There was a debate underway in the chemical literature, and I elected to join in. So that started my PhD. The project was to decide whether the electrons in a cyclopropane ring could delocalize, and I intended to be the first (I wasn't, as it turned out) to use the Hammett equation to actually solve a problem.
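For readers unfamiliar with it, the Hammett equation in its standard textbook form is shown below (this is the general relation, not anything special to my compounds). The substituent constant sigma measures the electron-withdrawing or electron-donating power of a substituent, and the reaction constant rho measures how sensitive the equilibrium or rate is to that; broadly, the size of rho reflects how well electronic effects are transmitted between the substituent and the reaction centre, which is why it bears on whether a cyclopropane ring delocalizes electrons.

```latex
% The Hammett equation in its standard form
\log\frac{K}{K_0} = \rho\,\sigma
\qquad\text{(or } \log\tfrac{k}{k_0} = \rho\,\sigma \text{ for rate constants)}
% K_0: equilibrium constant for the unsubstituted parent compound
% sigma: substituent constant; rho: sensitivity of the reaction to substituents
```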

I soon found out the compounds were a lot harder to make than I expected, and I also found out my supervisor was not going to be much help with advice. I had another problem. One of the syntheses was outlined in one of those dreaded "letters", and the method did not work on my compounds. Since the original publication was on quite different types of molecules, I thought that was just life. Several years later, the full paper came out and the reason was obvious: a key condition had been omitted from the letter. I did get another quick method to work, but to make use of it I needed to order a number of compounds from the US. Eventually they turned up – three weeks after my thesis was submitted, and apparently after having gone via Hong Kong and Singapore, with time spent in warehouses in each place! So I was back to my rather tortuous fall-back synthesis route.

Shortly after the end of year one, my supervisor took a year's sabbatical in the US, so I had to do all my thinking myself. Meanwhile, the debate was settling down, and most had come to the conclusion that there was delocalization; more particularly, some big names were coming down on that side. Somewhere in this period two publications came out on using the Hammett equation on the carboxylic acids I was making (scooped!). One showed there was no delocalization; the other claimed to show there was. In my view, neither was likely to be correct. In the first there was a huge scatter in the data fit, and I knew the acids were barely soluble in the solvent used. In my opinion, the second contained a logic error (see the explanatory post if interested). Since I intended to work on amines, which were more sensitive to this equation, I was still in business.

That lasted until I tried to measure my equilibrium constants. Everything went like a charm until I had to use the trans-2-(p-nitrophenyl)cyclopropylamine. I introduced this as the hydrochloride (because hydrochlorides were easier to purify), but when I added the buffer to generate the equilibrium, the solution gradually darkened. I got a quick result before the darkening became significant, and technically it was consistent with no delocalization, but I did not trust it. Unfortunately, this was the key compound. I was out of business.

It was now that my supervisor made a brief appearance before heading back to North America to try to get a better-paid position. He made the suggestion of reacting the acids with a diazo compound. His stated reason for the suggestion: I needed to pad out the thesis. Nevertheless, this turned out to be a great suggestion because the required low dielectric solvent had the effect of amplifying changes between compounds. And there it was: I had an unambiguous result lying well outside any experimental uncertainty. The answer was no, the cyclopropane ring did not delocalize. Pity that by this time everyone else was starting to believe it did because the big names said it did. I saw my supervisor once more, briefly, then he was off for good. No advice on writing the thesis from him.

Needless to say, I was excited because I had settled a dispute (or so I thought), and more to the point, I had worked out what was really going on. The write-up in the thesis was a bit obscure, first because I wanted the work to stand on the results, and secondly because I had decided that if my supervisor wanted to get any benefit from my theoretical work, he would have to make some progress himself, or at least talk to me. One of the jobs of a supervisor is to write up scientific papers based on the thesis results; after all, the supervisor puts his name on those papers, so he should do something. (A lot of supervisors, like John's, do a lot more; this one did not.) After some considerable delay he wrote up one paper, relating to the work on the amines. This was not really conclusive, but I had made some compounds for the first time. What really annoyed me was that he never wrote up the one part to which he had contributed, and which had unambiguously demonstrated the absence of such delocalization. Why not? As far as I could tell, either he did not understand it, or he was too afraid to stick his head above the parapet and challenge the big names.

Needless to say, there is more to this story, but that can wait for another day. Meanwhile, in the post below, I shall try to explain the chemistry for those who are interested.