Ross 128b: a Habitable Planet?

Recently the news has been full of excitement that there may be a habitable planet around the red dwarf Ross 128. What we know about the star is that it has a mass of about 0.168 times that of the sun, a surface temperature of about 3,200 K, and an age of about 9.4 billion years (about twice that of the sun); consequently it is very short of heavy elements, because there had not been enough supernovae that long ago. The planet is about 1.38 times the mass of Earth, and it is about 0.05 times as far from its star as Earth is from the sun. It also orbits its star every 9.9 days, so Christmas and birthdays would be a continual problem. Because it is so close to the star it gets almost 40% more irradiation than Earth does, so it is classified as being in the inner part of the so-called habitable zone. However, the “light” is mainly at the red end of the spectrum and in the infrared. Even more bizarrely, in May this year the radio telescope at Arecibo appeared to pick up a radio signal from the star. Aliens? Er, not so fast. Everybody now seems to believe that the signal came from a geostationary satellite. Apparently here is yet another source of electromagnetic pollution. So could the planet have life?
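As a quick consistency check on those figures (my own back-of-the-envelope arithmetic, not anything from the discovery report), Kepler's third law in solar units, P[years]² = a[AU]³ / M[solar masses], ties the quoted stellar mass and orbital distance directly to the quoted period:

```python
import math

# Kepler's third law in solar units: P^2 [years] = a^3 [AU] / M [solar masses].
# The figures are the ones quoted above; the ~9.9 day period drops straight out.
M_star = 0.168          # stellar mass, solar masses
a = 0.05                # orbital distance, AU

P_years = math.sqrt(a**3 / M_star)
print(f"orbital period ≈ {P_years * 365.25:.1f} days")   # ≈ 10 days
```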

The first question is, what sort of a planet is it? A lot of commentators have said that since it is about the size of Earth it will be a rocky planet. I don’t think so. In my ebook “Planetary Formation and Biogenesis” I argued that the composition of a planet depends on the temperature at which the object formed, because various materials only stick together within narrow temperature ranges, and there are many such zones, each giving planets of different composition. I gave a formula that very roughly estimates the distance from the star at which a given type of body starts forming, and if that is applied here, the planet would be a Saturn core. However, the formula was very approximate and made a number of assumptions, such as that the gas all started at a uniform low temperature and that the way its temperature changed as it migrated inwards was the same for every star. That is known to be wrong, but equally we don’t know what causes the known variations, and once the star has formed there is no way of knowing what actually happened, so that had to be ignored. What I did was to take the average of observed temperature distributions.

Another problem was that I modelled the centre of the accretion as a point. The size of the star is probably not that important for a G-type star like the sun, but it will be very important for a red dwarf, where everything happens so close to it. The forming star gives off radiation well before the thermonuclear reactions start, through the heat of matter falling into it, and that radiation may move the snow point out. I discounted that, largely because at the key time there would be a lot of dust between the planet and the star that would screen out most of the central heat, hence any effect from the star would be small. That is more questionable for a red dwarf. On the other hand, in the recently discovered TRAPPIST-1 system we have an estimate of the masses of the bodies and a measurement of their sizes, and they must either have a substantial water/ice content or be very porous. So the planet could be a Jupiter core.

However, I think it is most unlikely to be a rocky planet because, even apart from my mechanism, rocky planets need silicates and iron (and other heavier elements) to form, and Ross 128 is a very metal-deficient star that formed from a small gas cloud. It is hard to see how there would be enough material to form such a large planet from rocks. However, carbon, oxygen and nitrogen are the easiest heavier elements to form, and are by far the most common elements other than hydrogen and helium. So in my theory, the most likely nature of Ross 128b is a very much larger and warmer version of Titan. It would be a water world, because the ice would have melted. However, the planet is probably tidally locked, which means one side would be a large ocean and the other an ice world. What should then happen is that the water evaporates, forms clouds, goes around to the other side and snows out. That should lead to the planet eventually becoming metastable, and there might be climate crises there as the planet flips around.

So, could there be life? If it were a planet with a Saturn-core composition, it should have many of the necessary chemicals from which life could start, although because of the water/ice, life would be limited to aquatic life. Also, because of the age of the planet, any life may well have come and gone. However, leaving that aside, the question is, could life form there? There is one restriction (Ranjan, Wordsworth and Sasselov, 2017, arXiv:1705.02350v2): if life requires photochemistry to get started, then the intensity of the high-energy photons required to drive many photochemical processes is, on a planet like this, two to four orders of magnitude less than what early Earth received. At that point, it depends on how fast everything that follows happens, and how fast the competing reactions that degrade the products happen. The authors of that paper suggest that the UV intensity is just too low to get life started. Since we do not know exactly how life started yet, that assessment might be premature; nevertheless it is a cautionary point.


A personal scientific low point.

When I started my PhD research I was fairly enthusiastic about the future, but I soon got disillusioned. Before my supervisor went on summer holidays, he gave me a choice of two projects. Neither was any good, and when the Head of Department saw me, he suggested (probably to keep me quiet) that I find my own project. Accordingly, I elected to enter a major controversy, namely: were the wave functions of a cyclopropane ring localized (i.e. each chemical bond could be described by wave interference between a given pair of atoms, with no further wave interference) or were they delocalized (i.e. the wave function representing a pair of electrons spread over more than one pair of atoms), and in particular, did they delocalize into substituents? Now, without getting too technical, I knew my supervisor had done quite a bit of work on something called the Hammett equation, which measures the effect of substituents on reactive sites, and in which certain substituents take different values when such delocalization is involved. If I could make the right sort of compounds, this equation would actually solve the problem.
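For readers who have not met it, the Hammett equation has the form below (a standard textbook relation, stated here for orientation rather than taken from the post):

```latex
\log\frac{k_X}{k_H} = \rho\,\sigma_X
```

Here k_X and k_H are rate (or equilibrium) constants for the substituted and unsubstituted compounds, σ_X is a constant characteristic of the substituent X, and ρ measures how sensitive the reaction is to substitution. The relevant point is that certain substituents require a different constant (usually written σ⁺) when delocalization into the reaction site is possible, so which set of constants gives the better straight line can, in principle, settle the question.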

This was not to be a fortunate project. First, my reserve synthetic method took 13 steps to get to the desired product, and while no organic synthesis gives a yield much better than 95%, one of these steps struggled to get over 35% and another was not as good as desirable, which meant that I had to start with a lot of material. I did explore some shorter routes. One involved a reaction published in a Letter by someone who would go on to win a Nobel Prize; the key requirement for getting the reaction to work was omitted from the Letter. I got a second reaction to work, but I had to order special chemicals. They turned up after I had submitted my thesis, having travelled via Hong Kong, where they were put aside and forgotten. It became clear that my supervisor was not going to provide any useful advice on chemical synthesis; he went on sabbatical, and I was on my own. After a lot of travail I did what I had set out to do, but an unexpected problem arose. The standard compounds worked well and I got the required straight line with minimum deviation, but for the key compound at one extreme of the line, the substituent at one end reacted quickly with the other end in the amine form. No clear result.

My supervisor made a cameo appearance before heading back to North America, where he was looking for a better-paying job, and he made a suggestion, which involved reacting, in toluene, carboxylic acids that I already had. These reactions had already been reported in water and aqueous alcohol, but the slope of the line was too shallow to be conclusive. What the toluene did was greatly amplify the effect. The results were clear: there was no delocalization.

The next problem was that the controversy was settling down, and the general consensus was that there was such delocalization. This was based on one main observational fact, namely that adjacent positive charge was stabilized, and there were many papers stating that it must be delocalized on theoretical grounds. The theory used was exactly the same type of program that “proved” the existence of polywater. Now, the interesting thing was that soon everybody admitted there was no polywater, but the theory was “obviously” right in this case. Of course I still had to explain the stabilization of positive charge, and I found a way, namely that the strain involved mechanical polarization.

So, where did this get me? Largely, nowhere. My supervisor did not want to stick his head above the parapet, so he never published the work on the acids that was my key finding. I published a sequence of papers based on the polarization hypothesis, but in my first one I made an error: I left out what I thought was too obvious to waste the time of the scientific community, and in any case, I badly needed the space to keep within page limits. Being brief is NOT always a virtue.

The big gain was that while both explanations explained why positive charge was stabilized, (and my theory got the energy of stabilization of the gas phase carbenium ion right, at least as measured by another PhD student in America) the two theories differed on adjacent negative charge. The theory involving quantum delocalization required it to be stabilized too, while mine required it to be destabilized. As it happens, negative charge adjacent to a cyclopropane ring is so unstable it is almost impossible to make it, but that may not be convincing. However, there is one UV transition where the excited state has more negative charge adjacent to the cyclopropane ring, and my calculations gave the exact spectral shift, to within 1 nm. The delocalization theory cannot even get the direction of the shift right. That was published.

So, what did I learn from this? First, my supervisor did not have the nerve to go against the flow. (Neither, seemingly, did the supervisor of the student who measured the energy of the carbenium ion, and all I could do was to rely on the published thesis.) My spectral shifts were dismissed by one reviewer as “not important” and they were subsequently ignored. Something that falsifies the standard theory is unimportant? I later met a chemist who rose to the top of the academic tree, and he had started with a paper that falsified the standard theory, but when it too was ignored, he moved on. I asked him about this, and he seemed a little embarrassed as he said it was far better to ignore that and get a reputation doing something more in accord with a standard paradigm.

Much later (I had a living to earn) I had the time to write a review. I found over 60 different types of experiment that falsified the standard theory that was now in the textbooks. That could not get published. There are few review journals that deal with chemistry, and one rejected the proposal on the grounds that the matter was settled. (No interest in finding out why that might be wrong.) For another, it exceeded their page limit. For another, there were not enough diagrams and too many equations. For others, they did not publish logical analyses. So that is what I have discovered about modern science: in practice it may not live up to its ideals.

Scientific low points: (2)

The second major low point from recent times is polywater. The history of polywater is brief and not particularly distinguished. Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, and found that tiny traces of such water could be obtained that had an elevated boiling point, a depressed freezing point, and a viscosity approaching that of a syrup. Boris Deryagin improved the production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1–1.2 g/cm³. Deryagin decided there were only two possible reasons for this anomalous behaviour: (a) the water had dissolved quartz, or (b) the water had polymerized. Everybody “knew” water did not dissolve quartz, therefore it must have polymerized. In the vibrational spectrum of polywater, two new bands were observed at 1600 and 1400 cm⁻¹. From force-constant considerations this was explained in terms of each O–H bond having approximately 2/3 bond order. The spectrum was consistent with the water occurring in hexagonal planar units, and if so, the stabilization per water molecule was calculated to be of the order of 250–420 kJ/mol. For the benefit of the non-chemist, this is a massive change in energy, and it meant the water molecules were joined together with a strength comparable to the carbon–carbon bonds in diamond. The fact that it had a reported boiling point of ≈150 °C should have warned them that this had to be wrong, but when a bandwagon starts rolling, everyone wants to jump aboard without stopping to think. An NMR spectrum of polywater gave a broad, low-intensity signal approximately 300 Hz from the main proton signal, which meant that either a new species had formed or there was a significant impurity present. (This would have been a good time to check for impurities.) The first calculation employing “reliable” methodology involved ab initio SCF LCAO methods, and water polymers were found to be increasingly stabilized with polymer size: the cyclic tetramer was stabilized by 177 kJ/mol, the cyclic pentamer by 244 kJ/mol, and the hexamer by 301.5 kJ/mol. One of the authors of this paper was John Pople, who went on to get a Nobel Prize, although not for this little effort.

All of this drew incredible attention. It was even predicted that an escape of polywater into the environment could catalytically convert the Earth’s oceans into polywater, thus extinguishing life, and that this had happened on Venus. We had to be careful! Much funding was devoted to polywater, even from the US navy, who apparently saw significant defence applications. (One can only imagine the trapping of enemy submarines in a polymeric syrup, prior to extinguishing all life on Earth!)

It took a while for this to fall over. Pity the poor PhD candidate who had to prepare polywater, and all he could prepare was solutions of silica. His supervisor told him to try harder. Then, suddenly, polywater died. Someone noticed that the infrared spectrum quoted above bore a striking resemblance to that of sweat. Oops.

However, if the experimentalists did not shine, theory was extraordinarily dim. First, the same methods in different hands produced a very wide range of results with no explanation of why the results differed, although of course none of them concluded there was no polywater. If there were no differences in the implied physics between methods that gave such differing results, then the calculation method was not physical. If there were differences in the physics, then these should have been clearly explained. One problem was that, as with only too many calculations in chemical theory, the inherent physical relationships were never defined in the papers. It was almost amusing to see, once it was clear there was no polywater, a paper published in which ab initio LCAO SCF calculations with Slater-type orbitals provided evidence against the previous calculations supporting polywater. The planar symmetrical structure was found not to be stable, and a tetrahedral structure made from four water molecules was found to be unstable because of excessive deformation of the bond angles. What does that mean, apart from face-saving for the methodology? If you cannot have ring structures when the bond angles are tetrahedral, then sugar is an impossible molecule. While there are health issues with sugar, the impossibility of its existence is not in debate.

One problem with the theory was that molecular orbital theory was used to verify large delocalization of electron motion over the polymers. The problem is, MO theory assumes that in the first place. Verifying what you assume is one of the big naughties pointed out by Aristotle, and you would think that after 2,400 years something might have stuck. Part of the problem was that nobody could question any of these computations, because nobody had any idea of what the assumed inputs and code were. We might also note that the more extreme of these claims tended to end up in what many would regard as the most reputable of journals.

There were two major fall-outs from this. Anything that could be vaguely related to polywater was avoided. This has almost certainly done much to retard examination of close ordering on surfaces, or on very thin sections, which, of course, are of extreme importance to biochemistry. There is no doubt whatsoever that reproducible effects were produced in small capillaries. Water at normal temperatures and pressures does not dissolve quartz (try boiling a lump of quartz in water for however long) so why did it do so in small capillaries? The second was that suddenly journals became far more conservative. The referees now felt it was their God-given duty to ensure that another polywater did not see the light of day. This is not to say that the referee does not have a role, but it should not be to decide arbitrarily what is true and what is false, particularly on no better grounds than, “I don’t think this is right”. A new theory may not be true, but it may still add something.

Perhaps the most unfortunate fallout was to the career of Deryagin. Here was a scientist who was more capable than many of his detractors, but who made an unfortunate mistake. The price he paid in the eyes of his detractors seems out of all proportion to the failing. His detractors may well point out that they never made such a mistake. That might be true, but what did they make? Meanwhile, Pople, whose mistake was far worse, went on to win a Nobel Prize for developing molecular orbital theory and developing a cult following about it. Then there is the question, why avoid studying water in monolayers or bilayers? If it can dissolve quartz, it has some very weird properties, and understanding these monolayers and bilayers is surely critical if we want to understand enzymes and many biochemical and medical problems. In my opinion, the real failures here come from the crowd, who merely want to be comfortable. Understanding takes effort, and effort is often uncomfortable.

Scientific low points: (1)

A question that should be asked more often is, do scientists make mistakes? Of course they do. The good news, however, is that when it comes to measuring something, they tend to be meticulous, and published measurements are usually correct, or, if they matter, they are soon found out if they are wrong. There are a number of papers, of course, where the findings are complicated and not very important, and these could well go for a long time, be wrong, and nobody would know. The point is also, nobody would care.

On the other hand, are the interpretations of experimental work correct? History is littered with examples of where the interpretations that were popular at the time are now considered a little laughable. Once upon a time, and it really was a long time ago, I did a post doctoral fellowship at The University, Southampton, and towards the end of the year I was informed that I was required to write a light-hearted or amusing article for a journal that would come out next year. (I may have had one put over me in this respect because I did not see the other post docs doing much.) Anyway, I elected to comply, and wrote an article called Famous Fatuous Failures.

As it happened, this article hardly became famous, but it was something of a fatuous failure itself. The problem was that I finished writing it a little before I left the country, and an editor got hold of it. In those days you wrote with pen on paper unless you owned a typewriter, but when you are travelling from country to country you tend to travel light, and a typewriter is not light. Anyway, the editor decided my spelling of two French scientists’ names (Berthollet and Berthelot) was terrible and that they were “obviously” one scientist. The net result was a section describing a bitter argument in which one of them appeared to be arguing with himself. But leaving that aside, I had found that science was continually “correcting” itself, though not always correctly.

An example that many will have heard of is phlogiston. This was a weightless substance that metals and carbon gave off to the air, and in one version such phlogisticated air was attracted to, and stuck to, metals to form a calx. This theory got rubbished by Lavoisier, who showed that the so-called calxes were combinations of the metal with oxygen, which was part of the air. A great advance? That is debatable. The main contribution of Lavoisier was that he invented the analytical balance, and he decided this was so accurate that there could be nothing that was “weightless”. There was no weight for phlogiston, therefore it did not exist. If you think about it, though, and replace the word “phlogiston” with “electron”, you have an essential description of the chemical ionic bond, and how do you weigh an electron? Of course there were other versions of the phlogiston theory, but getting rid of that version may well have held chemistry back for quite some time.

Have we improved? I should add that many of my cited failures were in not recognizing, or even worse, not accepting truth when shown. There are numerous examples where past scientists almost got there, but then somehow found a reason to get it wrong. Does that happen now? Since 1970, apart from cosmic inflation, as far as I can tell there have been no substantially new theoretical advances, although of course there have been many extensions of previous work. However, that may merely mean that some new truths have been uncovered, but nobody believes them so we know nothing of them. However, there have been two serious bloopers.

The first was “cold fusion”. Martin Fleischmann, a world-leading electrochemist, and Stanley Pons decided that if heavy water was electrolyzed under appropriate conditions you could get nuclear fusion. They did a range of experiments with palladium electrodes, which strongly absorb deuterium, and sometimes they got unexplained but significant temperature rises. Thus they claimed they had achieved nuclear fusion at room temperature. They also claimed to get helium and neutrons. The problem with this experiment was that they themselves admitted that, whatever it was, it only worked occasionally; at other times, the only heat generated corresponded to the electrical power input. Worse, even when it worked, it would work for only so long, and that electrode would never do it again, which is perhaps a sign that there was some sort of impurity in their palladium that gave the heat from some additional chemical reaction.

What happened next was that nobody could repeat their results. The problem then was that being unable to repeat a result that is erratic at best may mean very little, other than, perhaps, that better electrodes did not have the impurity. Also, the heat they got raised the temperature of their solutions from thirty to fifty degrees Centigrade. That would mean that, at best, very few actual nuclei fused. Eventually it was decided that while something might have happened, it was not nuclear fusion, because nobody could get the required neutrons. That in turn is not entirely logical. The problem is that fusion should not occur at all, because there was no obvious way to overcome the Coulomb repulsion between the nuclei, and it required the palladium to do “something magic”. If in fact palladium could do that, it follows that the repulsion energy is not overcome by impact force. And if there were some other way to overcome the repulsive force, there is no reason why the nuclei would not form ⁴He, which is far more stable than ³He, and if so, there would be no neutrons. Of course I do not believe palladium could overcome that electrical repulsion, so there would be no fusion possible.
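To put a rough number on that Coulomb repulsion (my own order-of-magnitude estimate, not a figure from the original debate), the electrostatic barrier two deuterons must overcome to get within nuclear-force range dwarfs the thermal energy available in a warm electrolysis cell:

```python
# Order-of-magnitude comparison: Coulomb barrier for two deuterons versus thermal
# energy in an electrolysis cell at ~50 degrees C. Values are rough, for illustration only.

E_CHARGE = 1.602e-19      # C
K_COULOMB = 8.988e9       # N m^2 / C^2
K_BOLTZMANN = 1.381e-23   # J / K

r_nuclear = 3e-15         # m, assumed separation at which fusion becomes possible
T_cell = 323.0            # K, roughly 50 degrees C

barrier_J = K_COULOMB * E_CHARGE**2 / r_nuclear
thermal_J = K_BOLTZMANN * T_cell

print(f"Coulomb barrier ≈ {barrier_J / E_CHARGE / 1e3:.0f} keV")    # hundreds of keV
print(f"thermal energy  ≈ {thermal_J / E_CHARGE:.3f} eV")            # ~0.03 eV
print(f"ratio           ≈ {barrier_J / thermal_J:.1e}")              # ~10^7
```

A mismatch of roughly seven orders of magnitude is the size of the “something magic” the palladium was being asked to supply.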

Interestingly, the chemists who did this experiment and believed it would work protected themselves with a safety shield of Perspex. The physicists decided it had no show, but they protected themselves with massive lead shielding. They knew what neutrons were. All in all, a rather sad ending to the career of a genuinely skillful electrochemist.

More to follow.

The Fermi Paradox and Are We Alone in the Universe?

The Fermi paradox goes something like this. The Universe is enormous, and there are an astronomical number of planets. Accordingly, the potential for intelligent life somewhere should be enormous, yet we find no evidence of anything. The SETI program has been searching for decades and has found nothing. So where are these aliens?

What is fascinating about this is an argument from Daniel Whitmire, who teaches mathematics at the University of Arkansas and has published a paper in the International Journal of Astrobiology (doi:10.1017/S1473550417000271). In it, he concludes that technological societies rapidly exterminate themselves. So, how does he come to this conclusion? The argument is a fascinating illustration of the power of mathematics, and particularly statistics, to show something, or to mislead.

He first resorts to a statistical concept called the Principle of Mediocrity, which states that, in the absence of any evidence to the contrary, any observation should be regarded as typical. If so, we observe our own presence. If we assume we are typical, and we have been technological for 100 years (he defines being technological as using electricity, but you can change this), then our being average means that after a further 200 years we are no longer technological. We can extend this to about 500 years on the basis that the distribution of ages is skewed (you cannot have a negative age). To become non-technological we have to exterminate ourselves, therefore he concludes that technological societies exterminate themselves rather quickly. We may scoff at that, but then again, watching the antics over North Korea, can we be sure?
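To see where figures of this general size come from, here is a minimal Monte Carlo sketch of that style of reasoning. It is my own toy model, not Whitmire's actual calculation: it simply assumes we are observing our civilization at a uniformly random point within its total technological lifetime.

```python
import random

# Toy model of mediocrity-style reasoning (my own illustration, not Whitmire's model):
# assume we observe the civilization at a uniformly random fraction f of its total
# technological lifetime L, so the implied lifetime is L = observed_age / f.

OBSERVED_AGE = 100.0      # years we have been "technological" so far
TRIALS = 200_000

implied_lifetimes = []
for _ in range(TRIALS):
    f = random.random()            # fraction of the lifetime already elapsed
    if f > 1e-9:                   # avoid division by ~zero
        implied_lifetimes.append(OBSERVED_AGE / f)

implied_lifetimes.sort()
median = implied_lifetimes[len(implied_lifetimes) // 2]
pct95 = implied_lifetimes[int(0.95 * len(implied_lifetimes))]

print(f"median implied lifetime ≈ {median:.0f} years")    # ≈ 200
print(f"95th percentile         ≈ {pct95:.0f} years")      # ≈ 2000
```

With an observed age of 100 years, the median implied total lifetime comes out at about 200 years, which gives the flavour of the numbers quoted above.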

He makes a further conclusion: since we are the first technological species on our planet, other civilizations should also be the first on theirs. I really don’t follow this, because he has also calculated that there could be up to 23 opportunities for further species to develop technology once we are gone, and surely the same follows elsewhere. It seems to me a rather mediocre use of this Principle of Mediocrity.

Now, at this point, I shall diverge and consider the German tank problem, because this shows what you can do with statistics. The allies wanted to know the production rate of German tanks, and they got this from a simple formula, and from taking down the serial numbers of captured or destroyed tanks. The formula is

N = m + m/n – 1

where N is the number you are seeking, m is the highest sampled serial number and n is the sample size (the number of tanks sampled). Apparently this was highly successful, and the estimates were far superior to those from intelligence gathering, which always seriously overestimated.
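A small simulation makes it easy to see why this estimator works so well. This is just a sketch, assuming the serial numbers run from 1 to N and the sample is drawn at random without replacement:

```python
import random

# Monte Carlo check of the German tank estimator N_hat = m + m/n - 1,
# where m is the largest serial number seen and n is the sample size.

TRUE_N = 2000        # assumed true number of tanks
SAMPLE_SIZE = 100    # about 5% of production, as in the text
TRIALS = 10_000

estimates = []
for _ in range(TRIALS):
    sample = random.sample(range(1, TRUE_N + 1), SAMPLE_SIZE)
    m = max(sample)
    estimates.append(m + m / SAMPLE_SIZE - 1)

mean_estimate = sum(estimates) / TRIALS
print(f"true N = {TRUE_N}, mean estimate ≈ {mean_estimate:.0f}")   # very close to 2000
```

The average of the estimates sits almost exactly on the true value, which is the sense in which the formula is unbiased.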

That leaves the question of whether that success means anything for the current problem. The first thing we note is that the Germans conveniently numbered their tanks in sequence, the sample size was a tolerable fraction of the required answer (about 5%), and finally it was known that the Germans were making tanks and sending them to the front as regularly as they could manage. There were no causative aspects that would modify the results. With Whitmire’s analysis there is a very bad aspect to the reasoning: the question of whether we are alone is raised as soon as we have some capability to answer it. Thus we ask it within fifty years of having reasonable electronics; for all we know they may still be asking it a million years in the future, so the age of the technological society, on which the lifetime reasoning is based, is put into the equation at the moment the question is first asked. That means it is not a random sample, but a causally selected one. Then, on top of that, we have a sample of one, which is not exactly a good statistical sample. Of course, if there were more samples than one, the question would answer itself and there would be no need for statistics. In this case, statistics are being used exactly when they should not be used.

So what do I make of that? For me, there is a lack of logic. By definition, to publish original work, you have to be the first to do it. So any statistical conclusion from asking the question is ridiculous, because by definition it is not a random sample; it is the first. It is like trying to estimate German tank production from a sample of one tank, when that tank carried the serial number 1. So, is there anything we can take from this?

In my opinion, the first thing we can argue from this Principle of Mediocrity is that the odds of finding aliens are strongest on Earth-sized planets around G-type stars at about this distance from the star, simply because we know it is at least possible. Further, we can argue the star should be at least about 4.5 billion years old, to give evolution time to generate such technological life. We are reasonably sure it could not have happened much earlier on Earth. One of my science fiction novels is based on the concept that Cretaceous raptors could have managed it, given time, but that still only buys a few tens of millions of years, and we don’t know how long they would have taken, had they been able. They had to evolve considerably larger brains, and who knows how long that would take? Possibly almost as long as the mammals took.

Since there are older stars out there, why haven’t we found evidence? That question should be rephrased as: how would we? The SETI program assumes that aliens would try to send us messages, but why would they? Unless they were directed, meaningful signals sent over such huge distances would require immense energy expenditure. And why would they direct signals here? They could have tried 2,000 years ago, persisted for a few hundred years, and given us up. Alternatively, it is cheaper to listen. As I noted in a different novel, the concept falls down on economic grounds because everyone is listening and nobody is sending. And, of course, for strategic reasons, why tell more powerful aliens where you live? For me, the so-called Fermi paradox is no paradox at all; if there are aliens out there, they will be following their own logical best interests, and those do not include us. Another thing it tells me is that you can indeed “prove” anything with statistics, if nobody is thinking.

What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, making propositions in accord with that evidence, and, most importantly, rejecting those that are clearly false. It may appear that for ordinary people that might be too hard, but at least the method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published “The Structure of Scientific Revolutions”, in which he argued that science itself has a very high level of conservatism. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway and put in the bottom drawer of the mind. The problem seems to be that there is a roadblock against thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the “somethings” are removed. But is there “nothing” anywhere? The ancient Greek philosophers argued about the void, and the issue was “settled” by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. With such excellent thinking, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, so they could be moving through a void, and if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question: are such prejudices still around?

The usual concept of “nothing” is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has about 2.5 x 10^19 molecules in it. That’s plenty. For those not used to “big figures”, 10^19 means a 1 followed by 19 zeros, i.e. ten multiplied together nineteen times. Our vacuum cleaner gets the concentration of molecules down to about 10^19, that is, the air pressure is about two and a half times less inside the cleaner. The Moon’s “atmosphere” has 4 x 10^5 molecules per cubic centimetre, so even the Moon is not exactly in a vacuum. Interplanetary space has about 11 molecules per cubic centimetre, interstellar space has about 1 molecule per cubic centimetre, and in the best vacuum, intergalactic space, you need about a million cubic centimetres to find one molecule.
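That first figure follows directly from the ideal gas law, n = P/(k_B T). A quick check, assuming roughly room temperature and one standard atmosphere (my own arithmetic, not from the post):

```python
# Number density of an ideal gas: n = P / (k_B * T), converted to molecules per cm^3.

K_BOLTZMANN = 1.381e-23   # J / K
P_ATM = 101_325.0         # Pa, one standard atmosphere
T_ROOM = 295.0            # K, assumed "ordinary" temperature

n_per_m3 = P_ATM / (K_BOLTZMANN * T_ROOM)
n_per_cm3 = n_per_m3 / 1e6

print(f"molecules per cubic centimetre ≈ {n_per_cm3:.2e}")   # ≈ 2.5e19
```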

The outer region of the Earth’s atmosphere, the thermosphere, runs from about 10^14 down to 10^7 molecules per cubic centimetre. The figure at the top of the thermosphere is a little suspect, because you would expect the density to fall gradually to that of interplanetary space. The reason a boundary is quoted is not that there is a sharp boundary, but rather that this is roughly where gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres, because there is a small drag on satellites.

So intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite at “nothing”. If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (which are very rapidly moving atomic nuclei, stripped of some or all of their electrons and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I just note that our nothing is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to consider quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and you know what else is around you, you also know the force you will feel from it. However, in quantum field theory there are a number of additional fields: there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these “carry” the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, when you get access to the difference. Zero point energy is already at the lowest possible level.
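To put a formula to that last idea (a standard textbook result, not anything specific to this post): each field mode of angular frequency ω behaves like a harmonic oscillator whose lowest possible energy is not zero, and the vacuum energy is the sum over all modes,

```latex
E_0 = \tfrac{1}{2}\hbar\omega,
\qquad
E_{\text{vac}} = \sum_{k} \tfrac{1}{2}\hbar\omega_k .
```

Since there is no obvious upper limit on the mode frequencies, the sum diverges unless it is cut off at some very high frequency, and that cut-off is where the trouble in the next paragraph comes from.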

But there is a catch. Recall Einstein’s E/c^2 = m? That means, according to Einstein, all this zero point energy has the equivalent of inertial mass in terms of its effects on gravity. If so, then the gravity from all the zero point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is that if quantum field theory is correct, the Universe should have collapsed long ago. The difference between prediction and observation is a factor of about 10^120, that is, a 1 followed by 120 zeros, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and that if it had been done “properly” the error could be manipulated down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by a factor of 10^80, which is roughly the number of atoms in the known Universe.
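For those who want to see where a number like 10^120 comes from, here is the standard rough estimate (my own sketch, assuming the zero-point sum is cut off at the Planck scale and taking an observed dark-energy density of roughly 6 x 10^-10 J/m³):

```python
# Rough version of the "worst prediction in physics": vacuum energy density with a
# Planck-scale cutoff versus the observed dark-energy density.

HBAR = 1.055e-34    # J s
C = 2.998e8         # m / s
G = 6.674e-11       # N m^2 / kg^2

# Planck energy density ~ c^7 / (hbar * G^2)  (dimensional estimate of the cutoff)
rho_planck = C**7 / (HBAR * G**2)           # J / m^3, ~10^113

rho_observed = 6e-10                        # J / m^3, assumed observed dark-energy density

print(f"predicted ≈ {rho_planck:.1e} J/m^3")
print(f"observed  ≈ {rho_observed:.1e} J/m^3")
print(f"ratio     ≈ {rho_planck / rho_observed:.1e}")   # ~10^122, i.e. "about 10^120"
```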

Some might say that surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed in the bottom drawer of the mind. After all, the mathematics is so elegant, or difficult, depending on your point of view. Can’t let observed facts get in the road of elegant mathematics!

A Further Example of Theory Development.

In the previous post I discussed some of what is required to form a theory, and I proposed a theory, at odds with everyone else’s, as to how the Martian rivers flowed. One advantage of that theory is that, provided the conditions hold, it at least explains what it set out to explain. However, the real test of a theory is that it then either predicts something, or at least explains something else it was not designed to explain.

Currently there is no real theory that explains Martian river flow if you accept the standard assumption that the initial atmosphere was full of carbon dioxide. To explore possible explanations, the obvious next step is to discard that assumption. The concept is that whenever you are forming theories, you should look at the premises and ask: if not, what?

The reason everyone thinks that the original gases were mainly carbon dioxide appears to be that volcanoes on Earth largely give off carbon dioxide. There can be two reasons for that. The first is that most volcanoes actually reprocess subducted material, which includes carbonates such as lime. The few that do not may behave that way because the crust has used up its ability to turn CO2 into hydrocarbons. That reaction depends on Fe(II) also converting to Fe(III), and it can only do that once. Further, there are many silicates containing Fe(II) that cannot do it, because the structure is too tightly bound and the water and CO2 cannot get at the iron atoms. Then, even if methane were given off, would it be detected? Any methane mixed with the red-hot lava would burn on contact with air, and samples are never taken that close to the origin. (As an aside, hydrocarbons have been found, especially where the eruptions are under water.)
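As an illustration of the kind of chemistry being referred to (my example; the post does not specify particular reactions), oxidation of Fe(II) in minerals by water can generate hydrogen, which can in turn reduce carbon dioxide to methane:

```latex
3\,\mathrm{FeO} + \mathrm{H_2O} \rightarrow \mathrm{Fe_3O_4} + \mathrm{H_2},
\qquad
\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}.
```

Once the accessible Fe(II) has been converted to Fe(III), that route is closed, which is the sense in which the crust can only do this once.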

Also, on the early planet iron dust will have accreted, as will other reducing agents, but the point about such agents is that they too can only be used once. What happens now will be very different from what happened then. Finally, according to my theory, the materials were already reduced. In this context we know that there are samples of meteorites that contain seriously reduced matter, such as phosphides, nitrides and carbides (which I argue should have been present), and even silicides.

There is also a practical point. We have one sample of Earth’s sea/ocean water from over three billion years ago, and there were quite high levels of ammonia in it. Interestingly, when that was found, the information ended up as an aside in a scientific paper. Because it was inexplicable to the authors, it appears they said as little about it as they could.

Now, if this seems too much, bear with me, because I am shortly going to get to the point. But first, a little chemistry, where I look at the mechanism of making these reduced gases. For simplicity, consider the single bond between a metal M and, say, a nitrogen atom N in a nitride; call that M–N. Now let it be attacked by water. (The diagram I tried to include refused to cooperate. Sorry.) The water attacks the metal, and because the number of bonds around the metal stays the same, a hydrogen atom has to get attached to N, so we get M–OH + N–H. Do this three times and we have ammonia, and three hydroxide groups on a metal ion. Eventually, two hydroxides will convert to one oxide, and one molecule of water will be regenerated. The hydroxides do not have to be on the same metal to form water.
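A concrete, well-known example of the overall result (my illustration, not one from the post) is the hydrolysis of magnesium nitride, which gives the hydroxide plus ammonia:

```latex
\mathrm{Mg_3N_2} + 6\,\mathrm{H_2O} \rightarrow 3\,\mathrm{Mg(OH)_2} + 2\,\mathrm{NH_3}.
```

Magnesium nitride is a convenient example because the reaction proceeds readily even with cold water.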

Now, the important thing is that only one hydrogen gets transferred per water-molecule attack. Suppose the water contains one hydrogen atom and one deuterium atom. The one that is preferentially transferred is the one that is easier to transfer, and the deuterium will preferentially stay on the oxygen, because the ease of transfer depends on the bond strength. While the strength of a chemical bond starts out depending only on the electromagnetic forces, which are the same for hydrogen and deuterium, that strength is reduced by the zero point vibrational energy, which is required by quantum mechanics. There is something called the Uncertainty Principle that says two objects at the quantum level cannot sit at an exact distance from each other, because then they would have exact position and exact (zero) momentum. Accordingly, the bonds have to vibrate, and the energy of the vibration happens to depend on the mass of the atoms. The bond to hydrogen vibrates the fastest, so less energy is subtracted for deuterium. That means deuterium is more likely to remain on the regenerated water molecule. This is an example of the chemical isotope effect.
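A rough numerical sketch of that zero-point-energy argument (my own illustration, assuming a typical O–H stretching wavenumber of about 3700 cm⁻¹; treat the numbers as indicative only):

```python
import math

# Harmonic-oscillator sketch of the H/D zero-point energy difference for an O-H bond.
# The frequency scales as sqrt(k/mu); the force constant k is, to a first approximation,
# the same for O-H and O-D, so the frequencies differ only through the reduced mass mu.

H_CM = 3700.0                      # assumed O-H stretch wavenumber, cm^-1 (typical value)
m_O, m_H, m_D = 16.0, 1.0, 2.0     # atomic masses in amu

mu_OH = m_O * m_H / (m_O + m_H)
mu_OD = m_O * m_D / (m_O + m_D)

D_CM = H_CM * math.sqrt(mu_OH / mu_OD)     # implied O-D stretch, same force constant

# Zero-point energy is half the vibrational quantum; convert cm^-1 to kJ/mol
# (1 cm^-1 corresponds to about 11.96 J/mol).
CM_TO_KJ_PER_MOL = 11.96e-3
zpe_diff = 0.5 * (H_CM - D_CM) * CM_TO_KJ_PER_MOL

print(f"O-D stretch ≈ {D_CM:.0f} cm^-1")                  # ≈ 2690 cm^-1
print(f"ZPE(O-H) - ZPE(O-D) ≈ {zpe_diff:.1f} kJ/mol")     # ≈ 6 kJ/mol
```

That few kJ/mol difference is why the O–D bond is effectively the stronger one, and hence why deuterium tends to stay behind on the oxygen.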

There are other ways of enriching deuterium in water. The one usually considered for planetary bodies is that as water vapour rises, the solar wind will blow off some water, or UV radiation will break an oxygen–hydrogen bond and knock the hydrogen atom into space. Since deuterium is heavier, it is slightly less likely to get to the top. The problem with this is that the evidence does not back up the solar wind concept (it does happen, but not enough), and if the UV splitting of water is the reason, then there should be an excess of oxygen on the planet. That could work for Earth, but Earth has the least deuterium enrichment of the rocky planets. If it were the way Venus got its huge deuterium enhancement, there would have had to be a huge ocean initially, and if that is used to explain why there is so much deuterium, then where is the oxygen?

Suppose the deuterium level in a planet’s hydrogen supply is primarily due to the chemical isotope effect; what would you expect? If the model of atmospheric formation noted in the previous post is correct, the enrichment would depend on the gas-to-water ratio. The planet with the lowest ratio, i.e. minimal gas relative to water, would have the least enrichment, and vice versa. Earth has the least enrichment. The planet with the highest ratio, i.e. the least water left over after making gas, would have the greatest enrichment, and here we see that Venus has a huge deuterium enrichment and very little water (what little it has is bound up in sulphuric acid in the atmosphere). It is quite comforting when a theory predicts something it was not intended to. If this is correct, Venus never had much water on the surface, because what it accreted in this hotter zone was used up making its greater atmosphere.