What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated using the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, making propositions in accord with that evidence, and, most importantly, rejecting those that are clearly false. It may seem that this is too hard for ordinary people, but at least the method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published "The Structure of Scientific Revolutions", in which he argued that science itself is highly conservative. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be overlooked, or, if it is published, ignored anyway and put in the bottom drawer of the mind. The problem seems to be that there is a roadblock against thinking that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the "somethings" are removed. But is there "nothing" anywhere? The ancient Greek philosophers argued about the void, and the issue was "settled" by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. Having got that far with such excellent thinking, he then, for some reason, refused to accept the corollary: the planets do move essentially indefinitely, so they could be moving through a void, and if they were, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but for some reason, having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question, are such prejudices still around?

The usual concept of "nothing" is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has about 2.5 x 10^19 molecules in it. That's plenty. For those not used to "big figures", 10^19 means the number you get by writing down 1 and following it with 19 zeros, or by multiplying together nineteen 10s. Our vacuum cleaner gets the concentration of molecules down to about 10^19 per cubic centimetre, that is, the pressure inside the cleaner is roughly two and a half times lower. The Moon's "atmosphere" has 4 x 10^5 molecules per cubic centimetre, so the Moon is not exactly in vacuum. Interplanetary space has about 11 molecules per cubic centimetre, interstellar space has about 1 molecule per cubic centimetre, and the best vacuum, intergalactic space, needs a million cubic centimetres to find one molecule.

The top of the Earth's atmosphere, the thermosphere, goes from about 10^14 down to 10^7 molecules per cubic centimetre. The upper figure is a little suspect because you would expect the density to fall away gradually towards that of interplanetary space. The boundary is not sharp; rather, it is roughly the point where gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres because there is a small drag on satellites.

So, intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite there with "nothing". If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (very rapidly moving atomic nuclei, usually stripped of some of their electrons, and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I will just note that our nothing is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to consider quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of those fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and you know what else is around you, you also know the force you will feel from it. However, in quantum field theory there are a number of additional fields; thus there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these "carry" the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero point energy is at the lowest possible level.

But there is a catch. Recall Einstein's m = E/c^2? That means, according to Einstein, all this zero point energy has the equivalent of inertial mass in terms of its effects on gravity. If so, then the gravity from all the zero point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is, if quantum field theory is correct, the Universe should have collapsed long ago. The difference between prediction and observation is a factor of about 10^120, that is, a 1 followed by 120 zeros, and it is the worst discrepancy between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and that if it had been done "properly" the error could be argued down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by 10^80, which is roughly the number of atoms in the known Universe.
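
To give a feel for where a number like 10^120 comes from, here is a minimal back-of-the-envelope sketch in Python. It is not the calculation referred to above: it simply compares the textbook estimate of the vacuum energy density you get by cutting the field modes off at the Planck scale with an assumed dark-energy density of about 5 x 10^-10 J/m^3 (roughly 70% of the critical density). With these rough inputs the mismatch comes out near 10^123, the same ballpark as the figure quoted.

```python
import math

# Rough illustration (not the calculation referred to above): compare a
# vacuum energy density obtained by cutting the field modes off at the
# Planck scale with the dark-energy density inferred from observation.
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

rho_planck = c**7 / (hbar * G**2)   # J/m^3, about 5e113
rho_observed = 5.4e-10              # J/m^3, assumed ~70% of the critical density

print(f"predicted : {rho_planck:.1e} J/m^3")
print(f"observed  : {rho_observed:.1e} J/m^3")
print(f"mismatch  : about 10^{math.log10(rho_planck / rho_observed):.0f}")
```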

Some might say that surely this indicates there is something wrong with the theory, and we should start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed in the bottom drawer of the mind. After all, the mathematics is so elegant, or so difficult, depending on your point of view. Can't let observed facts get in the road of elegant mathematics!

Trappist-1, and Problems for a Theoretician

In my previous post, I outlined the recently discovered planets around Trappist-1. One interesting question is, how did such planets form? My guess is that the standard theory will have a lot of trouble explaining this, because what we have is a large number of roughly Earth-sized rocky planets around a rather insubstantial star. How did that happen? However, the alternative theory outlined in my ebook, Planetary Formation and Biogenesis, also has a problem. I gave an equation that very approximately predicts what you will get based on the size of the star, and this equation was based on the premise that the chemical or physical-chemical interactions that lead to accretion of planets while the star is accreting follow the temperatures in various parts of the accretion disk. On that basis, the accretion disk around Trappist-1 should not have got hot enough where any of the rocky planets now are, and more importantly, it should not have done so over such a wide radial distance. Worse still, the theory predicts different types of planets in different places, and while we cannot eliminate this possibility for Trappist-1, it seems highly likely that all the planets located so far are rocky planets. So what went wrong?

This illustrates an interesting aspect of scientific theory. The theory was developed in part to account for our solar system, and for solar systems around similar stars. The temperature in the initial accretion disk where the planets form around G-type stars depends on two major factors. The first is the loss of potential energy as the gas falls towards the star. The temperature at a given distance depends on the gravitational potential at that point, which in turn depends on the mass of the star, and on the rate of gas flowing through that point, which, from observation, is very approximately proportional to the square of the mass of the star. So overall, that part is very approximately proportional to the cube of the stellar mass. The second factor is the amount of heat radiated to space, which in turn depends on the amount of dust, the disk thickness, and the turbulence in the disk. Overall, that is approximately the same for similar stars, but it is difficult to know how the Trappist-1 disk would cool. So, while the relationship is too unreliable to predict exactly where a planet will be, it should be somewhat better at predicting where the others will be, and what sort of planets they will be, once you can clearly identify what one of them is. Trappist-1 has far too many rocky planets. So again, what went wrong?
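
The scaling argument in that paragraph can be put in a couple of lines of code. This is only a sketch of the proportionality described above (potential roughly proportional to stellar mass, mass flow roughly proportional to its square); the 0.08 solar-mass figure for Trappist-1 is the commonly quoted value, and normalising to a one solar-mass star is my own choice for illustration.

```python
# Sketch of the scaling above: infall heating at a given radius goes roughly as
# (gravitational potential) x (mass flow) ~ M x M^2 = M^3 for similar disks.
def relative_disk_heating(stellar_mass_in_solar_masses):
    """Infall heating at a fixed radius, relative to a one solar-mass star."""
    return stellar_mass_in_solar_masses ** 3

# Trappist-1 is commonly quoted at roughly 0.08 solar masses (assumed here).
print(relative_disk_heating(0.08))   # ~5e-4, i.e. far less heating at the same radius
```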

The answer is that in any scientific theory we very frequently have to make approximations. In this case, because of the dust, and because of the distance, I assumed that for G-type stars the heat from the star itself was irrelevant. For example, in the theory, Earth formed from material that had been processed to at least 1550 degrees Centigrade. That is consistent with the heat relationship in which Jupiter forms at the point where water ice is beginning to think about subliming, which is also part of the standard theory. Since the dust should block much of the star's light, the star might be adding at most a few tens of degrees to Earth's temperature while the dust was still there at its initial concentration, and given the uncertainties elsewhere, I ignored that.

For Trappist-1 it is clear that such an omission is not valid. The planets would have accreted from material that was essentially near the outer envelope of the actual star during accretion, the star would appear large, the distance involving dust would be small, the mass flow would be much more modest, and so the accreting star would now be a major source of heat.

Does this make sense? First, there are no rocky bodies of any size closer to our sun than Mercury. The reason for that, in this theory, is that inside Mercury's distance the dust got so hot it vaporized and joined the gas that flowed into the star. It never got that hot at Trappist-1, and that in turn is why Trappist-1 has so many rocky planets. The general coolness due to the (relatively) small amount of mass falling inwards meant that the heat needed to make rocky planets only occurred very close to the star. However, because of the relative size of the stellar envelope, that temperature extended further out than mass flow alone would predict, and because the star could not be even vaguely considered a point source, the zone for rocky planets was sufficiently extended that a larger number of them was possible.

There are planets close to other stars, and they are usually giants. These almost certainly did not form there, and the usual explanation is that when very large planets get too close together, their orbits become unstable, and in a form of gravitational billiards they start throwing each other around, some being thrown out of their system altogether, and some ending up very close to the star.

So, what does that mean for the planets of Trappist-1? From the densities quoted in the Nature paper, if they are right, and the authors give a wide range of uncertainty, the fact that the sixth one out has a density approaching that of Earth suggests surprisingly large iron cores, which may mean most of them accreted more or less the same way Mercury or Venus did, i.e. at relatively high temperatures, in which case they will have very little water on them. Furthermore, it has also been determined that these planets will be experiencing a rather uncomfortable amount of X-rays and extreme ultraviolet radiation. Do not book a ticket to go to them.

Trump and Climate Change

In his first week in office, President Trump has overturned President Obama's stopping of two pipelines and has indicated a strong preference for further oil drilling. He has also denied that climate change is real. For me, this raises two issues. The first is, will President Trump's denial of climate change, and his refusal to take action, make much difference to climate change? In my opinion, not in the usual sense, in which everybody calls for restraint on carbon dioxide emissions. The problem is sufficiently big that this will make only a minor difference. The action is a bit like the Captain of the Titanic finding that two passengers had brought life jackets, confiscating them, and throwing them overboard. The required action was to steer away from the field of icebergs, and the belief that the ship was unsinkable was just plain ignorant; in my opinion, the denial that we have to do something reasonably dramatic about climate change falls into the same category. The second issue is, how does science work, and why is it so difficult to get the problem across? I am afraid the answer goes back to the education system, which does not explain science at all well. The problem with science for most people is that nature cares not a jot for what you feel. The net result is that opinions and feelings are ultimately irrelevant. You can deny all you like, but that will not change the consequences.

Science tries to put numbers to things, and it tries to locate critical findings, which are those where the numbers show that alternative propositions are wrong. It may be that only one observation is critical. Thus Newtonian mechanics was effectively replaced by Einstein's relativity because relativity alone allowed the calculation of the orbital characteristics of Mercury. (Some might cite Eddington's observation of light bending around the sun during an eclipse, but Newton's theory predicted bending too. Einstein correctly predicted the bending would be twice the Newtonian value, although I think Newton's prediction could have been patched given Maxwell's electrodynamics. Mercury's orbit, however, was impossible to patch.)
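
As an aside, the Mercury calculation is easy to reproduce approximately. The sketch below uses the standard first-order relativistic formula for the perihelion advance, with assumed textbook values for Mercury's orbit; it is an illustration of the kind of number involved, not a derivation.

```python
import math

# Standard first-order relativistic perihelion advance per orbit:
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)), applied to Mercury.
# Orbital values below are assumed textbook figures.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
c = 2.998e8            # m/s
a = 5.791e10           # m, Mercury's semi-major axis
e = 0.2056             # Mercury's orbital eccentricity
period_days = 87.97    # Mercury's orbital period

per_orbit = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.0f} arcseconds per century")   # ~43, the unexplained residual
```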

So what about climate change? The key here is to find something with the fewest complicating factors, and that was done when Lyman et al. (Nature 465: 334-337, 2010) measured the power flows across ocean surfaces and found there was a net input of approximately 0.6 W/m^2. That is, every square metre gets a net input of 0.6 joules per second, averaged over the full 24-hour period. Now, this will obviously be approximate because they did not measure every square metre of ocean, but the significance is clear. The total input from the sun is about 1300 W/m^2 at noon, so when you allow for night, for the fact that the input falls away significantly as we move reasonably far from noon, and for cloudy days, you will see that the heat retained is a non-trivial fraction of the input.

Let us see what that means for the net input. Over a year it becomes a little under 19 MJ for our square metre, and over the oceans, I make it about 6.8 x 10^21 J. There is plenty of room for error there (hopefully not in my arithmetic) but that is not the point. The planet is a big place, and that is really a lot of energy: about a million million times 1.6 tonnes of TNT.
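
As a check on that arithmetic, here is the sum in a few lines of Python. The ocean area of about 3.6 x 10^14 square metres and the 4.2 x 10^9 J per tonne of TNT are assumed standard figures; only the 0.6 W/m^2 comes from the paper cited above.

```python
# Checking the arithmetic: yearly ocean heat gain from a net flux of 0.6 W/m^2.
net_flux = 0.6                      # W/m^2, Lyman et al. (2010)
seconds_per_year = 365.25 * 24 * 3600
ocean_area = 3.6e14                 # m^2, approximate total ocean surface (assumed)
tnt_tonne = 4.184e9                 # J per tonne of TNT (assumed standard figure)

per_square_metre = net_flux * seconds_per_year     # ~1.9e7 J, a little under 19 MJ
total = per_square_metre * ocean_area               # ~6.8e21 J
print(f"{per_square_metre / 1e6:.1f} MJ per square metre per year")
print(f"{total:.1e} J over the oceans = {total / tnt_tonne:.1e} tonnes of TNT")
```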

That has been going on every year this century, and here is the problem: that net heat input will continue even if we totally stopped burning carbon tomorrow, and the effects would only decay gradually as the carbon we have burnt slowly weathers away. It would take over 300 years to return to where we were at the end of the 19th century. That indicates the size of the physical problem. The fact that so many people can deny a problem exists, with no better evidence than "I don't believe it", is our current curse. The next problem is that just slowing down the production of CO2 and other greenhouse gases is not going to solve it. This is a problem that has crept up on us because a planet is a rather large object. It has taken a long time for humanity's efforts to make a significant increase in the world's temperatures, but equally it will take a long time to stop the increase from continuing. Worse, one of the reasons the temperature increases have been modest is that a lot of this excess heat has gone into melting ice. Eight units of water at ten degrees Centigrade will melt one unit of ice, and we end up with nine units of water at nought degrees Centigrade. The ice on the planet is a great restraint on temperature increases, but once the ice in contact with water has melted, temperatures may surge. If we want to retain our current environment and sea levels, we have some serious work to do, and denying the problem exists is a bad start.
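
The ice figure is simple arithmetic too. A minimal check, assuming the usual values of about 4.18 J per gram per degree for water and about 334 J per gram for melting ice:

```python
# The ice figure above, checked with standard values.
specific_heat_water = 4.18   # J per gram per degree (assumed standard value)
latent_heat_fusion = 334.0   # J per gram of ice melted (assumed standard value)

heat_released = 8 * 10 * specific_heat_water   # 8 g of water cooling by 10 degrees
print(heat_released / latent_heat_fusion)      # ~1.0 g of ice melted
```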

Dark Energy and Modern Science

Most people think the scientific endeavour is truly objective; scientists draw their conclusions from all the facts and are never swayed by fashion. Sorry, but that is not true, as I found out from my PhD work. I must post about that sometime, but the shortened version is that I entered a controversy, my results unambiguously supported one side, but the other side prevailed for two reasons. Some “big names” chose that side, and the review that settled the issue conveniently left out all reference to about sixty different sorts of observations (including mine) that falsified their position. Even worse, some of the younger scientists who were on the wrong side simply abandoned the field and tried to conveniently forget their work. But before I bore people with my own history, I thought it would be worth noting another issue, dark energy. Dark energy is supposed to make up about 70% of the Universe so it must be important, right?

Nobody knows, or can even speculate with any valid reason, what dark energy is, and there is only one reason for believing it even exists: it is believed that the expansion of the Universe is accelerating. We are reasonably certain the Universe is expanding. This was originally discovered by Hubble, who noticed that the spectra of distant galaxies show a red shift in their frequencies, and the further away the galaxies are, the bigger the red shift. This implies that the whole Universe is expanding.

Let me digress and try to explain the Doppler shift. Think of someone beating a drum regularly; the number of beats per unit time is the frequency. Now, suppose the drum is on the back of a truck. You hear a beat and expect the next one, say, one second later, but if the truck starts to move away, the beat will come slightly late because the sound has had further to travel. If the truck moves away at a steady speed, successive beats are each delayed by the same extra interval, the frequency is lower, and that is called a red shift in the frequency. The sound will also become quieter with distance as it spreads out. Thus you can determine how far away the drum is and how fast it is moving away. The same applies to light: if the Universe is expanding regularly, the red shift gives you the recession velocity, and hence the distance. Similarly, provided you know the source intensity, the measured light intensity should give you the distance.
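
For those who want the relationship written down, here is a deliberately simplified, non-relativistic sketch: the red shift is the fractional change in wavelength, and Hubble's law converts the implied recession velocity into a distance. The Hubble constant of 70 km/s per megaparsec and the example wavelengths are assumed round figures for illustration.

```python
# Simplified (non-relativistic) sketch: red shift -> velocity -> distance.
H0 = 70.0     # km/s per megaparsec, an assumed round value of the Hubble constant
c = 3.0e5     # km/s

def redshift(observed_wavelength, emitted_wavelength):
    return (observed_wavelength - emitted_wavelength) / emitted_wavelength

z = redshift(661.3, 656.3)     # e.g. the hydrogen H-alpha line shifted by 5 nm
velocity = z * c               # km/s, a fair approximation only for small z
distance = velocity / H0       # megaparsecs, from Hubble's law v = H0 * d
print(f"z = {z:.4f}, v ~ {velocity:.0f} km/s, d ~ {distance:.0f} Mpc")
```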

That requires us to measure light from stars that produce a known light output, called standard candles. Fortunately, there is a type of very bright standard candle, or so they say, and that is the Type Ia supernova. It was observed in the 1990s that the most distant of these supernovae were dimmer than their red shifts implied they should be, which means they are further away than expected, which in turn means the expansion must be accelerating. To accelerate, there must be some net force pushing everything apart. That something is called dark energy, and it is supposed to make up about two thirds of the Universe. The discoverers of this phenomenon won a Nobel Prize, and that, of course, in many people's eyes means it must be true.

The Type Ia supernova is considered to arise when a white dwarf star starts to strip gas from a neighbouring star. The dwarf's nuclear cycle has already burnt helium into carbon and oxygen, and because the mass of the dwarf is too low, the reactions have stopped there. As the dwarf consumes its neighbour, it gradually increases in mass until the mass becomes great enough to start the burning of carbon and oxygen; this is uncontrollable and the whole thing explodes. The important point is that because the explosion point is reached by the gradual addition of fresh mass, it will occur at the same mass in all such situations, so you get a standard output, or so it is assumed.

My interest in this came when I went to hear a talk on the topic and asked the speaker a question relating to the metallicity of the companion star. (Metallicity is the fraction of elements heavier than helium, which in turn means the amount of the star made up of material that has already been through supernovae.) What I considered was this: if you think of the supernova as a bubble of extremely energetic material, what we actually see is the light from the outer surface nearest to us, and most of that surface will be material from the companion. Since the light we see is the result of the heat and inner light promoting electrons to higher energy levels, it should depend on the composition of that outer surface. To support that proposition, Lyman et al. (arXiv:1602.08098v1 [astro-ph.HE], 2016) have shown that calcium-rich supernovae are dimmer than iron-rich ones. Thus the Type Ia supernova may not be such a standard candle: the earlier the supernova, the lower the metallicity, and lower metallicity means a higher proportion of lighter atoms, which do not have as many energy levels from which to radiate, so they will be less efficient at converting energy to light.

Accordingly, my question was, "Given that low metallicity leads to dimmer Type Ia supernovae, and given that the most distant stars are the youngest and hence will have the lowest metallicity, could not that be the reason the distant ones are dimmer?" The response was a crusty, "That was taken into account." Implied: go away and learn the standard stuff. My problem with that was, how could they have taken into account something that was not discovered for another twenty years or so? Herein lies one of my gripes about modern science: the big names who are pledged to a position will strongly discourage anyone questioning that position if the question is a good one. Weak questions are highly desired, as the big name can deal with them satisfactorily and make himself feel better.

So, besides this issue of metallicity, how strong is the evidence for this dark energy? Maybe not as strong as everyone seems to say. A recent paper (Nielsen et al., arXiv:1506.01354v2) analysed data for a much larger number of supernovae and came to a somewhat surprising conclusion: so far, you cannot actually tell whether the expansion is accelerating or not. One interesting point in this analysis is that we do not simply relate the measured magnitude to distance. In addition there are corrections for light-curve shape and for colour, each with an empirical constant attached to it, and the "constant" is assumed to be constant. There must also be corrections for intervening dust, and again it is a sheer assumption that the dust will be the same in the early Universe as now, despite space then being far more compact.

If we now analyse all the observed data carefully (the initial claims actually used a rather select few), we find that any acceleration consistent with the data does not deviate significantly from no acceleration out to red shift 1, and that the experimental errors are such that, at this point, we cannot distinguish between the options.

Try this. Cabbolet (Astrophys. Space Sci., DOI 10.1007/s10509-014-1791-4) argues from the Eöt-Wash experiment that if there is repulsive gravity (needed to accelerate the expansion), then quantum electrodynamics is falsified in its current formulation. Quantum electrodynamics is regarded as one of the most accurate theories ever produced. We can, of course, reject repulsive gravity, but that also rejects dark energy. So, if that argument is correct, at least one of the two has to go, and maybe dark energy is the one more likely to go.

Another problem is the assumption that Type Ia supernovae are standard because they all form by the gradual accretion of extra matter from a companion. But Olling et al. (Nature 521: 332-335, 2015) argue that they have found three supernovae where the evidence is that the explosion occurred by one dwarf simply swallowing another. In that case there is no standard mass, so the energy could be almost anything, depending on the mass of the companion.

Milne (ApJ 803: 20, doi:10.1088/0004-637X/803/1/20) has shown there are two classes of Type Ia supernovae, and for one of them there is a significant underestimation of the optical luminosity of the NUV-blue SNe Ia, in particular for the high-redshift cosmological sample. Not accounting for this effect should produce a distance bias that increases with redshift, and could significantly bias measurements of cosmological parameters.

So why am I going on like this? I apologize to some for the details, but I think this shows a problem: the scientific community is not always as objective as it should be. It appears that they do not wish to rock the boat holding the big names. All evidence should be subject to examination, and instead what we find is that all too much is "referred to the experts". Experts are as capable of being wrong as anyone else when there is something they did not know when they made their decision.

Interstellar probes

I get annoyed when something a little unexpected hits the news, and immediately after, a number of prominent people come up with proposals that look great, but somehow miss what seems to be obvious. In my last post (https://wordpress.com/post/ianmillerblog.wordpress.com/603), I discussed the newly discovered exoplanet Proxima b, which is orbiting a red dwarf about 4.25 light years away. Unfortunately, that is a long way away. A light year is about 9.46 x 10^12 km, or nearly nine and a half trillion km. The Universe is not exactly small. This raises the question, what do we do with this discovery? The problem with such distances is that if we sent something at the speed of light it would take 4.25 years to get there. Apart from electromagnetic radiation, we cannot get anything up to light speed. If we got something up to a tenth of light speed, it would take 42.5 years to get there, and if it sent back messages through electromagnetic radiation, say light or radio, it would take a further 4.25 years to get back, and we would get information in 46.75 years. In other words, it is close enough that in principle, someone could get information back in his/her lifetime, provided they started young. So, how could one get even up to one tenth light speed? This is where the experts jump in.
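
Just to make that arithmetic explicit, a trivial sketch:

```python
# The travel-time arithmetic from the paragraph above.
distance_light_years = 4.25
probe_speed = 0.1      # as a fraction of light speed

travel_years = distance_light_years / probe_speed   # 42.5 years outbound
signal_years = distance_light_years                 # 4.25 years for the report home
print(travel_years + signal_years)                  # 46.75 years to get any information
```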

What, they propose, with about one day’s thought at most, is to send small, almost micro, probes that have a sail attached. The reason for small is that we can get more acceleration for any applied force. The idea then is that these miniprobes can be accelerated up to a significant fraction of light speed by directing a laser at the sail. Light has momentum. There is the classic way to demonstrate this: little paddles with one side light and the other dark are suspended with bearings so the paddles can rotate, then these are enclosed in a glass jar and the air evacuated. If light is shone on it, the paddles rotate. It is very easy to show from Maxwell’s electromagnetic theory why this must be. Maxwell showed that electromagnetic radiation is emitted if charge is accelerated. The reason why light must carry momentum: if the body carrying the charge is accelerated, its momentum changes, and conservation of momentum requires the radiation to carry that momentum away. For our sail, the great advantage is all the energy is generated on Earth. The normal problem with space travel is so much weight has to be carried just for propulsion, in the fuel, the engines, the piping, and the structure connecting the engines to the rest. Here all the power is generated somewhere else, so that weight can be left behind. All the weight needed for acceleration is the sail and a connection to the probe. This proposal seems superficially to be a great idea, but in my opinion, there will be great difficulties in making this work.

The first reason is aim. If the probe has motors, they can correct for faulty aim, but if all the power comes from Earth, you have to get it right from Earth. The probe may fly by, but it has to get reasonably close. Assume you can tolerate it being anywhere on a 1000 km arc (you can put in your own number), where the arc is part of a circle centred on Earth. The length of the arc is r θ, where θ is the maximum angle by which the aim can be wrong. That means θ has to be less than 1000/40.2×10^12, or about 1 part in 40 billion. Do you really think you can aim that well? Of course, the aim is for where the planet will be at arrival. If you get the speed slightly wrong, the planet could be on the other side of the star when the fly-by happens, so the velocity control has to have similar accuracy. Besides the planet orbiting, the star is also moving, so the whole system has to be where you expected when required.
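
To make that estimate explicit, here is the small-angle arithmetic; the 1000 km arc is simply the illustrative tolerance chosen above.

```python
# The aiming tolerance, using arc = r * theta for small angles.
light_year_km = 9.46e12
distance_km = 4.25 * light_year_km     # ~4.0e13 km to Proxima b
tolerance_km = 1000.0                  # the illustrative arc chosen above

theta = tolerance_km / distance_km     # maximum allowable aiming error, in radians
print(f"theta ~ {theta:.1e} radians, i.e. about 1 part in {1 / theta:.1e}")
```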

The next problem is that the laser beam must strike the sail exactly above the centre of mass of the probe. Any deviation and there is an applied torque to the probe, and the sail starts to spin. Can we be that accurate?

The final problem is that the sail has to be exactly at right angles to the beam. Suppose it deviates by φ. Now the accelerating impulse is p cos φ, where p is the momentum available to be transferred to the probe, and there is a sideways impulse of p sin φ. Now, if the probe moves even slightly sideways, as noted above, it starts to spin, and it is out of control. Alternatively, if it starts to spin in any way at all, the sail will give the probe a lateral nudge. It does not need much to exceed one part in forty billion. The probe will move out of the laser beam, and it would seem extremely difficult to devise a means of correcting for such errors because you have no way of knowing where it is once it has got underway for a reasonable distance. In short, a great idea in principle, but geometry is not going to make this at all easy. What I find hard to understand is why the proponents of this scheme do not seem to have stated how they can get around this obvious problem.
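
To get a feel for how little tilt matters, here is a crude sketch. It simply compares the sideways-to-forward impulse ratio for a given tilt with the one-in-forty-billion pointing tolerance estimated earlier, on the assumption that a sideways push of that relative size, if sustained over the whole push, translates into a comparable sideways drift; it ignores spin and any possibility of correction.

```python
import math

# Crude sketch: lateral versus forward impulse for a sail tilted by phi,
# compared with the ~1-in-40-billion pointing tolerance estimated above.
def lateral_to_forward(phi_degrees):
    phi = math.radians(phi_degrees)
    return math.sin(phi) / math.cos(phi)   # p*sin(phi) over p*cos(phi)

aim_tolerance = 2.5e-11
for tilt in (1.0, 0.01, 1e-6):   # tilt angles in degrees
    ratio = lateral_to_forward(tilt)
    print(f"tilt {tilt} deg -> lateral/forward {ratio:.1e} "
          f"({ratio / aim_tolerance:.0e} times the tolerance)")
```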

Hawking’s Genius

How do people reach conclusions? How can these conclusions be swayed? How do you know you are correct, as opposed to being manipulated? How could a TV programme about physics and cosmology tell us something about our current way of life, including politics?

I have recently been watching a series of TV programmes entitled Genius, in which Stephen Hawking starts out by suggesting that anyone can answer the big questions of nature, given a little help. He gives his subjects some equipment to do some experiments with, and they are supposed to work out how to use it and reach the right conclusion. As an aside, this procedure greatly impressed me; Hawking would make a magnificent teacher because he has the ability to make his subjects really register the points he is trying to make. Notwithstanding that, it was quite a problem to get them to see what they did not expect. In the first episode there was a flat lake, or that was what they thought. With more modern measuring devices, including a laser, they showed the surface of the lake was actually significantly curved. Even then, it was only the incontrovertible evidence that persuaded them the effect was real, despite the fact they also knew the Earth is a sphere. In another programme, he gets the subjects to recognise relative distances: he gets the distance between the Earth and the moon from eclipses, then the distance to the sun. The problem here is that the eclipses only really give you angles; somewhere along the line you need a distance, and Hawking cheats by giving the relative sizes of the moon and the sun. He makes the point about relative distances very well, but he glosses over how to find the real distances in the first place, although, to be fair, in a TV programme for the masses, he probably felt there was only a limited amount that could be covered in one hour.

It is a completely different thing to discover something like that for the first time. Hawking mentions that Aristarchus was the first to do this properly. His method of getting the Earth-moon distance was to wait for an eclipse of the moon, and get two observers some distance apart (from memory, I believe it was about 500 miles) to measure the angle from the horizontal when the umbra first made its appearance. Now he had two angles and a length. The method was valid, although there were errors in the measurements, with the result that he was out by a factor of two. To get the Earth-sun distance, he measured the moon-Earth-sun angle when the moon was precisely half shaded. The second angle, at the moon, must then be a right angle, and since he knew the distance to the moon, with Pythagoras, or, had trigonometry been available then, a secant, he could get the distance. There were problems, though: the measured angle is very close to a right angle, the sun is not exactly a point, the half-shading of the moon is rather difficult to judge, and of course his actual Earth-moon distance was wrong, so he had errors here, and had the sun too close by a factor approaching 5. Nevertheless, with such primitive instruments as he had, he was on the right track.
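
The half-moon geometry is easy to play with, and doing so shows why the method is so sensitive. In the sketch below the Earth-sun distance is the Earth-moon distance divided by the cosine of the measured angle; the 87 degrees is the value usually attributed to Aristarchus, and the other angles are purely illustrative, to show how wildly the answer swings as the measurement approaches a right angle.

```python
import math

# Aristarchus' half-moon geometry: when the moon is exactly half lit, the
# sun-moon-Earth angle is a right angle, so the Earth-sun distance is the
# Earth-moon distance divided by cos(measured moon-Earth-sun angle).
def sun_distance_in_moon_distances(measured_angle_degrees):
    return 1.0 / math.cos(math.radians(measured_angle_degrees))

# 87 degrees is the value usually attributed to Aristarchus; the others are
# illustrative, showing the sensitivity of the answer near a right angle.
for angle in (87.0, 89.0, 89.9):
    ratio = sun_distance_in_moon_distances(angle)
    print(f"{angle:5.1f} degrees -> sun at about {ratio:6.0f} moon-distances")
```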

Notwithstanding the slight cheat, Hawking’s demonstration made one important point. By giving the relative sizes and putting the moon about 5 meters away from the earth (the observer) to get a precise eclipse of the given sun, he showed what the immense distance really looks like proportionately. I know as a scientist I am often using quite monstrous numbers, but what do they mean? What does 10 to the power of twenty look like compared with 10? Hawking stunned his subjects when comparing four hundred billion with one, using grains of sand. Quite impressive.

All of which raises the question, how do you make a discovery, or perhaps how do discoveries get made? One way, in my opinion, is to ask questions, then try to answer them. Not just once, but every answer you can think of. The concept I put in my ebook on this topic was that for any phenomenon, there is most likely to be more than one theoretical explanation, but as you increase the number of different observations, the false ones will start to drop out as they cannot answer some of the observations. Ultimately, you can prove a theory in the event you can say, if and only if this theory is correct, will I see the set of observations X. The problem is to justify the “only if” part. This, of course, goes to any question that can be answered logically.

However, Hawking's subjects would not have been capable of that, because the first step in forming the theory is to see that it is possible. Seeing something for the first time when you have not been told is not easy, whereas if you are told what should be there, but only faintly, most of the time you will see it, even if it isn't really there. There are a number of psychological tests that show people tend to see what they expect to see. Perhaps the most spectacular example was the canals on Mars. After the mention of canali, astronomers, and Lowell in particular, drew lovely maps of Mars with lines that simply are not there.

Unfortunately, I feel there was a little cheating in Hawking's programmes, which showed up in one that looked at whether determinism rules our lives, i.e. are we pre-programmed to carry out our lives, or do we have free will? To examine this, he had a whole lot of people line up, and at given times they could move one square left or right, at their pleasure. After a few rounds, there was the expected scattered distribution. So, what did our "ordinary people" conclude? I expected them to conclude that that sort of behaviour was statistical, and that there was real choice in what people do. But no. These people concluded there is a multiverse, and all the choices are made somewhere. I don't believe that for an instant, but nor do I believe three people picked off the street would reach that conclusion unless they had been primed to reach it.

And the current relevance? Herein lies what I think is the biggest problem of our political system: people can be made to believe what they have been primed to believe, even if they really don’t understand anything relating to the issue.

Where did our water and air come from, and what form was it in?

If we want to answer the question of how likely there is to be life on other planets, then these questions must be answered, because life depends on these materials. Also, answering questions such as how the catastrophic floods on Mars could have occurred requires knowing where the material for the floods came from. The questions are also of interest because they illustrate what I think is an ideal method for solving problems: ask questions and, from the answers, eliminate what could not have happened. This gets to Conan Doyle's mantra: once you have eliminated everything else, whatever remains, however unreasonable it seems, must be what happened.

First, all material came from the accretion disk, and that material got hotter and hotter as it fell towards the sun. While the sun was accreting rapidly, temperatures a bit further out than where Earth is now reached about 1550 degrees Centigrade, and much of the material present at the end of rapid accretion made up the material from which the rocky planets formed. We know that because iron melted: we now have iron meteorites, and Earth has a large iron core, whereas Mars has much less iron in its core. (Mars would still accrete some iron, which would melt, but a lot would also oxidise and form rust if it were finely divided.) The gases in the disk at these temperatures and likely pressures would be mainly hydrogen, helium, water, carbon monoxide, nitrogen and neon, followed by a number of lesser gases. There came a point where the sun's rate of accretion dropped by several orders of magnitude. The disk that remained then formed the rocky planets, and these planets formed rather quickly. Mars apparently formed within three million years, although of course bombardment added some additional mass later.

So, where did the rocky planets' atmospheres come from?

Option 1. The rocky planets formed and gravity started to hold an atmosphere; what we have now is the residue after the hydrogen and helium were lost to space. This is obviously wrong. If our air arrived that way, there should be roughly as much neon in the air as nitrogen, but neon is extremely rare, although some of the krypton and xenon may have survived from this period. Accordingly, the early Earth, and presumably the other rocky planets, were basically lumps of rock without atmosphere or oceans.

Option 2. The volatiles were delivered by comets. This was thought to be the case for some time, but while some comets have probably struck the Earth, they are not considered to be a significant source of water or air. The reason is that the levels of deuterium in comets are about five times those of our seawater. You can concentrate deuterium by boiling off water and losing the hydrogen to space, but you cannot go the other way and dilute it. There have also been claims that the comets might have come from around Jupiter, because at least one body there has lower deuterium levels. That, however, would not get us any nitrogen, assuming the composition of Europa is typical of icy bodies in the Jovian region. It also presupposes a mass of comets that has essentially disappeared, but not into Jupiter, the obvious gravitational accretion centre.

Option 3. The volatiles were delivered by asteroids. There is some water, nitrogen and carbon in some asteroids, mainly the carbonaceous chondrites, but these are in the minority. So what is being asked is that there were massive numbers of these asteroids further from the sun that were dislodged and struck Earth, yet without significantly affecting the inner asteroids. Unfortunately, there is worse. Earth had to be struck by asteroids with very large amounts of water, large amounts of carbon, and modest amounts of nitrogen. Venus had to be struck by asteroids with modest amounts of water, about the same amount of carbon as Earth, and three times the amount of nitrogen. Mars had to be struck by asteroids with good amounts of water and carbon, but for some reason not as many, despite the fact that any asteroid striking either Earth or Venus has to cross Mars' path, and seemingly very little nitrogen, although, as later posts will show, there is ambiguity here. In other words, there were three different sorts of asteroids, and each separately struck a different planet. Now, if you believe that, there is another problem. Isotope analysis of other elements (Drake, M. J., Righter, K., 2002. Determining the composition of the Earth. Nature 416: 39-44) shows that the rocks in asteroids can only have had a very minor part in Earth's accretion. There are other compositional problems as well, and the overall conclusion is that the volatiles did not come from asteroids either.

Option 4. The volatiles were adsorbed onto dust in the accretion disk, and as the planet accreted, the dust got hotter, turned into rock, and the gases came out of volcanoes to form the atmosphere. Actually, there are two premises here: first, that the gases were trapped on the dust, and second, that the gases were emitted from volcanoes (or fumaroles). The first premise is wrong, because nitrogen, carbon monoxide and neon are each adsorbed on silicate dust to roughly the same degree, and so would end up being present in roughly the same amounts. Compared with nitrogen, Earth has roughly a hundred times more carbon, and about five orders of magnitude less neon. That is not the source of the atmosphere. On the other hand, there is good evidence that the volatiles on Mars did come from volcanoes. Thus the large "normal" river systems and remains of lakes, etc., are of about the same age as the volcanic activity. For Mars, at least, it seems that for about half a billion years there was no atmosphere of significance, then volcanic activity started and there were rivers flowing. This happened on and off for a few hundred million years, then everything started to freeze out. The catastrophic flows actually occurred almost a billion years after the main rivers stopped flowing.

So, the problem now is, we have eliminated just about every possible physical mechanism for getting the gases there. Obviously, we have missed something in the above analysis, yet the clues are there. But to get further, we have to think about how rocky planets could form in the first place, and that will be material for a further Monday post. To get to an explanation for the catastrophic floods, or how the materials for life emerged, we have to go to a number of other places first.