Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if indeed it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime? Anyway, they then observed that metals did something when heated in air:

M + air + heat → M(calx) ± ??? (A calx was what we call an oxide.)

They deduced there had to be a metallic principle that gives the metallic properties, such as ductility, lustre and malleability, but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture; thus rust, unlike iron, was not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint you get green, and the two cannot readily be unmixed, but it is nevertheless a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials. Even more important was the work of Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalized that acid plus metal, or acid plus metal calx, gave salts, and each salt was specifically different, depending only on the acid and metal used. He also recognized that what we call chemical compounds were individual entities that could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (though some did not), and the calx was usually heavier than the metal. The theory was that the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was lighter than the metal. For consistency, they asserted it should have gained weight, but as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have gained weight was correct, and indeed it did, but this avoidance of "data you don't like" leads to many problems, not the least of which is "inventing" reasons why observations do not fit the theory without taking the trouble to abandon the theory. This time they were right, but that only encourages the practice. As to why there was a problem at all, zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?

Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back to where it came from that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of the theory. He then went wrong by arguing that the more phlogiston, the more flammable the body, which is odd because, having said that some but not all such materials could release phlogiston, he might equally have reasoned that some release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He further realized that respiration was essentially the same process: fire or breathing releases phlogiston, to make phlogisticated air, and plants absorb such phlogiston, to make dephlogisticated air.

For those who know, this is all reasonable, but it happens to be a strange mix of good and bad conclusions. The big problem for Stahl was that he did not know that "air" was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships to which numerical values can be attached, and at least some of these are known in advance. Stahl's theory was interesting because it got chemistry going in a systematic way, but because we don't believe in phlogiston, Stahl is essentially forgotten.

People have blind spots. Priestley also carried out Lavoisier's experiment, 2HgO + heat ⇌ 2Hg + O2, and found that mercury was lighter than the calx, so argued phlogiston was lighter than air. He knew there was a gas there, but the fact it must also have weight eluded him. Lavoisier's explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get "exactly": measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions the various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston, simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded it.

So, looking back, did phlogiston have merit as a concept? Most certainly! "The metal gives off a weightless substance that sticks to a particular gas" can be replaced with "the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion." Opposite charges attract and try to bind together. That is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could have determined the weight difference between a metal and a metal less one electron, even if they could have worked out how to make the latter. Of course the next step is to say that the phlogiston is a discrete particle, and then valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. Very few ever have been comparable to Newton.

What is Dark Matter?

First, I don't know what dark matter is, or even if it is, and while they might have ideas, neither does anyone else. However, the popular press tells us that there is at least five times more of this mysterious stuff in the Universe than ordinary matter, and we cannot see it. As an aside, it is not "dark"; rather it is transparent, like perfect glass, because light does not interact with it. Nevertheless, there are good reasons for thinking that something is there: assuming our physics are correct, certain things should happen, and they do not happen as calculated. The following is a very oversimplified attempt at explaining the problem.

All mass exerts a force on other mass called gravity. Newton produced laws on how objects move according to forces, and he outlined an equation for how gravity operates. If we think about energy, follow Aristotle as he considered throwing a stone into the air. First we give the stone kinetic energy (that is, the energy of motion), but as it goes up, it slows down, stops, and then falls back down. So what happened to the original energy? Aristotle simply said it passed away, but we now say it got converted to potential energy. That permits us to say that the energy always stayed the same. Note we can never see potential energy; we say it is there because it makes the conservation of energy work. The potential energy for a mass m under the gravitational effect of a mass M is given by V = −GmM/r, where G is the gravitational constant and r is the distance between them.
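
To make the bookkeeping concrete, here is the stone example as a worked equation (a minimal sketch, using the simpler constant-gravity form that holds near the surface):

```latex
% Throw a stone straight up at speed v_0. Near the surface the potential
% energy is mgh, so conservation of energy reads
\tfrac{1}{2} m v_0^2 = \tfrac{1}{2} m v^2 + mgh
% At the top of the flight v = 0, so all the kinetic energy has become
% potential energy, and the maximum height is
h_{\max} = \frac{v_0^2}{2g}
% On the way down the exchange runs in reverse; the total never changes.
```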

When we have three bodies, we cannot solve the equations of motion exactly, so we have a problem. However, the French mathematician Lagrange showed that any such system has a function, which we call the Lagrangian in his honour, and it is the difference between the total kinetic and potential energies. Further, provided we know the basic functional form of the potential energy, we can derive the virial theorem from this Lagrangian, and for gravitational interactions the average kinetic energy has to be half the magnitude of the potential energy.
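
In symbols (a minimal statement, not a derivation):

```latex
% The Lagrangian is the difference between kinetic and potential energy:
L = T - V
% For an inverse-distance potential such as gravity (V = -GmM/r),
% time-averaging the motion that follows from L gives the virial theorem:
\langle T \rangle = -\tfrac{1}{2}\,\langle V \rangle
% i.e. the average kinetic energy equals half the magnitude of the
% (negative) average potential energy, as stated above.
```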

So, to the problem. As the magnitude of the potential energy drops off with distance from the centre of mass, so must the kinetic energy, which means the velocity of a body orbiting a central mass must decrease as the distance from the centre increases. In our solar system, Jupiter travels much more slowly than Earth, and Neptune is far slower still. However, when measurements of the velocity of stars moving in galaxies were made, there was a huge surprise: the stars moving around the galaxy have an unexpected velocity distribution, being slowest near the centre of the galaxy, then speeding up and becoming constant in the outer regions. Sometimes the outer parts are not quite constant, and a plot of speed vs distance from the centre rises, then, instead of flattening, has wiggles. Thus the stars have far too much velocity in the outer regions of the galactic disk. Then it was found that galaxies in clusters had too much kinetic energy for any reasonable account of the gravitational potential energy. There are other reasons why things could be considered to have gone wrong, for example gravitational lensing, with which we can discover new planets, and there is a problem with the cosmic microwave background, but I shall stick mainly with galactic motion.
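
Here is a rough sketch of the expected Keplerian fall-off, using solar-system numbers (illustrative only; a real galaxy calculation would use the full mass distribution, not a central point mass):

```python
# Circular orbital speed around a point mass: v = sqrt(GM/r).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the sun, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_speed(r_m, central_mass=M_SUN):
    """Circular orbital speed around a central point mass."""
    return math.sqrt(G * central_mass / r_m)

for name, r_au in [("Earth", 1.0), ("Jupiter", 5.2), ("Neptune", 30.1)]:
    v_kms = keplerian_speed(r_au * AU) / 1000.0
    print(f"{name}: {v_kms:.1f} km/s")
# Earth ~29.8, Jupiter ~13.1, Neptune ~5.4 km/s: speed falls as 1/sqrt(r).
# Measured stellar speeds in the outer parts of galaxies stay roughly flat
# instead, which is the discrepancy described above.
```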

The obvious answer to this problem is that the equation for the potential is wrong, but where? There are three possibilities. First, we add a term X to the right hand side, then try to work out what X is. X will include the next two alternatives, plus anything else, but since it is essentially empirical at this stage, I shall ignore it in its own right. The second is to say that the inverse dependency on r is wrong, which is effectively saying we need to modify our law of gravity. The problem for this is that Newton's gravity works very well right out to the outer extensions of the solar system. The third possibility is that there is more mass there than we expect, and it is distributed as a halo around the galactic centre. None of these is very attractive, but the third option does avoid the problem of why gravity does not deviate from Newton's law in our solar system (apart from Mercury). We call this additional mass dark matter.

If we consider modified Newtonian dynamics (MOND), this starts with the proposition that below a certain acceleration the force changes form: the radial dependency of the potential contains a further term, proportional to the distance r, until it reaches a maximum. MOND has the advantage that it naturally predicts the form of the velocity distribution and its seeming constancy between galaxies. It also provides a relationship between the observed mass and the rate of rotation of a galaxy, and this appears to hold. Further, MOND predicts that for a star, when its acceleration reaches a certain level, the dynamics revert to Newtonian, and this has been observed; dark matter has a problem with this. On the other hand, something like MOND has real trouble trying to explain the wiggly structure of velocity distributions in certain galaxies; it does not explain the dynamics of galaxy clusters; it has been claimed it offers a poor fit for velocities in globular clusters; the predicted rotations of galaxies are good, but they require different values of what should be a constant; and it does not apply well to colliding galaxies. Of course we can modify gravity in other ways, but however we do it, it is difficult to fit it with General Relativity without a number of ad hoc additions, and there is no real theoretical reason for the extra terms required to make it work. General Relativity is based on ten equations, and to modify it you need ten new terms to be self-consistent; the advantage of dark matter is you only need one.
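
For what it is worth, the limiting behaviour usually quoted in the MOND literature (the standard textbook form, not anything special to this post) shows where the flat rotation curves and the mass-rotation relationship come from:

```latex
% MOND introduces a critical acceleration a_0 (about 1.2 x 10^-10 m/s^2).
% For a >> a_0 the Newtonian result holds: a = GM/r^2.
% For a << a_0 the effective acceleration becomes a = sqrt(a_0 GM/r^2).
% Setting a = v^2/r for a circular orbit in the low-acceleration regime:
\frac{v^2}{r} = \frac{\sqrt{a_0\,G M}}{r}
\qquad\Longrightarrow\qquad
v^4 = a_0\,G M
% The r cancels: the orbital speed becomes independent of distance (a flat
% rotation curve), and v^4 tracks the galaxy's mass, which is the observed
% mass-rotation relationship mentioned above.
```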

The theory that the changes are due to dark matter has to assume that each galaxy incorporates dark matter roughly in proportion to its mass, and possibly has to do that by chance. That is probably its biggest weakness, but it has the benefit that it assumes all our physics are more or less right, and what has gone wrong is that there is a whole lot of matter we cannot see. It predicts the way the stars rotate around the galaxy, but that is circular reasoning because it was designed to do that. It naturally permits not all galaxies rotating the same way, and it permits the "squiggles" in the orbital speed distribution, again because in each case you assume the right amount of dark matter is in the right place. However, for a given galaxy, you can use the same dark matter distribution to determine the motion of galaxy clusters, the gas temperatures and densities within clusters, and gravitational lensing, and these are all in accord with the assumed amount of dark matter. The very small anisotropy of the cosmic microwave background also fits in very well with the dark matter hypothesis, and not with modified gravity.

Dark matter has some properties that limit what it could be. We cannot see it, so it cannot interact with electromagnetic radiation, at least to any significant extent. Since it does not radiate energy, it cannot "cool" itself, therefore it does not collapse to the centre of a galaxy. We can also put constraints on the mass of the dark matter particle (assuming it exists) from other parts of physics, by how it has to behave. There is some danger in this, because we are assuming the dark matter actually follows those relationships, and we cannot know that. However, with that kept in mind, the usual conclusions are that it must not collide frequently, and it should have a mass larger than about 1 keV. That is not a huge constraint, as the electron has a mass of a little over 0.5 MeV, but it says the dark matter cannot simply be neutrinos. There is a similar upper limit: because of the way gravitational lensing works, it cannot really be a collection of brown dwarfs. As can be seen, so far there are no really tight constraints on the mass of the dark matter's constituent particles.

So what is the explanation? I don’t know. Both propositions have troubles, and strong points. The simplest means of going forward would be to detect and characterize dark matter, but unfortunately our inability to do this does not mean that there is no dark matter; merely that we did not detect it with that technique. The problem in detecting it is that it does not do anything, other than interact gravitationally. In principle we might detect it when it collides with something, as we would see an effect on the something. That is how we detect neutrinos, and in principle you might think dark matter would be easier because it has a considerably higher mass. Unfortunately, that is wrong, because the neutrino usually travels at near light speed; if dark matter were much larger, but much slower, it would be equally difficult to detect, if not more so. So, for now nobody knows.

Just to finish, a long-shot guess. In the late 20th century, the German physicist Burkhard Heim came up with a theory of elementary particles. This is largely ignored in favour of the standard model, but Heim's theory produces a number of equations that are surprisingly good at calculating the masses and lifetimes of elementary particles, both of which are seemingly outside the scope of the standard model. One oddity of his results is that he predicts a "neutral electron" with a mass slightly greater than the electron's and with an infinite lifetime. If matter and antimatter originally annihilated and left a slight preponderance of matter, and if this neutral electron is its own antiparticle, then it would survive, and although it is very light, there would be enough of it to explain why its total mass now is so much greater than that of ordinary matter. In short, Heim predicted a particle that is exactly like dark matter. Was he right? Who knows? Maybe this problem will be solved very soon, but for now it is a mystery.

Is science being carried out properly?

How do scientists carry out science, and how should they? These are questions that have been raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is "resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence." What does that mean? In my opinion, it is along the lines of: to those who have, much will be given. "Much" here refers to much of what is available. Government funding can be tight, and in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards work at which it is difficult to fail, except maybe for some very large projects, like the Large Hadron Collider. The most important thing required is that something is measured, and that that something is more or less understandable and acceptable by a scientific journal, for that is a successful result. In some cases the question, "Why was that measured?" would best be answered, "Because it was easy." Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson and supersymmetry particles. They found the first, and I suppose when the question of building the collider arose, the reference (totally not apt) to the "God Particle" did not hurt.

However, while getting research funding for things to be measured is difficult, getting money for analyzing what we know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, and particularly for analyzing what we know. We are in this quite strange position that while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten.

Theory is thus stagnating. With the exception of cosmic inflation, there have been no new major theories that have taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course this may merely be because new theories have been proposed but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die." Not very encouraging. Another reason may be that a new theory failed to draw attention to itself. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you define carefully what you do not know exists?

Further information from this Science article provided some interest. An investigation led to what non-scientists might consider a highly odd result: for scientific papers to be a hit, it was found that usually at least 90 per cent of what is written must be well established. Novelty might be prized, but unless well mixed with the familiar, nobody will read it, or, even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach; rather, anything new will be a tweak on what is established. To add to this, a study of "star" scientists who had premature deaths led to an interesting observation: the output of their collaborators fell away, which indicates that only the "star" was contributing much intellectual effort, and probably actively squashing dissenting views, whereas new entrants to the field who started to shine tended not to have done much in that field before the "star" died.

A different reviewer noticed that many scientists put very little effort into citing past discoveries, and when they do cite the literature, the most important citations are about five years old. There will be exceptions, usually through citing papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any subject illumination. Another reviewer noted that scientists appeared to be narrowly channeled in their research by the need to get recognition, which requires work familiar to the readers and reviewers, particularly those who review funding applications. The important thing is to keep up an output of "good work", and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn's picture of science, where the new activities are generally puzzles to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming what would have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Have you got what it takes to form a scientific theory?

Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data, and the interpretation of what everyone thinks that set of data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests it should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form a rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which allegedly established interpretation is in fact wrong. It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. Why I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don't, I like to think there is still an interesting story there.

From September 13 – 20, my novel Athene's Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier, as regards the actual story rather than this problem. In it, my fictional character, Gaius Claudius Scaevola, is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from "fake news" and a bad subsequent press. First, the nickname: no Roman would call him Caligula, because even his worst enemies would recognize he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great-grandfather, etc., of the one you recognize. Caligulae was also Gaius Julius Caesar. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means "lefty". It is less clear what Caesar means, because in Latin the "ar" ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means "hairy". Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation the Latin "C" is the equivalent of the Greek gamma, so it is pronounced as a "G" or "K" – the difference is small and we have no way of knowing which. "ae" is pronounced as in "pie". So Caesar is pronounced something like the German Kaiser.

Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was only personally responsible for thirteen executions, and there were three failed coup attempts against him, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this is prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators.

He is accused of making a fatuous invasion of Germany. Actually, the evidence is he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is actually a textbook account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes described as decrepit ships. Whatever the truth of that, Caligulae gave them the choice between invading Britain, "conquering Neptune" by collecting a mass of sea shells, or facing decimation. They collected sea shells. The exercise was not madness: it was a total humiliation for the two legions to have to carry these through Rome in the form of a "triumph". This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact that he despised many senators inevitably led to a bad press, because it was the senatorial class that wrote the histories, but like a certain president, he seemed to go out of his way to encourage the bad press. However, he was not seen as a tyrant by the masses. When he died, the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency, and, of course, to report it in English.

As for showing how scientific theory can be developed: in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when theory is applied, all major theories start with relatively simple concepts. Take quantum mechanics as an example of a reasonably difficult piece of theoretical physics. To get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. The mathematician Hamilton had already shown you can manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) for a wavelength was constant, specifically in units of h (Planck's quantum of action). All that Schrödinger had to do was to manipulate Hamilton's waves and ensure that the action came in units of h per wavelength. That may seem easy, but everything was present for some time before Schrödinger put it together. Coming up with an original concept is not at all easy.
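
For those who like to see the bones of it, here is a minimal sketch of the two ingredients (standard textbook relations, not a derivation):

```latex
% de Broglie: a particle of momentum p behaves as a wave of wavelength
\lambda = \frac{h}{p}
% so the classical action accumulated over one wavelength is p\lambda = h.
% Imposing "action in units of h per wavelength" on Hamilton's wave
% formulation leads to the time-independent Schrodinger equation for a
% particle of mass m in a potential V(x):
-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi
```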

Anyway, in the novel, Scaevola has to prove the Earth goes around the sun with what was available then. (No telescopes of the sort that helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage of knowing that it does. (And no, you do not have to invent calculus or Newtonian mechanics.)

The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots there, and then the religious problems of Judea as Christianity starts.

How Earth Cools

As you may have seen at the end of my last post, I received an objection to the existence of a greenhouse effect on the grounds that it violated the thermodynamics of heat transfer, and if you read what it says it is essentially focused on heat conduction. The reason I am bothering with this post is that it is an opportunity to consider how theories and explanations should be formed. We start by noting that mathematics does not determine what happens; it calculates what happens provided the background premises are correct.

The objection mentioned convection as a complicating feature. Actually, the transfer of heat in the lower atmosphere is largely dependent on the evaporation and condensation of water, and on wind transferring the heat from one place to another, and it is these, together with ocean currents, that are the problems for the ice caps. Further, as I shall show, heat conduction cannot be relevant to the major cooling of the upper atmosphere. But first, let me show you how complicated heat conduction is. The correct equation for one-dimensional heat conduction is represented by a partial differential equation of the Laplace type (sketched below), and the simplest form only works as written when the medium is homogeneous. Since the atmosphere thins out with height, this clearly needs modification, and for those who know anything about partial differential equations, they become a nightmare once the system becomes anything but absolutely simple. Such equations also apply to convection and evaporative transfer, once corrected for the nightmare of non-homogeneity and motion in three dimensions. Good luck with that!
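
For the record, the standard one-dimensional form for a homogeneous medium is the textbook heat equation; in the steady state it reduces to the Laplace form:

```latex
% One-dimensional heat conduction in a homogeneous medium: T is the
% temperature, t the time, x the position, and alpha the thermal
% diffusivity of the medium.
\frac{\partial T}{\partial t} = \alpha\,\frac{\partial^2 T}{\partial x^2}
% In the steady state the left-hand side is zero and the equation reduces
% to the Laplace form; a non-homogeneous medium makes alpha depend on
% position, which is where the nightmare starts.
```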

This form of heat transfer is irrelevant to the so-called greenhouse effect. To show why, I start by considering what heat is, and that is random kinetic energy. The molecules are bouncing around, colliding with each other, and the collisions are elastic, which means energy is conserved, as is momentum. Most of the collisions are glancing, and that means from momentum conservation that we get a range of velocities distributed about an “average”. Heat is transferred because fast moving molecules collide with slower ones, and speed them up. The objection noted heat does not flow from cold to hot spontaneously. That is true because momentum is conserved in collisions. A molecule does not speed up when hit by a slower molecule. That is why that equation has heat going only in one way.
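
A toy calculation makes the point. Take the special case of equal-mass, head-on elastic collisions (real molecular collisions are mostly glancing, as noted above, but the head-on case shows the direction of energy flow most clearly):

```python
# Post-collision velocities for a one-dimensional elastic collision,
# from conservation of momentum and kinetic energy.
def elastic_1d(m1, v1, m2, v2):
    v1_after = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# A "hot" (fast) molecule meets a "cold" (slow) one of the same mass:
v_fast, v_slow = elastic_1d(1.0, 500.0, 1.0, 100.0)
print(v_fast, v_slow)  # 100.0 500.0 -- they swap speeds: the fast one
# slowed and the slow one sped up. Energy moves from fast to slow, never
# the reverse, which is why heat does not flow from cold to hot
# spontaneously.
```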

Now, suppose with this mechanism we get to the top of the atmosphere. What happens then? No more heat can be transferred, because there are no molecules to collide with in space. If heat pours in and nothing goes out, eventually we become infinitely hot. Obviously that does not happen, and the reason becomes obvious when we ask how the heat gets in in the first place. The heat from the sun comes from the effects of solar radiation. Something like 1.36 kW/m^2 comes in on a surface in space at right angles to the line from the sun, but the average is much less on the surface of Earth, as the angle is at best normal only at noon, and only then if the sun is overhead. About a quarter of that is directly reflected to space, and that may increase if the cloud cover increases. The important point here is that light is not heat. When it is absorbed, it will drive an electronic transition, and that energy will eventually decay into heat. Initially, however, the material goes to an excited state, but its temperature remains constant, because the energy has not been randomised. Now we see that if energy comes in as radiation, it follows that to get an equilibrium, equivalent energy must go out, and as radiation, not heat, because that is the only way it can get out in a vacuum.
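
Here is a rough sketch of that balance, using the numbers above (1.36 kW/m^2 incoming, about a quarter reflected); the Stefan-Boltzmann law then gives the temperature at which the planet must radiate to balance what it absorbs:

```python
# Radiative balance for a planet with no greenhouse effect.
SOLAR_CONSTANT = 1360.0  # W/m^2 on a surface facing the sun, in space
ALBEDO = 0.25            # fraction directly reflected (the "quarter" above)
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

# A sphere intercepts sunlight over pi*R^2 but radiates from 4*pi*R^2,
# hence the factor of 4:
absorbed_flux = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0   # ~255 W/m^2
t_effective = (absorbed_flux / SIGMA) ** 0.25
print(f"{t_effective:.0f} K")
# ~259 K, well below the observed average surface temperature (~288 K);
# the difference is what the greenhouse effect has to supply.
```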

The ground continuously sends radiation (mainly infrared) upwards, and the intensity is proportional to the fourth power of the temperature. The average temperature is thus determined by radiant energy in equalling radiant energy out. The radiance for a given material, which is described as a grey-body radiator, also depends on its nature. The radiation occurs because any change of dipole moment leads to electromagnetic radiation, but the dipoles must change between quantised energy states. What that means is they come from motion that can be described in one way or another as a wave, and the waves change to longer wavelengths when they radiate. The reason the waves representing ground states switch to shorter wavelengths is that the heat energy from collisions can excite them, similar in a way to when you pluck a guitar string. Thus the body cools by heat exciting some vibratory states, which collapse by radiating. (This is similar to the guitar string losing energy by emitting sound, except that the guitar string emits continuously decaying sound; the quantised state lets its energy go all at once as one photon.)

Such changes are reversible; if the wave has collapsed to a longer wavelength when energy is radiated away, then if a photon of the same frequency is returned, that excites the state. That slows cooling, because the next photon emitted from the ground did not need heat to excite it, and hence that heat remains. The reason there is back radiation is that certain frequencies of infrared radiation leaving the ground get absorbed by molecules in the atmosphere when their vibrational or rotational excited states have a different electric moment from the ground state. Carbon dioxide has two such vibrational states that absorb mildly, and one that does not. Water is a much stronger absorber, and methane has more states available to it. Agriculture offers N2O, which is bad because it is harder to remove than carbon dioxide, and the worst are the chlorocarbons and fluorocarbons, because their vibrations have stronger dipole moment changes. Each of these materials vibrates at different frequencies, which makes them collectively even more problematical, as radiation at more frequencies is slowed in its escape to space. The excited states decay and emit photons in random directions, hence only about half of the radiation continues on its way to space, the rest returning to the ground. Of the part that goes upwards, some will be absorbed by more molecules, and the same will happen, and of course some coming back from up there will be absorbed at a lower level, and half of that will go back up. In detail, there is some rather difficult calculus, but the effect could be described as a field of oscillators.
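
A crude single-layer version of that "half up, half down" bookkeeping (the standard textbook toy model, not the full treatment sketched above) shows the size of the effect:

```python
# One atmospheric layer absorbs all infrared from the ground and re-emits
# half upwards and half back down.
t_effective = 259.0  # K, the no-atmosphere radiating temperature from above

# Top of atmosphere: the layer must radiate the absorbed solar flux to
# space, so the layer sits at t_effective. At the ground: sunlight plus the
# layer's downward emission means the ground must radiate twice the
# absorbed flux, and since radiated flux goes as T^4,
# T_ground = 2**(1/4) * T_layer.
t_ground = 2.0 ** 0.25 * t_effective
print(f"{t_ground:.0f} K")
# ~308 K: an overestimate (real absorption is partial and spread over many
# layers), but it shows how back radiation keeps the ground warmer without
# any heat flowing from cold to hot.
```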

So the take-away message is that the physics are well understood: the effect of the greenhouse gases is to slow the cooling process, so the ground stays warmer than it would if they were not there. Now, the good thing about a theory is that it should predict things. Here we can make a prediction: in winter, in the absence of wind, the night should be warmer if there is cloud cover, because water is a strong greenhouse material. Go outside one evening and see.

That Was 2017, That Was

With 2017 coming to a close, I can't resist the urge to look back and see what happened from my point of view. I had plenty of time to contemplate, because the first seven months were largely spent getting over various surgery. I had thought the recovery periods would be good for creativity: with nothing else to do, I could write and advance some of my theoretical work, but it did not work out like that. What I found was that painkillers also seemed to kill originality. However, I did manage one e-novel through the year (The Manganese Dilemma), which is about hacking, Russians and espionage. That was obviously initially inspired by the claims of Russian hacking in the Trump election, but I left that alone; it was clearly better to invent my own scenario than to go down that turgid path. Even though it is designed essentially as just a thriller, I did manage to insert a little scientific thinking into the background, and hopefully the interested potential reader will guess that from the "manganese" in the title.

On the space front, I am sort of pleased to report that there was nothing found in the literature that contradicted my theory of planetary formation, but of course that may be because there is a certain plasticity in it. The information on Pluto, apart from the images and the signs of geological action, was well in accord with what I had written, but that is not exactly a triumph because, apart from those images, there was surprisingly little new information. Some of what might previously have been considered "probable" was confirmed, and details added, but that was all. The number of planets around TRAPPIST-1 was a little surprising, and there is limited evidence that some of them are indeed rocky. The theory I expounded would not predict that many; however, the theory depended on temperatures, and for simplicity and generality it considered the star as a point. That will work for a system like ours, where gravitational heating is the major source of heat during primary stellar accretion, and radiation from the star is most likely to be scattered by the intervening gas. Thus closer to our star than Mercury, much of the material, even silicates, had reached temperatures where it formed a gas. That would not happen around a red dwarf, because the gravitational heating necessary to do that occurs very near the surface of the star (because there is so much less material falling more slowly into a far smaller gravitational field), so now the heat from the star becomes more relevant. My guess is the outer rocky planets there are made the same way our asteroids were, but with lower orbital velocities and slower infall there was more time for them to grow, which is why they are bigger. The inner ones may even have formed closer to the star, and then moved out due to tidal interactions.

The more interesting question for me is, do any of these rocky planets in the habitable zone have an atmosphere? If so, what are the gases? I am reasonably certain I am not the only one waiting to get clues on this.

On another personal level, as some might know, I have published an ebook (Guidance Waves) that offers an alternative interpretation of quantum mechanics which, like those of de Broglie and Bohm, assumes there is a wave, but with two major differences, one of which is that the wave transmits energy (which is what all other waves do). The wave still reflects probability, because energy density is proportional to mass density, but it is not the cause. The advantage of this is that for a stationary state, such as in molecules, the fact that the wave transmits energy means the bond properties of molecules should be able to be represented as stationary waves, and this greatly simplifies the calculations. The good news is, I have made what I consider good progress on expanding the concept to more complicated molecules than those outlined in Guidance Waves, and I expect to archive this sometime next year.

Apart from that, my view of the world scene has not become more optimistic. The US seems determined to try to tear itself apart, at least politically. ISIS has had severe defeats, which is good, but the political future of the mid-east remains unclear, and there is still plenty of room for that part of the world to fracture itself again. As far as global warming goes, the politicians have set ambitious goals for 2050, but have done nothing significant up to the end of 2017. A thirty-year target is silly, because it leaves the politicians with twenty years to do nothing, and then it would be too late anyway.

So this will be my last post for 2017, and because this is approaching the holiday season in New Zealand, I shall have a small holiday, and resume half-way through January. In the meantime, I wish all my readers a very Merry Christmas, and a prosperous and healthy 2018.

Ross 128b a Habitable Planet?

Recently the news has been full of excitement that there may be a habitable planet around the red dwarf Ross 128. What we know about the star is that it has a mass of about 0.168 times that of the sun, it has a surface temperature of about 3200 K, it is about 9.4 billion years old (about twice as old as the sun), and consequently it is very short of heavy elements, because there had not been enough supernovae that long ago. The planet is about 1.38 times the mass of Earth, and it is about 0.05 times as far from its star as Earth is from ours. It also orbits its star every 9.9 days, so Christmas and birthdays would be a continual problem. Because it is so close to the star, it gets almost 40% more irradiation than Earth does, so it is classified as being in the inner part of the so-called habitable zone. However, the "light" is mainly at the red end of the spectrum and in the infrared. Even more bizarrely, in May this year the radio telescope at Arecibo appeared to pick up a radio signal from the star. Aliens? Er, not so fast. Everybody now seems to believe that the signal came from a geostationary satellite. Apparently there is yet another source of electromagnetic pollution. So could it have life?
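
The irradiation figure is easy to check. The luminosity value below is an assumption on my part: about 0.0036 of the sun's is a commonly quoted figure for Ross 128.

```python
# A quick consistency check on the "almost 40% more irradiation" figure.
L_STAR = 0.0036   # luminosity of Ross 128, in units of the sun's (assumed)
D_PLANET = 0.05   # orbital distance of Ross 128b, in astronomical units

# Irradiation scales as L/d^2 (inverse square law), relative to Earth's:
flux_relative = L_STAR / D_PLANET ** 2
print(f"{flux_relative:.2f} x Earth's irradiation")  # ~1.44, i.e. ~40% more
```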

The first question is, what sort of a planet is it? A lot of commentators have said that since it is about the size of Earth, it will be a rocky planet. I don't think so. In my ebook "Planetary Formation and Biogenesis" I argued that the composition of a planet depends on the temperature at which the object formed, because various things only stick together in a narrow temperature range, and there are many such zones, each giving planets of different composition. I gave a formula that very roughly indicates at what distance from the star a given type of body starts forming, and if that is applied here, the planet would be a Saturn core. However, the formula was very approximate and made a number of assumptions, such as that the gas all started at a uniform low temperature and that the change of temperature as it migrated inwards was the same for every star. That is known to be wrong, but equally, we don't know what causes the known variations, and once the star is formed there is no way of knowing what happened, so that was something that had to be ignored. What I did was to take the average of observed temperature distributions.

Another problem was that I modelled the centre of the accretion as a point. The size of the star is probably not that important for a G-type star like the sun, but it will be very important for a red dwarf, where everything happens so close to it. The forming star gives off radiation well before the thermonuclear reactions start, through the heat of matter falling into it, and that radiation may move the snow point out. I discounted that largely because at the key time there would be a lot of dust between the planet and the star that would screen out most of the central heat, hence any effect from the star would be small. That is more questionable for a red dwarf. On the other hand, in the recently discovered TRAPPIST system we have an estimate of the masses of the bodies and a measurement of their size, and they have to have either a good water/ice content or be very porous. So the planet could be a Jupiter core.

However, I think it is most unlikely to be a rocky planet, because even apart from my mechanism, rocky planets need silicates and iron (and other heavier elements) to form, and Ross 128 is a star very deficient in heavy elements, and it formed from a small gas cloud. It is hard to see how there would be enough material to form such a large planet from rocks. However, carbon, oxygen and nitrogen are the easiest elements to form, and are by far the most common elements other than hydrogen and helium. So in my theory, the most likely nature of Ross 128b is a very much larger and warmer version of Titan. It would be a water world, because the ice would have melted. However, the planet is probably tidally locked, which means one side would be a large ocean and the other an ice world. What then should happen is that the water should evaporate, form clouds, go around to the other side and snow out. That should lead to the planet eventually becoming metastable, and there might be climate crises there as the planet flips around.

So, could there be life? If it were a planet with a Saturn-core composition, it should have many of the necessary chemicals from which life could start, although because of the water/ice, life would be limited to aquatic forms. Also, because of the age of the planet, it may well have been and gone. However, leaving that aside, the question is, could life form there? There is one restriction (Ranjan, Wordsworth and Sasselov, 2017. arXiv:1705.02350v2), and that is that if life requires photochemistry to get started, then the intensity of the high-energy photons required to get many photochemical processes going can be two to four orders of magnitude less than what occurred on Earth. At that point, it depends on how fast everything that follows happens, and how fast the reactions that degrade the products happen. The authors of that paper suggest that the UV intensity is just too low to get life started. Since we do not know exactly how life started yet, that assessment might be premature; nevertheless it is a cautionary point.