Why do we do science?

What is the point of science? In practice, most scientists use their knowledge to try to make something, solve some sort of problem, or at least help someone else do so. (As in most occupations, the junior ones turn up to work and work on what they are told to work on.) But, you might say, surely, deep down, they are seekers of the truth? Unfortunately, I rather fancy this is not the case. The problem was first noted by Thomas Kuhn, in his book “The Structure of Scientific Revolutions”. In Kuhn’s view, scientific results are almost always interpreted in terms of the current paradigm: while the data are reproduced properly, they are interpreted in terms of current thinking, even when that does not fit very well. No other theory gets a look-in. If a result does not conform to the standard theory, the researcher does not question the standard theory. The first effort is to find some way of accommodating the result, and if that does not work, it may be listed as a question for further work; in other words, the researcher tries to persuade someone else to find a way of fitting it to the standard paradigm rather than taking the effort to find an alternative theory.

According to Kuhn, most science is carried out as “normal science”, wherein researchers create puzzles that should be solved by the standard paradigm; in other words, experiments are set up not to try to find the truth, but rather to confirm what everyone believes to be true. This is not entirely unreasonable. If we stop and think for a moment, an awful lot of such research is carried out by PhD students or post-doctoral fellows. The lead researcher has submitted his idea as a request for funding, and this is overseen by a panel. If you submit something that would not get anywhere within the current paradigm, you will not get funding, because the panel will usually consider it a waste of time. On top of that, if you are going to include a PhD student in this work, that student needs a thesis at the end of it, and will not thank the supervisor for coming up with something that does not produce results that can be written up. In other words, projects are chosen such that the lead researcher has a very good idea of what will be found, and they are chosen so that they are unlikely to lead to too great an intellectual challenge. An example of a good project might be to make a new chemical compound that might be a useful drug. The project might involve new synthetic work, and there will be problems in choosing a route, but the project will not founder on some conceptual problem.

Naturally, the standard paradigm clearly must have had much going for it to be adopted in the first place. It cannot be just anything, and there will be a lot of truth in it. Nevertheless, as I mentioned in my first ebook, part 1 of “Elements of Theory”, any moderate subset of data frequently has at least two theories that would explain it, and when the paradigm is chosen, the subset is moderate. If all that follows is the investigation of very similar problems, a mistake can last. The classic mistake was Claudius Ptolemy’s cosmological theory, which was the “truth” for over 1600 years, even though it was wrong and, as we now recognize, had no physical basis. If you wish to find the truth, you might follow Popper and try to design experiments that would falsify such a theory, but PhD theses cannot be designed like that: the risk is too great that the student will find nothing and fail to get his degree through no fault of his own.

What brought these thoughts on was a recent article in the journal Icarus. The subject was how the Moon was formed. The standard theory of planetary formation goes like this. After the star forms, the dust in the remaining accretion disk settles on the central plane, and this gradually congeals into larger bodies, which further join together when they collide, and so on, until you get planetesimals (objects about the size of asteroids), then, apart from the asteroids, eventually embryos (objects about the size of Mars), which interact gravitationally, take up very eccentric orbits, and collide to form planets (except for Mars, which is a remaining embryo). All such collisions once planetesimals form are random, and the underpinning material could have come from a very large region; thus Earth was made from embryos formed from material ranging from beyond Mars to inside the orbit of Venus. The Moon was formed from the splatter arising from a near-glancing collision with Earth of a Mars-sized body called Theia.

If you carefully measure the isotope ratios of samples of meteorites, what you find is that all from the same origin have the same isotope ratios, but those from different parts of the solar system have different ratios. As an example, oxygen has three stable isotopes of atomic weights 16, 17 and 18. We have carbonaceous chondrites from the outer asteroid belt, a number of samples from Vesta, some from Mars, and of course unlimited supplies from here. The isotope ratios of these samples are all the same from one source, but different between sources. We also have a good number of samples from the Moon, thanks to the Apollo program. Now, the unusual fact is, the Moon is made of material that is essentially identical to our rocks, at least in terms of isotope ratios.

This Icarus paper carried out simulations of planetary formation employing the standard theory, and showed that, since the Moon is largely Theia, the chance of the Moon and Earth having the same ratio of even the oxygen isotopes is less than 5%. So, what conclusion did the authors draw? The obvious one is that the Moon did not form that way; a more subtle one is that the planets did not form by the random collision of growing rocky bodies. However, they drew neither. Instead, they effectively refused to draw a conclusion.

I should add that I have an interest in this debate, as the mechanism outlined in my ebook Planetary Formation and Biogenesis has the planets grow from relatively narrow zones, although the disk material is always heading towards the star to provide new feed. The Moon grows at the same distance from the star as Earth (at a Lagrange point) and hence has the same composition. The concept that the Moon formed at either L4 or L5 was originally proposed by Belbruno and Gott in 2005 (Astron. J. 129: 1724–1745), and I regard it as almost dishonest not to have mentioned their work, which predicts this result provided the bodies form from local material. Unfortunately, the citing of scientific work that contradicts the standard theory is not exactly frequent, and in my view that does science no service. The real problem is, how common is this rejection of that which is currently uncomfortable?

You may say, who cares? It may very well be that how the Moon formed is totally irrelevant to modern society. My point is, society is becoming extremely dependent on science, and if science becomes uninterested in seeking the truth, then eventually the mistakes may become very significant. Of course mistakes will be made; that happens in any human endeavour. But do we want to restrict them to unavoidable accidents, or are we prepared to put up with avoidable errors?

Origin of life, and a challenge!

Here is a chance to test yourself as a theoretician. But do not worry if you cannot solve this. Most people will not, and I predict nobody will, but prove me wrong! And as a hint, while nobody actually knows the answer, as I shall show eventually, getting a very reasonable answer is actually relatively simple, although you need a little background knowledge for the first question.

Just before Christmas, I posted with the title “Biogenesis: how did life get started?” (http://wp.me/p2IwTC-6e), but as some may have noticed, I did not get very far along the track indicated by the title. The issue is, of course, somewhat complicated, and it is easier to discuss it in small pieces. I also mentioned I was about to give a talk on this early this year. Well, the talk will come on March 4, so it is approaching quickly. Accordingly, I have put out an abstract, and am including two challenges, which readers here may or may not wish to contemplate. Specifically,
1. Why did nature choose ribose for nucleic acids?
2. How did homochirality arise?
Put your guesses or inspired knowledgeable comments at the end of this post. The answers are not that difficult, but they are subtle. In my opinion, they are also excellent examples of how to go about forming a theory. I shall post my answers in due course.

The question of why ribose is a little complicated and cannot be answered without some chemical knowledge, so most readers probably won’t be able to answer it. Notwithstanding that, it is a very interesting question, because I believe it gives a clue as to how life got under way. RNA is a polymer in which each monomer is made up of three entities: one of four nucleobases, ribose, and a phosphate ester. The nucleobase is attached to C-1 of ribose (if you opened it up, the aldehyde end) and the phosphate is at C-5 (the other end, ribose being a five-carbon sugar). The nucleobases are, in general, easy to make: if you leave ammonium cyanide lying around, they make themselves. That, however, is the only thing that appears to be easy about this entity. Sugars can be made in solution by having formaldehyde, which is easily made, react in water with lime and a number of other solids. That seems easy, except that when you do this you do not get much, if any, ribose. The reason is that ribose is a high-energy pentose (five-carbon sugar), because all the hydroxyl groups eclipse each other in the closest orientation (axial, for those who know some chemistry). In the laboratory, double-helix nucleic acid analogues (duplexes) have been made from xylose and arabinose, and in many ways these have superior properties to those made from ribose, but nature chose ribose, so the question is, why? Not only did it do so for RNA, but the unit adenine-ribose-phosphate turns up very frequently.

Adenine combined with ribose is usually called adenosine, and the adenosine phosphate linkage turns up in the energy-transfer chemical ATP (adenosine triphosphate), in the redox catalysts NAD and FAD (where the AD stands for adenosine diphosphate), and in a number of enzyme cofactors, where it confers solubility in water. Solubility in water is an obvious benefit, but putting a simpler sugar unit on the group would also provide that. An electric charge is also of benefit, because it helps keep the entity in the cell; nevertheless, there are other ways of achieving that too. You may say, well, it had to choose something, but recall that ribose is hard to make, so why was it selected for so many entities?

The phosphate ester also causes something of a problem. In the laboratory, phosphate esters are usually made with highly reactive phosphorus-based chemicals, but life could not have started that way. Another way to form phosphate esters is to heat a phosphate and an alcohol (including the hydroxyl groups on a sugar) to about 180 °C, whereupon water is driven off. Note that if liquid water is around, as in the undersea thermal vents that are often considered to be the source of life, the superheated water converts phosphate esters back to phosphate and alcohol groups. Life did not start at the so-called black smokers, although, with sophisticated protection mechanisms, it has evolved to tolerate such environments. A further problem is that phosphates are insoluble in neutral or alkaline water, while phosphate esters hydrolyse in acidic water.
However, notwithstanding the difficulty with using phosphate, there is no real choice if you want a linking agent with three functions (two used up to join two groups, one to be ionic to enhance water solubility). Boron is rare, and has unusual chemistry, while elements such as arsenic, besides being much less common, do not give bonds with as much strength.

Homochirality is a different matter. (Chirality can be thought of as handedness. If you have gloves, your left hand has its glove and the right hand its own, even though they are identical in features such as four fingers and a thumb. The handedness comes from the fact that you cannot arrange those fingers and thumb on a hand whose top differs from its bottom without making the right hand different from the left.) The sugars your body uses are D sugars (think of this as right-handed) while all your amino acids are L, or left-handed. The problem is, when you synthesize any of these through any conceivable route, given that the starting materials have no chirality, you get an equal mix of D and L. How did nature select one lot and neglect the other?
Put your guesses below! In the meantime, my ebook, “Planetary formation and biogenesis”, which summarizes what we knew up to about 2012, is going to be discounted on Amazon for a short period following March 6. This is to favour those going to my talk, but you too can take advantage. It has a significant scientific content (including an analysis of over 600 scientific papers) so if your scientific knowledge is slight, it may be too difficult.

What now for Ukraine?

As the situation in Ukraine seems to deteriorate, the question is, what now? Accurate information is, understandably, rather scarce, but from a strategic point of view most parties seem to be digging in, more with a view to making the problem worse than to improving it. The first step in forming a strategy is to have a clear goal, and from what I can make out, the various parties have goals that are essentially irreconcilable. My guess is that the following is approximately what the goals are, but I could be wrong. Poroshenko wants to exert control over all of what he claims is Ukraine, on the basis that he was elected president of it, except of course that the parts that do not want him were not given a vote. The leaders of Eastern Ukraine want independence from Poroshenko. Crimea is part of Russia again. The position of the US and NATO is less clear. They claim they want Ukraine united, but the real position may be that they want to put one over Russia and have military bases close to it. Russia almost certainly wants fewer missiles aimed at it, and none in Ukraine; additionally, it wants to support the Russian-speaking people in Ukraine, who, reports say, either are or most certainly will be oppressed by right-wing militias. Missing from all this is the question: what does the average Ukrainian want? Do they all want the same thing?

The West has sent Ukraine various supplies to help those afflicted by the war, and sent them to Kiev, from where they have been sent eastwards. From what I can make out, a very high percentage of these have been hijacked and looted. Further, the land near the separatists may or may not have Ukrainian regular soldiers present, but it most certainly has right-wing militias and paramilitary groups. The separatists may or may not have irregular soldiers from Russia, and they may or may not have been supplied with weapons from Russia. Everyone says they have, but it should be recalled that there were a number of arsenals in Eastern Ukraine that are now under separatist control, and from what we can make out, most of the weapons used by the separatists are of Soviet vintage. Thus the BUK missile that brought down the airliner was designed and supplied up to thirty-odd years ago.

So, what to do? Germany and France have apparently argued for a demilitarized zone between the east and west, and a cease-fire. In my opinion, that is not going to work unless there are good troops there to enforce it. The problem with a cease-fire is that its only real purpose is to buy time until some permanent settlement is reached. Even in Korea, the cease-fire has in effect become the permanent settlement, at least to the extent that it has survived for over sixty years. But this will not work while the right-wing militias want to bring the East to heel. The US is talking about giving Kiev better arms. What that will do, based on recent history, is first to better arm the militias, who are uncontrollable, and second, some of those arms will be looted and sold off, and may well end up in terrorist hands. Worse still, if the US supplies military aid, Russia will be obliged to match it, which will merely escalate things. If the US sends “advisors”, or troops, Russia will match that too. The danger of a real war breaking out if someone makes a mistake is only too obvious. Suppose a US weapon were used against Russians in Russia; now what?

So what should happen? My view is that the previous cease-fire was time wasted. What the West could do is try to get Putin onside by promising not to have Ukraine in NATO and not to station missiles there, then offer Ukraine an independently monitored election, district by district, to decide what its people want to happen. There must be sufficient external force to guarantee that the militias stand down, and clear instructions to the parties that undermining this process will not be tolerated. At the end of this, those districts that have a majority to secede should be permitted to do so. I know people will say this is interfering with a sovereign nation, but my response is that it actually offers the people the chance to get what they want, not what various other parties that do not live there want. After the election, if any districts do secede, there should also be financial assistance to permit those who do not want to be a minority in a district to move. In all probability, the numbers moving each way would be roughly equal. That would be expensive, but nowhere near as expensive as an all-out war.

What do you think?

Life after death

The issue of whether there is life, or consciousness, after death is one of those questions that can only be answered by dying. If there is, you find out. My wife was convinced there is, and she was equally convinced that I, as a scientist, would quietly argue the concept was ridiculous. However, as she was dying of metastatic cancer we had a discussion of this issue, and I believe the following theory gave her considerable comfort. Accordingly, I announced this at her recent funeral, in case it helped anyone else, and I have received a number of requests to post the argument. I am doing two posts: one with the mathematics, and one where I merely assert the argument for those who want a simpler account. The more mathematical post is at (http://my.rsc.org/blogs/84/1561 ).

First, is there any evidence at all? There are numerous accounts of people who nearly die but do not, and they claim to see a tunnel of light, and relations at the other end. There are two possible explanations:
(1) What they see is true,
(2) When the brain shuts down, it produces these illusions.
The problem with (2) is, why does it do it the same way for everyone? There was also an account recently of someone who died on an operating table but was resuscitated, and he then gave an account of what the surgeons were doing, as viewed from above. The following study may be of interest (http://rt.com/news/195056-life-after-death-study/). One can take this however one likes, but it is certainly weird.

What I told Claire arises from my interpretation of quantum mechanics, which is significantly different from most others’. First, some background. (If you have no interest in physics, you can skip this and go to the last three paragraphs.) If you fire particles such as electrons one at a time through a screen with two slits, each electron will give a point reading on a detector screen, but if you do this for long enough, the points build up the pattern of wave diffraction. This is known as wave-particle duality, and at the quantum level an experiment gives either the properties of a particle or those consistent with a wave, depending on how you do it. So, how is that explained? Either there is a wave guiding the particles or there is not. Most physicists argue there is not, and that the electrons just happen to give that distribution. You ask, why? They tend to say, “Shut up and compute!” Einstein did not agree, and said, “God does not play dice.” What we know is that computations based on a wave equation give remarkably good agreement with observation, but nobody can find evidence for the wave. All we detect are the particles, but of course that is what the detectors are set up to detect. It is generally agreed that the formalism that enables the calculations is sufficient. For me, it is not, and I think there must be something causing this behaviour. Suppose you cannot see ducks but you hear a lot of quacking: why would you assume the quacks are merely a consequence of your listening, and that there are no ducks? There is a minority who believe there is a wave, and the pilot wave concept was proposed by de Broglie.

Modern physics states the wave function is complex. In general, this is true, but from Euler’s theory of complex numbers, once (or twice) a period (which is defined as the time from one crest, say, to the next) the wave becomes momentarily real. My first premise is
The physics of the system are determined only when the wave becomes real.
From this, the stability of atoms, the Uncertainty Principle and the Exclusion Principle follow. Not that that is of importance here, other than to note that this interpretation does manage to do what standard theory effectively has as premises. My next premise is
The wave causes the wave behaviour.
At first sight, this seems obvious, but recall that modern quantum theory does not assert it. Now, if the premise holds, it follows that the wave front must travel at the same velocity as the particle; if it did not, how could it affect the particle? But if it travels at the same velocity, the energy of the system must be twice the kinetic energy of the particle. This simply asserts that the wave transmits energy. Actually, every other wave in physics transmits energy, except for the textbook quantal matter wave, which transmits nothing; it does not exist, but merely defines probabilities. (As an aside, since energy is proportional to mass, in general this interpretation does not conflict with standard quantum mechanics.) For this discussion, the most important consequence is that both particle and wave must maintain the same energy. The wave sets the particle energy because the wave is deterministic, which means that once the wave is defined, it is defined for every future with known conditions. The particle, however, suffers random motion, and in my theory it has to be guided by the wave.

Now, what is consciousness? Strictly speaking, we do not know exactly, but examination of brains that are conscious appears to show considerable ordered electrical activity. If electrical activity is occurring, that is the expenditure of energy. (The brain uses a remarkably high fraction of the body’s energy.) Since the movement of electrons is quantum controlled, the corresponding energy must be found in an associated set of waves. Moreover, it is the associated wave that is causal, and it alone can overcome the randomness that may arise through the uncertainty of position of any particle. The wave guides the particle! Another important feature of these Guidance Waves is that they are linear, which means they are completely separable. This is a general property of waves, and is not an ad hoc addition. It therefore follows that when we are conscious and living “here”, there is a matrix of waves with corresponding energy “there”.

Accordingly, if this Guidance Wave interpretation of quantum mechanics is correct, then the condition for life after death is very simple. Death occurs because the body cannot supply the energy required to match the Guidance Waves that are organizing consciousness, and the random thermal motion of particles in the brain overpowers the order that bodily consciousness requires. The body is no longer conscious, and hence is dead, and useful brain activity ceases. But if, at the point where the brain can no longer provide its energy contribution to consciousness, the energy within the Guidance Wave can dissociate itself from the body and maintain itself “there” (recall that linearity means other waves do not affect it), then that wave package can continue, and since it represents the consciousness of a person, that consciousness continues. What happens next depends on the conditions applicable “there”, and for that we have no observations.

Is the Guidance Wave interpretation correct? As far as I am aware, there is no observation that would falsify my alternative interpretation of quantum mechanics, while my Guidance Wave theory does make two experimental predictions that contradict standard quantum mechanics. It also greatly simplifies the calculation of some chemical bond properties. However, even if it is correct, that does not mean there is life after death, but at least in my interpretation of quantum mechanics it is permitted. That thought comforted Claire in her last days, and if it comforts anyone else, this post is worth it.

Is time relative?

In the previous post (http://wp.me/p2IwTC-6m) I gave a simplified account of why time and position are considered relative, in which each observer has his own version of what “here” and “now” means. We need some means of describing what an observer sees. An absolute position would be like GPS coordinates. Everybody agrees where the equator is, and we have made Greenwich a reference point for longitude, but in the general Universe there are no obvious reference points. Without a reference point, “here” is meaningless unless expressed as a distance from something else, and this has been well established since Galileo’s time, if not earlier. There was thought to be “aether” through which everything travelled, but Michelson and Morley provided evidence there was no such thing. The formalism of Einstein’s relativity puts time in a similar position, and it dilates as velocities approach that of light. This is accounted for with what is called “space-time”, in which time is just another relative coordinate.
All observed evidence is in accord with this, and an example is the lifetime of muons. The muon is an elementary particle that decays to an electron with a half-life of about 1.5 microseconds. However, if the muons are travelling at about 98% of the velocity of light, then, applying the Lorentz-FitzGerald factor for time dilation as required by special relativity, the half-life is about five times longer, and most importantly, muons behave as if they live five times longer when travelling at such velocities. From the muon’s point of view, the reason it lasts longer is that the distance it thinks it has travelled is shorter. This suggests that time is relative, and the equations of relativity invariably give the correct prediction of a measurement.
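The factor of about five quoted above can be checked directly from the Lorentz factor; a minimal sketch in Python, using the approximate half-life and velocity figures from the text:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a particle moving at a fraction beta = v/c of light speed."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

half_life_rest = 1.5e-6      # muon half-life at rest, in seconds (approximate)
beta = 0.98                  # 98% of the velocity of light
gamma = lorentz_gamma(beta)

print(round(gamma, 2))             # about 5.03
print(gamma * half_life_rest)      # dilated half-life as seen from the laboratory
```

So at 98% of light speed the laboratory sees a half-life of roughly 7.5 microseconds, which is the "five times longer" figure.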
Consider a space traveller. According to relativity, if the traveller heads off at near the speed of light and travels far enough, then comes back, time has essentially stopped for the traveller, but not for whoever is left behind. That was the basis of my scifi trilogy “Gaius Claudius Scaevola”. Within the trilogy, Scaevola starts in Roman times, gets abducted by aliens, and returns sometime like the 23rd century, and he has aged a few years only. The principle of relativity is that all clocks in a moving ship must slow down equally; as Feynman remarks in Six not so easy pieces, if this were not so, you could use something like the rate of development of a cancer to work out the absolute velocity of a space ship. To further quote Feynman, “if no way of measuring time gives anything but a slower rate, we shall have to say, in a certain sense, that time itself appears to be slower in the space ship”. The best-known application is the GPS system. Without the equations of General Relativity, this simply would not work.
Nevertheless, I believe there is a way of measuring an absolute time. Suppose a similar traveller headed off to a galaxy five hundred million light years away at essentially light speed and, in accord with relative time, came back a billion years later without having aged. Now suppose he and a physicist from that future decided to measure the age of the Universe, that is, the time from the big bang. The equipment is set up and gives a meter reading. Surely both must obtain the same reading, since they see the same dial, yet according to the traveller the Universe should be only 13.8 billion years old, while the measurement gives it as 14.8 billion years old. There is only one possibility: the Universe is 14.8 billion years old, and all that has happened is that the traveller has simply not observed the passing of a billion years. The point is, when considering distance there is no reference position, but when considering time there IS a reference time: the expansion of the Universe provides a fixed clock that is visible to any observer. Worse, you could in principle work out the age of the Universe from within the ship, and hence work out the ship’s speed, apart from the fact that determining the age of the Universe is not exactly accurate. So why does muon decay slow?
Suppose we start with n_0 muons; then at time t we shall have n_t muons, given by (assuming the number of decays is proportional to the number present)
n_t=n_0 e^(-kt)
Now it is obvious that you get the same result if either k or t is dilated.
What is k? It is the “constant” that is characteristic of the decay, and it can be considered as the barrier to decay, or the tendency of the particle to hold together. Is there any way that could change? Does it have to be constant?
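The point that the decay law cannot distinguish a dilated t from a dilated k can be seen numerically; a small Python sketch (the numbers are illustrative only):

```python
import math

def survivors(n0, k, t):
    """Exponential decay law: n_t = n0 * exp(-k * t)."""
    return n0 * math.exp(-k * t)

n0 = 1_000_000
k = math.log(2) / 1.5e-6   # decay constant corresponding to a 1.5 microsecond half-life
t = 5.0e-6                 # elapsed time, seconds
gamma = 5.0                # illustrative dilation factor

# Shrinking the decay constant by gamma, or shrinking the elapsed time by
# gamma, gives exactly the same survivor count; the equation alone cannot
# tell us which quantity was altered.
dilated_k = survivors(n0, k / gamma, t)
dilated_t = survivors(n0, k, t / gamma)
print(math.isclose(dilated_k, dilated_t))   # True
```

Any experiment that only counts surviving muons therefore cannot, by itself, distinguish "time slowed" from "the decay constant changed".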
This gets a bit more difficult, but Einstein’s relativity can actually be represented in a slightly different way than usual. For those with a grasp of physics, I recommend Feynman’s “Six not so easy pieces”. When Feynman says they are not so easy, he is not joking. Nevertheless one point he makes is that Einstein’s special theory of relativity can be represented solely in terms of a mass enhancement due to velocities near the speed of light. What that means is that as the muon (or the space traveller) approaches the speed of light, it gets more massive. If that energy is concentrated on the muon, then the added mass might dilate k by increasing the barrier to decomposition. It is not necessarily time that is changing, but rather the physical relationships dependent on time. Does it matter? In my view, yes. I would like to think in science we are trying to determine what nature does, and not that which happens to be convenient at the time.
In many cases in science, like the equation above, there can be more than one reason why an equation works. Another point is that the essence of a scientific theory should be able to be conveyed without the use of difficult mathematics, although, of course, to make specific use of the concepts, difficult mathematics are needed. What the scientists should do is to ask questions of a theory, and then test the answers.
As an example of such a question, we might ask, did Michelson and Morley really prove there is no aether? My view is, no they did not, although that does not mean there is aether. The reason is this. If light always has the same velocity relative to the aether, it must interact with it. That means there is an interaction between aether and electromagnetism. Now, molecules have local electromagnetic fields, and such molecules travel fast and randomly, and might very well “trap” aether. Think about a river flowing, with reeds along the bank. The water flows strongly, but if you try to measure the flow in a reed-bed, the water is virtually stationary. In the same way, the random motion of air might trap aether near the earth’s surface. What this suggests is simple: repeat the Michelson-Morley experiment outside the space station. Suppose the answer is still zero: then Einstein’s theory is firm. Suppose the answer is not zero? Actually, the equations of Einstein’s relativity would not change all that much; they would become a little more complicated, but the differences would probably not be discernible in any current experiment. What do I think? That is actually irrelevant. The whole point of science is to ask questions, to try to uncover further aspects of nature. For it is what nature does that is relevant, not what we want it to do. What do you think?

Simple relativity

During the summer break, I got involved in the issue of whether time is relative, but before I can discuss that, I need to be sure readers understand what relativity is. Most would consider relativity to be essentially mathematical. Not really. The principle of relativity is quite simple, and goes back a long way. In Il Dialogo, Galileo pointed out that if you were below decks in a boat, you would have no idea how fast it was going, nor, for that matter, in what direction. You could get up on deck and work out how fast you were going relative to the water, but there is no absolute velocity, for if you can see land, you may have a different velocity if there is a tidal flow or current. Then, of course, the earth is rotating and orbiting the sun, the sun is orbiting the galactic centre, and the galaxy is also moving relative to other galaxies. The point is, unless there is a fundamental reference, there is no absolute velocity, but only a velocity relative to something else, and that depends on your perspective. As Einstein once remarked when on a train, "The Zurich Railway Station is approaching, and will shortly stop outside the train." Bizarre though that may sound, it encompasses relativity.

The simplest way to look at this is to answer the question, "Where are you?" There are two probable answers. One is "Here!" That is not very helpful when half the population answer the same way, in which case "here" is a different place for different people. The second answer is to give an address, or coordinates. That means you are defining your position as being at some distance from something else. Velocities represent the rate of change of position, and are vectors, which means they have magnitude and direction. Coming and going have quite different effects: think of standing in the middle of a road with a car on it. However, when direction is properly taken into account, velocities are additive, at least in Galilean relativity. Suppose we have two fleets of ships heading towards each other. Each is entitled to consider itself motionless in its own frame of reference, with the other fleet approaching at a velocity that is the sum of the two vectors in a third frame of reference.
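The two-fleets picture can be sketched in a few lines of code. This is only an illustration with made-up numbers: in Galilean relativity, the velocity of B as seen from A is simply the vector difference of their velocities in any third frame.

```python
# Galilean relativity: velocities are vectors, and they simply add
# (or subtract) between frames of reference.  Illustrative numbers only.

def relative_velocity(v_a, v_b):
    """Velocity of fleet B as seen from fleet A (Galilean composition)."""
    return tuple(b - a for a, b in zip(v_a, v_b))

# In some external (third) frame of reference, speeds in km/h:
fleet_a = (20.0, 0.0)   # sailing east at 20 km/h
fleet_b = (-15.0, 0.0)  # sailing west at 15 km/h

# From fleet A's own frame, A is motionless and B approaches at the
# sum of the two speeds.
print(relative_velocity(fleet_a, fleet_b))  # (-35.0, 0.0)
```

Each fleet gets the same magnitude of approach speed, just with the sign reversed, which is exactly the symmetry the principle of relativity demands.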

James Clerk Maxwell gave physics a huge problem by writing his equations of electromagnetism in the form of a wave equation, for he found the velocity of his wave was more or less equal to the known speed of light. Accordingly, he stated that light was an electromagnetic wave that travelled at velocity c. The problem was, relative to what? His equation equated the velocity to constants that were properties of space itself. Still, if the waves moved through something, namely aether, they would have a velocity relative to the aether. When Michelson and Morley carried out an experiment to measure this, they found nothing. (Actually, they found a very small velocity, but that was put down to experimental error because it did not reflect the earth's movement properly.) For Einstein, the velocity of light was constant for any observer, and there was no aether, nor any absolute motion. Making sense of this involves mathematics a little more complicated than those of Newtonian physics, and now we have a problem as to what it means. The interpretation most people accept was proposed by George FitzGerald and Hendrik Lorentz, and involved space contraction in the direction of motion. The basis of this can be imagined by considering two space ships flying parallel to each other in a fixed direction. Suppose one sends a signal to the other that is reflected. The principle of relativity says that from the space-ships' point of view, each ship is stationary, but to an external observer the signal does not go directly to the other ship; rather, it travels along the hypotenuse of a right-angled triangle, which now requires Pythagoras' theorem to untangle the maths. Complicated?
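Not as complicated as it sounds. As a sketch (with an arbitrary ship separation), the hypotenuse argument can be worked through numerically: the signal crossing between the ships takes L/c in the ships' frame, but travels the hypotenuse for the external observer, and solving Pythagoras' theorem for the external time gives the familiar Lorentz factor.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor, 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# The light-clock geometry directly: the ships are a gap L apart.
# In their own frame the signal takes t_ship = L/c.  For the external
# observer the signal runs along the hypotenuse, so
#   (c * t_ext)^2 = L^2 + (v * t_ext)^2
# which solves to t_ext = L / (c * sqrt(1 - v^2/c^2)).
L = 1000.0      # metres, illustrative
v = 0.8 * C     # illustrative relative speed
t_ship = L / C
t_ext = L / (C * math.sqrt(1.0 - (v / C) ** 2))

print(t_ext / t_ship)  # matches gamma(v), about 1.667 at 0.8c
```

The ratio of the two times is exactly the Lorentz factor, so the "complication" is nothing worse than a right-angled triangle.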

There are some seemingly absurd results obtained from relativity, but it should be noted that these arise from how different observers, each travelling at near light speed, interpret an event they see. The complication is that each sees the light coming towards them at the same velocity, and this leads to some more complicated maths. Strange though it may seem, the equations always give correct agreement with observation, and there is little doubt the equations are correct. The question then is, are the observations and equations being properly interpreted? Generally speaking, the maths have taken relativity quite some distance, using a concept called space-time, and in that, time is always relative as well. It would generally be thought near impossible to solve anything of significance in General Relativity without the use of space-time, so it must be right, surely? In my Gaius Claudius Scaevola trilogy I make use of the time dilation effect. To fix a problem in the 23rd century, a small party of Romans has to be abducted by aliens in the first century. They travel extremely close to the speed of light, and when they return, they arrive at the right time, having aged only a few years while 2,200 years passed on Earth. Obviously, I believe time is relative too, don't I? More next week.
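Out of curiosity, how close to light speed would such a trip have to be? The trilogy does not state the travellers' exact elapsed time, so the three years below is my own illustrative assumption; the calculation just inverts the time dilation formula.

```python
import math

def speed_fraction_for(earth_years, traveller_years):
    """Fraction of light speed (v/c) needed so that earth_years pass
    while the travellers age only traveller_years."""
    g = earth_years / traveller_years          # required Lorentz factor
    return math.sqrt(1.0 - 1.0 / g ** 2)

# Illustrative: 2,200 years pass on Earth while the travellers age 3.
beta = speed_fraction_for(2200.0, 3.0)
print(f"v/c = {beta:.9f}")  # a whisker under the speed of light
```

The answer is within about one part in a million of c, which is why "extremely close to the speed of light" is the only honest way to phrase it in fiction.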

Science, the nature of theory, and global warming

My summery slumbers have passed, but while having them, I had web discussions, including one on the nature of time. (More on that in a later post.) I also got entangled in a discussion on global warming, and got one comment that really annoyed me: I was accused of being logical. It was suggested that how you feel is more important. Well, how you feel cannot influence nature. Unfortunately, it seems to influence politicians, who end up deciding. So what I thought I would do is post on the nature of theory. I have written an ebook on what theory is and how to form theories, and while the name I gave it was not one that would attract a lot of readers (Aristotelian Methodology in the Physical Sciences), it was no worse than "How to Form a Theory". Before some readers turn off, I started that ebook with this thought: everyone has theories. For most people, they are not that important, e.g. a theory on who trashed the letterbox. Nevertheless, the principles of how to go about forming them should be the same.

In the above ebook, I gave global warming as an example of where science has failed, not because we do not understand it, but rather because the public has not been presented with the issue properly. One comment about global warming is that scientists have not resolved the issue. That depends on what you mean by "resolved". Thus one person said scientists are still working on relativity. Yes, they are, but that does not mean that what we have is wrong. The scientific process is to continually check with nature. So, what I want to do in some of my posts this year is try to give an impression of what science is.

The first thing it is not is mathematics. Mathematics are required, but part of the problem is that all too often scientists do not state clearly what they are saying, preferring to leave a raft of maths for the few who are close to the field. This is definitely not helpful. Nor are TV shows that imply that theories are made only by stunning mathematics. That is simply not true.

The essence of science is a sequence of simple statements, which are the premises. For me, the correct methodology was invented by Aristotle, and the tragedy is, Aristotle made some howling mistakes by overlooking his own methodology. Aristotle’s methodology is to examine nature and from it, draw the premises, then apply logic to the statements to draw some conclusions, check with observation, and if the hypothesis still stands up, try to determine whether there are any other hypotheses that could have given equivalent predictions. Proof of a concept is only possible if one can say, “if and only if X, then Y”, in which case observing Y is the proof. Part of the problem lies in the “only”; part lies in seeing the wood for the trees. One of the first steps in analyzing a problem is to try to reduce it to its essentials by avoiding complicating features. This does not mean that complicating features should be ignored; rather it means we try to find a means of avoiding them until we can sort out the basics. If we do not get the basics right, there is no point in worrying about complicating factors.

To consider global warming, the first thing to do is put aside the kilotonnes of published data. Instead, in order to focus on the critical points, try modeling something simpler. Consider a room in your house in winter, and suppose you have an electric bar heater. Suppose you set it to 1 kW and turn it on. That will deliver 1 kilojoule of heat per second. Now suppose the doors are either open or closed. Obviously, if they are open, the heat can move elsewhere through the house, so the temperature will be slower to rise. Nevertheless, you know it will rise, because you know there is 1 kilojoule per second of heat being liberated.
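With the doors closed, the room model is simple enough to put numbers on. The room size and air properties below are my own round-figure assumptions, not anything from the post; the point is only that a known power input fixes the warming rate.

```python
# A closed room warming under a 1 kW heater, ignoring heat losses.
# Room size and air properties are assumed round figures.
HEATER_W = 1000.0        # 1 kW bar heater: 1 kJ of heat per second
ROOM_VOLUME_M3 = 50.0    # assumed room volume
AIR_DENSITY = 1.2        # kg/m^3 near room temperature
AIR_CP = 1005.0          # J/(kg*K), specific heat of air

air_mass = ROOM_VOLUME_M3 * AIR_DENSITY          # kg of air in the room
warming_rate = HEATER_W / (air_mass * AIR_CP)    # kelvin per second
print(f"{warming_rate:.4f} K/s")                 # roughly 1 K per minute
```

Open the doors and the rate drops, but the sign of the change cannot: power is going in, so the temperature must rise.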

The condition for long term constant temperature (equilibrium) is
(P in) – (P o) = 0
where (P in) is the power in and (P o) is the power out, both at equilibrium. This works for a room, or a planet. Why power? Because we are looking to see whether the temperature will remain constant or change, and to do that we need to see whether the system is gaining or losing heat. To detect change, we usually consider differentials, and power is the differential of energy with respect to time. Because we are looking at differentials, we can say that if and only if the power flow into a system equals the power flow out is it at an energy equilibrium. We can use this to prove equilibrium, or otherwise, but we may have to be careful because certain other energy flows, such as heat from radioactive decay, may be generated internally. So, what can we say about Earth? What Lyman et al.* found was that there is a net power input of 0.64 watts per square meter of ocean surface. That means the system cannot be at equilibrium.
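To get a feel for what 0.64 watts per square meter amounts to, multiply it over the whole ocean. The ocean area below is my own approximate figure (about 3.6 × 10^14 m²), not a value from the Lyman paper.

```python
# Rough scale of the 0.64 W/m^2 imbalance over the global ocean.
# The ocean area is an assumed approximate value.
OCEAN_AREA_M2 = 3.6e14          # approximate global ocean surface area
IMBALANCE_W_PER_M2 = 0.64       # net power input per square metre
SECONDS_PER_YEAR = 365.25 * 24 * 3600

net_power_w = IMBALANCE_W_PER_M2 * OCEAN_AREA_M2       # total watts
energy_per_year_j = net_power_w * SECONDS_PER_YEAR     # joules per year
print(f"{net_power_w:.2e} W, {energy_per_year_j:.2e} J per year")
```

That is of the order of 10^21 joules accumulating every year, which is why a number that looks tiny per square metre is anything but tiny for a planet.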

We now need a statement that could account for this. Because the net warming effect is recent, the cause must be recent. The "greenhouse" hypothesis is that humanity has put additional infrared absorbers into the air, and these absorb a small fraction of the infrared radiation that would otherwise go to space, then re-emit it in random directions. Accordingly, a certain fraction is returned to earth. The physics are very clear that this happens; the question is, is it sufficient to account for the 0.64 W? If so, the power into the ground increases by (P b) and the power out decreases by (P b). Since the system started at equilibrium, the net imbalance is 2 (P b), and the equation is now
(P in + P b) – (P o – P b) = 2 (P b)
The system is now not in equilibrium, and there is a net power input.
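The perturbed balance can be checked with a trivial calculation. The baseline flux below is an arbitrary illustrative number; only the difference matters.

```python
def net_power(p_in, p_out):
    """Net power flow; zero if and only if the system is at energy equilibrium."""
    return p_in - p_out

# Start at equilibrium (arbitrary illustrative fluxes, W/m^2):
p_in, p_out = 240.0, 240.0
assert net_power(p_in, p_out) == 0.0

# Greenhouse perturbation: a flux p_b is returned to the ground, so
# power in rises by p_b and power out falls by p_b, leaving a net
# imbalance of 2 * p_b.
p_b = 0.32
print(net_power(p_in + p_b, p_out - p_b))  # approximately 0.64, i.e. 2 * p_b
```

Note that if p_b were half of 0.64, the doubled imbalance would match the Lyman figure exactly, which is why the factor of two matters when comparing the hypothesis to the measurement.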
The next question is, is there any other possible cause for (P b)? One obvious candidate is that the sun could have changed its output. It has done this before; for example, the "Little Ice Age" was caused by a drop in the sun's output, accompanied by a huge decrease in sunspot activity. However, NASA has been monitoring solar output, and it cannot account for (P b). Over the ocean, there are few possible changes to the radiation balance other than atmospheric composition, so the answer is reasonably clear: the planet is warming, and these gases are the only plausible cause. Note what we have done. We are concerned about a change, so we have selected a variable that measures change. We want to keep the possible "red herrings" to a minimum, so the measurements have been carried out over the ocean, where buildings, land development, deforestation, etc. are irrelevant. By isolating the key variable and minimizing possibly confusing data, we have a clear answer.

So, what do we do about it? Well, that requires a further set of theories, each one linking an effect to a proposed cause, and we have to choose. And that is why I believe we need the general population to have some idea of how to evaluate theories, because soon we will have no choice. Do nothing, and we lose our coastal cities, coastal roads and coastal agricultural land up to maybe forty meters, and face a totally different climate. Putting your head in the sand and feeling differently will not cool the planet.

* Lyman, J. M., et al., 2010. Nature 465: 334-337.