Martian Fluvial Flows, Placid and Catastrophic

Although, apart from localized dust surfaces in summer, the surface of Mars has had average temperatures that never exceeded about minus 50 degrees C over its lifetime, it has nevertheless had some quite unexpected fluid systems. One of the longest river systems started in several places at approximately 60 degrees south in the highlands, nominally one of the coldest spots on Mars, drained into Argyre, thence to the Holden and Ladon Valles, then stopped and apparently dropped massive amounts of ice in the Margaritifer Valles, which are at considerably lower altitude and just north of the equator. Why does a river start at one of the coldest places on Mars, and freeze out at one of the warmest? There is evidence of ice having been in the fluid, which means the fluid must have been water. (Water is extremely unusual in that the solid, ice, floats in the liquid.) These fluid systems flowed, although not necessarily continuously, for a period of about 300 million years, then stopped entirely, although there are other regions where fluid flows probably occurred later. To the northeast of Hellas (the deepest impact basin on Mars), the Dao and Harmakhis Valles change from prominent, sharp channels to diminished, muted flows at about minus 5.8 km altitude that resemble terrestrial marine channels beyond river mouths.

So, how did the water melt? For the Dao and Harmakhis, the Hadriaca Patera (volcano) was active at the time, so some volcanic heat was probably available, but that would not apply to the systems starting in the southern highlands.

After a prolonged period in which nothing much happened, there were catastrophic flows that continued for up to 2000 km, forming channels up to 200 km wide, which would require flows of approximately 100,000,000 cubic meters/sec. For most of those flows, there is no obvious source of heat. Only ice could provide the volume, but how could so much ice melt with no significant heat source, be held without re-freezing, then be released suddenly and explosively? There is no sign of significant volcanic activity, although minor activity would not be seen. Where would the water come from? Many of the catastrophic flows start from the Margaritifer Chaos, so the source of the water could reasonably be the earlier river flows.
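To give a feel for where a figure like 100,000,000 cubic meters per second comes from, here is a minimal back-of-the-envelope sketch. The channel width comes from the text; the flow depth and velocity are purely assumed illustrative values of mine, not measurements.

```python
# Rough discharge estimate for a catastrophic Martian outflow channel.
# Width is taken from the text; depth and velocity are assumed for illustration only.
width_m = 200_000.0   # channel width: 200 km (from the text)
depth_m = 100.0       # assumed mean flow depth (illustrative guess)
velocity_m_s = 5.0    # assumed mean flow velocity (illustrative guess)

discharge = width_m * depth_m * velocity_m_s  # Q = w * d * v, in cubic meters per second
print(f"Estimated discharge: {discharge:.1e} m^3/s")  # ~1e8 m^3/s, the order of magnitude quoted above
```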

There was plenty of volcanic activity about four billion years ago. Water and gases would be thrown into the atmosphere, and the water would ice/snow out predominantly in the coldest regions. That gets water to the southern highlands, and to the highlands east of Hellas. There may also be geologic deposits of water. The key now is the atmosphere. What was it? Most people say it was carbon dioxide and water, because that is what modern volcanoes on Earth give off, but the mechanism I suggested in my “Planetary Formation and Biogenesis” was that the gases originally would be reduced, that is, mainly methane and ammonia. The methane would provide some sort of greenhouse effect, but ammonia, on contact with ice at minus 80 degrees C or above, dissolves in the ice and makes an ammonia/water solution. This, I propose, was the fluid. As the fluid goes north, winds and warmer temperatures would drive off some of the ammonia, so oddly enough, as the fluid gets warmer, ice starts to freeze. Ammonia remaining in the air will melt more snow. (This is not all that happens, but it should happen.) Eventually, the ammonia has gone, and the water sinks into the ground, where it freezes out into a massive buried ice sheet.

If so, we can now see where the catastrophic flows come from. We have the ice deposits where required. We now require at least fumaroles to be generated underneath the ice. The Margaritifer Chaos is within plausible distance of major volcanism, and of tectonic activity (near the mouth of the Valles Marineris system). Now, let us suppose the gases emerge. Methane immediately forms clathrates with the ice (enters the ice structure and sits there), because of the pressure. The ammonia dissolves ice and forms a small puddle below. This keeps going over time, but as it does, the amount of water increases and the amount of ice decreases. Eventually, there comes a point where there is insufficient ice to hold the methane, and pressure builds up until the whole system ruptures and the mass of fluid pours out. With the pressure gone, the remaining ice clathrates start breaking up explosively. Erosion is caused not only by the fluid, but by exploding ice.

The point then is, is there any evidence for this? The answer is, so far, no. However, if this mechanism is correct, there is more to the story. The methane will be oxidised in the atmosphere to carbon dioxide by solar radiation and water. Ammonia and carbon dioxide will combine and form ammonium carbonate, then urea. So if this is true, we expect to find, buried where there had been water, deposits of urea, or whatever it converted to over three billion years. (Very slow chemical reactions are essentially unknown – chemists do not have the patience to do experiments over millions of years, let alone billions!) There is one further possibility. Certain metal ions complex with ammonia to form ammines, which dissolve in water or ammonia fluid. These would sink underground, and if the metal ions were there, so might be the remains of the ammines now. So we have to go to Mars and dig.


How can we exist?

One of the more annoying questions in physics is, why are we here? Bear with me for a minute, as this is a real question. The Universe is supposed to have started with what Fred Hoyle called “The Big Bang”. Fred was being derisory, but the name stuck. Anyway, what happened is that a very highly intense burst of energy began expanding, and as it did, perforce the energy became less dense. As that happened, out condensed elementary particles. On an extremely small scale, that happens in high-energy collisions, such as in the Large Hadron Collider. So we are reasonably convinced we know what happened up to this point, but there is a very big fly in the ointment. When such particles condense out we get an equal amount of matter and what we call antimatter. (In principle, we should get dark matter too, but since we do not know what that is, I shall leave that.)

Antimatter is, as you might guess, the opposite of matter. The most obvious example is the positron, which is exactly the same as the electron except it has positive electric charge, so when a positron is around an electron they attract. In principle, if they were to hit each other they would release an infinite amount of energy, but nature hates the infinities that come out of our equations, so when they get close enough they annihilate each other and you get two gamma ray photons that leave in opposite directions to conserve momentum. That is more or less what happens whenever antimatter meets matter – they annihilate each other, which is why, when we make antimatter in colliders, if we want to collect it we have to do it very carefully, with magnetic traps and in a vacuum.
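As a concrete illustration of the energy involved, here is a small calculation of my own (not from the post) of the photon energy from electron-positron annihilation at rest, using nothing more than E = mc².

```python
# Energy of each gamma photon from electron-positron annihilation at rest.
m_e = 9.109e-31       # electron (and positron) mass in kg
c = 2.998e8           # speed of light in m/s
e_charge = 1.602e-19  # joules per electron volt

energy_joules = m_e * c**2                  # rest-mass energy of one particle
energy_keV = energy_joules / e_charge / 1e3
print(f"Each photon carries about {energy_keV:.0f} keV")  # ~511 keV per gamma ray
```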

So now we get to the problem of why we are here: with all that antimatter made in equal proportions to matter, why do we have so much matter? As it happens, the symmetry is violated very slightly in kaon decay, but this is probably not particularly helpful because the effect is too slight. In the previous post on muon decay I mentioned that that could be a clue that there might be physics beyond the Standard Model to be unraveled. Right now, the fact that there is so much matter in the Universe should be a far stronger clue that something is wrong with the Standard Model. 

Or is it? One observation that throws that into doubt was published in Physical Review D, 103, 083016, in April this year. But before coming to that, some background. A little over ten years ago, colliding heavy ions made a small amount of antihelium-3, and a little later, antihelium-4. The antihelium has two antiprotons, and one or two antineutrons. To make this, the problem is to get enough antiprotons and antineutrons close enough. To give some idea of the trouble, a billion collisions of gold ions with energies of two hundred billion and sixty-two billion electron volts produced 18 nuclei of antihelium-4, with masses of 3.73 billion electron volts. In such a collision, the energy requires a temperature of over 250,000 times that of the sun’s core.

Such antihelium can be detected through the gamma ray frequencies emitted when the nuclei annihilate on striking matter, and apparently also through the Alpha Magnetic Spectrometer on the International Space Station, which tracks cosmic rays. The important point is that antihelium-4 behaves exactly the same as an alpha particle, except that, because the antiprotons have negative charge, their trajectories bend in the opposite direction to those of ordinary nuclei. These antinuclei can be made through the energies of cosmic rays hitting something; however, it has been calculated that the amount of antihelium-3 detected so far is 50 times too great to be explained by cosmic rays, and the amount of antihelium-4 detected is 100,000 times too much.

How can this be? The simple answer is that the antihelium is being made by antistars. If you accept that, gamma ray detection indicates 5787 sources, and it has been proposed that at least fourteen of these are antistars; if we look at the oldest stars near the centre of the galaxy, estimates suggest up to a fifth of the stars there could be antistars, possibly with antiplanets. If there were people on these, giving them a hug would be outright disastrous for each of you. Of course, caution here is required. It is always possible that this antihelium was made in a more mundane way that as yet we do not understand. On the other hand, if there are antistars, it automatically solves a huge problem, even if it creates a bigger one: how did the matter and antimatter separate? As is often the case in science, solving one problem creates even bigger problems. However, real antistars would alter our view of the universe, and as long as the antimatter is at a good distance, we can accept them.

Much Ado About Muons

You may or may not have heard that the Standard Model, which explains “all of particle physics”, is in trouble and “new physics” may be around the corner. All of this arises from a troublesome result from the muon, a particle that is very similar to an electron except it is about 207 times more massive and has a mean lifetime of 2.2 microseconds. If you think that is not very important for your current lifestyle (and it isn’t), wait, there’s more. Like the electron it has a charge of -1, and a spin of ½, which means it acts like a small magnet. Now, if the particle is placed in a strong magnetic field, the direction of the spin wobbles (technically, precesses), and the strength of this interaction is described by a number called the g factor, which for a classical situation would be g = 2. Needless to say, in the quantum world that is wrong. For the electron, it is roughly 2.002 319 304 362, the numbers here stopping where uncertainty starts. If nothing else, this shows the remarkable precision achieved by experimental physicists. Why is it not 2? The basic reason is that the particle interacts with the vacuum, which is not quite “nothing”. You will see that quantum electrodynamics has got this number down fairly precisely, and quantum electrodynamics, which is part of the standard model, is considered to give the most accurate theoretical calculation ever, or the greatest agreement between calculation and observation. All was well, until this wretched muon misbehaved.

Now, the standard model predicts the vacuum comprises a “quantum foam” of virtual particles popping in and out of existence, and these short-lived particles affect the g-factor, causing the muon’s wobble to speed up or slow down very slightly, which in turn leads to what is called an “anomalous magnetic moment”. The standard model should calculate these to the same agreement as with the electron, and the calculations give:

  • g-factor: 2.00233183620
  • anomalous magnetic moment: 0.00116591810

The experimental values announced by Fermilab and Brookhaven are:

  • g-factor: 2.00233184122(82)
  • anomalous magnetic moment: 0.00116592061(41)

The brackets indicate the uncertainty in the last digits. Notice a difference? Would you say it is striking? Apparently there is only a one in 40,000 chance that this is a statistical error. Nevertheless, they will apparently keep this experiment running at Fermilab for another two years to firm it up. That is persistence, if nothing else.
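The g-factor and the anomalous magnetic moment are related by a = (g − 2)/2, so the two lists above are really the same numbers written two ways. The following sketch of mine uses only the values quoted above; note it compares the gap with the quoted experimental uncertainty alone and ignores the uncertainty in the theoretical calculation, so it is only a rough indication of size.

```python
# Compare the quoted theoretical and experimental values for the muon.
g_theory = 2.00233183620
g_expt   = 2.00233184122
expt_uncertainty = 0.00000000082   # the (82) on the last digits of the experimental g-factor

a_theory = (g_theory - 2) / 2      # anomalous magnetic moment, a = (g - 2)/2
a_expt   = (g_expt - 2) / 2
print(f"a(theory) = {a_theory:.11f}")   # 0.00116591810
print(f"a(expt)   = {a_expt:.11f}")     # 0.00116592061

gap = g_expt - g_theory
print(f"gap = {gap:.2e}, about {gap/expt_uncertainty:.1f} times the experimental uncertainty")
```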

This result is what has excited a lot of physicists because it means the calculation of how this particle interacts with the vacuum has underestimated the actual effect for the muon. That suggests more physics beyond the standard model, and in particular, a new particle may be the cause of the additional effect. Of course, there has to be a fly in the ointment. One rather fearsome calculation claims to be a lot closer to the observational value. To me the real problem is how can the same theory come up with two different answers when there is no arithmetical mistake?

Anyway, if the second one is right, problem gone? Again, not necessarily. At the Large Hadron Collider they have looked at B meson decay. This can produce electrons and positrons, or muons and antimuons. According to the standard model, these two particles are identical other than for mass, which means the rate of production of each should be identical, but it isn’t quite. Again, it appears we are looking at small deviations. The problem then is, hypothetical particles that might explain one experiment fail for the other. Worse, the calculations are fearsome, and can take years. The standard model has 19 parameters that have to be obtained from experiment, so the errors can mount up, and if you wish to give the three neutrinos mass, in come another eight parameters. If we introduce yet another particle, in comes at least one more parameter, and probably more. Which raises the question: since adding a new assignable parameter will always answer one problem, how do we know we are even on the right track?

All of which raises the question, is the standard model, which is a part of quantum field theory, itself too complicated, and maybe not going along the right path? You might say, how could I possibly question quantum field theory, which gives such agreeable results for the electron magnetic moment, admittedly after including a series of interactions? The answer is that it also gives the world’s worst agreement with the cosmological constant. When you sum the effects of all these virtual particles over the cosmos, the expansion of the Universe comes out wrong by a factor of about 10^120, that is, a 1 followed by 120 zeros. Not exceptionally good agreement. To get the agreement it gets, something must be right, but as I see it, to get such a howling error, something must be wrong also. The problem is, what?

How Many Tyrannosaurs Were There?

Suppose you were transported back to the late Cretaceous; what is the probability that you would see a Tyrannosaurus? That depends on a large number of factors, and to simplify, I shall limit myself to T. Rex. There were various Tyrannosaurs, but probably in different times and different places. As far as we know, T. Rex was limited to what was effectively an island land mass known as Laramidia that has now survived as part of Western North America. In a recent edition of Science, a calculation was made, and it starts with the premise, known as “Damuth’s Law”, that population density is negatively correlated with body mass through a power law that involves two assignable constants, plus the body mass. What does that mean? It is an empirical relationship that says the bigger the animal, the fewer will be found in a given area. The reason is obvious: the bigger the animal, the more it will eat, and a given area has only so much food. Apparently one of the empirical constants has been assigned a value of 0.75, more or less, so now we are down to one assignable constant.
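In symbols, Damuth’s Law says population density D scales with body mass M roughly as D = c·M^(-0.75), where c is the one remaining assignable constant. A minimal sketch of the shape of the relationship follows; the constant used here is an arbitrary placeholder of mine, not the value fitted in the Science paper.

```python
# Damuth's Law: population density falls as a power of body mass.
def damuth_density(body_mass_kg: float, c: float) -> float:
    """Individuals per square km, D = c * M**(-0.75); c is the assignable constant."""
    return c * body_mass_kg ** -0.75

# Illustrative only: c = 100 is an arbitrary placeholder, not the paper's fitted value.
for mass in (10, 100, 1000, 8000):   # 8000 kg is roughly an adult T. Rex
    print(f"{mass:>5} kg -> {damuth_density(mass, c=100):.3f} per km^2")
```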

If we concentrate on the food requirement, then it depends on what the animal eats, and what it does with it. To explain the last point, carnivores kill prey, so there has to be enough prey there to supply the food AND to be able to reproduce. There has to be a stable population of prey, otherwise the food runs out and everyone dies. The bigger the animal, the more food it needs to generate body mass and to provide the energy to move; however, mammals have a further requirement over animals like snakes: they burn food to provide body heat, so mammals need more food per unit mass. It also depends on how specialized the food is. Thus pandas, specializing in eating bamboo, depend on bamboo growth rates (which happen to be fast) and on something else not destroying the bamboo. Tyrannosaurs presumably would concentrate on eating large animals. Anything that was a few centimeters high would probably be safe, apart from being accidentally stood on, because the Tyrannosaur could not get its head down low enough and keep it there long enough to catch it. The smaller raptors were also probably safe because they could run faster. So now the problem is, how many large animals, and was there a restriction? My guess is it would take on any large herbivore. In terms of the probability of meeting one, it also depends on how they hunted. If they hunted in packs, which is sometimes postulated, you are less likely to meet them, but you are in more trouble if you do.

That now gets back to how many large herbivores would be in a given area, and that in turn depends on the amount of vegetation, and its food value. We have to make guesses about that. We also have to decide whether the Tyrannosaur generated its own heat. We cannot tell exactly, but the evidence does seem to support the idea that it was concerned about heat, as it probably had feathers. The article assumed that the dinosaur was about half-way between mammals and large lizards as far as heat generation goes. Provided the temperatures were warm, something as large as a Tyrannosaur would probably be able to retain much of its own heat, as surface area is a smaller fraction of volume than for small animals.

The next problem is assigning body mass, which is reasonably straightforward for a given skeleton, but each animal starts out as an egg. How many juvenile ones were there? This is important because juvenile ones will have different food requirements; they eat smaller herbivores. The authors took a distribution that is somewhat similar to that for tigers. If so, an area the size of California could support 3,800 T. Rex. We now need the area over which they roamed, and with a considerable possible error range and limiting ourselves to land that is above sea level now, they settled on 2.3 ± 0.88 million square kilometers, which at any one time would support about 20,000 individuals. If we take a mid-estimate of how long they roamed, which is 2.4 million years, and divide by the average generation time to count the successive populations, we get, with a very large error range, that the total number of T. Rex that ever lived was about 2.5 billion individuals. Currently, there are 32 individual fossils (essentially all are partial), which shows how difficult fossilization really is. Part of this, of course, arises because fossilization is dependent on appropriate geology and conditions. So there we are: more useless information, almost certainly erroneous, but fun to speculate on.
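Here is that arithmetic laid out explicitly. The California area and the 19-year generation time are my assumed inputs (the generation time is roughly what I understand the authors used); the other figures are the ones quoted above, and the rounding is deliberately loose.

```python
# Reproduce the rough T. Rex head-count from the figures quoted above.
t_rex_in_california = 3_800     # standing population for a California-sized area (from the text)
california_area_km2 = 424_000   # assumed area of California, km^2
range_area_km2 = 2.3e6          # mid-estimate of the T. Rex range (from the text)
duration_years = 2.4e6          # mid-estimate of how long the species persisted (from the text)
generation_years = 19           # assumed generation time; roughly what the study used

density = t_rex_in_california / california_area_km2   # individuals per km^2
standing_population = density * range_area_km2         # ~20,000 at any one time
generations = duration_years / generation_years
total_ever = standing_population * generations
print(f"Standing population: {standing_population:,.0f}")
print(f"Total individuals ever: {total_ever:.1e}")      # ~2.6e9 here; about 2.5 billion with the paper's rounding
```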

Microplastics

You may have heard that the ocean is full of plastics, and while full is an excessive word, there are huge amounts of plastics there, thanks to humans’ inability to look after some things when they have finished using them. Homo litterus is what we are. You may even have heard that these plastics degrade in light, and form microscopic particles that are having an adverse effect on the fish population. If that is it, as they say, “You ain’t heard nothin’ yet.”

According to an article in the Proceedings of the National Academy of Sciences, there are roughly 1100 tons of microplastics in the air over the western US, and presumably there are corresponding amounts elsewhere. When you go for a walk in the wilderness to take in the fresh air, well, you also breathe in microplastics. Some 84% of that in the western US comes from roads outside the major cities, and 11% appears to be blowing in from the oceans. The particles stay airborne for about a week, and eventually settle somewhere. As to source, plastic bags and bottles photodegrade and break down into ever-smaller fragments. When you put clothes made from synthetic fibers into your washing machine, tiny microfibers get sloughed off and end up wherever the wastewater ends up. The microplastics end up in the sludge, and if that is sold off as fertilizer, they end up in the soil. Otherwise, they end up in the sea. The fragments of plastics get smaller, but they stay more or less as polymers, although nylons and polyesters will presumably hydrolyse eventually. However, at present there are so many plastics in the oceans that there may even be as much microplastic blowing out as plastic going in.

When waves crash and winds scour the seas, they launch seawater droplets into the air. If the water can evaporate before the drops fall, i.e. in the small drops, you are left with an aerosol that contains salts from the sea, organic matter, microalgae, and now microplastics.

Agricultural dust provided 5% of the microplastics, and these are effectively recycled, while cities only provided 0.4%. The rest mainly comes from roads outside cities. When a car rolls down a road, tiny flecks come off the tyres, and tyre particles are included in the microplastics because at that size the difference between a plastic and an elastomer is trivial. Road traffic in cities does produce a huge amount of such microplastics, but these did not affect this study because in the city, buildings shield the wind and particles do not get lifted to the higher atmosphere. They will simply pollute the citizens’ air locally, so city dwellers merely get theirs “fresher”. Also, the argument goes, cars moving at 100 km/h impart a lot of energy, but in cities cars drive much more slowly. I am not sure how they counted freeways/motorways/etc. that go through cities. They are hardly rural, although around here at rush hour they can sometimes look like they think they ought to be parking lots.

Another reason for assigning tyre particles as microplastics is that apparently all sources are so thoroughly mixed up it is impossible to differentiate them. The situation may be worse in Europe because there they get rid of waste plastics by incorporating them in road-surface material, and hence as the surface wears, recycled waste particles get into the air.

Which raises the question, what to do? Option 1 is to do nothing and hope we can live with these microplastics. You can form your own ideas on this. The second is to ban them from certain uses. In New Zealand we have banned supermarket plastic bags, and when I go shopping I have reusable bags that are made out of, er, plastics, but of course they don’t get thrown away or dumped in the rubbish. The third option is to destroy the used plastics.

I happen to favour the third option, because it is the only way to get rid of the polymers. The first step in such a system would be to size-reduce the objects and separate those that float on water from those that do not. Those that do can be pyrolysed to form hydrocarbon fuels that with a little hydrotreating can make good diesel or petrol, while those that sink can be broken down with hydrothermal pyrolysis to get much the same result. Hydrothermal treatment of wastewater sludge also makes fuel, and the residues, essentially carbonaceous solids, can be buried to return carbon to the ground. Such polymers will no longer exist as polymers. However, whatever we do, all that will happen is we limit the load. The question then is, how harmless are they? Given we have yet to notice effects, they cannot be too hazardous, but what is acceptable?

A Discovery on Mars

Our space programs now seem to be focusing on increasingly low concentrations or ever more obscure events, as if this will tell us something special. Recall earlier there was the supposed finding of phosphine in the Venusian atmosphere. Nothing like stirring up controversy, because this was taken as a sign of life. As an aside, I wonder how many people actually have ever noticed phosphine anywhere? I have made it in the lab, but that hardly counts. It is not a very common material, and the signal in the Venusian atmosphere was almost certainly due to sulphur dioxide. That in itself is interesting when you ask how it would get there. The answer is surprisingly simple: sulphuric acid is known to be there, and it is denser, and might form a fog or even rain, but as it falls it hits the hotter regions near the surface and pyrolyses to form sulphur dioxide, oxygen and water. These rise, the oxygen reacts with sulphur dioxide to make sulphur trioxide (probably helped by solar radiation), which in turn reacts with water to form sulphuric acid, which in turn is why the acid stays in the atmosphere. Things that have a stable level on a planet often have a cycle.

In February this year, as reported in Physics World, a Russian space probe detected hydrogen chloride in the atmosphere of Mars after a dust storm occurred. This was done with a spectrometer that looked at sunlight as it passed through the atmosphere; materials such as hydrogen chloride would show up as a darkened line at the frequency of the bond vibration in the infrared part of the spectrum. The single line, while broadened by rotational fine structure, would be fairly conclusive. I found the article interesting for all sorts of reasons, one of which was its statement of the obvious. Thus it stated that dust density was amplified in the atmosphere during a global dust storm. Who would have guessed that?

Then, with no further explanation, it was suggested the hydrogen chloride could be generated by water vapour interacting with the dust grains. Really? As a chemist, my guess would be that the dust had wet salt on it. UV radiation and atmospheric water vapour would oxidise that, making at first sodium hypochlorite (like domestic bleach) plus hydrogen. From the general acidity we would then get hydrogen chloride, and probably sodium carbonate dust. They were then puzzled as to how the hydrogen chloride disappeared. The obvious answer is that hydrogen chloride would strongly attract water, which would form hydrochloric acid, and that would react with any oxide or carbonate in the dust to make chloride salts. If that sounds circular, yes it is, but there is a net degradation of water; oxygen or oxides would be formed, and hydrogen would be lost to space. The loss would not be very great, of course, because we are talking about parts per billion in a highly rarefied upper atmosphere, and only during a dust storm.

Hydrogen chloride would also be emitted during volcanic eruptions, but that can probably be ruled out here because Mars no longer has volcanic eruptions. Fumarole emissions would be too wet to get to the upper atmosphere, and if they occurred, and there is no evidence they still do, any hydrochloric acid would be expected to react rather quickly with oxides, such as the iron oxide that makes Mars look red. So the unfortunate effect is that the space program is running up against the law of diminishing returns. We are getting more and more information that involves ever-decreasing levels of importance. Rutherford once claimed that physics was the only science – the rest was stamp collecting. Well, he can turn in his grave, because to me this is rather expensive stamp collecting.

Our Financial Future

Interest rates should be the rental cost of money. The greater the opportunities to make profits, the more people will be willing to pay for the available money to invest in further profitable ventures, and so interest rates go up. That is reinforced by the fact that if more people are trying to borrow the same limited supply of money, its rental price must increase to shake out the less determined borrowers. However, it does not quite work like that. If an economic boom comes along, who wants to kill the good times when you can print more money? Eventually, though, interest rates begin to rise, and then spike to restrict credit and suppress speculation. Recessions tend to follow this spike, and interest rates fall. Ideally, the interest rate reflects what the investor expects future value to be relative to present value. All of this assumes no external economic forces.

An obvious current problem is that we have too many objectives as central banks start to enter the domain of policy. Quantitative easing involved greatly increasing the supply of money so that there was plenty for profitable investment. Unfortunately, what has mainly happened, at least where I live, is that most of it has gone into pre-existing assets, especially housing. Had it gone into building new ones, that would be fine, but it hasn’t; it has simply led to an exasperating increase in prices.

In the last half of the twentieth century, interest rates correlated strongly and positively with inflation. Investors build their expectation of inflation into the returns they demand from bonds, for example. Interest rates and equity values tend to increase during a boom and fall during a recession. Now we find the value of equities and the interest rates on US Treasuries are both increasing, but arguably there is no boom going on. One explanation is that inflation is increasing. However, the Head of the US Federal Reserve has apparently stated that the US economy is a long way from its employment and inflation goals, and there will be no increase in interest rates in the immediate future. Perhaps this assumes inflation will not take off until unemployment falls, but the evidence of stagflation, particularly in Japan, says you can have bad unemployment and high inflation, and consequently a poorly performing economy. One of the problems with inflation is that expectations of it tend to be self-fulfilling.

As a consequence of low inflation, and of central banks printing money, governments tend to be spending vigorously. They could invest in new technology or infrastructure to stimulate the economy; well-chosen investment will generate a lot of employment, with consequent benefits in economic growth, and that growth and profitability will eventually pay for the cost of the money. However, that does not seem to be happening. There are two other destinations: banks, which lend at low interest, and “helicopter money” to relieve those under strain because of the virus. The former, here at least, has ended up mainly in fixed and existing assets, which inflates their price. The latter has saved many small companies, at least for a while, but there is a price.

The US has spent $5.3 trillion. The National Review looked at what would be needed to pay this back. If you assume the current pattern of taxation by income holds, Americans with incomes:

  • between $30 – 40 k would pay ~$5,000;
  • between $40 – 50 k would pay ~$9,000;
  • between $50 – 75 k would pay ~$16,000;
  • between $75 – 100 k would pay ~$27,000;
  • between $100 – 200 k would pay ~$51,000.

For those on higher incomes the numbers get out of hand. If you roll it over and pay interest, the average American family will get $350 less in government services, which is multiplied by however much interest rates rise. If we assume that the cost of a dollar raised in tax is $1.50, to allow for the depressing effects on the economy, the average American owes $40,000 thanks to the stimulus. Other countries will have their own numbers.

I know I seem to be on this issue perhaps too frequently, but those numbers scare me. The question I ask is, do those responsible for printing all this money have any idea what the downstream consequences will be? If they do, they seem to be very reluctant to tell us.

Why We Cannot Get Evidence of Alien Life Yet

We have a curiosity about whether there is life on exoplanets, but how could we tell? Obviously, we have to know that the planet is there, then we have to know something about it. We have discovered the presence of a number of planets through the Doppler effect, in which the star wobbles a bit due to the gravitational force from the planet. The problem, of course, is that all we see is the star, and that tells us nothing other than the mass of the planet and its distance from the star. A shade more is found from observing an eclipse, because the depth of the dimming gives the size of the planet relative to the star, and in principle we get clues as to what is in an atmosphere, although in practice that information is extremely limited.

If you wish to find evidence of life, you have to first be able to see the planet, which must be in the habitable zone and presumably have Earth-like characteristics. Thus the chances of finding evidence of life on a gas giant are negligible, because if there were such life it would be totally unlike anything we know. So what are the difficulties? If we have a star with the same mass as our sun, the planet should be approximately 1 AU from the star. Now, take the Alpha Centauri system, the nearest stars, at about 1.3 parsecs, or about 4.24 light years. To see something 1 AU away from the star requires resolving an angular separation of about one arc-second, which is achievable with an 8 meter telescope. (For a star x times further away, the required angular resolution becomes 1/x arc-seconds, which requires a correspondingly larger telescope. Accordingly, we need close stars.) However, no planets are known around Alpha Centauri A or B, although there are two around Proxima Centauri. Radial velocity studies show there is no habitable-zone planet around A greater than about 53 earth-masses, or about 8.4 earth-masses around B. However, that does not mean there is no habitable planet, because planets at these limits are almost certainly too big to hold life. Their absence, with that method of detection, actually improves the possibility of a habitable planet.
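To put numbers on that angular separation: in arc-seconds it is just the orbital distance in AU divided by the star’s distance in parsecs, which is what defines the parsec. A small sketch of mine follows; the star distances are approximate and purely illustrative.

```python
# Angular separation of a planet 1 AU from its star, as seen from Earth.
# theta (arc-seconds) = separation (AU) / distance (parsecs), by the definition of the parsec.
def separation_arcsec(sep_au: float, dist_pc: float) -> float:
    return sep_au / dist_pc

# Approximate distances, for illustration only.
for name, dist_pc in [("Alpha Centauri", 1.3), ("Epsilon Eridani", 3.2), ("Tau Ceti", 3.65)]:
    print(f"{name:>15}: {separation_arcsec(1.0, dist_pc):.2f} arc-seconds")
```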

The first requirement for observing whether there is life would seem to be that we actually directly observe the planet. Some planets have been directly observed, but they are usually super-Jupiters on wide orbits (greater than 10 AU) that, being very young, have temperatures greater than 1000 degrees C. The problem with an Earth-like planet is that it is too dim in the visible. The peak emission intensity occurs in the mid-infrared for temperate planets, but there are further difficulties. One is that the background is higher in the infrared, and another is that as you look at longer wavelengths you get a 2 – 5 times coarser spatial resolution, because the diffraction limit scales with wavelength. Apparently the best telescopes now have the resolution to detect planets around roughly the ten nearest stars. Having the sensitivity is another question.
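The wavelength dependence is the diffraction limit, θ ≈ 1.22 λ/D: for a fixed mirror, moving from the visible to the mid-infrared costs resolution in direct proportion to the wavelength. A rough sketch, where the wavelengths and the 8 metre mirror are just example values of mine:

```python
import math

# Diffraction-limited resolution: theta ~ 1.22 * wavelength / mirror_diameter.
def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    radians = 1.22 * wavelength_m / diameter_m
    return math.degrees(radians) * 3600.0   # convert radians to arc-seconds

D = 8.0  # example mirror diameter in metres
for label, wavelength in [("visible (0.55 um)", 0.55e-6),
                          ("near-IR (2 um)", 2.0e-6),
                          ("mid-IR (10 um)", 10.0e-6)]:
    print(f"{label:>18}: {diffraction_limit_arcsec(wavelength, D):.3f} arc-seconds")
```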

Anyway, this has been attempted, and a candidate for an exoplanet around A has been claimed (Nature Communications, 2021, 12:922) at about 1.1 AU from the star. It is claimed to be within 7 times Earth’s size, but this is based on relative light intensity. Coupled with that is the possibility that this may not even be a planet at all. Essentially, more work is required.

Notwithstanding the uncertainty, it appears we are coming closer to being able to directly image rocky planets around the very closest stars. Other possible stars include Epsilon Eridani, Epsilon Indi, and Tau Ceti. But even if we see them, because it is at the limit of technology, we will still have no evidence one way or the other relating to life. However, it is a start to look where at least the right sized planet is known to exist. My personal preference is Epsilon Eridani. The reason is, it is a rather young star, and if there are planets there, they will be roughly as old as Earth and Mars were when life started on Earth and the great river flows occurred on Mars. Infrared signals from such atmospheres would tell us what comprised them. My prediction is a reduced atmosphere, with a good amount of methane, and ammonia dissolved in water. The reason is that these are the gases that could be formed through the original accretion, with no requirement for a bombardment by chondrites or comets, which seemingly, based on other evidence, did not happen here. Older planets will have more oxidized atmospheres that do not give clues, apart possibly from signals from ozone. Ozone implies oxygen, and that suggests plants.

What should we aim to detect? The overall signal should indicate the temperature if we can resolve it. Water gives a good signal in the infrared, and seeing signals of water vapour in the atmosphere would show that that key material is present. For a young planet, methane and ammonia give good signals, although resolution may be difficult and ammonia will mainly be in the water. The problems are obvious: getting sufficient signal intensity, subtracting out background noise from around the planet while realizing the planet will block background, actually resolving lines, and finally, correcting for other factors such as the Doppler effect so the lines can be properly interpreted. Remember phosphine on Venus? Errors are easy to make.

How Fast is the Universe Expanding?

In the last post I commented on the fact that the Universe is expanding. That raises the question, how fast is it expanding? At first sight, who cares? If all the other galaxies will be out of sight in so many tens of billions of years, we won’t be around to worry about it. However, it is instructive in another way. Scientists make measurements with very special instruments and what you get are a series of meter readings, or a printout of numbers, and those numbers have implied dimensions. Thus the number you see on your speedometer in your car represents miles per hour or kilometers per hour, depending on where you live. That is understandable, but that is not what is measured. What is usually measured is actually something like the frequency of wheel revolutions. So the revolutions are counted, the change of time is recorded, and the speedometer has some built-in mathematics that gives you what you want to know. Within that calculation is some built-in theory, in this case geometry and an assumption about tyre pressure.
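As a trivial example of the “built-in theory”, here is roughly the conversion a speedometer performs; the wheel radius is the assumption buried in it (and it silently changes if the tyre pressure does). The numbers are made up for illustration.

```python
import math

# What a speedometer effectively computes: revolutions per second -> speed.
wheel_radius_m = 0.31     # assumed effective wheel radius; this is the buried "theory"
revs_per_second = 14.3    # what is actually measured (counted revolutions over time)

circumference = 2 * math.pi * wheel_radius_m
speed_m_s = revs_per_second * circumference
print(f"Speed: {speed_m_s * 3.6:.0f} km/h")   # ~100 km/h with these numbers
```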

Measuring the rate of expansion of the universe is a bit trickier. What you are trying to measure is the rate of change of distance between galaxies at various distances from you, averaged because they have random motions superimposed, and in some cases regular motions if they are in clusters. The velocity at which they are moving apart is simply the change of distance divided by the change of time. Measuring time is fine, but measuring distance is a little more difficult. You cannot use a ruler. So some theory has to be imposed.

There are some “simple” techniques, using the red shift as a Doppler shift to obtain velocity, and brightness to measure distance. Thus, using different techniques to estimate cosmic distances, such as the average brightness of stars in giant elliptical galaxies, type 1a supernovae and one or two other methods, it can be asserted the Universe is expanding at 73.5 ± 1.4 kilometers per second for every megaparsec. A megaparsec is about 3.26 million light years, or roughly thirty billion billion kilometers.

However, there are alternative means of determining this expansion, such as measured fluctuations in the cosmic microwave background and fluctuations in the matter density of the early Universe. If you know what the matter density was then, and know what it is now, it is simple to calculate the rate of expansion, and the answer is 67.4 ± 0.5 km/sec/Mpc. Oops. Two routes, both giving highly accurate answers, but well outside any overlap, and hence we have two disjoint sets of answers.
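To see what the disagreement means in practice, here is a small sketch comparing the recession velocities the two values predict for a galaxy at some chosen distance; the 100 megaparsecs is just an example figure of mine.

```python
# Hubble's law: recession velocity v = H0 * distance.
H0_ladder = 73.5   # km/s per megaparsec, from the distance-ladder methods quoted above
H0_cmb = 67.4      # km/s per megaparsec, from the early-Universe methods quoted above

distance_mpc = 100.0   # example distance
v_ladder = H0_ladder * distance_mpc
v_cmb = H0_cmb * distance_mpc
print(f"Distance-ladder value: {v_ladder:.0f} km/s")
print(f"Early-Universe value:  {v_cmb:.0f} km/s")
print(f"Difference: {v_ladder - v_cmb:.0f} km/s, about {100*(v_ladder/v_cmb - 1):.0f}%, far larger than the quoted errors")
```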

So what is the answer? The simplest approach is to use an entirely different method again and hope this resolves the matter, and the next big hope is the surface brightness of large elliptical galaxies. The idea here is that most of the stars in a galaxy are red dwarfs, and hence most of the “light” from a galaxy will be in the infrared. The new James Webb space telescope will be ideal for making these measurements, and in the meantime standards have been obtained from nearby elliptical galaxies at known distances. Do you see a possible problem? All such results also depend on the assumptions inherent in the calculations. First, we have to be sure we actually know the distances to the nearby elliptical galaxies accurately, but much more problematical is the assumption that the luminosity of the ancient galaxies is the same as that of the local ones. Thus in earlier times, since the metals in stars came from supernovae, the very earliest stars will have had much less, so their “colour” from their outer envelopes may be different. Also, because the very earliest stars formed from denser gas, maybe the ratio of sizes of the red dwarfs will be different. There are many traps. Accordingly, the most likely reason for the discrepancy is that the theory used is slightly wrong somewhere along the chain of reasoning. Another possibility is that the estimates of the possible errors are overly optimistic. Who knows, and to some extent you may say it does not matter. However, the message from this is that we have to be careful with scientific claims. Always try to unravel the reasoning. The more the explanation relies on mathematics and the less is explained conceptually, the greater the risk that whoever is presenting the story does not understand it either.

Ebook discount

From March 18 – 25, my thriller, The Manganese Dilemma, will be discounted to 99c/99p on Amazon. When the curvaceous Svetlana escapes to the West with clues that the Russians have developed super stealth, Charles Burrowes, a master hacker living under a cloud of suspicion, must find out what it is. Surveillance technology cannot show any evidence of such an invention, but Svetlana’s father was shot dead as they made their escape. Burrowes must uncover what is going on before Russian counterintelligence or a local criminal conspiracy blows what is left of his freedom out of the water.

Dark Energy

Many people will have heard of dark energy, yet nobody knows what it is, apart from being something connected with the rate of expansion of the Universe. This is an interesting part of science. When Einstein formulated General Relativity, he found that if his equations were correct, the Universe should collapse due to gravity. It hasn’t so far, so to avoid that he introduced a term Λ, the so-called cosmological constant, which was a straight-out fudge with no basis other than avoiding the obvious problem that the universe had not collapsed and did not look like doing so. Then, when he found from observations that the Universe was actually expanding, he tore that up. In General Relativity, Λ represents the energy density of empty space.

We think the Universe expansion is accelerating because when we look back in time by looking at ancient galaxies, we can measure the velocity of their motion relative to us through the so-called red shift of light, and all the distant galaxies are going away from us, and seemingly faster the further away they are. We can also work out how far away they are by taking light sources and measuring how bright they are, and provided we know how bright they were when they started, the dimming gives us a measure of how far away they are. What two research groups found in 1998 is that the expansion of the Universe was accelerating, which won them the 2011 Nobel prize for physics. 
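The two measurements described here boil down to two simple relations: for modest redshifts the velocity is roughly v ≈ cz, and the inverse-square law turns an apparent brightness into a distance, provided the intrinsic luminosity is known. A minimal sketch of mine, with made-up input numbers rather than real supernova data:

```python
import math

c_km_s = 299_792.458   # speed of light in km/s

# Velocity from redshift (a valid approximation only for small z).
def velocity_from_redshift(z: float) -> float:
    return c_km_s * z

# Inverse-square law: flux = L / (4 * pi * d^2), so d = sqrt(L / (4 * pi * flux)).
def distance_from_brightness(luminosity_watts: float, flux_w_m2: float) -> float:
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_m2))

# Made-up illustrative inputs.
z = 0.05
luminosity = 1.0e36        # assumed "standard candle" luminosity in watts
measured_flux = 3.0e-15    # assumed measured flux in watts per square metre

print(f"Recession velocity: {velocity_from_redshift(z):.0f} km/s")
print(f"Distance: {distance_from_brightness(luminosity, measured_flux):.2e} m")
```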

The next question is, how accurate are these measurements and what assumptions are inherent in them? The red shift can be measured accurately because the light contains spectral lines, and as long as the physical constants have remained constant, we know exactly their original frequencies, and consequently the shift when we measure the current frequencies. The brightness relies on what are called standard candles. We know of a class of supernovae called type 1a, and these are caused by one star gobbling the mass of another until it reaches the threshold to blow up. This mass is known to be fairly constant, so the energy output should be constant. Unfortunately, as often happens, the 1a supernovae are not quite as standard as you might think. They have been separated into three classes: standard 1a, dimmer 1a, and brighter 1a. We don’t know why, and there is an inherent problem that the stars of a very long time ago would have had a lower fraction of elements from previous supernovae. They get very bright, then dim with time, and we cannot be certain they always dim at the same rate. Some have different colour distributions, which makes specific luminosity difficult to measure. Accordingly, some consider the evidence is inadequate and it is possible there is no acceleration at all. There is no way for anyone outside the specialist field to resolve this. Such measurements are made at the limits of our ability, and a number of assumptions tend to be involved.

The net result of this is that if the universe is really expanding, we need a value for Λ because that will describe what is pushing everything apart. That energy of the vacuum is called dark energy, and if we consider the expansion and use relativity to compare this energy with the mass of the Universe we can see, dark energy makes up 70% of the total Universe. That is, assuming the expansion is real. If not, 70% of the Universe just disappears! So what is it, if real?

The only real theory that can explain why the vacuum has energy at all and has any independent value is quantum field theory. By independent value, I mean it explains something else; if you have one observation and you require one assumption, you effectively assume the answer. However, quantum field theory is not much help here, because if you calculate Λ using it, the calculation differs from observation by about 120 orders of magnitude, that is, a factor of ten multiplied by itself 120 times. To put that in perspective, if you were to count all the protons, neutrons and electrons in the entire universe that we can see, you would multiply ten by itself about 83 times to express the answer. This is the most dramatic failed prediction in all theoretical physics, and it is so bad it tends to be put in the desk drawer and ignored/forgotten about.

So the short answer is, we haven’t got a clue what dark energy is, and to make matters worse, it is possible there is no need for it at all. But it most certainly is a great excuse for scientific speculation.
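For the curious, the factor of roughly 120 orders of magnitude can be sketched as follows: take the naive quantum-field-theory estimate of the vacuum energy density, one Planck energy per Planck volume, which is the usual crude cutoff choice, and compare it with the observed dark-energy density. This is my own back-of-the-envelope version, not a rigorous calculation, and the exact exponent depends on where the cutoff is placed.

```python
import math

# Physical constants (SI units).
hbar = 1.055e-34      # J s
G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s

# Naive QFT estimate: one Planck energy per Planck volume (a crude but standard cutoff choice).
planck_energy = math.sqrt(hbar * c**5 / G)        # ~2e9 J
planck_length = math.sqrt(hbar * G / c**3)        # ~1.6e-35 m
rho_qft = planck_energy / planck_length**3        # J/m^3

# Observed dark-energy density: roughly 70% of the critical density, using H0 ~ 67.4 km/s/Mpc.
H0 = 67.4 * 1000 / 3.086e22                       # convert to 1/s
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)   # critical energy density, J/m^3
rho_dark = 0.7 * rho_crit

print(f"QFT estimate:   {rho_qft:.1e} J/m^3")
print(f"Observed value: {rho_dark:.1e} J/m^3")
# Prints ~10^123 with this particular cutoff; commonly quoted as "about 120 orders of magnitude".
print(f"Mismatch: about 10^{math.log10(rho_qft / rho_dark):.0f}")
```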