Martian Fluvial Flows, Placid and Catastrophic


Despite the fact that, apart from localized dust surfaces in summer, the surface of Mars has had average temperatures that never exceeded about minus 50 degrees C over its lifetime, it has nevertheless had some quite unexpected fluid systems. One of the longest river systems starts in several places at approximately 60 degrees south in the highlands, nominally one of the coldest spots on Mars, and drains into Argyre, thence to the Holden and Ladon Valles, then stops, apparently having dropped massive amounts of ice in the Margaritifer Valles, which are at considerably lower altitude and just north of the equator. Why does a river start at one of the coldest places on Mars, and freeze out at one of the warmest? There is evidence of ice having been in the fluid, which means the fluid must have been water. (Water is extremely unusual in that the solid, ice, floats in the liquid.) These fluid systems flowed, although not necessarily continuously, for a period of about 300 million years, then stopped entirely, although there are other regions where fluid flows probably occurred later. To the northeast of Hellas (the deepest impact basin on Mars) the Dao and Harmakhis Valles change from prominent, sharp channels to diminished, muted flows at –5.8 km altitude that resemble terrestrial marine channels beyond river mouths.

So, how did the ice melt? For the Dao and Harmakhis, Hadriaca Patera (a volcano) was active at the time, so some volcanic heat was probably available, but that would not apply to the systems starting in the southern highlands.

After a prolonged period in which nothing much happened, there were catastrophic flows that continued for up to 2000 km, forming channels up to 200 km wide, which would require flows of approximately 100,000,000 cubic meters per second. For most of those flows, there is no obvious source of heat. Only ice could provide the volume, but how could so much ice melt with no significant heat source, be held without re-freezing, then be released suddenly and explosively? There is no sign of significant volcanic activity, although minor activity would not be seen. Where would the water come from? Many of the catastrophic flows start from the Margaritifer Chaos, so the source of the water could reasonably be the earlier river flows.
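As a rough plausibility check on that discharge figure, here is a minimal back-of-the-envelope sketch; the channel width comes from the text, but the depth and flow speed are assumed illustrative values, not measurements:

```python
# Rough order-of-magnitude check on the catastrophic discharge figure.
# Width is from the text; depth and flow speed are assumed values chosen
# only to illustrate the scale, not measurements.

width_m = 200_000   # channel width up to 200 km (from the text)
depth_m = 50        # assumed mean flow depth
speed_m_s = 10      # assumed mean flow speed

discharge = width_m * depth_m * speed_m_s  # Q = w * d * v, cubic meters per second
print(f"Estimated discharge: {discharge:.1e} m^3/s")  # ~1e8 m^3/s
```

With even modest assumed depths and speeds, a 200 km wide channel does indeed imply a discharge of the order of 100,000,000 cubic meters per second.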

There was plenty of volcanic activity about four billion years ago. Water and gases would be thrown into the atmosphere, and the water would snow out predominantly in the coldest regions. That gets water to the southern highlands, and to the highlands east of Hellas. There may also be geologic deposits of water. The key now is the atmosphere. What was it? Most people say carbon dioxide and water, because that is what modern volcanoes on Earth give off, but the mechanism I suggested in my ebook “Planetary Formation and Biogenesis” was that the gases originally would be reduced, that is, mainly methane and ammonia. The methane would provide some sort of greenhouse effect, but ammonia, on contact with ice at minus 80 degrees C or above, dissolves the ice and makes an ammonia/water solution. This, I propose, was the fluid. As the fluid flows north, winds and warmer temperatures would drive off some of the ammonia, so, oddly enough, as the fluid gets warmer, ice starts to freeze out. Ammonia in the air will go on to melt more snow. (This is not all that happens, but it should happen.) Eventually, the ammonia has gone, and the water sinks into the ground, where it freezes out into a massive buried ice sheet.

If so, we can now see where the catastrophic flows come from. We have the ice deposits where required. We now require at least fumaroles to be generated underneath the ice. The Margaritifer Chaos is within plausible distance of major volcanism, and of tectonic activity (near the mouth of the Valles Marineris system). Now, let us suppose the gases emerge. Methane immediately forms clathrates with the ice (enters the ice structure and sits there), because of the pressure. The ammonia dissolves ice and forms a small puddle below. This keeps going over time, but as it does, the amount of water increases and the amount of ice decreases. Eventually, there comes a point where there is insufficient ice to hold the methane, and pressure builds up until the whole system ruptures and the mass of fluid pours out. With the pressure gone, the remaining ice clathrates start breaking up explosively. Erosion is caused not only by the fluid, but by exploding ice.

The point then is, is there any evidence for this? The answer is, so far, no. However, if this mechanism is correct, there is more to the story. The methane will be oxidised in the atmosphere to carbon dioxide by solar radiation and water. Ammonia and carbon dioxide will combine to form ammonium carbonate, then urea. So if this is true, we expect to find, buried where there had been water, deposits of urea, or whatever it converted to over three billion years. (Very slow chemical reactions are essentially unknown – chemists do not have the patience to do experiments over millions of years, let alone billions!) There is one further possibility. Certain metal ions complex with ammonia to form ammines, which dissolve in water or ammonia fluid. These would sink underground, and if the metal ions were there, so might be the remains of the ammines now. So we have to go to Mars and dig.

An Ugly Turn for Science

I suspect there is a commonly held view that science progresses inexorably onwards, with everyone assiduously seeking the truth. However, in 1962 Thomas Kuhn published a book, “The Structure of Scientific Revolutions”, that suggested this view is somewhat incorrect. He suggested that what actually happens is that scientists spend most of their time solving puzzles for which they believe they know the answer before they begin; in other words, their main objective is to add confirming evidence to current theory and beliefs. Results tend to be interpreted in terms of the current paradigm, and if a result cannot be, it tends to be placed in the bottom drawer and quietly forgotten. In my experience of science, I believe that is largely true, although there is an alternative: the result is reported in a very small section two-thirds of the way through the published paper, with no comment, where nobody will notice it. I once saw a result that contradicted standard theory simply reported with an exclamation mark and no further comment. This is not good, but equally it is not especially bad; it is merely lazy, and it ducks the purpose of science as I see it, which is to find the truth. The actual purpose seems at times merely to get more grants and not annoy anyone who might sit on a funding panel.

That sort of behaviour is understandable. Most scientists are in it to get a good salary, promotion, awards, etc., and you don’t advance your career by rocking the boat and missing out on grants. I know! If they get the results they expect, more or less, they feel they know what is going on, and they want to be comfortable. One can criticise that, but it is not particularly wrong; merely not very ambitious. And in the physical sciences, as far as I am aware, that is as far as it goes wrong.

The bad news is that much deeper rot is appearing, as highlighted by an article in the journal “Science”, vol. 365, p. 1362 (published by the American Association for the Advancement of Science, and generally recognised as one of the best scientific publications). The subject was the non-publication of a dissenting report following analysis of the attack at Khan Shaykhun, in which Assad was accused of killing about 80 people with sarin, and which led, two days later, to Trump asserting that he knew unquestionably that Assad did it, whereupon he fired 59 cruise missiles at a Syrian base.

It then appeared that a mathematician, Goong Chen of Texas A&M University, elected to do some mathematical modelling using publicly available data, and he became concerned by what he found. If his modelling was correct, the public statements were wrong. He came into contact with Theodore Postol, an emeritus professor from MIT and a world expert on missile defence, and after discussion he, Postol, and five other scientists carried out an investigation. The end result was that they wrote a paper essentially saying that the conclusion that Assad had deployed chemical weapons did not match the evidence. The paper was sent to the journal “Science and Global Security” (SGS), and following peer review it was authorised for publication. So far, science working as it should. The next step should be that if people do not agree, they either dispute the evidence by providing contrary evidence, or dispute the analysis of that evidence. That is not what happened.

Apparently the manuscript was put online as an “advanced publication”, and this drew the attention of Tulsi Gabbard, a Presidential candidate. Gabbard was a major in the US military and had been deployed in Syria in a sufficiently senior position to have a realistic idea of what went on. She has stated she believed the evidence was that Assad did not use chemical weapons. She has apparently gone further and said that Assad should be properly investigated; if evidence is found, he should be accused of war crimes, but if evidence is not found, he should be left alone. That, to me, is a sound position: the outcome should depend on evidence. She apparently found the preprint and put it on her blog, which she is using in her Presidential run. Again, quite appropriate: resolve an issue by examining the evidence. That is what science is all about, and it is great that a politician is advocating that approach.

Then things started to go wrong. This preprint drew a detailed critique from Eliot Higgins, the boss of Bellingcat, which has a history of being anti-Assad, and there was also an attack from Gregory Koblentz, a chemical weapons expert who says Postol has a pro-Assad line. The net result is that SGS decided to pull the paper, and “Science” states this was “amid fierce criticism and warnings that the paper would help Syrian President Bashar al-Assad and the Russian government.” Postol argues that Koblentz’s criticism is beside the point. To quote Postol: “I find it troubling that his focus seems to be on his conclusion that I am biased. The question is: what’s wrong with the analysis I used?” I find that to be well said.

According to the Science article, Koblentz admitted he was not qualified to judge the mathematical modelling, but he wrote to the journal editor more than once, urging him not to publish. Comments included: “You must approach this latest analysis with great caution”, the paper would be “misused to cover up the [Assad] regime’s crimes” and would “permanently stain the reputation of your journal”. The journal then pulled the paper from the publication queue, at first saying they would edit it, but then they backtracked completely. The editor of the journal is quoted in Science as saying, “In hindsight we probably should have sent it to a different set of reviewers.” I find this comment particularly abhorrent. The editor should not select reviewers on the grounds that they will deliver the verdict the editor wants, or the verdict that happens to be most convenient; reviewers should be restricted to finding errors in the paper.

I find it extremely troubling that a scientific institution is prepared to consider repressing an analysis solely on grounds of political expediency, with no interest in finding the truth. It is also true that I hold a similar view relating to the incident itself. I saw a TV clip, taken within a day of the event, in which people were taking samples from the hole where the sarin was allegedly delivered without any protection. If the hole had been the source of large amounts of sarin, enough would have remained at the primary site to do serious damage, but nobody was affected. But whether sarin was there or not is not my main gripe. Instead, I find it shocking that a scientific journal should reject a paper simply because some “don’t approve”. The reasons for rejecting a paper should be that it is demonstrably wrong, or that it is unimportant. The importance here cannot be disputed, and if it is demonstrably wrong, then it should be easy to demonstrate where. What do you all think?

The Hydrogen Economy

Now that climate change has finally struck home to at least some politicians, we have the problem of what to do next. An obvious point is that while the politicians made grandiose promises about it thirty years ago, and then for economic reasons did nothing, they could at least have carried out research so that when they finally got around to doing something, they would know what their options were. Right now, they don’t. One of the possibilities for transport is the use of hydrogen, but is that helpful? If so, where? The first point is you have to make your hydrogen. That is easy: you pass electricity through water. There is no shortage of water, but you still have to generate your electricity. This raises the question: how, and at what cost? The good news is that generating hydrogen merely consumes energy, so it can be turned down or off at peak load periods, but the difficulty now is that the renewables everyone is so happy about provide an erratic supply. As an example, Germany is turning off its nuclear power stations and finds it has to burn more coal, especially when the wind is not blowing.
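To put rough numbers on the electricity demand: the energy needed to split water is set by hydrogen’s higher heating value, about 39.4 kWh per kilogram, and practical electrolysers fall short of that. A minimal sketch, where the efficiency is an assumed round number rather than any particular device:

```python
# Sketch of the electricity demand of making hydrogen by electrolysis.
# 39.4 kWh/kg is the thermodynamic figure (hydrogen's higher heating value);
# the efficiency is an assumed typical value, not a specific electrolyser.

HHV_KWH_PER_KG = 39.4           # minimum electricity per kg of H2
electrolyser_efficiency = 0.70  # assumed; commercial units vary

electricity_needed = HHV_KWH_PER_KG / electrolyser_efficiency
print(f"Electricity per kg of H2: ~{electricity_needed:.0f} kWh")  # ~56 kWh
```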

Assume we have the electricity and we have hydrogen; now what? The hydrogen could be burned directly in an internal combustion engine, or used to power fuel cells. The latter is far more energy efficient, and we can probably manage about 70% overall efficiency. The reason the fuel cell is more desirable than the battery is simply that the battery cannot contain the desired energy density. The advantages of hydrogen include that it is light, and when burned (including in a fuel cell) all it makes is water. Water is a very powerful greenhouse gas, but the atmosphere has a way of promptly removing any excess: rain.
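A quick, heavily hedged comparison of usable energy per kilogram illustrates the energy-density point; the hardware mass penalty for the tank and fuel cell is my assumption, chosen only to show that hydrogen keeps a healthy margin over batteries even after it is applied:

```python
# Why a fuel cell can beat a battery on weight: compare usable energy per kg.
# Round literature figures; the tank/fuel-cell mass penalty is an assumption.

h2_lhv = 33.3            # kWh per kg of hydrogen (lower heating value)
fuel_cell_eff = 0.60     # assumed system efficiency (the text hopes for ~70%)
system_mass_factor = 15  # assumed kg of tank + cell hardware per kg of H2 carried

li_ion = 0.20            # kWh per kg, typical lithium-ion pack

h2_system = h2_lhv * fuel_cell_eff / (1 + system_mass_factor)
print(f"H2 system: ~{h2_system:.2f} kWh per kg of system mass")  # ~1.2
print(f"Li-ion:    ~{li_ion:.2f} kWh per kg of pack mass")
```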

However, hydrogen does have some disadvantages. A hydrogen-air mix is explosive over a rather wide range of mix ratios. Even outside this range it is clearly flammable, with an exceptionally fast flame speed; it leaks faster than any other gas except, possibly, helium; and it is odourless and colourless, so you may not know it is there. But suppose you put all that behind you, there are still clear problems. A small fuel-cell car would need approximately 1 kg of hydrogen to drive 100 km. Now, suppose we need a range of 500 km. The storage of 5 kg of hydrogen would take up most of the boot space if you use a tank pressurised to 700 bar. (1 bar is atmospheric pressure.) It takes a lot of energy to compress the gas, and the reinforced tank, which you most certainly do not want to rupture, adds significant weight. The volume is important for a small car: you wish to go on holiday, then find your boot is occupied by a massive gas tank. However, this is trivial for very large machines, and a company in the US makes hydrogen-powered forklifts. Here, a very heavy counterbalancing weight is required, so a monstrous steel tank is actually an asset. I previously wrote a blog post on hydrogen for vehicles, here.
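The boot-space claim is easy to check. Hydrogen is far from an ideal gas at 700 bar (compressibility factor roughly 1.4 to 1.5), and its real density there at ordinary temperatures is around 40 kg per cubic metre, a widely quoted figure:

```python
# How big is a 700 bar tank for 5 kg of hydrogen? Use the accepted real-gas
# density of roughly 40 kg/m^3 at 700 bar and 15 C, not the ideal-gas value.

h2_mass_kg = 5.0
density_700bar = 40.0  # kg/m^3, approximate real-gas density at 700 bar, 15 C

volume_m3 = h2_mass_kg / density_700bar
print(f"Internal tank volume: ~{volume_m3*1000:.0f} litres")  # ~125 L, before walls
```

Around 125 litres of internal volume, before the thick reinforced walls are added, which is indeed most of a small car’s boot.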

There are different possible ways to store hydrogen. For those with a technical bent, the objective is to have something that absorbs hydrogen and binds it with an energy of between 15 and 20 kJ/mol. That is fairly weak. If you can manage that range, you can store hydrogen at up to 100 bar with good reversibility. If you bind it in metal hydrides, you get a better density of storage at atmospheric pressure, but the difficulty is then getting the hydrogen back out. Most of the proposed metal-organic absorbers bind it too weakly, and you cannot get enough in. The metals that absorb strongly can be made to release it more easily if the metal is present as nanoparticles, and to prevent these from clumping they can be embedded in carbon. There is an issue here, though: the required volume starts to become large for a given range, because so many of the components are not hydrogen.
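For the technically inclined, a van’t Hoff sketch suggests why that 15 to 20 kJ/mol window is the sweet spot. The desorption entropy below is an assumed typical value for releasing a bound gas (around 100 J/(mol·K)), so take the computed pressures as indicating a trend rather than precise numbers:

```python
# Why ~15-20 kJ/mol suits reversible hydrogen storage: a van't Hoff sketch.
# The desorption entropy is an assumed typical value for releasing a bound
# gas; the point is the trend, not the precise pressures.

import math

R = 8.314        # gas constant, J/(mol K)
T = 298.0        # K, ambient temperature
dS_des = 100.0   # J/(mol K), assumed entropy gain when H2 is released

for dH_des in (10e3, 15e3, 20e3, 40e3):  # J/mol, binding strengths to compare
    dG = dH_des - T * dS_des             # Gibbs energy of desorption
    p_eq = math.exp(-dG / (R * T))       # equilibrium H2 pressure, bar (p0 = 1 bar)
    print(f"binding {dH_des/1e3:4.0f} kJ/mol -> equilibrium pressure ~{p_eq:9.2g} bar")
```

Bind too weakly and enormous pressures are needed to hold the hydrogen in; bind too strongly and the equilibrium pressure falls so far below ambient that you must heat the store to get the hydrogen back out.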

There is another problem with hydrogen that most overlook: how do you deliver it to filling stations? Pressurising won’t work, because you can’t get enough into any container to be worth it. You could ship liquefied hydrogen, but it is only a liquid at or below -253 degrees Centigrade. It takes a lot of energy to cool it that far, a lot to keep it that cold, and the part most people will not realize is that at those very low temperatures, for very light atoms, there are some effects of quantum mechanics that have to be taken into account. One problem is that hydrogen occurs as two isomers: ortho and para hydrogen. (Isomers are where there are at least two distinctly different forms with the same components, which may or may not readily interconvert.) These arise because the hydrogen molecule comprises two protons bound by two electrons. The protons have what we call nuclear spin and, as a consequence, a magnetic moment. In ortho hydrogen the spins are aligned; in para they are opposed. At room temperature, hydrogen is 75% in the ortho form, but this is of higher energy than the para form. Accordingly, if you just cool hydrogen to the liquid form, you get the room-temperature mix. This slowly converts to the para form, but it gives off heat as it does so, which means a tank of liquid hydrogen slowly builds up pressure. To ship it as a liquid, it is probably best to let it switch to the para form first, but that takes a lot more energy, maintaining the low temperatures while the conversion is going on. Currently, liquefying hydrogen takes 12 kWh of electricity per kilogram of hydrogen, which is a substantial fraction (of the order of a third) of the energy the hydrogen itself carries. In practice, you may need almost that much again to keep it cold, and since this power has to be electrical, we have an even greater demand for electricity.
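Taking the 12 kWh figure at face value, a short budget sketch shows how much of the eventual fuel-cell output liquefaction alone consumes; the fuel-cell numbers here are assumed round values, so treat the ratio as indicative only:

```python
# Rough energy budget for delivering hydrogen as a cryogenic liquid.
# The liquefaction figure is from the text; the fuel-cell numbers are
# assumed round values, so the ratio is indicative only.

liquefaction_kwh_per_kg = 12.0  # from the text
h2_lhv_kwh_per_kg = 33.3        # lower heating value of hydrogen
fuel_cell_eff = 0.60            # assumed

delivered = h2_lhv_kwh_per_kg * fuel_cell_eff
print(f"Fuel cell delivers ~{delivered:.0f} kWh per kg of H2")
print(f"Liquefaction alone consumes ~{100*liquefaction_kwh_per_kg/delivered:.0f}% of that")
```

On these assumptions the penalty, measured against what the fuel cell actually delivers, is harsher still, which only strengthens the point that liquid delivery is energetically expensive.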

So, is there an answer? My feeling is still that hydrogen is not the most desirable material for a fuel cell, from the point of view of use in transport. The reason it is pursued is that it is easiest to make a fuel cell work with hydrogen. There are alternatives. Two that come to mind are ammonia and methanol. Both can drive fuel cells; ammonia reacts to give water and nitrogen, while methanol reacts to give water and carbon dioxide. Currently, the ammonia cell may be more efficient, but ammonia is somewhat difficult to make, although there is evidence it can be made from hydrogen and nitrogen under mild conditions. The methanol fuel cell has the problem that too much of the methanol sneaks through the membrane that keeps the two sides of the cell separate, and carbon monoxide tends to poison the electrodes. Methanol could be made by the reduction of carbon dioxide from the air with solar energy.

So where does that leave us? In my opinion, what we need more than anything else is progress on better-performing methanol or ammonia fuel cells, or some better fuel cell altogether. My preference for the fuel cell is simply an issue of weight and power density, and I do not see hydrogen as being useful for light vehicles. The very heavy machines are a different matter, and batteries will never adequately power them. The problem of energy production in the future is a real one, and I feel we need to do a lot more research to pick the better options. We should have been doing this over the last thirty years, but we didn’t. There is no point in moaning about time wasted; we are here, and we have to act with a lot more urgency. But it is not right to settle for the easiest options when they are not very good ones; we need to get these problems right.

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, which are objects of about asteroid size, and that mutual gravity then brings these together to form planets. A small industry has sprung up in the scientific community doing computerised simulations of this sort of thing, producing a very large number of scientific papers, which in turn bring in grants to keep the industry going, plenty of conferences to attend, and a strong “academic reputation”. The mere fact that nobody knows how the planetesimals form in the first place appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, etc. have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try to.

Initially, these simulations predicted solar systems similar to ours, in that there were planets in circular orbits around their stars, although most simulations actually produced a different number of planets, usually more in the rocky-planet zone. The outer zone has been strangely ignored, in part because simulations indicate that, because of the greater separation of planetesimals, everything out there is extremely slow. The Grand Tack simulations indicate that planets cannot form further than about 10 A.U. from the star. That is demonstrably wrong, because giants larger than Jupiter are observed very much further out. What some simulations argue is that planetary formation activity was limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the NICE model, or the Grand Tack model (which is very close to being the same thing), is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets in near-circular motion around the star, because whatever happened was happening more or less equally at all angles to some fixed background. The gas was spiralling into the star, so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when another star came by too close – a rather uncommon occurrence.

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of “star-burners”: planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, some bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow, because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel out each other’s effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if it did not, why did others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer was that they moved in and stopped, but that raises the question: why did they stop? If we go back to the Grand Tack model, Jupiter was argued to have migrated in towards Mars, throwing a whole lot of planetesimals out as it did so, then Saturn did much the same; then, for some reason, Saturn turned around and began throwing planetesimals inwards, at which point Jupiter continued the act and moved out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why. The reason Saturn began throwing planetesimals in was that Uranus and Neptune started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn’s and Jupiter’s outward movement. Note that this depends on a particular starting point, and since planetesimals are supposed to collide and form planets, it is not clear to me why, if there was a mass of planetesimals out there equivalent to Jupiter and Saturn, they did not form a planet of their own.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significant elliptical orbits. If you draw a line through the major axis, on one side of the star the planet moves faster, and comes closer to it, than on the other side. There is a directional preference. How did that come about? The answer appears to be simple. A circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form by collecting a huge number of planetesimals, or a large amount of gas, and these accumulate more or less continuously as the planet orbits the star. An elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that when planets grow, if they get big enough, their gravity alters their orbits, and if two come quite close they exchange energy; one goes outwards, usually leaving the system altogether, and the other moves towards the star, or even into it. If it comes close enough to the star, the star’s tidal forces circularise the orbit and the planet remains close to the star, and if it is moving prograde, the tidal forces will push the planet out, as they do our moon. Equally, if the orbit is highly elliptical, the planet might “flip” and end up circularised in a retrograde orbit. If so, it is eventually doomed, because the tidal forces then cause it to fall into the star.
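The speed asymmetry follows directly from the vis-viva equation, v^2 = GM(2/r - 1/a). A short sketch with illustrative numbers (a solar-mass star, a 1 A.U. semi-major axis, eccentricity 0.5):

```python
# Speed at the near and far points of an elliptical orbit, from vis-viva:
# v^2 = GM(2/r - 1/a). The orbit parameters are illustrative only.

import math

GM_SUN = 1.327e20  # m^3/s^2, gravitational parameter of a solar-mass star
AU = 1.496e11      # m

a = 1.0 * AU       # semi-major axis (illustrative)
e = 0.5            # eccentricity (illustrative)

for label, r in (("periastron", a * (1 - e)), ("apastron", a * (1 + e))):
    v = math.sqrt(GM_SUN * (2 / r - 1 / a))
    print(f"{label}: r = {r/AU:.2f} AU, v = {v/1000:.1f} km/s")
```

The periastron speed exceeds the apastron speed by the factor (1 + e)/(1 - e), a factor of three for e = 0.5, which is exactly the fast-and-close versus slow-and-far behaviour described above.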

All of which may seem somewhat speculative, but the more interesting point is we have now found evidence that this happens, namely evidence that the star M67 Y2235 has ingested a “superearth”. The technique goes by the name “differential stellar spectroscopy”, and it works provided you can realistically estimate what the composition of the star should be, which can be done with reasonable confidence if the stars formed in a cluster and can reasonably be assumed to have started from the same gas. M67 is a cluster with over 1200 known members, and it is close enough that reasonable details can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun’s. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. The enhancement corresponds to an amount of rocky planet 5.2 times the mass of the Earth in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star, such that the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.

For those who wish to go deeper, being biased, I recommend my ebook “Planetary Formation and Biogenesis”. Besides showing what I think happened, it analyses over 600 scientific papers, most of which bear on different aspects of the problem.

Book Discount

From September 19 to 26, Ranh will be discounted to 99c on Amazon in the US and 99p in the UK. An advanced civilization has evolved from Cretaceous life that was transported from Earth to a nearby planet. It is a theocracy, and there are some who believe it is their holy task to recover the planet of creation from those pesky mammals. A delegation from Earth arrives to negotiate a peace treaty, but can such a treaty be realized? Or will the raptors launch against humanity? Only Baht, unacknowledged, the lowest of the low, can prevent interplanetary war, but to do so she must do what no other unacknowledged female has ever done. A tale of plotting, conspiracy, religious fervour, murder, treachery, honour, diplomacy, and tail-ball.

Space News

There were two pieces of news relating to space recently. Thirty years ago, we knew there were stars. Now we know there are exoplanets, and over 4,000 of them have been found. Many of these are much larger than Jupiter, but that may be because the bigger they are, the easier they are to find. There are a number of planets very close to small stars for the same reason. Around one giant planet there are claims for an exomoon, that is, a satellite of a giant planet, and since that moon is about the size of Neptune, i.e. a small giant in its own right, it too might have its own satellite: an exomoonmoon. However, one piece of news goes to the other extreme: we are to be visited by an exocomet. Comet Borisov will pass within 2 A.U. of Earth in December. It is travelling well over the escape velocity of the sun, so if you miss it in December, you miss it. This is of some interest to me because in my ebook “Planetary Formation and Biogenesis” I outlined the major mechanisms I believe were involved in the formation of our solar system, but also listed some that did not leave clear evidence in our system. One was exo-seeding, where something comes in from space. As this comet will be the second such “visitor” we have recorded recently, perhaps they are more common than I suspected.
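For a sense of what “well over the escape velocity” means, the solar escape velocity at the comet’s approximate closest approach is easy to compute:

```python
# Solar escape velocity at Borisov's approximate closest approach:
# v_esc = sqrt(2 GM / r). Anything faster is on a hyperbolic, one-pass orbit.

import math

GM_SUN = 1.327e20  # m^3/s^2, gravitational parameter of the sun
AU = 1.496e11      # m

r = 2.0 * AU       # approximate closest approach, from the text
v_esc = math.sqrt(2 * GM_SUN / r)
print(f"Solar escape velocity at 2 A.U.: ~{v_esc/1000:.0f} km/s")  # ~30 km/s
```

Any object moving faster than that at that distance is unbound and will leave the solar system, hence the one-chance viewing in December.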

What will we see? So far it is not clear, because the comet is still too far away, but it appears to be developing a coma. 2 A.U. is still not particularly close (twice the distance of Earth from the sun), so it may be difficult to see anyway, at least without a telescope. Since this is its first visit, we have no real idea how active it will be. It may be that comets become better for viewing after they have had a couple of closer encounters, because from our space probes to comets in recent times it appears that most of the gas and dust that forms the tail comes from below the surface, through the equivalent of fumaroles. This comet may not have had time to form these. On the other hand, there may be a lot of relatively active material quite loosely bound to the surface. We shall have to wait and see.

The second piece of news was the discovery of water vapour in the atmosphere of K2-18b, a super-Earth orbiting an M3-class red dwarf a little under half the size of our sun. The planet is about eight times the mass of Earth, and has about 2.7 times the radius. There is much speculation about whether this could mean life. If there is life, then with the additional gravity it is unlikely that, if it developed technology, it would be much interested in space exploration. So far, we know there is probably another planet in the system, but that is a star-burner. K2-18b orbits its star in 33 days, so birthdays would come round frequently, and it receives about five per cent more stellar radiation than Earth does, although, coming from a red dwarf, there will be a higher fraction of infra-red light and less visible.
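Those two numbers are worth playing with, since everything scales simply from Earth = 1. Surface gravity turns out only mildly higher, but the escape velocity, which is what rocketry cares about, is much higher, and the low bulk density foreshadows the large-atmosphere conclusion below:

```python
# What 8 Earth masses and 2.7 Earth radii imply, scaling from Earth = 1.
# Simple ratios; the mass and radius are the figures quoted in the text.

mass = 8.0     # Earth masses
radius = 2.7   # Earth radii

surface_gravity = mass / radius**2        # in Earth g
escape_velocity = (mass / radius) ** 0.5  # in units of Earth's 11.2 km/s
bulk_density = mass / radius**3           # in Earth densities (~5.5 g/cm^3)

print(f"Surface gravity: {surface_gravity:.2f} g")          # ~1.1 g
print(f"Escape velocity: {escape_velocity*11.2:.1f} km/s")  # ~19 km/s
print(f"Bulk density:    {bulk_density*5.5:.1f} g/cm^3")    # ~2.2 g/cm^3
```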

The determination of the water could be made because, first, the star is reasonably bright, so good signals can be received; second, the planet transits across the star; and third, the planet is not shrouded in clouds. What happens is that as the planet transits, electromagnetic radiation from the star is absorbed by any molecule in the atmosphere at the frequencies determined by its bond-stretching or bending energies. The size of the planet compared with its mass is suggestive of a large atmosphere, i.e. it has probably retained some of the hydrogen and helium of the accretion disk. This conclusion does carry risks, because if it were primarily a water or ice world (water under sufficient pressure forms ices stable at quite high temperatures), then it would be expected to have an even greater size for the mass.
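The transit geometry also shows how small the effect being measured is. The fractional dimming is (Rp/Rs)²; taking the stellar radius as 0.45 of the sun’s, which is my reading of “a little under half”:

```python
# Approximate transit depth for K2-18b: fractional dimming = (Rp/Rs)^2.
# Planet radius is from the text; the stellar radius of 0.45 solar radii
# is an assumption based on "a little under half the size of our sun".

R_EARTH = 6.371e6  # m
R_SUN = 6.957e8    # m

r_planet = 2.7 * R_EARTH
r_star = 0.45 * R_SUN  # assumed

depth = (r_planet / r_star) ** 2
print(f"Transit depth: ~{depth*100:.2f}% dimming")  # ~0.3%
```

A dimming of roughly a third of one per cent, within which the molecular absorption signature is a far smaller variation again, which is why a bright star matters.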

The signal was not strong, in part because, from what I can make out, it was recorded in the overtone region of the water stretching frequency, which is of low intensity. Accordingly, it was not possible to look for other gases, but the hope is that when the James Webb telescope becomes available and we can look for signals in the primary thermal infrared region, this planet will be a good candidate.

So, what does this mean for the possibilities of life? At this stage, it is too early to tell. The mechanism for forming life outlined in my ebook “Planetary Formation and Biogenesis” suggests that the chances of forming life do not depend on planetary size, as long as the planet is big enough to maintain conditions suitable for life, such as an adequate atmospheric pressure, liquid water, and the right components. There is presumably an upper size as well, but we do not know what it is, except that, again, water must be liquid at temperatures similar to ours, which eliminates giants. More precise limits are a matter of guesswork. The composition of the planet may be more important. It must be able to support fumaroles, and I suspect it should have pre-separated felsic material so that it can rapidly form continents, with silica-rich water emitted, i.e. the type of water that forms silica terraces. That is because the silica acts as a template to make ribose. Ribose is important for biogenesis because something has to link the nucleobases to the phosphate chain. The nucleobases are required because they alone are the materials that form with the chemicals likely to be around, and they alone form multiple hydrogen bonds that can form selectively and act as a template for copying, which is necessary for retaining useful information. Phosphate is important because it alone has three functional sites – two to form a polymer, and one to convey solubility. Only the furanose form of the sugar seems to manage the linkage, at least under conditions likely to have been around at the time, and ribose is the only sugar with significant amounts of the furanose form. I believe the absence of ribose means the absence of reproduction, which means the absence of life. But whether these necessary components are there is more difficult to answer.

Gravitational Waves, or Not??

On February 11, 2016, LIGO reported that on September 14, 2015, they had verified the existence of gravitational waves, the “ripples in spacetime” predicted by General Relativity. In 2017, the LIGO/Virgo laboratories announced the detection of a gravitational-wave signal from merging neutron stars, which was verified by optical telescopes, and which led to the award of the Nobel Prize to three physicists. This was science in action, and while I suspect most people had no real idea what it meant, the items were big news. The detectors were then shut down for an upgrade to make them more sensitive, and when they started up again it was apparently predicted that dozens of events would be observed by 2020; with automated detection, information could be immediately relayed to optical telescopes. Lots of scientific papers were expected. So, with the program having been running for three months, or essentially half the time of the prediction, what have we found?

Er, despite a number of alerts, nothing has been confirmed by optical telescopes. This has led to questions as to whether any gravitational waves have actually been detected, and led a group at the Niels Bohr Institute in Copenhagen to review the data so far. The detectors at LIGO comprise two “arms” at right angles to each other, running four kilometers from a central building. Lasers are beamed down each arm and reflected from mirrors, and the use of wave-interference effects lets the laboratory measure these distances to within (according to the LIGO website) 1/10,000 the width of a proton! Gravitational waves will change these lengths on this scale. So, of course, will local vibrations, so there are two laboratories 3,002 km apart, such that if both detect the same event, it should not be local. The first sign that something might be wrong was that besides the desired signals, a lot of additional vibrations are present, which we shall call noise. That is expected, but what was suspicious was that there seemed to be inexplicable correlations in the noise signals. Two labs that far apart should not have the “same” noise.
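That 1/10,000-of-a-proton figure translates into a dimensionless strain, the fractional change in arm length, as follows:

```python
# What "1/10,000 the width of a proton" over a 4 km arm means as a strain
# (fractional length change). Proton charge radius is about 0.84e-15 m.

proton_radius = 0.84e-15  # m
arm_length = 4000.0       # m

dL = proton_radius / 10_000
strain = dL / arm_length
print(f"Measurable length change: {dL:.1e} m")
print(f"Corresponding strain:     {strain:.1e}")  # ~2e-23
```

A strain of the order of 10^-23 is the scale on which both the sought signals and the contaminating noise live, which is why correlated noise between the two sites is so troubling.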

Then came a bit of embarrassment: it turned out that the figure published in Physical Review Letters that claimed the detection (and led to the Nobel Prize awards) was not actually the original data; rather, the figure was prepared for “illustrative purposes”, with details added “by eye”. Another piece of “trickery” claimed by that institute is that the data are analysed by comparison with a large database of theoretically expected signals, called templates. If so, for me there is a problem. If there is a large number of such templates, then the chances of fitting any data to one of them start to become uncomfortably large. I recall the comment attributed to the mathematician John von Neumann: “Give me four constants and I shall map your data to an elephant. Give me five and I shall make it wave its trunk.” When they start adjusting the best-fitting template to fit the data better, I have real problems.
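The statistical worry about a large template bank can be illustrated with a toy calculation: correlate a record of pure noise against ever more random “templates”, and the best match inevitably improves. This is emphatically not LIGO’s actual pipeline, just a demonstration of the statistics of many comparisons:

```python
# Toy illustration of the template-bank worry: the more templates you try
# against pure noise, the better the best match looks. Not LIGO's pipeline;
# it only shows the statistics of many comparisons.

import math
import random

random.seed(1)
N = 512  # samples per record

def unit_vector(n):
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

noise = unit_vector(N)  # the "data": pure noise, no signal at all

best, tried = 0.0, 0
for bank_size in (10, 100, 1000, 5000):
    while tried < bank_size:
        template = unit_vector(N)
        match = abs(sum(a * b for a, b in zip(noise, template)))
        best = max(best, match)
        tried += 1
    print(f"bank of {bank_size:>4} templates: best match = {best:.3f}")
```

The best match keeps creeping upwards with bank size even though there is, by construction, nothing to find, which is the essence of the confirmation-bias concern raised below.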

So apparently those at the Niels Bohr Institute made a statistical analysis of the data allegedly seen by the two laboratories, and found no signal was verified by both, except the first. However, even the LIGO researchers were reported to be unhappy about that one. The problem: their signal was too perfect. In this context, when the system was set up, there was a procedure to deliver artificially produced dummy signals, just to check that the procedure following signal detection at both sites was working properly. In principle, this perfect signal could have been the accidental delivery of such an artificial signal, or even a deliberate insertion by someone. Now, I am not saying that happened, but it is uncomfortable that we have only one signal, and it is in “perfect” agreement with theory.

A further problem lies in the fact that the collision of two neutron stars, as required by that one discovery and as the source of the gamma-ray signals detected along with the gravitational waves, is apparently unlikely in an old galaxy where star formation has long since ceased. One group of researchers claims the gamma-ray signal is more consistent with the merging of white dwarfs, and these should not produce gravitational waves of the right strength.

Suppose by the end of the year no further gravitational waves have been observed. Now what? There are three possibilities: there are no gravitational waves; there are such waves, but the detectors cannot detect them for some reason; or there are such waves, but they are much less common than models predict. Apparently there have been attempts to find gravitational waves for the last sixty years, and with every failure it has been argued that they are weaker than predicted. The question then is, when do we stop spending increasingly large amounts of money seeking something that may not be there? One issue that must be addressed, not only in this matter but in any scientific exercise, is how to get rid of confirmation bias; that is, when looking for something we shall call A, and a signal is received that more or less fits the target, it is all too easy to say you have found it. In this case, when a very weak signal is received amidst a lot of noise and there is a very large number of templates to fit the data to, it is only too easy to assume that what is actually just unusually reinforced noise is the signal you seek. Modern science seems to have descended into a situation where exceptional evidence is required to persuade anyone that a standard theory might be wrong, but only quite a low standard of evidence is needed to support an existing theory.

Brexit Strikes Again

Last week, I reblogged a post that I found to be quite interesting. Currently there is chaos in Britain regarding Brexit, and it is worth looking at how we got here. As Philip Henley pointed out, the vote to leave the EU in accord with the results of the referendum was passed by Parliament by 498 votes to 114. That became law, and is the default position should a deal not be made. The May government then set about negotiating a deal with the EU, and the EU became very hard-nosed: its attitude was that it would make the situation as tough for the UK as it reasonably could, to discourage others from leaving, while leaving an easy route to remain. One of the provisions of this deal was the so-called Irish backstop, nominally a transition arrangement to ensure the Irish border could be kept open, but with the proviso that it would remain in force until the EU decided it was no longer needed. The net result is the possibility that the EU could refuse indefinitely, in which case Northern Ireland would effectively become part of Eire. This deal was rejected by Parliament three times.

As her tenure as PM came to an end, the ordinary MPs rebelled and took over the House, claiming they were trying to reach an agreement. At first they came up with eight possible options, but when put to the vote, all eight were rejected. Obviously, they were a negative bunch. After a panicky weekend, they reduced the number of options, but again nothing got a positive vote. Missing from the choices was “no deal”, the reason being that the Speaker stated that was the default option. That meant that everybody who wanted the “no deal” exit voted no to everything, and those who wanted various deals cancelled each other out. Of course, there was no alternative deal that was realistic; both sides have to agree for there to be a deal, and the EU stated there were no alternatives. Accordingly, the “no” vote won. What we learn from this is that in such a situation, the order in which you do things matters.

Part of the problem appears to be that there are a number of hidden agendas. Nicola Sturgeon wants another referendum, as do the “Remainers”. Sturgeon simply wants a precedent for another referendum on Scotland leaving the UK, presumably taking the North Sea oil revenues with it. The “Remainers” simply won’t accept that they lost the Parliamentary vote. Corbyn merely wants to be Prime Minister; I have heard no clue as to what he really wants to do about Brexit, other than annoy the government.

How could this have been different? First, decisions should be final, and the first decision was whether to leave or not to leave. An overwhelming majority of MPs took the leave option. They then had the obligation to make that decision work. That vote was the time to argue whether the referendum was fair, binding, or whatever. They declined, because they did not want to come out and tell their own constituents that they don’t care what they think.

The next step is to negotiate a deal. The mathematics of decision-making is called game theory, and in those terms there are clear requirements for getting the best from a negotiation, one of which is that if your bottom line is not met, you walk. For that to mean anything, it has to be credible. If the UK politicians want anything better than the May deal, then “no deal” must be on the table, and it must be credible that it will apply. Johnson is as near to credible as possible. If he is undermined, the UK is highly likely to lose.

At this point, the behaviour of some MPs is unconscionable. They have no proposal of their own, they have heard Johnson say he will try for a deal, and Johnson has laid down just one condition: the Irish backstop must be replaced. He should be supported in his efforts unless they have a better idea. There is talk of Johnson being undemocratic for suspending Parliament for 23 days, but as Philip Henley pointed out in the previous post, 23 days is far from unprecedented. Johnson has the job of negotiating some sort of deal with the EU with a pack of yapping, dysfunctional MPs providing a major distraction. The fact is, none of them has come up with anything workable.

Now Parliament has voted to block a “no-deal” exit. Does that mean there must be a deal? No, of course not. First, the bill must be passed by the Lords. Since they are largely “Remainers”, they probably will pass it, although when is another matter. However, for the block to be effective, there actually has to be a deal on offer, and the only one available is the one Parliament has voted down three times. The EU says it will not offer another, although what would happen if Johnson offered a workable option for the Irish border is uncertain. The Commons also voted that the UK request another extension. Whether the EU would be interested is less certain; its members must be on the verge of saying they want to be rid of this ridiculous situation, and note that if even one EU member votes against an extension, it fails. Then, after demanding an election for the last few months, Corbyn has vetoed one before the Brexit date, deciding instead that he wants another referendum. (His problem is that many Labour seats come from regions that voted strongly for leaving.) Just what that would solve with this dysfunctional lot of MPs eludes me. However, the so-called blocking vote has arisen because a number of Conservative MPs have defected. They were always “Remainers”, but their defection means that at best Johnson runs a minority government that will not accept anything, or everybody else votes in Corbyn as Prime Minister. That is unlikely, so it will be Johnson who goes to Brussels to ask for a deal or an extension. The question then is, how intense will his asking be?