Where to Look for Alien Life?

One intriguing question is: what is the probability of life elsewhere in the Universe? In my ebook, “Planetary Formation and Biogenesis”, I argue that if you need the sort of chemistry I outline to form the appropriate precursors, then to get the appropriate planet in the habitable zone your best bet is a G-type or heavier K-type star. Our sun is a G-type. While that eliminates most stars, such as the red dwarfs, there are still plenty of possible candidates, and on that criterion alone the universe should be full of life, albeit possibly well spread out, and there may be other issues. Thus, of the stars close to Earth, Alpha Centauri has two of the right stars, but being a double star, we don’t know whether it spat out its planets when it was getting rid of giants, as the two stars come as close to each other as Saturn is to our sun. Epsilon Eridani and Tau Ceti are K-type, but it is not known whether the first has rocky planets, and further, it is only about 900 million years old, so any life there would be extremely primitive. Tau Ceti has claims to about eight planets, but only four have been confirmed. Of these, one gets about 1.7 times Earth’s light (Venus gets about 1.9 times as much) while another gets about 29%. They are also “super Earths”. Interestingly, if you apply the relationship in my ebook, the planet that gets the most light is the more likely to be similar geologically to Earth (apart from its size) and is far more likely than Venus to have accreted plenty of water, so just maybe it is possible.
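The flux comparisons above follow directly from the inverse-square law. Here is a minimal sketch; the Tau Ceti figures (luminosity about 0.52 times the sun’s, orbits at roughly 0.55 and 1.35 AU) are rough published estimates I am assuming, not values from the text:

```python
# Stellar flux at a planet, relative to what Earth receives:
# flux = (L / L_sun) / (a / 1 AU)^2  (inverse-square law).

def relative_flux(luminosity_lsun: float, orbit_au: float) -> float:
    """Light intensity at the planet relative to Earth's."""
    return luminosity_lsun / orbit_au ** 2

print(round(relative_flux(1.00, 0.723), 2))  # Venus: ~1.91x Earth
print(round(relative_flux(0.52, 0.55), 2))   # Tau Ceti e: ~1.72x
print(round(relative_flux(0.52, 1.35), 2))   # Tau Ceti f: ~0.29x
```

The calculation reproduces the roughly 1.7-times and 29% figures quoted above.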

So where do we look for suitable planets? More specifically, how probable are rocky planets? One approach to this came from Nibauer et al. (Astrophysical Journal, 906: 116, 2021). They looked at the elemental concentrations of stars, picking five elements for which they had data, and focused on the so-called refractory elements, i.e., those that make rocks. By means of statistics they separated the stars into two groups: the “regular” stars, which have the proportion of refractory elements expected from the nebular clouds, and a “depleted” category, where the concentrations are less than expected. Our sun is in the “depleted” category, and oddly enough, only between 10 and 30% of stars are “regular”. The concept here is that the depleted stars are depleted because these elements have been taken away to make rocky planets. Of course, there may be questions about the actual analysis of the data and the model, but if the data hold up, this might be indicative that rocky planets can form, at least around single stars.

One of the puzzles of planetary formation is exemplified by Tau Ceti. The star is actually rather short of the heavy elements that make up planets, yet it has so many planets so much bigger than Earth. How can this be? My answer in my ebook is that there are three stages of the accretion disk: a first, when the star is busily accreting and there are huge inflows of matter; a second, a transition, when the supply of matter declines; and a third, when stellar accretion slows by about four orders of magnitude. At the end of this third period, the star generates huge stellar winds that clear the gas and dust out of the accretion disk. However, during this third stage planets continue accreting. The stage can last from less than 1 million years up to maybe forty million. So planets that start the same way will end up in a variety of sizes, depending on how long the star takes to remove accretable material. The evidence is that our sun cleared out its accretion disk very early, so we have smaller than average planets.

So, would the regular stars have no planets? Not necessarily. If they formed giants, there would be no real selective depletion of specific elements, and a general depletion would simply register as the star not having had as much heavy material in the first place. The abundance of elements heavier than helium is called metallicity by astronomers, and it can vary between stars by a factor of at least 40, probably more. There may even be some first-generation stars out there with no heavy elements at all. So a star could have giant planets yet show no significant depletion of refractory elements. While Nibauer’s analysis is interesting, and even encouraging, it does not really eliminate more than a minority of the stars. If you are on a voyage of discovery, it remains something of a guess which stars are of particular interest.

New Ebook

Now on preorder, and available from July 15 at Amazon and most outlets that sell .epub files: Spoliation.

When a trial to cover up a corporate failure ends Captain Jonas Stryker’s career, he wants revenge against The Board, a ruthless, shadowy organization with limitless funds that employs space piracy, terrorism, and even weaponised asteroids. Posing as a space miner, Stryker learns that The Board wants him killed, while a young female SCIB police agent wants retribution against him for having her career spoiled at his trial. As Stryker avoids attempts on his life, he becomes the only chance to prevent The Board from overturning the Federation Government and imposing Fascist-style rule.

A story of greed, corruption and honour, combining science and visionary speculation that goes from the high frontier to outback Australia.

The complications involved in processing small asteroids mean they have to be moved to a central point. The background to this novel shows the science behind that, and also how to convert an asteroid into a weapon. You know what happened to the dinosaurs, so the weapon has punch.

Preorder at:

Amazon: https://www.amazon.com/dp/B097M95LCJ

Smashwords: https://www.smashwords.com/books/view/1090447

B&N: https://www.barnesandnoble.com/s/2940164941673

Apple: https://books.apple.com/us/book/x/id1574442266

Kobo: https://store.kobobooks.com/en-us/Search?Query=9781005532796

Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment, which Aristotle performed, was to throw a stone high into the air so that it reached its maximum height directly above him. When it dropped, it landed directly underneath, its path vertical. Aristotle recognised that if the Earth were rotating, the stone at that height should have been carried eastwards by its angular momentum, but it was not. Aristotle was a clever reasoner, but he was a poor experimenter, and he failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he knew its size; thanks to Eratosthenes this was a fairly accurate value. He had reasoned correctly why it was a sphere, namely that matter falls towards the centre. Accordingly, he should also have realised that his stone should fall slightly to the south. (He lived in Greece; if he had lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was put numbers on his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don’t believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
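Putting numbers on the reasoning makes the point. For a stone simply dropped from rest (the thrown-up-and-caught case differs in detail but is of similar size), the standard result for the eastward deflection on a rotating Earth is d = (1/3)·ω·g·t³·cos(latitude). A sketch, with the 20 m drop height being my assumed example:

```python
import math

# Eastward deflection of a stone dropped from rest at height h on a
# rotating Earth: d = (1/3) * omega * g * t**3 * cos(latitude),
# where t = sqrt(2h/g) is the fall time.

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s
G = 9.81          # gravitational acceleration, m/s^2

def eastward_deflection(height_m: float, latitude_deg: float) -> float:
    fall_time = math.sqrt(2.0 * height_m / G)
    return OMEGA * G * fall_time ** 3 * math.cos(math.radians(latitude_deg)) / 3.0

# A 20 m drop at roughly Athens' latitude (~38 N): about 1.5 mm east,
# far below anything Aristotle could have measured.
print(eastward_deflection(20.0, 38.0))
```

Millimetres over a 20 m drop is exactly why the experiment, done casually, “shows” the Earth does not rotate.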

He also concluded that Earth could not orbit the sun, and there was plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it. What you see is that the background appears to move, and it moves more the bigger the circle you walk, and less the further away the background objects are. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example: Aristotle reasoned that because air rises, the Universe must be full of air, and therefore if the Earth were rotating or orbiting the sun we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.

But had he done so, he would have had a further problem, because he had shown through another line of reasoning that Earth could not orbit the sun. As was “well known”, heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn’t, therefore Earth does not move. Further, you could see the tails of comets. They were moving, which proves the reasoning. Of course it doesn’t, because the tail always points away from the sun, and not behind the direction of motion at least half the time. This was a simple thing to check, and far easier to check than the other assumptions. Unfortunately, who bothers to check things that are “well known”? This shows a further aspect: a true proposition has everything relevant to it in accord with it. This is the basis of Popper’s falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one goes slower. That is obvious: while Earth takes a year to complete an orbit, Mars takes over two years to complete a cycle. So we had two theories that both gave the correct answer, but one had two assignable constants to explain each observation, while the other relied on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook “Athene’s Prophecy”, where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, “Legatus Legionis”) is to prove that the Earth moves, and we can do this with the tides. The part of the ocean closest to the external source of gravity falls sideways a little towards it; the part furthest away experiences more centrifugal force, which tries to throw the water away. The ancients may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook my Roman was with the invasion of Britain and hence had to study the tides to know when to sail, and there you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it and that the Earth is stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
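The two-bulge argument can be made quantitative. A sketch of the Moon’s differential (tidal) acceleration at the near and far points of the Earth, using standard values:

```python
# The Moon pulls the near side of Earth more strongly than the centre,
# and the centre more strongly than the far side. Relative to the
# centre, near-side water is pulled toward the Moon and far-side water
# is left behind, i.e. pushed outward - two bulges, and Earth's
# rotation carries any coastline through both each day.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22  # mass of the Moon, kg
D = 3.844e8        # Earth-Moon distance, m
R = 6.371e6        # Earth's radius, m

a_centre = G * M_MOON / D ** 2
a_near = G * M_MOON / (D - R) ** 2 - a_centre  # excess pull, near side
a_far = a_centre - G * M_MOON / (D + R) ** 2   # deficit, far side

# Both are positive (directed away from Earth's centre) and nearly
# equal, about 1.1e-6 m/s^2.
print(a_near, a_far)
```

The near-side and far-side terms come out almost identical, which is why the two daily tides are of comparable size.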

Venus with a Watery Past?

In a recent edition of Science magazine (372, p 1136-7) there is an outline of two NASA probes intended to determine whether Venus had water. One argument is that Venus and Earth formed from the same material, so they should have started off very much the same, in which case Venus should have had about the same amount of water as Earth. That logic is false because it omits the issue of how planets get water. However, the article argued that Venus would have had a serious climatic difference. A computer model showed that when a planet rotates very slowly, the near absence of a Coriolis force would mean that winds flow uniformly from equator to pole. On Earth, the Coriolis effect splits the lower atmosphere into three cells on each side of the equator: the tropical, subtropical and polar circulations. Venus would have had a more uniform wind pattern.

A further model then argued that massive water clouds would form, blocking half the sunlight, so that “in the perpetual twilight, liquid water could have survived for billions of years.” Since Venus gets about twice the light intensity Earth does, Venusian “perpetual twilight” would be a good sunny day here. The next part of the argument was that since water is considered to lubricate tectonic plates, Venus could then have had plate tectonics. Thus NASA has a mission to map the surface in much greater detail. That, of course, is a legitimate mission irrespective of the issue of water.

A second aim of these missions is to search for reflectance spectra consistent with granite. Granite is thought to be accompanied by water, although that correlation could be suspect because it is based on Earth, the only planet where granite is known.

So what happened to the “vast oceans”? The argument is that massive volcanism liberated huge amounts of CO2 into the atmosphere, “causing a runaway greenhouse effect that boiled the planet dry.” Ultraviolet light then broke down the water, producing hydrogen, which was lost to space. This is the conventional explanation for the very high ratio of deuterium to hydrogen in the atmosphere: water containing deuterium is heavier and has a slightly higher boiling point, so it would be the last to be “boiled off”. The effect is real, but it is a very small one, which is why a lot of water has to be postulated. The problem with this explanation is that while hydrogen easily gets lost to space, massive amounts of oxygen should have been retained. Where is it? Their answer: the oxygen would be “purged” by more ash. No mention of how.

In my ebook “Planetary Formation and Biogenesis” I proposed that Venus probably never had any liquid water on its surface. The rocky planets accreted their water through its binding to silicates, which helped cement aggregate together and get the planets growing. Earth accreted at a place that was hot enough during stellar accretion to form calcium aluminosilicates, which make very good cements and would have absorbed their water from the gas disk. Mars got less water because the material that formed Mars had been too cool to separate out aluminosilicates, so it had to settle for simple calcium silicate, which does not bind anywhere near as much water. Venus probably had the same aluminosilicates as Earth, but being closer to the star it was hotter, so less water bonded, and consequently there was less of the water-bearing aluminosilicate cement.

What about the deuterium enhancement? Surely that is evidence of a lot of water? Not necessarily. How did the gases accrete? My argument is that they accreted as solids, such as carbides, nitrides, etc., and the gases were liberated by reaction with water. Thus, on the road to making ammonia from a metal nitride:

M–N + H2O → M–OH + N–H; then M(OH)2 → MO + H2O, and this is repeated until ammonia is made. An important point is that one hydrogen atom is transferred from each molecule of water, while one is retained by the oxygen attached to the metal. Now, the bond between deuterium and oxygen is stronger than that between hydrogen and oxygen, the reason being that the lighter hydrogen atom has its bond vibrate more strongly. Therefore the deuterium is more likely to remain on the oxygen atom and end up in further water. This is known as the chemical isotope effect, and it is much more effective at concentrating deuterium than preferential evaporation is. Thus, as I see it, too much of the water was used up making gases, and eventually also making carbon dioxide. Venus may never have had much surface water.
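The enrichment such a chemical isotope effect can produce follows the Rayleigh fractionation equation. A minimal sketch; the fractionation factor alpha = 2 used below is purely illustrative, not a measured value for these reactions:

```python
# Rayleigh fractionation: if hydrogen is transferred alpha times
# faster than deuterium, the D/H ratio of the remaining water obeys
# R / R0 = f ** (1/alpha - 1), where f is the fraction of water left.

def residual_enrichment(f_remaining: float, alpha: float) -> float:
    """(D/H) of the residual water relative to its starting value."""
    return f_remaining ** (1.0 / alpha - 1.0)

# With an illustrative alpha of 2, reacting away 99% of the water
# (f = 0.01) enriches the residue tenfold; a roughly hundredfold
# enrichment like Venus' would need f around 1e-4 at this alpha.
for f in (0.1, 0.01, 1e-4):
    print(f, residual_enrichment(f, 2.0))
```

The point of the sketch is that a chemical isotope effect reaches large enrichments with far less starting water than the tiny boiling-point difference invoked in the conventional story.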

A New Way of Mining?

One of the bigger problems our economies face is obtaining metals. Apparently the price of metals used in lithium-ion batteries is soaring because supply cannot expand sufficiently, and there appears to be no way current methodology can keep up.

Ores are obtained by physically removing them from the subsurface, which tends to mean that huge volumes of overburden have to be removed. Global mining is estimated to produce 100 billion t of overburden per year, and that usually has to be carted somewhere else and dumped. This often leads to major disasters, such as the collapse of mine-tailings dams: Brazil has had at least two such collapses, which sent something like 140 million cubic meters of rubble moving and caused at least 256 deaths. The better ores are now worked out and we are resorting to poorer ores, most of which contain less than 1% of what you actually want. The rest, the gangue, is often environmentally toxic and is quite difficult to dispose of safely. The whole process is energy intensive; mining contributes about 10% of energy-related greenhouse gas emissions. Yet if we take copper alone, it is estimated that by 2050 demand will increase by up to 350%. The ores we know about are becoming progressively lower grade, and they are found at greater depths.

We have heard of the limits to growth. Well, mining increasingly looks like becoming unsustainable, but there is always the possibility of new technology to get the benefit from increasingly difficult sources. One such possible technique involves first injecting acid or another lixiviant into the rock to dissolve the target metal in the form of an ion, then using a targeted electric field to transport the metal-rich solution to the surface. This is a variant of a technique used to obtain metals from fly ash, sludge, etc.

The objective is to place an electrode either within or surrounding the ore, with the acid introduced from an external reservoir. A second reservoir holds a second electrode, carrying a charge opposite to that of the metal-bearing ion. In the textbooks the metal usually bears a positive charge, so you would make your reservoir electrode negative, but it is important to keep track of your chemistry. For example, if iron were dissolved in hydrochloric acid, the main ion would be FeCl4-, i.e. an anion.

Because transport occurs through electromigration, there is no need for permeability-enhancement techniques such as fracking. About 75% of copper ore reserves are copper sulphides that lie beneath the water table. The proposed technique was demonstrated on a laboratory scale with a powdered mix of chalcopyrite (CuFeS2) and quartz. A solution of ferric chloride was added, and a direct voltage of 7 V was applied to electrodes at opposite ends of a 0.57 m path, over which there was a potential drop of about 5 V, giving a maximal voltage gradient of 1.75 V/cm. The ferric chloride liberated copper as the cupric cation. The laboratory test extracted 57 weight per cent of the available copper from a 4 cm-wide sample over 94 days, although 80% of that was recovered in the first 50 days. The electric current decreased over the first ten days from 110 mA to 10 mA, suggestive of pore blocking. Computer simulations suggest that in the field, about 70% of the metal in a sample accessed by the electrodes could be recovered over a three-year period. The process would have the odd hazard: a 5 meter spacing between electrodes employed, in the simulation, a 500 V difference, and if the ore is several hundred meters down, this could require quite a voltage. Is this practical? I do not know, but it seems to me that at the moment the amount of dissolved material, the large voltages, the small areas and the time taken will count against it. On the other hand, the prices of metals are starting to rise dramatically. I doubt this will be a final solution, but it may be part of one.
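The voltage figures above can be sanity-checked with simple arithmetic. The numbers come from the text; the assumption of a uniform field between buried electrodes is mine (real geometries would be less favourable):

```python
# Average voltage gradients implied by the figures in the text,
# assuming (simplistically) a uniform field between the electrodes.

lab_drop_v = 5.0    # potential drop across the lab sample, V
lab_path_m = 0.57   # electrode separation in the lab, m
lab_gradient = lab_drop_v / lab_path_m  # ~8.8 V/m average
print(lab_gradient)

sim_drop_v = 500.0  # simulated field-scale potential difference, V
sim_path_m = 5.0    # simulated electrode spacing, m
sim_gradient = sim_drop_v / sim_path_m  # 100 V/m average
print(sim_gradient)

# Holding the simulated 100 V/m over a few hundred meters of ore
# would need tens of kilovolts, which is the hazard mentioned above:
print(sim_gradient * 300.0)  # 30000 V over a 300 m path
```

Note the simulation’s average gradient (100 V/m) is already well above the lab average, and scaling it to ore-body depths is where the “quite a voltage” problem bites.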

Some Lesser Achievements of Science

Most people probably think that science is a rather dull quest for the truth, best left to the experts, who are all out to find that truth. Well, not exactly. Here is a video link in which Sean Carroll points out that most physicists are really uninterested in understanding what quantum mechanics is about: https://youtu.be/ZacggH9wB7Y

This is rather awkward, because quantum mechanics is one of the two greatest scientific advances of the twentieth century, and here we find that all but a few of its exponents neither really understand what is going on nor care. They have a procedure by which they can get answers, so that is all that matters, is it not? Not in my opinion. Many of them are university teachers, and when they don’t care, that gets passed on to the students, so they don’t care either. The system is degenerating.

But, you protest, we still get the right answers. That leaves open the question: do we really? From my experience in chemistry, we know that the only theories required to explain chemical observations (apart from maybe what atoms are made of) are electromagnetic theory and quantum mechanics. Those in the know will point out that floods of computational papers are published, so surely we must understand? Not at all. Almost all the papers calculate something that is already known, and because the equations cannot be solved analytically, integrating them requires a number of assignable constants, which can be chosen so that the correct answers are obtained. Fortunately, for very similar problems the same constants will suffice. If you find that hard to believe, the process is called validation, and you can read about it in John Pople’s Nobel Prize lecture. Actually, I believe all the computations except those for the hydrogen molecule are wrong, because everybody uses the wrong wave functions, but that is another matter.

That scientists do not care about their most important theory is bad, but there is worse, as published in Nature (https://doi.org/10.1038/d41586-021-01436-7). Apparently, in 2005 three PhD students wrote, for amusement, a computer program called SCIgen. What this program does is write “scientific papers”. The research for them? Who needs that? It cobbles together words with random titles, text and charts, and the result is essentially nonsense. Anyone can generate them. (Declaration: I did not use this software for this or any other post!) While the original purpose was “maximum amusement” and papers were generated for conferences, because the software is freely available various people have sent the output to scientific journals, the peer review process failed to spot the gibberish, and the journals published them. There are apparently hundreds of these nonsensical papers floating around. Further, they can carry relatively “big names”, because apparently articles can get through under someone’s name without that person knowing anything about it. Why give someone else an additional paper? A big name is more likely to get through peer review, and the papers can be published with genuine references, although of course these have no relevance to the submission. The reason for doing this is simple: it pads the citation counts of the cited authors, which makes their CVs look better and improves their chances when applying for funds. With money at stake, it is hardly surprising this sort of fraud has crept in.

Another unsettling aspect of scientific funding has been uncovered (Nature 593: 490-491). Funding panels are more likely to award EU early-career grants to applicants connected to the panelists’ institutions; in other words, the panelists have a tendency to give the money to “themselves”. Oops. A study of the grants showed that “applicants who shared both a home and a host organization with one panellist or more received a grant 40% more often than average” and “the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but there seemed to be no discernible effect in physics and engineering.” Here, physics is clean! One explanation might be that the best applicants want to go to the most prestigious institutions. Maybe, but would that not apply to physics too? An evaluation to test for such bias in the life sciences showed that “successful and connected applicants scored worse on these performance indicators than did funded applicants without such links, and even some unsuccessful applicants.” You can draw your own conclusions, but they do not look good.

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us expect. The outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the expected dynamics. Further, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations. One is that there really is additional matter we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each option, although modified gravity at least has the fallback that we might simply not yet understand how gravity should be modified.

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. So far, however, all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies, and it is rather hard to do an experiment on a galaxy, so that is not exactly helpful. What physicists have done is guess what it might be, and, not surprisingly, make the guess in a form they can do something about if they are correct. We know it has to have mass, because it exerts a gravitational effect, and we know it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will be no dark matter stars. That is not much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, the energy can be shed as an emitted photon. One such detector comprised huge tanks containing 370 kg of liquid xenon, buried deep underground; in theory, massive particles of dark matter could be separated from occasional neutron events because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101, 2021) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet, they deposit energy as they scatter, and with the scattering eventually annihilate within it. This additional energy would be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between one and 55 times that of Jupiter; temperatures above those otherwise expected could be allocated to dark matter. Further, since dark matter density should be higher near the galactic centre, and collisional velocities higher there, the difference in surface temperature between comparable planets may signal the detection of dark matter.

Can you see problems? To me, the flaw lies in “what is expected?” One issue is getting sufficient accuracy in the infrared detection: gravitational collapse gives off excess heat, and once a planet gets to about 16 Jupiter masses it starts fusing deuterium. Another lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if it had accreted additional material from a later supernova, the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way, irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), that suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. No doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, since so much has failed by assuming large particles, I would suggest one plausible approach is to look for small ones. Are there any unexplained momenta in collisions at the Large Hadron Collider? What most people overlook is that about 99% of the data generated is trashed (because there is so much of it), but would it hurt to spend just a little effort examining the fine detail for that which you do not expect to see?

Geoengineering – to do or not to do?

Climate change remains an uncomfortable topic. Politicians continue to state that it is an important problem, and then fail to do anything sufficient to solve it. There seems to be an idea amongst politicians that if everyone drove electric vehicles, all would be well. Leave aside the question of whether, over the life of a vehicle, an electric vehicle actually emits less greenhouse gas (at best it is such a close call that it has little hope of addressing the problem); as noted in my post https://wordpress.com/post/ianmillerblog.wordpress.com/885, the world cannot sustain the necessary extractions to make it even vaguely possible. If that is the solution, why waste the effort – we are doomed.

As an article in Nature (vol 593, p 167, 2021) noted, we need to evaluate all possible options. As I have remarked in previous posts, it is extremely unlikely there is a silver bullet. Fusion power would come rather close, but we would still have to do a number of other things, such as enabling transport, and as yet we do not have fusion power. So what the Nature article says is that we should at least consider, and properly analyse, the consequences of geoengineering. The usual answer here is: horrors, we can’t go around altering the planet’s climate. But the fact is we already have. What do you think those greenhouse gases are doing?

The problem is that while the world has pledged to reduce emissions by 3 billion t of CO2 per year, even if this is achieved, and that is a big if, it remains far too little. Carbon capture will theoretically solve some of the problems, but it costs so much money for no benefit to the saver that you should not bet the house on that one. The alternative, as the Nature article suggests, is geoengineering. The concept is to raise the albedo of the planet, which reflects light back to space. The cooling effect is known: it happens after severe volcanic eruptions.

The basic concept of sending reflective stuff into the upper atmosphere is that it is short-term in nature, so if you get it wrong and there is an effect you don’t like, it does not last all that long. On the other hand, it is also a rapid fix and you get relatively quick results. That means provided you do things with some degree of care you can generate a short-term effect that is mild enough to see what happens, and if it works, you can later amplify it.

The biggest problem is the so-called ethical one: who decides how much cooling, and where do you cool? The article notes that some are vociferously opposed to it because “it could go awry in unpredictable ways”. It could be unpredictable, but a bigger problem would be that the effect turned out to be too small. Another listed reason to oppose it was that it would detract from efforts to reduce greenhouse emissions. The problem here is that China and many other places are busy building new coal-fired electricity generation. Exactly how do you reduce emissions when so many places are busy increasing them? Then there is the question: how do you know what the effects will be? The answer is that you carry out short-term, mild experiments so you can find out without any serious damage.

The other side of the coin is that even if we stopped emissions right now, the existing levels would continue to heat the planet, and nobody knows by how much. The models are simply insufficiently definitive. All we know is that the ice sheets are melting, and when they go, much of our prime agricultural land goes with them. Then there is the question of governance. One proposal to run small tests in Scandinavia ran into opposition from a community that protested that the experiments would be a distraction from other reduction efforts. Some people seem to think that with just a little effort this problem will go away. It won’t. One of the stated reasons for obstructing the research is that the project would affect the whole planet. Yes, well, so does burning coal in thermal generators, but I have never heard of the rest of the planet being consulted on that.

Is it a solution? I don’t know. It most definitely is not THE solution, but it may be the only one that acts quickly enough to compensate for a general inability to get moving, and in my opinion we badly need experiments to show what can be achieved. I understand there was once such an experiment, although not an intentional one. Following the grounding of aircraft over the US after the Twin Towers attack, I gather temperatures over the following two days went up by over a degree, because the ice particles from jet exhausts were no longer being generated. The advantage of an experiment using ice particles in the upper atmosphere is that you can measure what happens quickly, but the effect also quickly goes away, so there will be no long-term damage. So, is it possible? My guess is that technically it can be managed, but the practical issues of getting general consent to implement it will take so long that it becomes more or less irrelevant. You can always find someone who opposes anything.

Ebook discount: Athene’s Prophecy

From May 20 – May 27, Athene’s Prophecy will be discounted to 99c/99p on Amazon and Amazon UK. Science fiction with some science you can try your hand at. The young Roman Gaius Claudius Scaevola receives a vision from Pallas Athene, who asks him to do three things so he can save humanity from total destruction. He must learn the art of war, he must make a steam engine, and while using only knowledge available in the first century he must show the Earth goes around the sun. Can you do it? Try your luck. I suspect you will fail, and to stop cheating, the answer is in the following ebook. 

Scaevola is in Egypt for the anti-Jewish riots, then he is sent to Syria as Tribunus laticlavius in the Fulminata. Soon he must prevent a rebellion when Caligulae orders a statue of himself to be placed in the temple of Jerusalem. You will get a different picture of Caligulae from the one you normally see, supported by a transcription of a report of the critical meeting regarding the statue by Philo of Alexandria. (Fortunately, copyright has expired.) First of a series. http://www.amazon.com/dp/B00GYL4HGW

Fighting Global Warming Naturally?

In a recent edition of Nature (593, pp 191–4) it was argued that in combating global warming, removing carbon from the atmosphere, which is often the main focus, should instead be reviewed in light of how to lower global temperatures. The authors note that the goal of limiting warming to two degrees Celsius above pre-industrial times cannot be met solely through cuts to emissions, so carbon dioxide needs to be removed from the atmosphere. The rest of the article more or less focused on how nature could contribute to that, which was a little disappointing bearing in mind they had made the point that this was not the main objective. Anyway, they went on to claim that nature-based solutions could lower the temperature by a total of 0.4 degrees by 2100. Then came the caveats: plants become less effective at absorbing carbon dioxide if temperatures rise too high for them.

Some statistics: 70% of Earth’s land surface has been modified by humanity; since 1960 we have modified 20% of it. The changes since 1960 include (in millions of square kilometres): urban area +0.26; cropland +1.0; pasture +0.9; forestry −0.8.

The proposal involves three routes. The first is to protect current ecosystems, which includes stopping deforestation. This is obvious, but the behaviour of current politicians, such as those in Brazil, suggests the appropriate comment is, “Good luck with that one.” Protection might be achieved in some Western countries, but then again they have largely cut their forests down already. If we are going to rely on this route, we have a problem.

The second is to restore ecosystems so they can absorb more carbon. Restoration of forest cover is an obvious place to start. However, the authors claim plantation forests do not usually give the same benefits as natural forest. Natural forest has very dense undergrowth, and unless there are animals to eat it, you may simply be generating fuel for forest fires, which wipe out all progress. Wetlands are particularly desirable because they are great at storing carbon and, once underway, they act rather quickly. However, again there is a problem. Wetlands, once cleared and drained, are somewhat difficult to restore, because the land tends to have been altered in ways designed to stop it reverting, so besides stopping the current use, the alterations have to be undone.

Notwithstanding the difficulties, there is also a strong reason to create reed beds that also grow algae: they take up nitrogen and phosphate from the water. Taking up ammoniacal wastes is particularly important because if part of the nitrogen waste is oxidized, or nitrates have been used as fertilizer, ammonium nitrate can decompose to form nitrous oxide. This is a further greenhouse gas, and a particularly awkward one because it absorbs in quite a different part of the infrared spectrum, and it has no simple decay route, nor is there anything that will absorb it. Consequently, what we put in the air could be there for some time.

The third is to improve land management, for timber, crops and grazing. Thus growing a forest along the borders of a stream should add carbon storage, and also reduce flooding and enhance fish life. An important point is that slowing runoff also helps prevent soil loss. All of this is obvious, except, it seems, to officials. In Chile, the government apparently gave out subsidies for pine and eucalyptus planting that led to 1.3 million hectares being planted. What actually happened was that the land so planted had previously been occupied by original forest, and it is estimated that the overall effect was to emit 0.05 million t of stored carbon rather than sequester the 5.6 million t claimed. A particular piece of “politically correct” stupidity occurred here. The Department of Conservation, being green inclined, sent an electric car for its people to drive around Stewart Island, the small southern island of New Zealand. The island has too small a population to warrant a cable connecting it to the national grid, so all its electricity is generated by burning diesel!
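For scale, the Chilean numbers can be laid out as simple back-of-envelope arithmetic. The figures are the ones quoted above; the variable names are mine, and this just makes explicit how far the real outcome diverged from the claim:

```python
# Back-of-envelope accounting for the Chilean plantation subsidies.
# Units: millions of tonnes of stored carbon, figures as quoted above.
claimed_sequestration = 5.6   # carbon the planting was claimed to store
actual_emission = 0.05        # carbon the converted land is estimated to have emitted

# The real outcome is worse than the claim by the foregone storage
# plus the extra emission.
shortfall = claimed_sequestration + actual_emission
print(f"outcome is {shortfall:.2f} million t of carbon worse than claimed")  # 5.65
```

In other words, replacing native forest with subsidised plantations did not merely fall short of the claimed sequestration; it moved the ledger in the wrong direction entirely.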

The paper claims it is possible to remove 10 billion tonnes of CO2 fairly quickly, and 20 billion tonnes by 2055. However, my feeling is that this is a little like dreaming, because I assume it requires stopping the Amazon burn-offs and the like. On the other hand, even if it does not work out exactly right, it still has benefits. Stopping flooding and erosion while having a more pleasant environment and better farming practices might not save the planet, but it might make your local bit of it better to live in.