Climate Change: A Space Solution?

By now, most people around the world will have realized we are experiencing climate change, thanks to our predilection for burning fossil fuels. The politicians have made their usual platitudinous statements that this problem will be solved, say, twenty years out. It is now thirty years since these statements started being made, and we find ourselves worse off than when the politicians started. Their basic idea seems to be that the crisis does not become unmanageable for, say, sixty years, so it can be left for now. What actually happens is, er, nothing in the immediate future; it can be left for the politicians thirty years from now. Then, when the thirty years have passed, it is suddenly discovered that it is all a little harder than expected, but they can introduce things like carbon trading, which employs people like themselves, and they can exhort people to buy electric cars. (If you live somewhere like Calgary and want to go skiing at Banff, it appears you need to prepare your car four hours before using it, or maintain battery warmers, because the batteries do not like the cold one bit.)

Bromley et al. in PLOS Climate have a solution. To overcome the forcing of the greenhouse gases currently in the atmosphere, according to this article, all you have to do is reduce the solar input by 1.8%. What could be simpler? This might even be easier than increasing the albedo.
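As a back-of-envelope check on that 1.8% figure, we can ask how much radiative forcing such a cut would offset. The solar constant, the 1/4 geometric averaging factor and Earth's Bond albedo are standard textbook values; the comparison is my own rough sketch, not a calculation from the paper.

```python
# Rough check: average forcing removed by cutting solar input 1.8%.
S0 = 1361.0          # solar constant, W/m^2 (standard value)
albedo = 0.3         # Earth's Bond albedo (standard value)
reduction = 0.018    # the 1.8% shading proposed in the paper

# Average absorbed solar flux per square metre of Earth's surface:
# divide by 4 (sphere vs disk) and remove the reflected fraction.
absorbed = S0 * (1 - albedo) / 4
offset = absorbed * reduction

print(f"absorbed flux:  {absorbed:.0f} W/m^2")
print(f"forcing offset: {offset:.1f} W/m^2")
```

The result, around 4 W/m², is of the same order as the commonly quoted forcing from a doubling of CO2, so the 1.8% figure is at least plausible.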

The question then is, how to do this? The proposed answer is to take fine fluffy dust from the Moon and propel it to the Earth-Sun L1 position. This will provide several days of shading, while the solar wind and radiation slowly clear the dust away. How much dust? About ten billion kg per year, which is about a thousand times more mass than humans have ever sent into space. Over a ten-year period, this corresponds to a sphere of radius roughly 200 m, comparable to the annual excavation from many open-pit mines on Earth. The advantage of using the Moon, of course, is that the gravitational force is about 17% that of Earth's, so you need much less energy to eject the dust. The difficulty is that you have to put sufficient equipment on the Moon's surface to gather and eject the dust. One difficulty I see here is that while there is plenty of dust on the Moon, it is not in a particularly deep layer, which means the equipment has to keep moving. Larger fluffy particles are apparently preferred, but fluffy particles would probably be formed in a fluid eruption, and as far as we know, that is less likely on the Moon.
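That "sphere of radius roughly 200 m" claim is easy to check. The ten billion kg per year is from the article; the grain density of about 3000 kg/m³ (solid lunar rock, rather than fluffed-up regolith) is my assumption.

```python
import math

# Sanity check: ten years of dust at 1e10 kg/yr as one equivalent sphere.
mass_per_year = 1e10     # kg, from the article
years = 10
density = 3000.0         # kg/m^3, assumed solid-grain density of lunar rock

volume = mass_per_year * years / density            # total volume, m^3
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)    # sphere of that volume
print(f"equivalent sphere radius: {radius:.0f} m")
```

With those inputs the radius comes out at essentially 200 m, matching the article's figure.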

Then there are problems. The most obvious one, apart from the cost of the whole exercise, is the need for accuracy. If the dust lies outside the lines drawn from the edges of the Sun to Earth, the scattering can actually increase the solar radiation reaching Earth. Oops. Then there is another problem. Unlike L4 and L5, which are regions, L1 really is a point where an object will corotate. If a particle is even 1 km off the point, it could drift away by up to 1000 km in a year, and if it does that, perforce it will drift out of the Sun-Earth line, in which case the dust will be enhancing the illumination of Earth. Again, oops. Added to this are a number of further effects, the most obvious being the solar wind and radiation pressure, which will push objects away from L1.
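The 1 km growing to 1000 km in a year implies a characteristic growth time, if we assume (my assumption, the standard picture for small displacements from an unstable equilibrium) that the drift grows exponentially:

```python
import math

# A thousand-fold growth in displacement over one year implies an
# e-folding time of year / ln(1000), under an exponential-growth model.
growth = 1000.0 / 1.0        # 1 km -> 1000 km
days_per_year = 365.25
tau = days_per_year / math.log(growth)
print(f"e-folding time: {tau:.0f} days")
```

An e-folding time of under two months shows why any dust placed at L1 would need continual replenishment or active station-keeping.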

The proposed approach is to launch dust at 4.7 km/s towards L1, and to do it from the Moon when the Moon is close to being in line, so that the dust, as it streams towards L1, continues to provide shielding while in flight. The launching would require about 10^17 J, which is roughly the energy generated by a few square kilometres of solar panels. One of the claimed advantages is that the dust could be sent in pulses, timed to cool places with major heat problems. It is probably unsurprising that bigger particles are less efficient at shading sunlight, at least per unit mass, simply because the mass behind the front surface does nothing. Particles that are too small neither last very long in the required position nor offer as much shielding. As it happens, somewhat fortuitously, the best size is 0.2 μm, which happens to be the average size of lunar regolith dust.
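The 10^17 J figure checks out as simple kinetic energy at the quoted launch speed, applied to the annual dust mass:

```python
# Kinetic energy to launch one year's dust at the quoted speed.
mass = 1e10          # kg, annual dust mass from the article
v = 4700.0           # m/s, quoted launch speed
energy = 0.5 * mass * v**2
print(f"kinetic energy: {energy:.2e} J")
```

This gives about 1.1 × 10^17 J, consistent with the article's number (and, as always with launch systems, it is a lower bound: no launcher converts input energy into projectile kinetic energy with perfect efficiency).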

One of the advantages claimed for this method is that once the week or so is over, there are no long-term consequences from the dust. One of the disadvantages is the one that goes for any planetary engineering proposal: what is the minimum agreement required from the world's population, how do you get it, and what happens if someone does it anyway? Would you vote for it?


To the Centre of the Earth

One of the interesting comments in a recent Physics World is that we do not know much about what the middle of our planet is like. Needless to say, that is hardly surprising – we cannot exactly go down there and fossick around. Not that you would want to. The centre of the Earth is about 6,000 km down. About half-way down (3,000 km) we run into a zone that is believed to be molten iron. That, perforce, is hot. Then, further down, we find a solid iron core. You might wonder why it would be solid underneath the liquid. I shall come to that, but here is a small test if you are interested: try to work out the answer before you reach mine below.

In the meantime, how do we get any information about what is going on down there? Basically, from earthquakes. What earthquakes do is send extremely powerful shock waves through the planet. These are effectively sound waves, although the frequency may not be in the hearing range. What we get are two wave velocities, compression and shear, and from these we can estimate the density of the materials and isolate where there is a division between layers. That works because if we have a boundary with a different composition on each side, waves will travel at different velocities through the two materials. If there is a reasonably sharp boundary, the waves striking it are either transmitted or reflected, according to the velocities of sound in each medium, while the velocity of a shear wave is proportional to the square root of the ratio of the shear modulus to the density. Now, as you can see, by obtaining shear and compression velocities we can sort out what is going on, again assuming a sharp boundary. Boundaries between different phases, such as solid and liquid, are usually sufficiently sharp. However, because of the number of phases, and the fact that we get reflections and transmissions at each boundary, there is more than a little work required to sort out what is going on from the wave patterns. To add to the problem, the waves take multiple routes, and therefore arrive at multiple times, and earthquakes are notorious for going on for some time.
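The velocity-density relation mentioned above is simply v = √(μ/ρ), where μ is the shear modulus and ρ the density. A quick illustration with typical textbook figures for upper-mantle rock (my assumed values, not data from the article):

```python
import math

# Shear-wave speed from the shear modulus and density: v = sqrt(mu / rho).
shear_modulus = 70e9     # Pa, assumed typical value for upper-mantle rock
density = 3300.0         # kg/m^3, assumed upper-mantle density

v_s = math.sqrt(shear_modulus / density)
print(f"shear wave speed: {v_s / 1000:.1f} km/s")
```

The result, about 4.6 km/s, is in the right range for shear waves in the upper mantle; note also that a liquid has zero shear modulus, which is exactly why shear waves vanishing at a boundary signals a molten layer.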

Anyway, what has happened is that the physicists have worked out what these wave patterns should look like, and what we see is not quite what we expected from a nickel-iron core. Basically, the core is not quite as dense as expected. That means there must be something else there, which raises the question: what is it? It also raises the question of whether the expectations are realistic.

This question arises from the fact that the temperatures and pressures at the centre of the Earth give materials unfamiliar properties. We can make a good estimate of the pressure, because that is the weight of the rock etc. above a point, and we know the mass of Earth. The temperature we can only really guess. The pressure at the surface of Earth is about 100,000 pascals. The pressure at the centre of Earth is about 364 GPa, or over 3.5 million times greater. If you did go there, you would be squashed. To give you an idea, the density of iron at the surface is a little over 7.87 times that of water. The density of iron at core pressure is 13.87 times that of water, which is about 57% of the volume for the same mass. When iron was squeezed in a diamond anvil to a similar density, it was found that, compared with the laboratory sample, the Earth's core's compressional sound velocity was 4% slower, and the shear velocity about 36% slower. They therefore concluded that the inner core contains lighter elements, such as about 3% silicon and 3% sulphur.
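The arithmetic in that paragraph is worth checking explicitly; all the input numbers here come straight from the text.

```python
# Pressure ratio between Earth's centre and its surface.
surface_p = 1.0e5        # Pa, surface pressure from the text
core_p = 364e9           # Pa, central pressure from the text
ratio = core_p / surface_p
print(f"pressure ratio: {ratio:.2e}")          # over 3.5 million times

# Iron's volume change under core pressure, from the quoted densities.
rho_iron = 7.87          # relative to water, ambient conditions
rho_core = 13.87         # relative to water, at core pressure
volume_fraction = rho_iron / rho_core
print(f"volume fraction: {volume_fraction:.0%}")
```

Both the "over 3.5 million times" and the "about 57% of the volume" figures check out.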

Which raises the question: why those elements? The authors say these elements came through the growth of the inner core from the outer core. There is no real way of knowing, but for those who follow the mechanism of planetary formation outlined in my ebook “Planetary Formation and Biogenesis”, other possible elements are nitrogen and carbon. The reason lies in the problem of how the metal core separated out under the huge pressures, which slow separation greatly. My answer is that the metals separated out in the accretion disk, and the iron-cored meteorites we see now are residues of that process. The nickel-iron arrived pre-separated and so was easier to separate out. At the same time, the temperatures were ideal for making iron nitride and iron carbide contaminants.

Now, why is the core solid? The answer comes from how a liquid works. To be a liquid, it has to flow. Heat is simply random kinetic energy, and in a liquid, when a molecule strikes another it slips past it, so there is no fixed structure. When you cool a liquid at atmospheric pressure, the molecules form interactions that hold them in a configuration where they do not slip past each other; hence they form a crystal. However, at the extreme pressures of the Earth's centre, the reason for a solid is quite different: the atoms do not slip past each other because there is simply not enough room. They cannot push anything out of the way because there is nowhere for it to go.

More on Disruptive Science

No sooner do I publish a blog post on disruptive science than what does Nature do? It publishes an editorial questioning whether science really is becoming less disruptive (which is fair enough) and then, rather bizarrely, whether it matters. Does the journal not actually want science to advance, but merely to restrict itself to the comfortable? Interestingly, the disruptive examples it cites come from Crick (1953, the structure of DNA) and the first planet found orbiting another star. In my opinion, these are not disruptive. In Crick's case, he merely used Rosalind Franklin's data, and in the second case, such a discovery had been expected for years; indeed, I had seen a claim about twenty years earlier for a Jupiter-style planet around Epsilon Eridani. (Unfortunately, I did not write down the reference because I was not working in that area yet.) That result was rubbished because it was claimed the data were too inaccurate, yet the values I wrote down comply quite well with what we now accept. I am always suspicious of discounting a result when it not only got a good value for the semimajor axis but also proposed a significantly eccentric orbit. For me, these two papers are merely obvious advances on previous theory or logic.

The test proposed by Nature for a disruptive paper is based on citations, the idea being that once a disruptive paper is cited, its predecessors become less likely to be cited. If a paper is consolidating, the previously disruptive papers continue to be cited. If this were the criterion, probably one of the most disruptive papers would be the one on the EPR paradox (Einstein, A., Podolsky, B., Rosen, N. 1935. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47: 777-780.) Yet the remarkable thing about this paper is that people fall over themselves to point out that Einstein “got it wrong”. (That they do not actually answer Einstein’s point seems to be irrelevant to them.)
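The citation-based measure described above can be sketched in a few lines. This is a simplified toy version of the disruption index used in this literature (the full index also counts papers that cite only the predecessors; I leave that term out for brevity), and the citation data below is invented purely for illustration.

```python
# Toy disruption score: papers citing the focal work but none of its
# references push the score toward +1 (disruptive); papers citing both
# the focal work and its references push it toward -1 (consolidating).
def disruption_score(focal_refs, citing_papers):
    """citing_papers: reference sets of the papers that cite the focal work."""
    n_i = sum(1 for refs in citing_papers if not (refs & focal_refs))
    n_j = sum(1 for refs in citing_papers if refs & focal_refs)
    return (n_i - n_j) / len(citing_papers)

focal_refs = {"A", "B"}                    # the focal paper's own references
# Four later papers cite the focal paper; only one also cites its sources.
later = [{"C"}, {"D"}, {"A", "E"}, set()]
print(disruption_score(focal_refs, later))  # 0.5: mostly disruptive
```

On this toy data the focal paper scores 0.5, i.e. it has largely displaced its predecessors in the citation record, which is exactly the pattern the measure treats as "disruptive".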

Nature spoke to a number of scholars who study science and innovation. Some were worried by Park’s paper, one of the worries being that declining disruptiveness could be linked to the sluggish productivity and economic growth seen in many parts of the world. Sorry, but I find that quite strange. It is true that an absence of discoveries is not exactly helpful, but economic use of a scientific discovery usually takes decades after the discovery. There is prolonged engineering, and if the product is novel, a market for it has to be developed. Then it usually has to displace something else. Very little economic growth follows quickly from scientific discovery. No need to go down this rabbit hole.

Information overload was considered another reason, and it was suggested that artificial intelligence could sift and sort useful information to identify projects with breakthrough potential. I completely disagree, at least regarding disruption. Anyone who has done a computer search of scientific papers will know that unless you have a very clear idea of what you are looking for, you get a bewildering amount of irrelevant material. Thus, if I want to know the specific value of some measurement, a computer search will give me in seconds what previously could have taken days. But if the search constraints are abstract, almost anything can come out, including erroneous material, examples of which were in my previous post. The computer, so far, cannot make value judgments because it has no criteria for doing so. What it will do is comply with established thinking, because established thinking sets the constraints for the search. Disruption is something you did not expect. How can a computer search for what is neither expected nor known? Particularly since the unexpected is usually mentioned as an uncomfortable aside in papers and does not appear in abstracts or keywords. The computer would have to understand the entire subject thoroughly to appreciate the anomaly, and artificial intelligence is still a long way from that.

In a similar vein, Nature published a news item dated January 18. Apparently, people have been analysing advertisements and have come across something both remarkable and depressing: there are hundreds of advertisements offering authorship of a scientific paper in a reputable journal for sale. Prices range from hundreds to thousands of US dollars depending on the research area and the journal’s prestige, and the advertisements often cite the title of the paper, the journal, when it will be published (how do they know that?) and the position of the authorship slots on offer. This is apparently a multimillion-dollar industry. Interestingly, advertising that specifies a title in a named journal immediately raises suspicion, and a number of papers have been retracted. Another flag is when further authors are added after peer review; if those authors had actually contributed to the paper, they would have been known at the start. The question then is, why would anyone pay good coin for that? Unfortunately, the reason is depressingly simple: you need more citations to get more money, promotion, prizes, tenure, etc. It is a scheme to make money from those whose desire for position exceeds their skill level. And it works because nobody ever reads these papers anyway. The chances of being asked by anyone for details are so low it would be extremely unlucky to be caught out that way. Such an industry, of course, will be anything but disruptive. It only works as long as nobody with enough skill to recognize an anomaly actually reads the papers, because then the paper would become famous, and thoroughly examined. The industry works because counting citations, without understanding the content, is the method of evaluating science. In short, evaluation by ignorant committee.