Galactic Collisions

As some may know, the Milky Way galaxy and the Andromeda galaxy are approaching each other and will “collide” in something like 4 – 5 billion years. If you were a good distance away, say in one of the Magellanic Clouds, this would look really spectacular, but what if you were on a planet like Earth, right in the middle of it, so to speak? Probably not much different from what we see now. There would be rather more stars in the sky (and noticeably more through a good telescope) and there might be enhanced light from dust clouds, but basically a galaxy is mostly empty space. As an example, light takes eight minutes and twenty seconds to get from the sun to Earth, while light from the nearest star takes over four years to get here. Stars are well spaced.

As we understand it, stars orbit the galactic centre. The orbital velocity of our sun is about 828,000 km/hr, a velocity that makes our rockets look like snails, yet it takes something like 230,000,000 years to make one orbit, and we are only about half-way out. As I said, galaxies are rather large. So when the galaxies merge, there will be stars going in a lot of different directions until things settle down. There is a NASA simulation in which, over billions of years, the two pass through each other, throwing “stuff” out into intergalactic space, then turn around and repeat the process, except this time the centres merge and a lot more “stuff” is thrown out. The “stuff” here means clusters of stars: hundreds of millions of stars get thrown out into space, many of which turn around and come back, eventually to join the new galaxy.
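Those two numbers are easy to cross-check. Here is a minimal sketch, assuming the sun sits about 8 kiloparsecs from the galactic centre (a commonly quoted figure, not taken from the text above) and treating the orbit as a circle:

```python
# Rough cross-check of the sun's galactic orbital period from its speed.
# The galactocentric radius of ~8 kpc is an assumed, commonly quoted value.
import math

v_km_per_hr = 828_000.0                  # orbital speed quoted above
v_km_per_s = v_km_per_hr / 3600.0        # about 230 km/s

kpc_in_km = 3.086e16                     # kilometres in one kiloparsec
r_km = 8.0 * kpc_in_km                   # assumed galactocentric radius

circumference_km = 2.0 * math.pi * r_km
period_s = circumference_km / v_km_per_s
period_yr = period_s / (3600.0 * 24.0 * 365.25)

print(f"orbital speed: {v_km_per_s:.0f} km/s")
print(f"orbital period: {period_yr:.2e} years")   # roughly 2 x 10^8 years
```

The answer comes out at roughly 210 million years, consistent with the figure above.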

Because of the distances between stars, the chance of two stars colliding is pretty close to zero; however, it is possible that a star might pass by close enough to perturb planetary orbits. It would have to come quite close to affect Earth: if it came only as close as Saturn, it would make just a minor perturbation to Earth’s orbit. On the other hand, a star that close could easily disrupt the Kuiper Belt and rain a storm of comets down on the inner solar system, which could lead to extinction-level impacts. As for the giant planets, the effect would depend on where they were in their orbits. A star coming that close could be travelling at such a speed that if Saturn were on the far side of the sun at the time, it might hardly notice the passage.

One interesting point is that such a galactic merger has already happened to the Milky Way. In the Milky Way, the sun and the majority of stars are in orderly, near-circular orbits around the centre, but in the outer zones of the galaxy there is what is called the halo, in which many of the stars orbit in the opposite direction. A study was made of the halo stars directly out from the sun, and it was found that a number of them have strong compositional similarities to each other, suggesting they formed in the same environment, yet that environment was not the one that produced the rest of the Milky Way’s stars, which was not expected. (Apparently how active star formation is alters stellar composition slightly; these stars are roughly similar in composition to those in the Large Magellanic Cloud.) This suggests they formed from different gas clouds, and their ages run from about 13 down to 10 billion years. Further, it turned out that the majority of the stars in this part of the halo appeared to have come from a single source, and it was proposed that this part of our galaxy’s halo largely comprises stars from a smaller galaxy, about the size of the Large Magellanic Cloud, that collided with the Milky Way about ten billion years ago. There were no comments on other parts of the halo, presumably because parts on the other side of the galactic centre are difficult to see.

It is likely, in my opinion, that such stars are not restricted to the halo. One example might be Kapteyn’s Star. This is a red dwarf about thirteen light years away and receding. It, too, is going “the wrong way”, and is about eleven billion years old. It is reputed to have two planets in the so-called habitable zone (reputed because they have not been confirmed), and it is of interest because, if the star is going the wrong way as a consequence of a galactic merger, the survival of its planetary system suggests that the probability of running into another system closely enough to disrupt planetary orbits is reasonably low.

A Planet Destroyer

Probably everyone now knows that there are planets around other stars, and planet formation may very well be normal around developing stars. This, at least, takes such alien planets out of science fiction and into reality. In the standard theory of planetary formation, the assumption is that dust from the accretion disk somehow turns into planetesimals, which are objects of about asteroid size, and then mutual gravity brings these together to form planets. A small industry has sprung up in the scientific community doing computerised simulations of this sort of thing, with the output of a very large number of scientific papers, which results in a number of grants to keep the industry going, lots of conferences to attend, and a strong “academic reputation”. The mere fact that nobody knows how the planetesimals get to that initial position appears to be irrelevant, and this is one of the things I believe is wrong with modern science. Because those who award prizes, grants, promotions, etc., have no idea whether the work is right or wrong, they look for productivity. Lots of garbage usually defeats something novel that the establishment does not easily understand, or is not prepared to give the time to try to.

Initially, these simulations predicted solar systems similar to ours in that there were planets in circular orbits around their stars, although most simulations actually produced a different number of planets, usually more in the rocky planet zone. The outer zone has been strangely ignored, in part because simulations indicate that, because of the greater separation of planetesimals, everything out there is extremely slow. The Grand Tack simulations indicate that planets cannot form much further than about 10 A.U. from the star. That is demonstrably wrong, because giants larger than Jupiter and very much further out are observed. What some simulations have argued for is that planetary formation activity is limited to around the ice point, where the disk was cold enough for water to form ice, and this led to Jupiter and Saturn. The idea behind the Nice model, or the Grand Tack model (which is very close to being the same thing), is that Uranus and Neptune formed in this zone and moved out by throwing planetesimals inwards through gravity. However, all the models ended up with planets in near-circular motion around the star, because whatever happened was happening more or less equally at all angles to some fixed background. The gas was spiralling into the star, so there were models where the planets moved slightly inwards, and sometimes outwards, but with one exception there was never a directional preference. That one exception was when a star came by too close – a rather uncommon occurrence.

Then we started to see exoplanets, and there were three immediate problems. The first was the presence of “star-burners”: planets incredibly close to their star, so close they could not have formed there. Further, many of them were giants, and bigger than Jupiter. Models soon came out to accommodate this through density waves in the gas. On a personal level, I always found these difficult to swallow, because the very earliest such models calculated the effects as minor, and there were two such waves that tended to cancel out each other’s effects. That calculation was made to show why Jupiter did not move, which, for me, raises the problem: if Jupiter did not move, why did the others?

The next major problem was that giants started to appear in the middle of where you might expect the rocky planets to be. The obvious answer was that they moved in and stopped, but that raises the question, why did they stop? If we go back to the Grand Tack model, Jupiter was argued to migrate in towards Mars, and while doing so to throw a whole lot of planetesimals out; then Saturn did much the same; then for some reason Saturn turned around and began throwing planetesimals inwards, Jupiter continued the act, and both moved out. One answer to our question might be that Jupiter ran out of planetesimals to throw out and stopped, although it is hard to see why. The reason Saturn began throwing planetesimals in was that Uranus and Neptune started life just beyond Saturn and moved out to where they are now by throwing planetesimals in, which fed Saturn’s and Jupiter’s outwards movement. Note that this depends on a particular starting point, and since planetesimals are supposed to collide and form planets, it is not clear to me why, if there was the equivalent of the masses of Jupiter and Saturn in planetesimals available to throw around, they did not simply form another planet.

The final major problem was that we discovered that the great bulk of exoplanets, apart from those very close to the star, have quite significantly elliptical orbits. In an elliptical orbit, the planet is closer to the star and moves faster at one end of the major axis than at the other; there is a directional preference. How did that come about? The answer appears to be simple. A circular orbit arises from a large number of small interactions that have no particular directional preference. Thus the planet might form by collecting a huge number of planetesimals, or a large amount of gas, and these collections occur more or less continuously as the planet orbits the star. An elliptical orbit occurs if there is one very big impact or interaction. What is believed to happen is that as planets grow, if they get big enough their mutual gravity alters their orbits, and if two come quite close they exchange energy: one goes outwards, usually leaving the system altogether, and the other moves towards the star, or even into the star. If it comes close enough to the star, the star’s tidal forces circularise the orbit and the planet remains close to the star, and if it is moving prograde, like our Moon, the tidal forces will push the planet out. Equally, if the orbit is highly elliptical, the planet might “flip” and become circularised with a retrograde orbit. If so, it is eventually doomed, because the tidal forces cause it to fall into the star.

All of which may seem somewhat speculative, but the more interesting point is we have now found evidence this happens, namely evidence that the star M67 Y2235 has ingested a “super-Earth”. The technique goes by the name “differential stellar spectroscopy”: you compare a star’s composition with what it should be, and what it should be can be estimated with reasonable confidence if the stars formed in a cluster and can reasonably be assigned as having started from the same gas. M67 is a cluster with over 1200 known members, and it is close enough that reasonable details can be obtained. Further, the stars have a metallicity (the amount of heavy elements) similar to the sun’s. A careful study has shown that when the stars are separated into subgroups, they all behave according to expectations, except for Y2235, which has far too high a metallicity. The enhancement corresponds to an amount of rocky material 5.2 times the mass of the Earth dissolved in the outer convective envelope. If a star swallows a planet, the impact will usually be tangential, because the ingestion is a consequence of an elliptical orbit decaying through tidal interactions with the star: the planet grazes the external region of the star a few times before its orbital energy is reduced enough for ingestion. If so, the planet should dissolve in the stellar medium and increase the metallicity of the outer envelope of the star. So, to the extent that these observations are correctly interpreted, we have evidence that stars do ingest planets, at least sometimes.
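To get a feel for the size of the effect, here is a minimal sketch of the arithmetic. The envelope mass and the baseline metal fraction are assumptions plugged in purely for illustration (the actual values for Y2235 are not given above):

```python
# Rough illustration of how ingesting rocky material raises the metallicity
# of a star's convective envelope.  The envelope mass and baseline metal
# fraction below are assumed values, for illustration only.
M_sun = 1.989e30        # kg
M_earth = 5.972e24      # kg

ingested = 5.2 * M_earth          # rocky material quoted above (all "metals")
envelope = 0.01 * M_sun           # assumed thin convective envelope mass
Z0 = 0.018                        # assumed baseline metal mass fraction

dZ = ingested / envelope
relative = dZ / Z0

print(f"added metal fraction dZ = {dZ:.2e}")
print(f"relative enhancement    = {relative*100:.1f} %")
```

With those assumed numbers the envelope metallicity rises by a little under ten per cent, the kind of small but measurable excess the differential technique is designed to pick up; a more massive envelope would dilute the signal proportionately.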

For those who wish to go deeper, being biased I recommend my ebook “Planetary Formation and Biogenesis.” Besides showing what I think happened, it analyses over 600 scientific papers covering many different aspects of the problem.

Gravitational Waves, or Not??

On February 11, 2016, LIGO reported that on September 14, 2015, they had verified the existence of gravitational waves, the “ripples in spacetime” predicted by General Relativity. In 2017, the LIGO/Virgo laboratories announced the detection of a gravitational wave signal from merging neutron stars, which was corroborated by optical telescopes, and that year the Nobel Prize was awarded to three physicists for the detection of gravitational waves. This was science in action, and while I suspect most people had no real idea what it all meant, the items were big news. The detectors were then shut down for an upgrade to make them more sensitive, and when they started up again it was apparently predicted that dozens of events would be observed by 2020, and that with automated detection, information could be immediately relayed to optical telescopes. Lots of scientific papers were expected. So, with the program having been running for three months, or essentially half the time of the prediction, what have we found?

Er, despite a number of alerts, nothing has been confirmed by optical telescopes. This has led to some questions as to whether any gravitational waves have actually been detected, and led a group at the Niels Bohr Institute in Copenhagen to review the data so far. The detectors at LIGO comprise two “arms” at right angles to each other, running four kilometers from a central building. Lasers are beamed down each arm and reflected from mirrors, and the use of wave interference effects lets the laboratory measure these distances to within (according to the LIGO website) 1/10,000 of the width of a proton! Gravitational waves will change these lengths on that scale. So, of course, will local vibrations, so there are two laboratories 3,002 km apart, such that if both detect the same event, it should not be local. The first sign that something might be wrong was that besides the desired signals, a lot of additional vibration is present, which we shall call noise. That is expected, but what was suspicious was that there seemed to be inexplicable correlations in the noise. Two laboratories that far apart should not have the “same” noise.
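To put that sensitivity in perspective, here is a minimal back-of-the-envelope sketch. The proton width is taken as roughly 1.6 × 10⁻¹⁵ m, which is my assumption; the LIGO page just says “a proton”:

```python
# Rough order-of-magnitude estimate of the displacement and strain implied
# by "1/10,000 of the width of a proton" over a 4 km arm.
# The proton width used here is an assumed round number.
proton_width_m = 1.6e-15          # assumed proton diameter, metres
arm_length_m = 4.0e3              # LIGO arm length quoted above

displacement = proton_width_m / 1.0e4
strain = displacement / arm_length_m

print(f"measurable displacement: {displacement:.1e} m")
print(f"corresponding strain:    {strain:.1e}")
```

That is a strain of a few parts in 10²³, which gives some idea of how deeply any real signal is buried in the noise.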

Then came a bit of an embarrassment: it turned out that the figure published in Physical Review Letters claiming the detection (and which led to the Nobel Prize awards) was not actually the original data; rather, the figure was prepared for “illustrative purposes”, with details added “by eye”. Another piece of “trickery” claimed by that institute is that the data are analysed by comparison with a large database of theoretically expected signals, called templates. If so, for me there is a problem. If there is a very large number of such templates, the chance of fitting any given stretch of data to one of them starts to get uncomfortably large. I recall the comment attributed to the mathematician John von Neumann: “Give me four constants and I shall map your data to an elephant. Give me five and I shall make it wave its trunk.” When they start adjusting their best-fitting template to fit the data better, I have real problems.
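To illustrate the worry, here is a toy sketch of template matching (not LIGO’s actual pipeline, just a bank of made-up chirp templates correlated against pure noise), showing that the best match found across a large bank is never zero:

```python
# A minimal sketch of template matching: correlate a noisy data stream
# against a bank of "templates" and pick the best-scoring one.  With
# enough templates, even pure noise produces a respectable-looking match,
# which is the concern raised above.  The templates are invented chirps,
# not real gravitational-wave waveforms.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.linspace(0.0, 1.0, n)

# Hypothetical template bank: chirps with different frequency sweeps.
def chirp(f0, f1):
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2)
    return np.sin(phase)

bank = [chirp(f0, f1) for f0 in range(20, 200, 5) for f1 in range(200, 500, 10)]

# Pure noise: no signal injected at all.
data = rng.normal(0.0, 1.0, n)

# Normalised correlation ("match") of the data against each template.
def match(d, h):
    return np.dot(d, h) / (np.linalg.norm(d) * np.linalg.norm(h))

scores = [match(data, h) for h in bank]
best = int(np.argmax(np.abs(scores)))
print(f"templates searched: {len(bank)}")
print(f"best |match| on pure noise: {abs(scores[best]):.3f}")
```

Run it and the best match on pure noise is small but distinctly non-zero, and it creeps up as the bank grows. Real searches attempt to account for this by estimating how often noise alone produces such matches; the argument above is about whether that accounting is adequate.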

So apparently those at the Niels Bohr Institute made a statistical analysis of the data allegedly seen by the two laboratories, and found no signal was verified by both, except the first. However, even the LIGO researchers were reported to be unhappy about that one. The problem: the signal was too perfect. In this context, when the system was set up, there was a procedure to deliver artificially produced dummy signals, just to check that the procedure following signal detection at both sites was working properly. In principle, this perfect signal could have been the accidental delivery of such an artificial signal, or even a deliberate insertion by someone. Now, I am not saying that happened, but it is uncomfortable that we have only one signal, and it is in “perfect” agreement with theory.

A further problem lies in the fact that the collision of two neutron stars, as required by that one discovery to be the source of the gamma-ray signals detected along with the gravitational waves, is apparently unlikely in an old galaxy where star formation has long since ceased. One group of researchers claims the gamma-ray signal is more consistent with the merging of white dwarfs, and white dwarfs should not produce gravitational waves of the right strength.

Suppose by the end of the year no further gravitational waves have been observed. Now what? There are three possibilities: there are no gravitational waves; there are such waves, but the detectors cannot detect them for some reason; or there are such waves, but they are much less common than models predict. Apparently there have been attempts to find gravitational waves for the last sixty years, and with every failure it has been argued that they are weaker than predicted. The question then is, when do we stop spending increasingly large amounts of money on seeking something that may not be there? One issue that must be addressed, not only in this matter but in any scientific exercise, is how to get rid of confirmation bias: when looking for something we shall call A, and a signal is received that more or less fits the target, it is all too easy to say you have found it. In this case, when a very weak signal is received amidst a lot of noise and there is a very large number of templates to fit the data to, it is only too easy to assume that what is actually just unusually reinforced noise is the signal you seek. Modern science seems to have descended into a situation where exceptional evidence is required to persuade anyone that a standard theory might be wrong, but only quite a low standard of evidence is needed to support an existing theory.

Cold Fusion

My second post-doc was at Southampton University, and one of the leading physical chemists there was Martin Fleischmann, who had an excellent record for clever and careful work. There was no doubt that if he measured something, it would be accurate and very well done. In the academic world he was a rising star, until he scored a career “own goal”. In 1989, he and Stanley Pons claimed to have observed nuclear fusion through a remarkably simple experiment: they passed electricity through samples of deuterium oxide (heavy water) using palladium electrodes. They reported the generation of net heat significantly in excess of what would be expected from the power loss due to the resistance of the solution. Whatever else happened, I have no doubt that Fleischmann correctly measured and accounted for the heat. From then on, the story gets murky. Pons and Fleischmann claimed the heat had to come from nuclear fusion, but obviously there was not very much of it. According to “Physics World”, they also claimed the production of neutrons and tritium. I do not recall any actual detection of neutrons, and I doubt the equipment they had would have been at all suitable for that. Tritium might seem to imply neutron production, since a neutron hitting deuterium might well make tritium, but tritium (an even heavier isotope of hydrogen) could well have been a contaminant in their deuterium, or maybe they never detected it at all.

The significance, of course, was that deuterium fusion would be an effectively inexhaustible source of clean energy. The Earth has plenty of water, and while the fraction that is deuterium is small, it is nevertheless a very large amount in total, and the energy released in going to helium-4 is huge. The physicists, quite rightly, did not believe the claim. The problem is that the nuclei strongly repel each other, due to their positive charges, until they get to something like 1,000 – 10,000 times closer than they are in molecules. Nuclear fusion usually works by either extreme pressure squeezing the nuclei together, or extreme temperature giving the nuclei sufficient energy to overcome the repulsion, or both.
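Just how huge is easy to check from standard atomic masses (the figures below are the usual tabulated values, not anything from the original papers):

```python
# Energy released if two deuterium nuclei could fuse all the way to helium-4,
# calculated from the mass defect.  Atomic masses are standard tabulated values.
m_D = 2.014101778      # deuterium, atomic mass units (u)
m_He4 = 4.002603254    # helium-4, atomic mass units (u)
u_to_MeV = 931.494     # energy equivalent of 1 u, in MeV

mass_defect = 2.0 * m_D - m_He4
energy_MeV = mass_defect * u_to_MeV

print(f"mass defect: {mass_defect:.5f} u")
print(f"energy released per D-D -> 4He: {energy_MeV:.1f} MeV")
```

That is about 24 MeV per pair of deuterons, millions of times the energy of a typical chemical bond, which is why even tiny amounts of fusion would show up as measurable heat.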

What happened next was that many people tried to reproduce the experiment and failed, with the result that this came to be considered an example of pathological science. Of course, the problem always was that if anything happened, it happened only very slightly, and while heat was supposedly obtained and measured by a calorimeter, that heat could come from extremely minute amounts of fusion. Equally, if the effect were that minute, it might seem useless; however, experimental science does not work that way either. As a general rule, if you can find an effect that occurs, quite often once you work out why, you can alter conditions and boost the effect. The real problem occurs when you cannot get an effect at all.

The criticisms included the claim that there were no signs of neutrons. That in itself is, in my opinion, meaningless. In the usual high-energy, and more importantly high-momentum, reactions, if you react two deuterium nuclei, some of the time the energy is such that the helium isotope 3He is formed, plus a neutron. If you believe the catalyst is squeezing the atoms closer together in a matrix of metal, that neutron might strike another deuterium nucleus before it can get out, and form tritium. Another possibility is that the mechanism was that the metal brought the nuclei together in some form of metal hydride complex, and the fusion occurred through quantum tunnelling, which, being a low-momentum event, might not eject a neutron; 4He is very stable. True, getting the deuterium atoms close enough is highly improbable, but until you know the structure of the hydride complex, you cannot be absolutely sure. As it was, it was claimed that tritium was found, but it might well be that the tritium was always there. As to why the experiment was not reproducible: normally palladium absorbs about 0.7 hydrogen atoms per palladium atom in the metal lattice, while the claim was that fusion required a minimum of 0.875 deuterium atoms per palladium atom. The defensive argument was that the surface of the catalyst was not adequate, and the original claim included the warning that not all electrodes worked, and those that did only worked for so long. We now see a problem. If the electrode does not absorb and react with sufficient deuterium, you do not expect an effect. Worse, if a special form of palladium is required, that form rectifying itself during hydridization could be the source of the heat, i.e. the heat is real, but it is of chemical origin and not nuclear.

I should add at this point that I am not advocating that this worked, but merely that the criticisms aimed at it were not exactly valid. Very soon the debate degenerated into scoffing and personal insults rather than facts. Science was not working at all well then. Further, if we accept that there was heat generated, and I am convinced that Martin Fleischmann, whatever his other faults, was a very careful and honest chemist and would have measured that heat properly, then there is something we don’t understand. What it was is another matter, and it is an unfortunate human characteristic that the scientific community, rather than try to work out what had happened, preferred to scoff.

However, the issue is not entirely dead. It appears that Google put $10 million of its money into clearing the issue up. The research team that has been using that money still has not found fusion, but it has discovered that the absorption of hydrogen by palladium works in a way thus far unrecognised. At first that may not seem very exciting; nevertheless, getting hydrogen in and out of metals could be an important part of a hydrogen fuel system, as the hydrogen is stored at far more moderate pressures than in a high-pressure vessel. The point here, of course, is that understanding what has happened, even in a failed experiment, can be critically important. Sure, the initial objective might never be reached, but sometimes it is the something else that leads to real benefits. Quite frequently in science, success stories actually started out as something else, although, through embarrassment, this is seldom admitted.

Finally, there is another form of cold fusion that really works. If the electrons around deuterium and tritium are replaced with muons, the nuclei in a molecule come very much closer together, nuclear fusion does occur through quantum tunnelling, and the full fusion energy is released. There are, unfortunately, three problems. The first is maintaining a decent number of muons. These are made through the decay of pions, which in turn are made in particle accelerators, so very considerable amounts of energy are spent getting your muons. The second is that muons have a very short life – about 2 microseconds. The third is that if they lose some energy they can fall into a helium atom and stay there, taking themselves out of play. Apparently a muon can catalyse up to about 150 fusions, which looks good, but the best so far is that to get 1 MW of net power, you have to put 4 MW in to make the muons. Thus to get really large amounts of power, enormous plant is required just to drive the muon production. Yes, you get net power, but the cost is far too great. For the moment, this is not a practical source.
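The energy balance is easy to sketch. The 17.6 MeV per D–T fusion is the standard figure; the production cost per muon is my assumption (several GeV is the kind of number usually quoted), used purely for illustration:

```python
# Rough energy balance for muon-catalysed fusion: energy recovered per muon
# versus an assumed cost of making that muon in an accelerator.
fusions_per_muon = 150          # figure quoted above
E_per_fusion_MeV = 17.6         # D-T fusion energy, standard value
muon_cost_GeV = 5.0             # assumed production cost per muon (illustrative)

recovered_GeV = fusions_per_muon * E_per_fusion_MeV / 1000.0
ratio = recovered_GeV / muon_cost_GeV

print(f"energy recovered per muon: {recovered_GeV:.2f} GeV")
print(f"assumed cost per muon:     {muon_cost_GeV:.2f} GeV")
print(f"recovered / cost:          {ratio:.2f}")
```

Whether that ratio comes out above or below one depends entirely on the assumed production cost and on how efficiently the fusion heat can be turned back into electricity, which is why the scheme has never paid its way.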

The Ice Giants’ Magnetism

One interesting result from NASA’s sole flybys of Uranus and Neptune is that they have complicated magnetic fields, and seemingly not the simple dipolar field found on Earth. The puzzle, then, is what causes this? One possible answer is ice.

You probably consider ice as neither particularly magnetic nor particularly good at conducting electric current, and you would be right for the ice you usually see. However, there is more than one form of ice. As far back as 1912, the American physicist Percy Bridgman discovered five solid phases of water, obtained by applying pressure to the ice. One of the unusual properties of ordinary ice is that if you add pressure, it melts, because ice I is less dense than liquid water, so its melting point falls as the pressure rises above normal atmospheric pressure (which is about 0.1 MPa; a pascal is a rather small unit of pressure, the M means million, and G would mean billion). So add pressure and it melts, which is why ice skates work. Ices II, III and V need 200 to 600 MPa of pressure to form. Interestingly, as you increase the pressure, Ice III forms at about 200 MPa and at about -22 degrees C, but then the melting point rises with extra pressure; at 350 MPa it switches to Ice V, which melts at -18 degrees C, and if the pressure is increased to 632.4 MPa, the melting point is 0.16 degrees C. At 2,100 MPa, Ice VI melts at just under 82 degrees C. Skates don’t work on these higher ices. As an aside, Ice II does not exist in the presence of liquid, and I have no idea what happened to Ice IV, but my guess is it was a mistake.

As you increase the pressure on Ice VI, the melting point increases, and sooner or later you expect another phase, or even several more. Well, there are more, so let me jump to the latest: Ice XVIII. The Lawrence Livermore National Laboratory produced this by compressing water to 100 to 400 GPa (1 to 4 million times atmospheric pressure) at temperatures of 2,000 to 3,000 K (0 degrees Centigrade is about 273 K, and the size of the degree is the same) to produce what they call superionic ice. What happens is that the protons break free of the water’s oxygen atoms and can diffuse through the empty sites of the oxygen lattice, with the result that the ice starts to conduct electricity almost as well as a metal; but instead of electrons being moved around, as happens in metals, it is assumed to be the protons that move.

These temperatures and pressures were reached by placing a very thin layer of water between two diamond disks, following which six very high-powered lasers generated a sequence of shock waves that heated and pressurised the water. The researchers deduced what they had made by firing 16 additional high-powered lasers that delivered 8 kJ of energy in a one-nanosecond burst onto a tiny spot on a small piece of iron foil two centimeters from the water, a few billionths of a second after the shock waves. This generated X-rays, and from the way these diffracted off the water sample the team could work out what had been produced. That in itself is difficult enough, because they would also get a diffraction pattern from the diamond, which they would have to subtract.

The important point is that this ice conducts electricity, and so is a possible source of the magnetic fields of Uranus and Neptune, which are rather odd. For Earth, Jupiter and Saturn, the magnetic poles are reasonably close to the rotational poles, and we think the magnetism arises from electrically conducting liquids rotating with the planet’s rotation. Uranus and Neptune are different. The field of Uranus is aligned at about 60 degrees to the rotational axis, while that of Neptune is aligned at about 46 degrees. Even odder, the axes of the magnetic fields do not pass through the centres of the planets, but are displaced quite significantly from them.

The structure of these planets is believed to be, from the outside inwards, first an atmosphere of hydrogen and helium, then a mantle of water, ammonia and methane ices, and interior to that a core of rock. My personal view is that there will also be carbon monoxide and nitrogen ices in the mantle, at least in Neptune’s case. The usual explanation for the magnetism has been that the fields are generated by local events in the icy mantles, and you see comments that they may be due to high concentrations of ammonia, which readily forms charged species. Such charges would produce magnetic fields thanks to the rapid rotation of the planets. This new ice is an additional possibility, and it is not beyond the realms of possibility that it contributes to the fields of the other giants as well.

Jupiter is found from our spectroscopic analyses to be rather deficient in oxygen, and this is explained as being due to the water condensing out as ice. The fact that these ices form at such high temperatures is a good reason to believe there may be such layers of ice within the planet. This superionic ice is stable as a solid at 3,000 K, and that upper figure simply represents the highest temperature the equipment could stand. (Since water reacts with carbon, I am surprised it got that high.) So if there were a layer of such ice around Jupiter’s core, it too might contribute to the magnetism. Whatever else Jupiter may lack down there, pressure is not among the missing.

Marsquakes

One of the more interesting aspects of the latest NASA landing on Mars is that the lander has placed a seismometer on the surface and is looking for marsquakes. On Earth, earthquakes are fairly common, especially where I live, and they are generated by the fact that our tectonic plates are gigantic slabs of rock moving around over the mantle. They can slide past each other, or be pulled down under another plate to disappear deep into the mantle, while at other places new rock emerges to take their place, such as at the mid-Atlantic ridge. Apparently the edges of these plates move about 5 – 10 cm each year. You probably do not notice this because the topsoil, by and large, does not move with the underlying crust. However, every now and again these plates lock and stop moving there. The problem is, the rest of the rock keeps moving, so considerable strain energy builds up until the lock gives way, very large amounts of energy are released, and the rock moves, sometimes by several meters. The energy is given out as waves, similar in many ways to sound waves, travelling through the rock. If you watch waves at sea, you will note that while the water itself stays more or less in the same place on average, something on the surface, like a surfer, goes up and down, and in fact describes what is essentially a circle if far enough out. Earthquake waves do the same sort of thing. The rock moves, and the shaking can be quite violent. Of course, the rock is displaced where the actual event occurred, and sometimes the waves trigger a further shift somewhere else.

Such waves travel out in all directions through the rock. Another feature of all waves is that when they strike a medium through which they travel with a different velocity, they undergo partial reflection and refraction. There is an angle of incidence beyond which only reflection occurs and, of course, on a curved surface the reflected waves spread out as the angle of incidence varies. A second point is that the bigger the difference in wave speed between the two media, the more reflection there is. On Earth, this has permitted us to gather information on what is going on inside the planet. Of course, Earth has some big advantages: we can record seismic events from a number of different places, and even then the results are difficult to interpret.

The problem for Mars is that there will be one seismometer, which will measure wave frequency, amplitude, and timing. The timing gives a good picture of the route taken by the various waves. Thus the wave that is reflected off the core will come back much sooner than the wave that travels right through and is reflected off the far side, but it will have the same frequency pattern on arrival, so from such patterns and timing you can sort out, at least in principle, what route the waves took, and from the reflection and refraction intensities, what different materials they passed through. It is like a CT scan of the planet. There are further complications because wave interference can spoil the patterns, but waves are interesting in that they only create that effect at the site where they interfere; otherwise they pass right through other waves and emerge unchanged, apart from intensity changes if energy was absorbed by the medium. There is an obvious problem in that with only one seismometer it is much harder to work out where the source was, but the scientists believe that over the lifetime of the lander they will detect at least a couple of dozen quakes.
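To see how timing alone carries information about the path, here is a toy sketch. It treats Mars as a uniform-velocity sphere with straight ray paths, and the core radius and wave speed are assumptions for illustration only (real rays curve because the velocity changes with depth):

```python
# A toy illustration (not a real seismic model): straight-line ray paths in
# a uniform-velocity Mars, comparing the direct arrival with a core-reflected
# arrival for sources at various angular distances from the seismometer.
# Assumed values: planet radius 3390 km, a guessed core radius of 1800 km,
# and a single P-wave speed of 8 km/s.
import math

R = 3390.0       # planetary radius, km
r_core = 1800.0  # assumed core radius, km
v = 8.0          # assumed P-wave speed, km/s

def direct_time(delta_deg):
    """Straight chord from source to station, both on the surface."""
    d = math.radians(delta_deg)
    path = 2.0 * R * math.sin(d / 2.0)
    return path / v

def core_reflected_time(delta_deg):
    """Source -> core boundary (midway in angle) -> station."""
    half = math.radians(delta_deg) / 2.0
    leg = math.sqrt(R**2 + r_core**2 - 2.0 * R * r_core * math.cos(half))
    return 2.0 * leg / v

for delta in (20, 60, 120):
    t1, t2 = direct_time(delta), core_reflected_time(delta)
    print(f"distance {delta:3d} deg: direct {t1:6.0f} s, "
          f"core-reflected {t2:6.0f} s, delay {t2 - t1:5.0f} s")
```

The point is simply that each path length translates into a distinct arrival delay, so with enough identified arrivals you can start to back out both the source distance and the depths of the reflecting boundaries.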

Which gets to the question, why do we expect quakes at all? Mars does not have plate tectonics, possibly because its high level of iron oxide means eclogite cannot form, and it is thought that the unusually high density of eclogite is what pulls slabs down into the mantle. The absence of plate tectonics means we expect marsquakes to be of rather low amplitude, but minor quakes are still expected. One reason is that as the planet cools, it contracts in volume; the crust then becomes less well supported and tends to slip. A second cause could be magma moving below the surface. We know that Mars has a hot interior, thanks to the nuclear decay going on inside, and while Mars will be cooler than Earth, the centre is thought to be only about 200 Centigrade degrees cooler than Earth’s centre. While Earth generates more heat, it also loses more through geothermal emissions. Finally, when meteors strike, they also generate shock waves. Of course, the amplitude of all these waves is tiny compared with that of even a modest earthquake.

It is hard to know what we shall learn. Reliance on only one seismometer means the loss of directional analysis, and the origin of a quake will be unknown unless it is possible to time reflections from various places. If you get one isolated event, every wave that arrives must have originated from that source, so from the various delays, paths can be assigned. The problem is that low-energy events might not generate reflections of sufficient amplitude to be detected. The ideal method, of course, would be to set off some very large explosions at known sites, but it is rather difficult to do that from here.

What do we expect? This is a bit of guesswork, but my guess is that the crust is fairly thick, so we would expect something like 60 km of solid basalt. If we get a significantly different figure, we would have to adjust our thoughts on the radioactive heating inside Mars. I expect a rather small (for a planet) iron core, the clue here being that the overall density of Mars is about 3.8, its surface is made of basalt, and basalt has a density of 3.1 – 3.8. There just is not room for a lot of iron in the form of the metal. It is what is in between that is of interest. Comments from some of the scientists say they think they will get clues on planetary formation, which could come from deep structures. Thus if planets really formed from the combination of planetesimals, which are objects of asteroid size, then maybe we shall see the remains in the form of large objects of different sonic impedance. On the other hand, major shocks to the system from events such as the Hellas impactor may mean that asymmetries were introduced by such shock waves melting parts of the interior. My guess is the observations will not be unambiguous in their meaning, and it will be interesting to see how many different scenarios are considered.

The Roman “Invisibility” Cloak – A Triumph for Roman Engineering

I guess the title of this post is designed to be a little misleading, because you might be thinking of Klingons and invisible spaceships, but let us stop and consider what an “invisibility” cloak actually means. In the case of the Klingons, light coming from somewhere else is not reflected off their ship back to your eyes. One way to achieve that is to construct metamaterials, which have structures built into them that divert waves around the object. The key is matching the structural variation to the wavelength, and this is easier with longer wavelengths, which is why a certain amount of fuss has been made when microwaves have been diverted around objects to produce an “invisibility” cloak. As you might gather, there is a general problem with overall invisibility, because electromagnetic radiation covers a huge range of wavelengths.

Sound is also a wave, and here it is easier to generate “invisibility”, because most sources only generate sound over a reasonably narrow range of wavelengths. So, time for an experiment. In 2012 Stéphane Brûlé et al. demonstrated the potential by drilling a two-dimensional array of boreholes into topsoil, each 5 m deep. They then placed an acoustic source nearby and found that much of the waves’ energy was reflected back towards the source by the first two rows of holes. What happens is that, depending on the spacing of the holes, when waves within a certain range of wavelengths pass through the lattice, there are multiple reflections. (Note this is of no value to Klingons, because you have just amplified the return radar signal.)

The reason is that when waves strike a different medium, some of the energy is reflected and some refracted, and reflection becomes more likely as the angle of incidence increases; of course, the angle of incidence equals the angle of reflection. A round hole provides quite chaotic reflections, especially recalling that refraction also changes the angle, and a change of medium occurs both when the wave strikes the hole and when it tries to leave it. If the holes are spaced properly with respect to the wavelength, there is considerable destructive wave interference in the forward direction, which is why, in Brûlé’s experiment, so much of the wave energy came straight back towards the source. It is not even necessary to have holes; it is merely necessary to have objects with different wave impedance, i.e. objects through which the waves travel at different speeds, and the bigger the difference in speed, the better the effect. Brûlé apparently played around with hole placement and found the positioning that gave maximum reflection.
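The spacing condition can be sketched with a Bragg-type argument: reflections from successive rows reinforce when the rows are about half a wavelength apart in the ground. The wave speed and source frequency below are illustrative guesses, not the values from Brûlé’s experiment:

```python
# A rough sketch of the spacing argument: treat the borehole rows like a
# Bragg grating, where reflections from successive rows reinforce when the
# row spacing is about half a wavelength in the ground.
v = 100.0   # assumed surface-wave speed in soft soil, m/s
f = 50.0    # assumed source frequency, Hz

wavelength = v / f
spacing = wavelength / 2.0   # condition for strong back-reflection

print(f"wavelength in the ground: {wavelength:.2f} m")
print(f"row spacing for strong reflection: about {spacing:.2f} m")
```

For soft soil and a source in the tens of hertz, that puts the spacing at around a metre, the sort of scale a borehole array (or, as below, a set of Roman foundations) can easily provide.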

So, what has this got to do with Roman engineering? Apparently Brûlé went on holiday to Autun in central France, and while being touristy he saw a photograph of the foundations of a Gallo-Roman theatre. Although the image showed only barely discernible foundation features, he had a spark of inspiration and postulated that the semi-circular structure bore an uncanny resemblance to half of an invisibility cloak. So he got a copy of the photo, superimposed it on one of his own images, and found there was indeed a very close match.

The same thing apparently applies to the Colosseum in Rome, and to a number of other amphitheatres. He found that the radii of neighbouring concentric circles (or, more generally, ellipses) followed the required pattern very closely.

The relevance? Well, obviously we are not trying to defend against stray noise, but earthquakes are also wave motion. The hypothesis is that the Romans may have arrived at this structure by watching which buildings survived earthquakes and which did not, and then settled on the design most likely to withstand them. The ancients did have surprising experience with earthquake-resistant design: the great temple at Karnak was built on materials that, when sodden (which happened with the annual floods, and the soaking was sufficient to hold the effect for a year), absorbed or reflected such shaking and acted as “shock absorbers”. The thrilling part of this study is that just maybe we could take advantage of it to design our cities so that they too reflect seismic energy away. And if you think earthquake wave reflection is silly, you should study the damage done in the Christchurch earthquakes. The quake centres were largely to the west, but the waves were reflected off Banks Peninsula, and there was significant wave interference. In places where the interference was constructive the damage was huge, but nearby, where it was destructive, there was little or no damage. Just maybe we can still learn something from Roman civil engineering.