Martian Fluvial Flows, Placid and Catastrophic

Despite the fact that, apart from localized dusty surfaces in summer, the surface of Mars has had average temperatures that never exceeded about minus 50 degrees C over its lifetime, it has also had some quite unexpected fluid systems. One of the longest river systems starts in several places at approximately 60 degrees south in the highlands, nominally one of the coldest spots on Mars, and drains into Argyre, thence to the Holden and Ladon Valles, then stops, apparently having dropped massive amounts of ice in the Margaritifer Valles, which are at considerably lower altitude and just north of the equator. Why does a river start at one of the coldest places on Mars, and freeze out at one of the warmest? There is evidence of ice having been in the fluid, which means the fluid must have been water. (Water is extremely unusual in that the solid, ice, floats on the liquid.) These fluid systems flowed, although not necessarily continuously, for a period of about 300 million years, then stopped entirely, although there are other regions where fluid flows probably occurred later. To the northeast of Hellas (the deepest impact basin on Mars) the Dao and Harmakhis Valles change from prominent, sharp channels to diminished, muted flows at about −5.8 km altitude that resemble terrestrial marine channels beyond river mouths.

So, how did the ice melt? For the Dao and Harmakhis, the Hadriaca Patera (volcano) was active at the time, so some volcanic heat was probably available, but that would not apply to the systems starting in the southern highlands.

After a prolonged period in which nothing much happened, there were catastrophic flows that continued for up to 2000 km, forming channels up to 200 km wide, which would require flows of approximately 100,000,000 cubic meters per second. For most of those flows, there is no obvious source of heat. Only ice could provide the volume, but how could so much ice melt with no significant heat source, be held without re-freezing, then be released suddenly and explosively? There is no sign of significant volcanic activity, although minor activity would not be seen. Where would the water come from? Many of the catastrophic flows start from the Margaritifer Chaos, so the source of the water could reasonably be the earlier river flows.

There was plenty of volcanic activity about four billion years ago. Water and gases would be thrown into the atmosphere, and the water would ice/snow out predominantly in the coldest regions. That gets water to the southern highlands, and to the highlands east of Hellas. There may also be geologic deposits of water. The key now is the atmosphere. What was it? Most people say it was carbon dioxide and water, because that is what modern volcanoes on Earth give off, but the mechanism I suggested in my ebook “Planetary Formation and Biogenesis” has the gases originally being reduced, that is, mainly methane and ammonia. The methane would provide some sort of greenhouse effect, but ammonia, on contact with ice at minus 80 degrees C or above, dissolves into the ice and makes an ammonia/water solution. This, I propose, was the fluid. As the fluid goes north, winds and warmer temperatures would drive off some of the ammonia, so, oddly enough, as the fluid gets warmer, ice starts to freeze out. Ammonia in the air will go on to melt more snow. (This is not all that happens, but it should happen.) Eventually, the ammonia has gone, and the water sinks into the ground, where it freezes out into a massive buried ice sheet.

If so, we can now see where the catastrophic flows come from. We have the ice deposits where required. We now require at least fumaroles to be generated underneath the ice. The Margaritifer Chaos is within plausible distance of major volcanism, and of tectonic activity (near the mouth of the Valles Marineris system). Now, let us suppose the gases emerge. Methane immediately forms clathrates with the ice (enters the ice structure and sits there), because of the pressure. The ammonia dissolves ice and forms a small puddle below. This keeps going over time, but as it does, the amount of water increases and the amount of ice decreases. Eventually, there comes a point where there is insufficient ice to hold the methane, and pressure builds up until the whole system ruptures and the mass of fluid pours out. With the pressure gone, the remaining ice clathrates start breaking up explosively. Erosion is caused not only by the fluid, but by exploding ice.

The point then is, is there any evidence for this? The answer is, so far, no. However, if this mechanism is correct, there is more to the story. The methane will be oxidised in the atmosphere to carbon dioxide by solar radiation and water. Ammonia and carbon dioxide will combine to form ammonium carbonate, then urea. So if this is true, we expect to find, buried where there had been water, deposits of urea, or whatever it converted to over three billion years. (Very slow chemical reactions are essentially unknown – chemists do not have the patience to do experiments over millions of years, let alone billions!) There is one further possibility. Certain metal ions complex with ammonia to form ammines, which dissolve in water or ammonia fluid. These would sink underground, and if the metal ions were there, the remains of the ammines might be there now too. So we have to go to Mars and dig.

Ebook Discount

From October 21 – 28, “Red Gold” will be discounted to 99c/99p.

Mars is to be colonized. The hype is huge, the suckers will line up, and we will control the floats. There is money to be made, and the beauty is, nobody on Earth can check what is really going on on Mars. Meanwhile, on Mars we shall be the only ones with guns. This can’t lose.

Except that there is one person who will try to stop the fraud. Which means he cannot be allowed to live. Partly inspired by the 1988 crash, Red Gold shows the anatomy of one sort of fraud. Then there is the problem that fraudsters with guns cannot permit anyone to expose them. One side must kill the other.

If you liked The Martian, where science allowed one person to survive, then Red Gold is a thriller that has a touch of romance, a little economics, and enough science to show how Mars might be colonised and the colonists survive indefinitely. http://www.amazon.com/dp/B009U0458Y

Nanotech Antivirals

Most by now will have heard of nanotechnology, although probably rather few actually understand its full implications. There are strict limits to what can be done, and the so-called nanotech that was to be put into vaccines to allow Bill Gates to know where you were is simply not on. (Why anyone would think Bill Gates would care where you are also eludes me.) However, nanotechnology has some interesting uses in the fight against viruses. The Pfizer and Moderna vaccines use messenger RNA to develop cell resistance, and the RNA is delivered to the cells by being encased in lipid nanoparticles. Lipids are technically any substances from living organisms that are soluble in organic solvents and insoluble in water, but they are often just fats. The lipid breaks open the cell membrane, allowing the messenger RNA to get in, and of course that is the method of the virus as well: it is RNA encased in lipid. This technique can be used in other ways, and such nanoparticles are showing promise as delivery vehicles for other drugs and vaccines.

However, there may be an even more interesting use, as outlined in Nature Biotech. 39: 1172–4. The idea is that such nanomaterials could engage with viruses directly, either disrupting them or binding them. A route to disruption may involve nothing more than breaking apart the virus’ outer membrane. The binding approach works because many viruses rely on glycoproteins on their surface to bind to host cells. Anything that can mimic these cellular attachment points can bind the virus, effectively acting as a “nanosponge” for mopping viruses up. One way to make such “sponges” is to take something like red blood cells, remove their contents, then break the remaining membrane into thousands of tiny vesicles about 100 nanometers wide. These vesicles are then made to encase a biocompatible and biodegradable polymer, with the result that each such piece of polymer is coated with genuine cell membrane. Viruses recognize the cell membrane, attach and try to enter the cell, but for them the contents are something of a disappointment, and they can’t get back out.

Such membranes, obtained from human lung epithelial type II cells or from human macrophages, carry angiotensin-converting enzyme 2 (ACE2) and CD147, both of which SARS-CoV-2 binds to. Potentially we have a treatment that will clean up a Covid-19 infection. In a study with mice, the treatment “showed efficacy” against the virus and showed no evidence of toxicity. Of course, there remains a way to go.

A different approach that shows promise is to construct nano-sized particles coated with something that will bind the virus. One example, used as a nasal spray, led to a 99% reduction in viral load in mice exposed to SARS-CoV-2-laden air. It is claimed the particles are not absorbed by the body, although so far the clinical study has not been peer reviewed. The advantage of this approach is that it can in principle be applied to a reasonably wide range of viruses. A further approach is to make “shells” out of DNA and coat the inner side with something that will bind viruses. With several attachment sites, the virus cannot get out, and because of the bulk of the shell it cannot bind to a cell and hence cannot infect. In this context, it is not clear, for the other approaches that merely bind viruses, whether a bound virus can still infect by attaching to a cell with its unbound side.

Many viruses have an outer membrane that is a phospholipid bilayer, and this is essential for the virus to be able to fuse with cell membranes. A further approach is to disrupt the viral membrane, and thus stop the fusing. One example is to form a nano-sized spherical surfactant particle and coat it with entities, such as peptides, that bind to viral glycoproteins. The virus attaches, then the surfactant simply destroys the viral membrane. As can be seen, there is a wide range of possible approaches. Unfortunately, as yet they are still at the preliminary stages, and while efficacy has been shown in vitro and in mice, it is unclear what the long-term effects will be. Of course, if the patient is dying of a viral attack, long-term problems are not on his/her mind. One of the side-effects of SARS-CoV-2 may be that it has stimulated genuine research into the topic. Thus the Biden administration is firing $3 billion at research. It is a pity it takes a pandemic to get us into action, though.

Plastics and Rubbish

In the current “atmosphere” of climate change, politicians are taking more notice of the environment, although as a sceptic I notice they are not prepared to do a lot about it. Part of the problem is that, following the “swing to the right” in the 1980s, politicians have taken notice of Reagan’s assertion that the government is the problem, so they have all settled down to not doing very much, and they have shown some skill at doing very little. “Leave it to the market” has a complication: the market is there to facilitate trade, in which all the participants wish to offer something that customers want, making a profit while doing it. The environment is not a customer in the usual sense and it does not pay, so “the market” has no direct interest in it.

There is no one answer to any of these problems. There is no silver bullet. What we have to do is chip away at them, and one problem that indicates their nature is plastics. In New Zealand the government has decided that plastic bags are bad for the environment, so single-use bags are no longer used in supermarkets. One can argue whether that is good for the environment, but it is clear that the wilful throwing away of plastics and their subsequent degradation is bad for it. And while the disposable bag has been banned here, rubbish still has a lot of plastics in it, and that will continue to degrade. If it were buried deep in some mine it probably would not matter, but it is not. So why don’t we recycle them?

The first reason is there are so many variations of plastics, and they do not dissolve in each other. You can emulsify a mix, but the material has poor strength because there is very little binding at the interface of the tiny droplets; they have smooth surfaces, like the interface between oil and water. If the object is big enough this does not matter so much, thus you can make reasonable fence posts out of recycled plastics, but there really is a limit to the market for fence posts.

The reason they do not dissolve in each other comes from thermodynamics. For something to happen, such as polymer A dissolving in polymer B, the change (indicated by the symbol Δ) in what is called the free energy, ΔG, has to be negative. (That it must be negative is a convention; and “free” has nothing to do with price – it is not free in that sense.) To account for the process, we use the equation

            ΔG = ΔH - TΔS

ΔH reflects the change of energy between each molecule in its own material and in solution in the other material. As a general rule, molecules favour having their own kind nearby, especially if they are long, because between molecules of the same material the interactions per atom stay much the same however long the chain, while molecules of a different material do not pack as well. Thinking of oil and water, the big problem for solution is that water, the solvent, has hydrogen bonds that make water molecules stick together, and the longer the polymer, the stronger this effect is per molecule: think of one polymer molecule having to dislodge a very large number of solvent molecules. ΔS is the entropy, which increases as the degree of randomness increases. Solution is more random per molecule, so whether something dissolves is a battle between whether the randomness gained per molecule can overcome the attractions between molecules of the same kind. The longer the polymers, the less randomness is introduced per unit mass, and the greater any difference in energy between the pure and dissolved states. So the longer the polymers, the less likely they are to dissolve in each other, which, as an aside, is why you get so much variety in minerals: long-chain silicates that can alter their associated ions like to phase separate.
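
To put rough numbers on that argument, here is a minimal sketch using the textbook Flory-Huggins expression for the free energy of mixing two polymers; the chain lengths and the interaction parameter are invented purely for illustration.

    # Flory-Huggins free energy of mixing per lattice site, in units of kT:
    #   dG = (phiA/NA)*ln(phiA) + (phiB/NB)*ln(phiB) + chi*phiA*phiB
    # The entropy (logarithm) terms shrink as chain length N grows, so even
    # a tiny unfavourable contact energy chi stops long polymers from mixing.
    import math

    def dG_mix(phiA, NA, NB, chi):
        phiB = 1.0 - phiA
        entropy = (phiA / NA) * math.log(phiA) + (phiB / NB) * math.log(phiB)
        return entropy + chi * phiA * phiB

    for N in (1, 100, 10000):              # monomer, oligomer, long polymer
        print(N, dG_mix(0.5, N, N, 0.01))  # negative mixes, positive separates
    # N = 1 mixes readily; N = 10000 phase separates with the same chemistry.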

So we cannot recycle them, and they are useless? Well, no. At the very least we can use them for energy. My preference is to turn them, and all the organic material in municipal refuse, into hydrocarbons. During the 1970s oil crises the engineering was completed to build a demonstration plant for the city of Worcester in Massachusetts. It never went ahead because the cartel broke ranks, oil prices dropped, and converting wastes to hydrocarbon fuels made no economic sense. However, if we want to reduce the use of fossil fuels, it makes a lot of sense for the environment, IF we are prepared to pay the extra price. Every litre of fuel we make from waste is a litre of refined crude we do not have to use, and we will have to keep our vehicle fleet going for quite some time. The basic problem is we have to develop the technology, because the engineering data for that previous attempt is presumably lost, and in any case, that was for a demonstration plant, which is always built on the basis that more engineering questions remain. As an aside, water at about 360 degrees Centigrade has lost its hydrogen-bonding preference, and at that temperature oil dissolves in water.

The alternative is to burn it and make electricity. I am less keen on this, even though we can purchase plants to do that right now. The reason is simple: the combustion will release more gases into the atmosphere. The CO2 is irrelevant, as both options produce that, but the liquefaction approach sends nitrogen-containing material out as water-soluble compounds which could, if the liquids were treated appropriately, be used as a fertilizer, whereas in combustion they go out the chimney as nitric oxide or, even worse, as cyanides. But it is still better to do something with the rubbish than simply fill up local valleys.

One final point. I saw an item where some environmentalist was condemning a UK thermal plant that used biomass, arguing it put out MORE CO2 per MW of power than coal. That may be the case, because you can make coal burn hotter, and the second law of thermodynamics means you can extract more energy in the form of work. (Mind you, I have my doubts, since the electricity is generated from steam in both cases.) However, the criticism shows an inability to understand calculus. What is important is not the emissions right now, but those integrated over time. The biomass got its carbon from the atmosphere, say, forty years ago, and if you wish to sustain this exercise you plant trees that recover that CO2 over the next forty years. Burn coal and you are burning carbon that has been locked away for the last few million years.
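
To make the calculus point concrete, here is a minimal sketch with invented emission rates; the numbers are placeholders, and only the shape of the integral matters.

    # Net CO2 added to the atmosphere by year t from a plant running since
    # year 0. Fossil carbon is never recovered; biomass carbon is re-absorbed
    # by replanted trees over the following 40 years.
    def net_co2(t, rate, recovery_years=None):
        total = 0.0
        for y in range(t):
            total += rate                          # emitted in year y
            if recovery_years:                     # regrowth claws it back
                total -= rate * min(t - y, recovery_years) / recovery_years
        return total

    for t in (10, 40, 80):
        print(t, net_co2(t, 1.0), round(net_co2(t, 1.1, 40), 1))
    # Coal grows without bound; biomass, even assumed 10% dirtier per year,
    # plateaus at about 21 units once regrowth catches up.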

Interstellar Travel Opportunities.

As you may have heard, stars move. The only reason we cannot see this is that they are so far away it takes a long time to make a noticeable difference. Currently, the closest star to us is Proxima Centauri, which is part of the Alpha Centauri grouping. It is 4.2 light years away, and if you think that is attractive for an interstellar voyage, just wait a bit: in 28,700 years it will be a whole light year closer. That is a clear saving in travelling time, especially if you do not travel close to light speed.
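
As a piece of arithmetic on those two numbers alone, the implied average closing speed is quite modest:

    # Average closing speed implied by "one light year closer in 28,700 years".
    LY_KM = 9.4607e12                  # kilometres in a light year
    YEAR_S = 3.156e7                   # seconds in a year
    print(LY_KM / (28700 * YEAR_S))    # ~10.4 km/s, an ordinary stellar drift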

However, there have been closer encounters. Scholz’s star, which is a binary (a squib of a red dwarf plus a brown dwarf), came within 0.82 light years about 78,000 years ago. Our stone age ancestors would probably have been unaware of it, because it is so dim that even when that close it was still a hundred times too dim to be seen by the naked eye. There is one possible exception to that: red dwarfs periodically emit extremely bright flares, so maybe our ancestors would have seen a star appear from nowhere, then gradually disappear. Such an event might go down in their stories, particularly if something dramatic happened. There is one further possible downside for our ancestors: although it is unclear whether such a squib of a star was big enough, it might have exerted a gravitational effect on the Oort cloud, thus generating a flux of comets coming inwards. That might have been the dramatic event.

That star was too small to do anything to disrupt our solar system, but it is possible that much closer encounters in other solar systems could cause all sorts of chaos, including stealing a planet, or having one stolen. They could certainly disrupt a solar system, and it is possible that some of the so-called star-burning giants were formed in the expected places and were dislodged inwards by such a star. That happens when the dislodged entity has a very elliptical orbit that takes it closer to the star where tidal effects with the star circularise it. That did not happen in our solar system. Of course, it does not take a passing star to do that; if the planets get too big and too close their gravity can do it.

It is possible that a modestly close encounter with a star did have an effect on the outer Kuiper Belt, where objects like Eris seem to be obvious Kuiper Belt Objects, yet are rather far out and have very elliptical orbits. Such orbits would be expected to arise from one or more significant gravitational interactions.

The question then is, if a star passed closely, should people take advantage and colonise the new system? Alternatively, would life forms there have the same idea, if they were technically advanced? If you had the technology to do this, presumably you would also have the technology to know what was there. It is not as if you do not get warning. For example, if you are around in 1.4 million years, Gliese 710 will pass within 10,000 AU of the sun, well within the so-called Oort Cloud. Gliese 710 is about 60% the mass of the sun, which means its gravity could really stir up the comets in the Oort cloud, and our star will do exactly the same for the corresponding cloud of comets in its system. In a really close encounter it is not beyond the bounds of possibility that planetary bodies could be exchanged. If they were, the exchange would almost certainly lead to a very elliptical orbit, and probably at a great distance. You may have heard of the possibility of a “Planet 9” that is at a considerable distance and whose elliptical orbit has caused highly elliptical orbits in some trans-Neptunian objects. Either the planet, if it exists at all, or the elliptical nature of the orbits of bodies like Sedna, could well have arisen from a previous close stellar encounter.

As far as I know, we have not detected planets around this star. That does not mean there are none, because if we do not lie on the equatorial plane of that star we would not see much from eclipsing observations (and remember Kepler only looks at a very small section of the sky, and Gliese 710 is not in the original area examined), and at that distance any astronomer there with our technology would not see us. Which raises the question: if there were planets there, would we want to swap systems? If you accept the mechanism of how planets form in my ebook “Planetary Formation and Biogenesis”, and if the rates of accretion, after adjusting for stellar mass, were the same for both stars, then any rocky planet in the habitable zone is likely to be the Mars equivalent. It would be much warmer, and it may well be much bigger than our Mars, but it would not have plate tectonics, because its composition would not permit eclogite to form, which is necessary for pull subduction. With that knowledge, would you go?

Unexpected Astronomical Discoveries.

This week, three unexpected astronomical discoveries. The first relates to white dwarfs. A star like our sun is argued to eventually run out of hydrogen, at which point its core collapses somewhat and it starts to burn helium, which it converts to carbon and oxygen, and gives off a lot more energy. This is a much more energetic process than burning hydrogen to helium, so although the core contracts, the star itself expands and becomes a red giant. When it runs out of that, it has two choices. If it is big enough, the core contracts further and it burns carbon and oxygen, rather rapidly, and we get a supernova. If it does not have enough mass, it tends to shed its outer matter and the rest collapses to a white dwarf, which glows mainly due to residual heat. It is extremely dense, and if it had the mass of the sun, it would have a volume roughly that of Earth.

Because it does not run fusion reactions, it cannot generate heat, so it will gradually cool, getting dimmer and dimmer, until eventually it becomes a black dwarf. It gets old and it dies. Or at least that was the theory up until very recently. Notice anything wrong with what I have written above?

The key is “runs out”. The problem is that all these fusion reactions occur in the core, but what is going on outside the core? It takes light formed in the core about 100,000 years to get to the surface. Strictly speaking, that is calculated, because nobody has gone to the core of a star to measure it, but the point is made. It takes that long because the light keeps running into atoms on the way out, getting absorbed and re-emitted. But if light runs into that many obstacles getting out, why would you think all the hydrogen would work its way down to the core? Hydrogen is light, and it would prefer to stay right where it is. So even when a star goes supernova, there is still hydrogen in it. Similarly, when a red giant sheds outer matter and collapses, it does not necessarily shed all its hydrogen.
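
For what it is worth, the usual random-walk estimate reproduces that sort of timescale; the mean free path below is my own order-of-magnitude guess, not a measured value.

    # Photon random walk out of the sun: escape time ~ R**2 / (l * c),
    # where l is the mean free path between absorption/re-emission events.
    R = 7.0e8        # solar radius, metres
    c = 3.0e8        # speed of light, m/s
    l = 1.0e-3       # assumed mean free path, metres (order of magnitude)
    print(R**2 / (l * c) / 3.156e7)   # ~50,000 years; l ~ 0.5 mm gives ~100,000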

The relevance? The Hubble space telescope has made another discovery, namely that it has found white dwarfs burning hydrogen on their surfaces. A slightly different version of “forever young”. They need not run out at all, because interstellar space, and even intergalactic space, still has vast masses of hydrogen that, while thinly dispersed, can still be gravitationally acquired. The surface of the dwarf, having such mass and so little size, will have an intense gravity to make up for the lack of exterior pressure. It would be interesting to know the mechanism of the fusion. I would suspect it mainly involves the CNO cycle. What happens here is that protons (hydrogen nuclei) enter, in sequence, a nucleus that starts out as ordinary carbon-12 to make the element with one additional proton, which then either emits a gamma photon or decays to produce a positron and a neutrino, until the nucleus gets to nitrogen-15 (having been oxygen-15), after which, if it absorbs a proton, it spits out helium-4 and returns to carbon-12. The gamma spectrum (if it is there) should give us a clue.
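
Written out, the cycle just described is the standard sequence:

            carbon-12 + p → nitrogen-13 + γ
            nitrogen-13 → carbon-13 + e⁺ + ν
            carbon-13 + p → nitrogen-14 + γ
            nitrogen-14 + p → oxygen-15 + γ
            oxygen-15 → nitrogen-15 + e⁺ + ν
            nitrogen-15 + p → carbon-12 + helium-4

with the net result that four protons become one helium-4 nucleus.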

The second is the discovery of a new Atira asteroid, which orbits the sun every 115 days and has a semi-major axis of 0.46 A.U. The only known object in the solar system with a smaller semi-major axis is Mercury, which orbits the sun in 89 days. Another peculiarity of its orbit is that it can only be seen when it is away from the line of the sun, and as it happens, those times make it very difficult to see from the Northern Hemisphere. It would be interesting to know its composition. Standard theory has it that all the asteroids we see have been dislodged from the asteroid belt, because the planets would have cleaned out any such bodies that were there from the time of the accretion disk. And, of course, we can show that many asteroids were so dislodged, but many does not mean all. The question then is, how reliable is that proposed cleanout? I suspect, not very. The idea is that numerous collisions would give the asteroids an eccentricity that would lead them to eventually collide with a planet, so the fact they are there means they have to be resupplied, and the asteroid belt is the only source. However, I see no reason why some could not have avoided this fate. In my ebook “Planetary Formation and Biogenesis” I argue that the two possibilities would have clear compositional differences, hence my interest. Of course, getting compositional information is easier said than done.
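
As a quick consistency check on those orbital numbers, Kepler’s third law (the period in years equals the semi-major axis in AU to the power 1.5, for orbits about the sun) gives:

    # Kepler's third law check for the quoted orbits.
    for a in (0.387, 0.46):            # Mercury, then the new Atira asteroid
        print(a, 365.25 * a**1.5)      # ~88 days and ~114 days respectively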

The third “discovery” is awkward. Two posts ago I wrote how the question of the nature of dark energy might not be a question, because it may not exist. Well, no sooner had I posted than someone came up with a claim for a second type of dark energy. The problem is, if the standard model is correct, the Universe should be expanding 5 – 10% faster than it appears to be doing. (Now, some would say that indicates the standard model is not quite right, but apparently that is not an option when we can add in a new type of “dark energy”.) This only applied for the first 300 million years or so, and if true, the Universe has suddenly got younger. While it is usually thought to be 13.8 billion years old, this model has it at 12.4 billion years old. So while the model has “invented” a new dark energy, it has also lost 1.4 billion years in age. I tend to be suspicious of this, especially when even the proposers are not confident of their findings. I shall try to keep you posted.

Thorium as a Nuclear Fuel

Apparently, China is constructing a molten salt nuclear reactor to be powered by thorium, and it should be undergoing trials about now. Being the first of its kind, it is, naturally, a small reactor that will produce 2 megawatts of thermal power. This is not much, but it is important when scaling up technology not to make too great a leap, because if something in the engineering has to be corrected it is a lot easier if the unit is smaller. Further, while smaller is cheaper, a small unit is also more likely to show fluctuations, especially in temperature, but when the unit is smaller they are far easier to control. The problem with a very large reactor is that if something is going wrong it takes a long time to find out, and then it becomes increasingly difficult to do anything about it.

Thorium is a weakly radioactive metal that has little current use. It occurs naturally as thorium-232, which cannot undergo fission. However, in a reactor it absorbs neutrons and forms thorium-233, which has a half-life of 22 minutes and β-decays to protactinium-233. That has a half-life of 27 days, and β-decays to uranium-233, which can undergo fission. Uranium-233 has a half-life of 160,000 years, so weapons could be made and stored.
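
In summary, the breeding chain is:

            thorium-232 + n → thorium-233
            thorium-233 → protactinium-233 + β⁻   (half-life 22 minutes)
            protactinium-233 → uranium-233 + β⁻   (half-life 27 days)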

Unfortunately, 1.6 tonnes of thorium exposed to neutrons, given appropriate chemical processing, is sufficient to make 8 kg of uranium-233, and that is enough to produce a weapon. So thorium itself is not necessarily a form of fuel that is free of weapons production. However, to separate uranium-233 in a form suitable for a bomb, a major chemical plant is needed, and the separation needs to be done remotely, because contamination with uranium-232 is apparently possible, and its decay products include a powerful gamma emitter. Further, to make bomb material, the process has to be aimed directly at that goal. The reason is that the first step is to separate the protactinium-233 from the thorium, and because of the short half-life of thorium-233, only a small amount of the thorium gets converted at any one time. Because a power station will be operating more or less continuously, it should not be practical to use it to make fissile material for bombs.

The idea of a molten salt reactor is that the fissile material is dissolved in a liquid salt in the reactor core. The liquid salt also takes away the heat which, when the salt is cycled through heat exchangers, converts water to steam, and electricity is obtained in the same way as in any other thermal station. Indeed, China says it intends to continue using its coal-fired generators by taking away the furnaces and replacing them with molten salt reactors; much of the infrastructure would remain. Further, compared with the usual nuclear power stations, molten salt reactors operate at a higher temperature, which means electricity can be generated more efficiently.

One advantage of a molten salt reactor is that it operates at lower pressures, which greatly reduces the potential for explosions. Further, because the fuel is dissolved in the salt, you cannot get a meltdown. That does not mean there cannot be problems, but they should be much easier to manage. The great advantage of the molten salt reactor is that it burns its reaction products, and an advantage of a thorium reactor is that most of the fission products have shorter half-lives. Since each fission produces about 2.5 neutrons, a molten salt reactor further burns the larger isotopes that might be a problem, such as those of neptunium or plutonium formed from further neutron capture. Accordingly, the waste products do not pose such a potential problem.

The reason we don’t directly engage and make lots of such reactors is there is a lot of development work required. A typical molten salt mix might include lithium fluoride, beryllium fluoride, the thorium tetrafluoride and some uranium tetrafluoride to act as a starter. Now, suppose the thorium or uranium splits and produces, say, a strontium atom and a xenon atom. At this point there are two fluorine atoms left over, and fluorine is an extraordinarily corrosive gas. As it happens, xenon is not totally unreactive and it will react with fluorine, but so will the interior of the reactor. Whatever happens in there, it is critical that pumps, etc., keep working. Such problems can be solved, but it takes operating time to be sure they are solved. Let’s hope they are successful.

The Universe is Shrinking

Dark energy is one of the mysteries of modern science. It is supposed to amount to about 68% of the Universe, yet we have no idea what it is. Its discovery led to Nobel prizes, yet it is now considered possible that it does not even exist. To add or subtract 68% of the Universe seems a little excessive.

One of the early papers (Astrophys. J. 517: 565–586) supported the concept. The authors assumed type 1A supernovae always give out the same amount of light, so by measuring the intensity of that light and comparing it with the red shift, which indicates how fast the source is going away, they could assess whether the rate of expansion of the universe has been even over time. The standard theory at the time was that it was, expanding at a rate given by the Hubble constant (named after Edwin Hubble, who first proposed it). They examined 42 type 1A supernovae with red shifts between 0.18 and 0.83, and compared their results on a graph with the line expected from the Hubble constant, which is what you expect with zero acceleration, i.e. uniform expansion. Their results at a distance were uniformly above the line, and while there were significant error bars, because instruments were being operated at their extremes, the result looked unambiguous. The far distant supernovae were receding faster than expected from the nearer ones, and that could only arise if the rate of expansion were accelerating.

For me, there was one fly in the ointment, so to speak. The value of the Hubble constant they used was 63 km/s/Mpc. The modern value is more like 68 or 72; there are two values, depending on how you measure them, but both are somewhat larger than this. Now, if you have the speed wrong when you predict how far the light has travelled, the further away the source is, the bigger the error, which means you think the expansion has speeded up.
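
A toy illustration of the kind of bias I mean, with made-up velocities: if the assumed Hubble constant is off, the error in the predicted distance grows with distance, which is exactly the shape of discrepancy that gets read as a trend.

    # Distance predicted from recession velocity with an assumed H0, versus
    # a universe actually expanding uniformly at a different H0.
    H0_true, H0_assumed = 70.0, 63.0     # km/s/Mpc (illustrative values)
    for v in (10000, 30000, 60000):      # recession velocities, km/s
        d_true = v / H0_true             # actual distance, Mpc
        d_pred = v / H0_assumed          # predicted distance, Mpc
        print(v, round(d_pred - d_true, 1))   # the discrepancy grows with v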

Over the last few years there have been questions as to exactly how accurate this determination of acceleration really is. It has been suggested (arXiv:1912.04903) that the luminosity of these supernovae has evolved as the Universe ages, which has the effect that measuring distance this way leads to overestimating it. Different work (Milne et al. 2015. Astrophys. J. 803: 20) showed that there are at least two classes of 1A supernovae, blue and red, with different ejecta velocities, and if the usual techniques are used, the light intensity of the red ones will be underestimated, which makes them seem further away than they are.

My personal view is there could be a further problem. A type 1A supernova occurs when a star comes close to another star and begins stripping it of its mass until it gets big enough to ignite the explosion. That is why they are believed to have the same brightness: they ignite at the same mass, so the conditions, and hence the brightness, should be the same. However, this is not necessarily the case, because the outer layer, which generates the light we see, comes from the non-exploding star, and will absorb and re-emit energy from the explosion. Hydrogen and helium are poor radiators, but they will absorb energy. The brightest light might be expected to come from the heavier elements, and the amount of them increases as the Universe ages and atoms are recycled. That too might make the more distant supernovae seem further away than they are, which in turn suggests the Universe is accelerating its expansion when it isn’t.

Now, to throw a further spanner into the works, Subir Sarkar has added his voice. He is unusual in that he is both an experimentalist and a theoretician, and he has noted that the 1A supernovae, while taken to be “standard candles”, do not all emit the same amount of light; according to Sarkar, they vary by up to a factor of ten. Further, the fundamental data were previously unavailable, but in 2015 they became public. He did a statistical analysis and found that the data supported a cosmic acceleration, but only with a statistical significance of three standard deviations, which, according to him, “is not worth getting out of bed for”.

There is a further problem. Apparently the Milky Way is heading off in some direction at 600 km/s, this rather peculiar flow extends out to about a billion light years, and unfortunately most of the supernovae studied so far are in this region. This drops the statistical significance for cosmic acceleration to two standard deviations. He then accuses the previous supporters of confirmation bias: the initial workers chose an unfortunate direction to examine, and the subsequent ones “looked under the same lamppost”.

So, a little under 70% of what some claim is out there might not be. That is ugly. Worse, about 27% is supposed to be dark matter, and suppose that did not exist either, and the only reason we think it is there is because our understanding of gravity is wrong on a large scale? The Universe now shrinks to about 5% of what it was. That must be something of a record for the size of a loss.

Asteroid (16) Psyche – Again! Or Riches Evaporate, Again

Thanks to my latest novel “Spoliation”, I have had to take an interest in asteroid mining. I discussed this in a previous post (https://ianmillerblog.wordpress.com/2020/10/28/asteroid-mining/), in which I mentioned the asteroid (16) Psyche. As I wrote, there were statements saying the asteroid had almost unlimited mineral resources. Initially, it was estimated to have a density of about 7 g/cc, which would make it more or less solid iron. It should be noted this might well be a consequence of extreme confirmation bias. The standard theory has it that certain asteroids differentiated and had iron cores, then collided, and the rock was shattered off, leaving the iron cores. Iron meteorites are allegedly the result of collisions between such cores. If so, it has been estimated there have to be about 75 iron cores floating around out there, and since Psyche had a density so close to that of iron (about 7.87 g/cc) it had to be essentially solid iron. As I wrote in that post, “other papers have published values as low as 1.4 g/cm cubed, and the average value is about 3.5 g/cm cubed”. The latest value is 3.78 ± 0.34.

These varied numbers show how difficult such observations are. Density is mass per volume. We determine the volume from the size: we can measure the “diameter”, but the target is a very long way away and it is small, so it is difficult to get an accurate value. The next point is that it is not a true sphere, so there are extra “bits” of volume from hills, or “bits missing” from craters. Further, the volume depends on the diameter cubed, so if you make a ten percent error in the “diameter” you have roughly a 30% error overall. The mass has to be estimated from the asteroid’s gravitational effects on something else. That means you have to measure the distance to the asteroid, the distance to the other asteroid, and determine the difference from what is expected as they pass each other. This difference may be quite tiny. Astronomers are working at the very limit of their equipment.
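
The cubing is worth spelling out; a minimal sketch of the error propagation:

    # Volume goes as diameter cubed, so a relative error e in the diameter
    # gives roughly 3*e in the volume, and hence in the inferred density.
    for e in (0.05, 0.10):
        exact = (1 + e)**3 - 1             # exact fractional volume error
        print(e, round(exact, 3), 3 * e)   # exact value vs the 3*e rule
    # A 10% diameter error is a 33% volume error, i.e. about 30% overall.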

A quick pause for some silicate chemistry. Apart from granitic/felsic rocks, which are aluminosilicates, most silicates come in two classes of general formula: A – olivines, X2SiO4, or B – pyroxenes, XSiO3, where X is some mix of divalent metals, usually mainly magnesium or iron (hence the name mafic, the iron being ferrous). However, calcium is often present. Basically, these elements are the most common metals in the output of a supernova, with magnesium being the most common. If X is only magnesium, the density of the olivine (forsterite) is 3.27 and of the pyroxene (enstatite) 3.2. If X is only iron, the density of the olivine (fayalite) is 4.39 and of the pyroxene (ferrosilite) 4.00. Now we come to further confirmation bias: to maintain the iron content of Psyche, the density is compared to that of enstatite chondrites, and the difference is made up with iron. Another way to maintain the concept of “free iron” is the proposition that the asteroid is made of “porous metal”. How do you make that? A porous rock, like pumice, is made by a volcano spitting out magma with water dissolved in it; as the pressure drops, the water turns to steam. However, you do not get any volatile to dissolve in molten iron.
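
To see how the “make up the difference with iron” argument works, here is a minimal two-component mixing sketch; pure enstatite and zero porosity are my own simplifying assumptions.

    # Iron volume fraction needed for a silicate/iron mix to match the
    # measured bulk density (no porosity assumed).
    rho_rock, rho_iron = 3.2, 7.87         # enstatite and iron, g/cc
    for rho_bulk in (3.44, 3.78, 4.12):    # measured 3.78 +/- 0.34
        f = (rho_bulk - rho_rock) / (rho_iron - rho_rock)
        print(rho_bulk, round(100 * f, 1))   # percent iron by volume
    # Roughly 5 to 20% iron: the iron content of fairly ordinary rock, not
    # of a solid metal core.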

Another reason to support the iron concept was that the reflectance spectrum was “essentially featureless”. The required features come from specific vibrations, and a metal does not have any; neither does a rough surface that scatters light. The radar albedo (how bright it is to reflected radar) is 0.34, which implies a surface density of 3.5, which is argued to indicate either metal with 50% porosity, or solid silicates (rock). It also means no core is predicted. The “featureless spectrum” was claimed to have an absorption at 3 μm, indicating hydroxyl, which indicates silicate. There is also a signal corresponding to an orthopyroxene. The emissivity indicates a metal content greater than 20% at the surface, but if this were metal, there should be a polarised emission, and that is completely absent. At this point, we should look more closely at what “metal” means. In many cases, while the word is used to convey what we would consider a metal, the actual usage includes chemical compounds of a metallic element. The iron may be present as iron sulphide, as the oxide, or, as I believe, as the silicate. I think we are looking at the iron content of average rock. Fortune does not await us there.

In short, the evidence is somewhat contradictory, in part because we are using spectroscopy at the limits of its usefulness. NASA intends to send a mission to evaluate the asteroid and we should wait for that data.

But what about iron-cored asteroids? We know there are metallic iron meteorites, so where did they come from? In my ebook “Planetary Formation and Biogenesis” I note that the iron meteorites, from isotope dating, are amongst the oldest objects in the solar system, so I argue they were made before the planets, and there were a large number of them, most of which ended up in planetary cores. The meteorites we see, if that is correct, were never accreted, and finally struck a major body for the first time.

Food on Mars

Settlers on Mars will have needs, but the most obvious ones are breathing and eating, and both of these are likely to involve plants. Anyone thinking of going to Mars should think about these, and if you look at science fiction the answers vary. Most stories simply assume everything is taken care of, which is fair enough for a story. Then there is the occasional story with slightly more detail. Andy Weir’s “The Martian” is simple: he grows potatoes. Living on such a diet would be a little spartan, but his hero had no option, being essentially a Robinson Crusoe without a Man Friday. The oxygen seemed to be a given. The potatoes were grown in what seemed to be a pressurised plastic tent, and to get water he catalytically decomposed hydrazine to make hydrogen, and then he burnt that. A plastic tent would not work: the UV radiation would first make the tent opaque, so the necessary light would not get in very well, then the plastic would degrade. As for making water, burning the hydrazine as it was would be sufficient; but better still, would they not put their base where there was ice?

I also have a novel (“Red Gold”) in which a settlement tries to get started. Its premise is that there is a main settlement with fusion reactors, and hence the energy to make anything, but the main hero is “off on his own” and has to make do with less, though he can bring things from the main settlement. He builds giant “glass houses” made with layers of zinc-rich glass that shield the inside from UV radiation. Stellar plasma ejections are diverted by a superconducting magnet at the L1 position between Mars and the sun (proposed years before NASA suggested it), and the hero lives in a cave. That would work well for everything except cosmic radiation, but is that going to be that bad? Initially everyone lives on hydroponically grown microalgae, but the domes permit ordinary crops. The plants grow in treated soil, but as another option a roof is put over a minor crater and water provided (with solar heating from space) in which macroalgae and marine microalgae grow, as well as fish and other species, like prawns. The atmosphere is nitrogen, separated from the Martian atmosphere, plus some carbon dioxide, and the plants make oxygen. (There would have to be some oxygen to get started, but plants on Earth grew without oxygen initially.)

Since then there have been other quite dramatic proposals from more official sources that assume a lot of automation to begin with. One of the proposals involves constructing huge greenhouses by covering a crater or valley. (Hey, I suggested that!) But the roof is flat and made of plastic, the plastic being polyethylene 2,5-furandicarboxylate, a polyester made from carbohydrates grown by the plants. This is also used as a bonding agent to make a concrete from Martian rock. (In my novel, I explained why a cement is very necessary, but there are limited uses.) The big greenhouse model has some limitations. In this design, the roof is flat and in essentially two layers, and in between are vertical stacks of algae growing in water. The extra value here is that water filters out the effect of cosmic rays, although you need several meters of it. Now we have a problem. The idea is that underneath this there is a huge habitat, and every cubic meter of water is one tonne of mass, which on Mars presses on the lower flat deck with about 0.4 tonne of force. If this bottom deck is the opaque concrete, then something bound by plastic adhesion will slip. (Our concrete on bridges is bound inorganically and chemically, not physically, and further there is steel reinforcing.) Below this there would need to be many weight-bearing pillars. And there would need to be light generation between the decks (to get the algae to grow) and down below. Nuclear power would make this easy. Food can be grown as algae between the decks, or in the ground down below.
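
To get a feel for the loads involved, a rough sketch (my numbers, not the proposal’s):

    # Load on the lower deck from a water layer of depth d at Mars gravity.
    rho = 1000.0            # kg per cubic metre of water
    g_mars = 3.71           # Mars surface gravity, m/s^2
    for d in (1, 3, 5):     # metres of water; several are needed for shielding
        print(d, rho * g_mars * d / 1000)   # pressure on the deck, kPa
    # 5 m of water presses on each square metre of deck with ~18.6 kPa, about
    # 1.9 tonnes-force per square metre, which the deck and pillars must carry.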

As I see it, construction of this would take quite an effort and a huge amount of materials. The concept is that the plants could be grown to make the cement to make the habitat, but hold on: where are the initial plants going to grow, and who or what does all the chemical processing? The plan is to have all that in place, built by robots, before anyone gets there, but I think that is greatly overambitious. In “Red Gold” I had the glass made from regolith processed with the fusion energy. The advantage of glass over this new suggestion is weight; even on Mars, with its lower gravity, millions of tonnes remain a serious weight. The first people there will have to live somewhat more simply.

Another plan that I have seen involves finding a frozen lake in a crater and excavating an “under-ice” habitat. No shortage of water, or of screening from cosmic rays, but a problem as I see it is that said ice will melt from the heat, erode the bottom of the sheet, and eventually the sheet will collapse. Undesirable, that.

All of these “official” options use artificial lighting. Assuming a nuclear reactor, that is not a problem in itself, although it would be for the settlement under the ice because heat control would be a problem. However, there is more to getting light than generating energy. What gives off the light, and what happens when its lifetime expires? Do you have to have a huge number of spares? Can they be made on Mars?

There is also the problem of heat. In my novel I solved this with mirrors in space focussing more sunlight on selected spots, and of course this also provides light to help plants grow, but if you are going to heat from fission power, a whole lot more electrical equipment is needed. Many more things to go wrong, and when it could take two years to get a replacement delivered, complicated is what you do not want. It is not going to be that easy.

The Fusion Energy Dream

One of the most attractive options for our energy future is nuclear fusion, where we can turn hydrogen into helium. Nuclear fusion works, even on Earth, as we can see when a hydrogen bomb goes off. The available energy is huge. Nuclear fusion will solve our energy crisis, we have been told, and it will be available in forty years. That is what we were told about 60 years ago, and you will usually hear the same forty-year prediction now!

Nuclear fusion, you will be told, is what powers the sun; however, we won’t be doing what the sun does any time soon. You may guess there is a problem, in that the sun is not a spectacular hydrogen bomb. What the sun does is to squeeze hydrogen atoms together to make the lightest isotope of helium, i.e. 2He. This is extremely unstable, and the electric forces will push the protons apart in an extremely short time; a billionth of a billionth of a second might be the longest it can last, and probably not even that. However, if it can acquire an electron, or eject a positron, before it decays, it turns into deuterium, which is a proton and a neutron. (The sun also uses the carbon-nitrogen-oxygen cycle to convert hydrogen to helium.) The difficult thing that a star does, and that we will not do any time soon, is to make neutrons (as opposed to freeing them).

The deuterium can then fuse to make helium, usually first with another proton to make 3He, after which two 3He fuse to make 4He and two protons. Each fusion makes a huge amount of energy, and the star works because the immense pressure at the centre allows the occasional making of deuterium in any small volume. You may be surprised by the use of the word “occasional”: the reason the sun gives off so much energy is simply that it is so big. Occasional is good. The huge amount of energy released relieves some of the pressure caused by the gravity, and this allows the star to live a very long time. At the end of a sufficiently large star’s life, gravity compresses the material sufficiently that carbon and oxygen atoms fuse, and this gives off so much energy that the increase in pressure causes the reaction to go out of control, and you have a supernova. A bang is not good.
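
Written out, the branch just described is:

            p + p → 2He → d + e⁺ + ν   (the fleeting 2He step is the bottleneck)
            d + p → 3He + γ
            3He + 3He → 4He + 2p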

The Lawrence Livermore National Laboratory has been working on fusion and has claimed a breakthrough. Their process involves firing 192 laser beams onto a hollow target about 1 cm high and a few millimeters in diameter, which is called a hohlraum. This has an inner lining of gold and contains helium gas, while at the centre is a tiny capsule filled with deuterium and tritium, the hydrogen isotopes with one or two neutrons in addition to the required proton. The lasers heat the hohlraum so that the gold coating gives off a flux of X-rays. The X-rays heat the capsule, causing material on the outside to fly off at speeds of hundreds of kilometers per second. Conservation of momentum leads to the implosion of the capsule, which gives, hopefully, high enough temperatures and pressures to fuse the hydrogen isotopes.

So what could go wrong? The problem is the symmetry of the pressure. Suppose you had a spherical bag of gel that was mainly water, say the size of a football, and you wanted to squeeze all the water out to get a sphere that contained only the gelling solid. The difficulty is that the pressure of a fluid inside a container is equal in all directions (leaving aside the effects of gravity). If you squeeze harder in one place than another, the fluid relays the extra force per unit area to somewhere the external pressure is weaker, and your ball expands in that direction. You are fighting jelly! Obviously, the physics of such fluids gets very complicated. Everyone knows what is required, but nobody knows how to fill the requirement. When the pressure is unequal in different places, the effects are predictably undesirable, but stopping it from being unequal is not so easy.

The first progress was apparently to make the laser pulses more energetic at the beginning. The net result was to get up to 17 kJ of fusion energy per pulse, an improvement on the original 10 kJ. The latest shot produced 1.3 MJ, which was equivalent to 10 quadrillion watts of fusion power for 100 trillionths of a second. An energy output of 1.3 MJ from such a small vessel may seem a genuine achievement, and it is, but there is further to go. The problem is that the energy input to the lasers was 1.9 MJ per pulse. It should be realised that that energy is not lost; it is still there, so the actual output of a pulse would be 3.2 MJ. The problem is that the output, which includes the kinetic energy of the neutrons etc. produced, is all heat, whereas the input energy was electricity, and we have not included the losses incurred when converting electricity to laser output. Converting that heat back to electricity will lose quite a bit, depending on how it is done; if you use the heat to boil water, the losses are usually around 65%. In my novels I suggest using the magnetohydrodynamic effect, which extracts electricity from the high velocity of the particles in the plasma. This has been made to work on plasmas made by burning fossil fuels, roughly doubling the efficiency of the usual approach, but controlling plasmas from nuclear fusion would be far more difficult. Again, very easy to do in theory; very much less so in practice. However, the challenge is there. If we can get sustained ignition, as opposed to such short pulses, the amount of energy available is huge.
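
The bookkeeping, with my own illustrative assumption of 35% heat-to-electricity conversion and ignoring the (large) losses in powering the lasers themselves:

    # Rough energy accounting for the quoted shot.
    E_laser = 1.9                    # MJ of laser energy delivered
    E_fusion = 1.3                   # MJ of fusion energy produced
    E_heat = E_laser + E_fusion      # all of it ends up as heat: 3.2 MJ
    E_elec = 0.35 * E_heat           # electricity back at 35% efficiency
    print(E_elec, E_elec / E_laser)  # ~1.1 MJ out for 1.9 MJ in: a net loss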

Sustained fusion means the energy emitted from the reaction is sufficient to keep it going with fresh material injected, as opposed to having to set up containers within containers at the dead centre of a multiple laser pulse. Now, a plasma at over 100,000,000 degrees Centigrade should be sufficient to keep the fusion going. Of course, that will involve even more problems: how to contain a plasma at that temperature; how to get the fuel into the reaction without melting the feed tubes or dissipating the hydrogen; how to get the energy out in a usable form; and how to cool the plasma sufficiently? Many questions; few answers.