Hack and be Hacked

Much has been made of hacking over the last few months, so for two reasons I cannot resist commenting. The first is obvious, while the second will become clearer later. However, the issue for me is that while there has been a lot of noise, we are strangely short of light, i.e. evidence. So what can we accept? Obviously, everyone will have their own criteria, but here is my view. The first thing to accept is that spying has been going on from time immemorial. Hacking is simply a more recent addition to the spying (if they are doing it) or intelligence gathering (if you are doing it) toolkit.

The first accusation was that the Russians hacked the Democrats and swung the election, thus electing Trump instead of Clinton. Apparently a document exists, produced by various intelligence agencies including the FBI, stating that they have high confidence this occurred, although interestingly, the NSA gave it only moderate confidence. Given the political climate and the positions of the other agencies, that probably means the NSA doubts it, and the NSA is probably the agency best placed to assess hacking.

Do you see what is wrong with the accusation? It is a multiple statement, and the simplest error is that if one part is believed, people believe it all. The first statement is, “The Democrats were hacked.” Strangely, there is very little hard evidence that this happened, but I am reasonably convinced it probably did. One fact that sways me is the accusation that their security was so lax a child could have hacked them. How did the accuser know if he did not try? The second statement is, “Some Russians did it.” Some hackers’ IDs have been published, and while this is hardly proof, I can accept it as quite possible. Another implied statement is, “Putin ordered it.” There is absolutely no evidence for that at all. Maybe he did, although two of the named hackers were more like private individuals, and why would he use them?

However, then we get to the real crunch: “the Russians then swung the election.” To me, this is highly implausible, and the only evidence produced is that some unknown hacker provided information to Wikileaks. My question is, even if the Russians hacked the Democrats, how did that affect the election? Is the average American voter a devoted fan of Wikileaks? What did the Wikileaks documents say? I don’t know, and if I don’t know and I am reasonably interested, why would the average voter, who probably does not care a toss about hacked emails, care? My guess is, the Russians are busy collecting whatever intelligence they can, as are the US agencies. They are not trying to influence internal politics, because that would backfire in a big way; they simply want to know what to expect. I could be wrong on that.

The next accusation is that those dastardly Russians hacked Angela Merkel. Probably true, but then again, the main evidence we have is an admission that the NSA had done the same thing some time before. Sounds like life in government. Following that, we have Trump accusing Obama of having hacked, or spied on, him during the election campaign. Again, not a shred of evidence has been produced. And again, we have the problem: did it happen, and if so, who did it? My personal view is it is highly unlikely President Obama did that.

The latest accusation is that the Russians hacked Yahoo. Here we at least have evidence for part of the multiple statement: Yahoo confirms it was hacked. The Americans have accused four Russians, two of whom are private-sector criminals, while two were part of the FSB, the Russian state security service. This is where it gets interesting. The Russian government had apparently arrested at least one of the FSB men for illegally hacking Putin. This suggests to me that the accused Russians may well have done it, but that they were not acting on behalf of the Russian government, other than that two of them were drawing FSB pay.

The following is a good example of why you need firm facts. For those who know nothing about rugby, admittedly a minor sport, the All Blacks, New Zealand’s national team, recently played the Australian national team. The All Blacks arrived at the site of their next game in Australia about six days ahead of the game, and apparently found that the room allocated for team talks was bugged. Most people would jump to the conclusion that the Australians did this, because the Australians would seem to be those with the obvious motive, but seemingly they are wrong. The Australian police, after some serious investigation, found that the perpetrator was the man the All Blacks had hired to monitor security. So you see, jumping to conclusions can lead you badly astray. That is why I argue we need evidence.

So where does that leave me? Actually enthused. After I published ‘Bot War, I needed another project, and I decided to write about espionage and hacking. The trouble was, I didn’t really know much about it, and some time after I started I was seriously questioning whether this was a sensible project. After all the disclosures about these hacking activities, I have been handed a whole lot of free research. Of course I don’t know the techniques of hacking, but there is enough information out there to make the background at least reasonably plausible. So some good comes out of this, at least for me.

Settling Mars and High Energy Solar Particles

Recently, the US government announced that sending people to Mars was a long-term objective, and accordingly it is worth looking at some of the hazards. One that often gets an airing is the fact that the sun sends out bursts of high-energy particles, mainly protons, i.e. hydrogen atoms with their electrons stripped off. If these strike living matter, they tend to smash molecules, as their energy is far greater than the energy of a chemical bond. Usually they pose little hazard to us, though, because they are diverted by the earth’s magnetic field. This solar wind is the primary cause of auroras: the particles knock electrons out of gas molecules, and the light is generated when the electrons return. As you might guess, if these particles can knock enough electrons out of molecules to generate that light show, the particle flux would be quite undesirable for DNA, and a high cancer rate would be expected unless some form of protection could be provided.

The obvious method is to divert the particles, and electromagnetism provides a solution. When a moving charged particle strikes a magnetic field, there is a force that bends its path. The actual force is calculated through something called a vector cross product, but in simple terms the bending force increases with the velocity of the particle, the strength of the magnetic field, and the angle between the path and the field. The force is at a maximum when the path is at right angles to the magnetic field, and is zero when the motion is parallel to it. The question then is, can we do anything about the solar particles with this?
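As a toy illustration of that cross product rule, here is a short Python sketch; the numbers are made up for illustration, not a real solar-wind calculation:

```python
import numpy as np

def lorentz_force(q, v, B):
    """Magnetic force on a moving charge: F = q (v x B).
    q in coulombs, v in m/s, B in tesla; returns newtons."""
    return q * np.cross(v, B)

q = 1.602e-19                     # charge of a proton, C
B = np.array([0.0, 0.0, 1.0e-5])  # magnetic field along z, T

# Path at right angles to the field: maximum bending force, |F| = q|v||B|
v_perp = np.array([4.0e5, 0.0, 0.0])   # m/s, along x
F = lorentz_force(q, v_perp, B)

# Path parallel to the field: no force at all
v_para = np.array([0.0, 0.0, 4.0e5])
F_zero = lorentz_force(q, v_para, B)
```

The perpendicular case gives a force of magnitude q|v||B|, while the parallel case gives exactly zero, which is why any shield’s field has to be aligned across the particles’ line of flight.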

The first option would be to generate a magnetic field within Mars. Unfortunately, that is not an option, because we have no idea how to generate a dynamo within the planet, nor do we know whether it is actually possible. The usual explanation for the earth’s magnetic field is that it is generated through the earth’s rotation and its iron core. Obviously, there is more to it than that, but one thing we know is that the density of Mars is about 3.9 g/cm³ whereas Earth’s is about 5.5 g/cm³. Basalt, the most common mix of metal silicates, has a density ranging from 3 to 3.8 g/cm³, although of course density also increases with compression. This suggests that Mars does not have much of an iron core. As far as I am aware, it is also unclear whether the core of Mars is solid or liquid. Accordingly, there appears to be no reasonable hope of magnetizing Mars.

The alternative is to put an appropriately aligned, strong magnetic field on the line between Mars and the sun. The problem is that bodies orbiting the sun only match Mars’s angular rotation about the sun if they are at the same distance from the sun as Mars, or, on average, if they are orbiting Mars, in which case they cannot sit between Mars and the sun; and if they are not between the two all the time, they are essentially useless.

However, for the general case where a medium-sized body orbits a much larger one, such as a planet around a star, or the Moon around Earth, there are five points where a much smaller object can orbit in a fixed configuration with respect to the other two. These are the Lagrange points, named after the French mathematician who found them, and the good news is that L1, the first such point, lies directly between the planet and the star. Thus from Mars, a satellite at L1 would always appear to “eclipse” the sun, although of course it would be far too small to be noticed.
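For a rough sense of where L1 sits, the small-mass approximation r ≈ R(m/3M)^(1/3) can be evaluated in a few lines of Python; the masses and orbital radius below are standard textbook values:

```python
# Approximate distance of the Sun-Mars L1 point from Mars,
# using the small-mass approximation r ~ R * (m / 3M)^(1/3).
M_SUN = 1.989e30     # mass of the sun, kg
M_MARS = 6.417e23    # mass of Mars, kg
R_ORBIT = 2.279e11   # mean Sun-Mars distance, m

r_L1 = R_ORBIT * (M_MARS / (3.0 * M_SUN)) ** (1.0 / 3.0)
print(f"L1 lies about {r_L1 / 1e9:.2f} million km sunward of Mars")  # about 1.08 million km
```

That is roughly three times the Earth–Moon distance, close enough that a satellite there genuinely stays between Mars and the sun at all times.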

Accordingly, a solution to the problem of high-energy solar particles striking settlers on Mars would be to put a sufficiently strong magnetic field at the Mars–Sun L1 position, so as to bend the path of the solar particles away from Mars. What is interesting is that very recently Jim Green, NASA Planetary Science Division Director, proposed putting such a magnetic shield at exactly that position. For a summary of Green’s proposal, see http://www.popularmechanics.com/space/moon-mars/a25493/magnetic-shield-mars-atmosphere/ .

The NASA proposal focused more on reducing the stripping of the atmosphere by the solar wind. According to Green, such a shield could help Mars achieve half the atmospheric pressure of Earth in a matter of years, on the assumption that frozen CO2 would sublimate, thus starting the process of terraforming. I am not so sure of that, because stopping radiation from hitting Mars should not lead to particularly rapid sublimation. It is true that stopping such charged particles would help stop gas being knocked off the outer atmosphere, but the evidence we have is that such stripping is a relatively minor effect.

The other point is that I made this suggestion in my ebook novel Red Gold, published in 2011, which is about the colonization of Mars. My idea there was to put a satellite at L1 with solar panels and superconducting magnets. If the magnet coils can be shielded from sunlight, even the high-temperature superconductors we have now should be adequate, in which case no cooling might be required. Of course the novel is science fiction, but it is always good to see NASA validate one of your ideas, so I am rather pleased with myself.

Trappist-1, and Problems for a Theoretician

In my previous post, I outlined the recently discovered planets around Trappist-1. One interesting question is, how did such planets form? My guess is the standard theory will have a lot of trouble explaining this, because what we have is a very large number of earth-sized rocky planets around a rather insubstantial star. How did that happen? However, the alternative theory outlined in my ebook, Planetary Formation and Biogenesis, also has a problem. I gave an equation that very approximately predicts what you will get based on the size of the star, and this equation was based on the premise that the chemical or physicochemical interactions that lead to accretion of planets while the star is accreting follow the temperatures in various parts of the accretion disk. On that basis, the accretion disk around Trappist-1 should not have got hot enough where any of the rocky planets are, and more importantly, it should not have got hot enough over such a wide radial distance. Worse still, the theory predicts different types of planets in different places, and while we cannot eliminate this possibility for Trappist-1, it seems highly likely that all the planets located so far are rocky planets. So what went wrong?

This illustrates an interesting aspect of scientific theory. The theory was developed in part to account for our solar system, and solar systems around similar stars. The temperature in the initial accretion disk where the planets form around G-type stars depends on two major factors. The first is the loss of potential energy as the gas falls towards the star. The temperature at a specific distance on this account depends on the gravitational potential at that point, which in turn depends on the mass of the star, and on the rate of gas flowing through that point, which, from observation, is very approximately proportional to the square of the mass of the star. So overall, that part is very approximately proportional to the cube of the stellar mass. The second dependency is on the amount of heat radiated to space, which in turn depends on the amount of dust, the disk thickness, and the turbulence in the disk. Overall, that is approximately the same for similar stars, but it is difficult to know how the Trappist-1 disk would cool. So, while the relationship is too unreliable for predicting where a planet will be, it should be somewhat better at predicting where the others will be, and what sort of planets they will be, once you can clearly identify what one of them is. Trappist-1 has far too many rocky planets. So again, what went wrong?

The answer is that in any scientific theory we very frequently have to make approximations. In this case, because of the dust and because of the distance, I assumed that for G-type stars the heat from the star itself was irrelevant. For example, in the theory Earth formed from material that had been processed to at least 1550 degrees Centigrade. That is consistent with the heat relationship in which Jupiter forms where water ice is beginning to think about subliming, which is also part of the standard theory. Since the dust should block much of the star’s light, the star might be adding at most a few tens of degrees to Earth’s temperature while the dust was still at its initial concentration, and given the uncertainties elsewhere, I ignored that.

For Trappist-1 it is clear that such an omission is not valid. The planets would have accreted from material that was essentially near the outer envelope of the star as it accreted, the star would appear large, the distance shielded by dust would be small, the mass flowing through would be much more modest, and so the accreting star would itself be a major source of heat.

Does this make sense? First, there are no rocky bodies of any size closer to our sun than Mercury. The reason, in this theory, is that inside that point the dust got so hot it vaporized and joined the gas flowing into the star. It never got that hot at Trappist-1, and that in turn is why Trappist-1 has so many rocky planets. The general coolness, due to the (relatively) small amount of mass falling inwards, meant the heat needed to form rocky planets occurred only very close to the star; but because of the relative size of the stellar envelope, that temperature extended further out than mass flow alone would predict, and the fact that the star could not be even vaguely considered a point source meant that the zone for rocky planets was sufficiently extended that a larger number of them was possible.

There are planets close to other stars, and they are usually giants. These almost certainly did not form there, and the usual explanation is that when very large planets get too close together, their orbits become unstable, and in a form of gravitational billiards they start throwing each other around, some being ejected from the solar system and some ending up very close to the star.

So, what does that mean for the planets of Trappist-1? From the densities quoted in the Nature paper, if they are right (and the authors give a wide range of uncertainty), the fact that the sixth one out has a density approaching that of Earth means the planets have surprisingly large iron cores, which may mean most of them accreted more or less the same way Mercury or Venus did, i.e. at relatively high temperatures, in which case they will have very little water on them. Furthermore, it has also been determined that these planets will be experiencing a rather uncomfortable amount of X-rays and extreme ultraviolet radiation. Do not book a ticket to go to them.

Trappist-1

By now, I suspect everybody has heard of Trappist-1, a totally non-spectacular star about 39 light years from Earth, which in astronomical terms is a really close neighbour. I have seen a number of people on the web speculating about going there some time in the not too distant future. Suppose you could average a speed of 50,000 kilometers per hour; by my calculation (hopefully not hopelessly wrong) it would take about 850,000 years to get there. Since chemical rockets cannot do significantly better, don’t book your ticket. Is it possible for a person to get to such stars? It would be if you could get to a speed sufficiently close to light speed. Relativity tells us that as you approach light speed your aging process slows down, and if you went at light speed (theoretically impossible if you have mass) you would not age, even though the trip would take 39 years as seen by an observer on earth. (Of course, assuming an observer could see your craft, the journey would seem to take 78 years at light speed, because the signal has to get back.) It is not just aging; everything you do slows down the same way, so if you were travelling near light speed you would think the star was surprisingly close.
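The arithmetic behind that travel-time estimate is easy to check; a couple of lines of Python gives a figure in the 840,000–850,000 year range:

```python
# Travel time to Trappist-1 at a constant 50,000 km/h.
LIGHT_YEAR_KM = 9.4607e12           # kilometres in one light year
distance_km = 39 * LIGHT_YEAR_KM
speed_km_per_h = 50_000.0

hours = distance_km / speed_km_per_h
years = hours / (24 * 365.25)
print(f"about {years:,.0f} years")  # roughly 840,000 years
```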

The chances are you will also have seen the comment that Trappist-1 is only a little bit bigger than Jupiter. In terms of mass, Trappist-1 is about 8% the mass of the sun, which certainly makes it a small star as stars go, but it is about 84 times the mass of Jupiter. In my book, 84 times as massive is not exactly “a little bit bigger”. Trappist-1 is certainly not as hot as the sun; its surface temperature is about 40% of the sun’s. The power output of the star is also much lower, because power radiated per unit area is proportional to the fourth power of the temperature, and of course the area is much smaller. In this context, there are a lot of planets bigger than Jupiter, many of them about 18 times as big, but they are also too small to ignite thermonuclear reactions.
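That fourth-power scaling can be made concrete. Taking the 40% temperature ratio from the text, and assuming a radius of about 0.12 solar radii (a commonly quoted figure for Trappist-1, not something established above), the total power output comes out at well under a thousandth of the sun’s:

```python
# Stefan-Boltzmann scaling: L / L_sun = (R / R_sun)^2 * (T / T_sun)^4
r_ratio = 0.12   # assumed radius of Trappist-1 in solar radii
t_ratio = 0.40   # surface temperature ratio, from the text

luminosity_ratio = r_ratio ** 2 * t_ratio ** 4
print(f"Trappist-1 radiates about {luminosity_ratio:.1e} of the sun's power")
# about 3.7e-04 of the sun's output
```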

Nevertheless the system has three “earth-sized” planets in the “habitable” zone, and one that would be too hot for liquid water, with a surface temperature predicted to be about 127 degrees Centigrade provided it is simply in equilibrium with incoming stellar radiation. Of course, polar temperatures could be significantly cooler. The next three out would have surface temperatures of about 68 degrees C, 15 degrees C (which is rather earth-like) and minus 22 degrees C. Such temperatures do not take into account any greenhouse effect from an atmosphere, and the planet at 68 degrees could equally end up something like a Venus. Interestingly, the Nature paper describing them argues that it is planets e, f and g that could have water oceans, despite having temperatures, without any greenhouse effect, of minus 22, minus 54, and minus 74 degrees C. This arises from certain modeling, which I find unexpected. The planets are likely to be tidally locked, i.e. like the moon, the same face will always be directed towards the star.
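Equilibrium temperatures like these come from balancing absorbed starlight against re-radiated heat; for a zero-albedo planet that redistributes heat evenly, the standard formula is T_eq = T_star * sqrt(R_star / (2a)). Here is a sketch for planet d, where all three input values are assumed, commonly quoted figures rather than anything stated above:

```python
import math

T_STAR = 2550.0   # assumed surface temperature of Trappist-1, K
R_STAR = 8.1e7    # assumed stellar radius, m (~0.12 solar radii)
a_d = 3.3e9       # assumed orbital radius of planet d, m (~0.022 AU)

# Zero-albedo equilibrium temperature with full heat redistribution
T_eq = T_STAR * math.sqrt(R_STAR / (2.0 * a_d))
print(f"T_eq for planet d: about {T_eq - 273.15:.0f} degrees C")
```

With these inputs the answer lands near the earth-like figure quoted above; albedo and any greenhouse effect would shift it one way or the other.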

So, there is excitement: here we have potential habitable planets. Or do we?

In terms of size, yes we do. The planetary radii for many are quite close to Earth’s, although d, the one with the most earth-like temperatures, has a radius of about 0.77 Earth’s. Most of the others are a shade larger than Earth, at least in terms of radius.

Another interesting thing is there are estimates of the planetary masses. How they get these is interesting, given the complexity of the system. The planets were detected by their transits across the face of the star, and such transits have a periodic time, or what we would call a year, i.e. how long it takes to get to the next transit. Thus the closest, b, has a periodic time of 1.51087081 days, while the furthest out has a period of about 20 days. The masses can then be determined by mutual gravitational effects. Since the planets are close together, suppose one is being chased by another around transit time. The one behind will be pulled along a bit and the one in front retarded a bit, and that will lead to the transits being not quite on time. Unfortunately, because of the rather significant uncertainties in just about every variable, the masses are somewhat uncertain; thus the mass of the inner one is 0.85 ± 0.72 Earth masses. The second one is calculated to have a density of 1.17 times that of Earth, which means it has a huge iron core. However, with the exception of the outer one, they all have densities that strongly suggest rocky planets, most with iron cores.
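Kepler’s third law turns those periods into orbital distances. Using the roughly 8% of a solar mass quoted above, the innermost planet sits remarkably close to its star:

```python
import math

G = 6.674e-11                     # gravitational constant, SI units
M_STAR = 0.08 * 1.989e30          # ~8% of a solar mass, kg
period_s = 1.51087081 * 86400.0   # period of planet b, in seconds

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)
a_m = (G * M_STAR * period_s ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print(f"planet b orbits at about {a_m / 1.496e11:.4f} AU")  # about 0.011 AU
```

That is nearly a hundred times closer to its star than Earth is to the sun, which is how such a dim star can still keep these planets warm.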

Suppose we went there. On the most “earth-like” planet we might have trouble growing plants. The reason is that the light intensity is very low, more like that on earth just after sunset. The temperatures are adequate because the star puts out much of its energy as infrared radiation, and in general that is not adequate to power any obvious photochemistry, although it is good for warming things. The web informs us that astronomers are excited by this discovery because these planets give us the best chance yet of analyzing the atmospheres of alien planets.

The reason is that the planets orbit in a plane such that they pass in front of the star from our observation point, and that gives us an excellent chance to measure their size, and eventually also to analyse their atmospheres if they contain certain sorts of gases. As infrared radiation passes through material, the energies corresponding to molecular vibrations get absorbed. So, if we record the spectrum of the stellar radiation when a planet passes in front of the star, besides the main body of the planet lowering the intensity of all the radiation, wherever there is an energy corresponding to a molecular vibration there will be a further absorption, so there will be little spikes on the overall dip. Such absorption spectra are often used by chemists to help identify what they have. This only identifies the class of compound, because all compounds with the same functional group absorb the same sort of radiation, but as far as gases go, there are not very many of them and we should be able to identify them with quite a degree of confidence, with one exception: gases such as nitrogen and oxygen do not absorb in the infrared.
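The size measurement works because the fractional dimming during a transit is simply the ratio of the disk areas, (R_planet / R_star)². A quick sketch, again assuming a stellar radius of about 0.12 solar radii (an assumed figure, not stated above):

```python
R_EARTH = 6.371e6        # radius of Earth, m
R_SUN = 6.957e8          # radius of the sun, m
R_STAR = 0.12 * R_SUN    # assumed radius of Trappist-1, m

# Transit depth = fraction of starlight blocked = ratio of disk areas
depth = (R_EARTH / R_STAR) ** 2
print(f"transit depth about {depth * 100:.2f}%")  # roughly 0.6%
```

Around a star this small, an Earth-sized planet blocks roughly 0.6% of the light, compared with under 0.01% for the same planet crossing the sun, which is why small stars make such good transit targets.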

So, where does that leave us? We have a system that in principle lets us analyze things in greater detail than for most other planetary systems. However, I suspect this might also be misleading. This system is quite unlike others we have seen, mainly because it is around a much smaller star, and the planets may also be different due to the different conditions around a smaller star during planetary formation.

The future did not seem to work!

When I started my career in chemistry as an undergraduate, chemists were an optimistic bunch, and everyone supported this view. Eventually, so it was felt, chemists would provide a substance to do just about anything people wanted, provided the laws of physics and chemistry permitted it. Thus something repelled by gravity was out, but a surprising amount was in. There was a shortage of chemists, and good-paying jobs were guaranteed.

By the time I had finished my PhD, governments and businesses everywhere had decided they had enough chemists for the time being, thank you. The exciting future could be put on hold; for the time being, let us all stick to what we have. Of course there were still jobs; they were just an awful lot harder to find. The golden days for jobs were over, and as it happened, that was not the only thing that was over. In some people’s eyes, chemicals were about to become national villains.

There was an element of unthinking optimism from some. I recall an undergraduate lecture where the structure of penicillin was discussed. Penicillin is a member of a class of chemicals called beta-lactams, and the question of bacterial tolerance was raised. The structure of penicillin is shown at https://en.wikipedia.org/wiki/Penicillin, where R defines the carboxylic acid that forms the amide. The answer to bacterial tolerance was simple: there is an almost infinite number of possible carboxylic acids (the variation is changing R), so chemists could always be a step ahead of the bugs. You might notice a flaw in that argument. Suppose the enzymes of the bug attacked the lactam end of the molecule and ignored the carboxylic acid amide? Yes, once bacteria learned to do that, the effectiveness of all penicillins disappeared. Fortunately for us, this seems to be a more difficult achievement, and penicillins still have their uses.

The next question is, why did this happen? The answer is simple: stupidity. People stopped restricting use to countering important infections. Antibiotics started to be available “over the counter” in some places, and were used intermittently by some, or as prophylactics by others. Not completing the full course meant that some bacteria were not eliminated, and since the survivors were the most resistant ones, thanks to evolution, when they entered the environment they passed on some of that resistance. This was made worse by agricultural use, where low levels were used to promote growth. If that was not a recipe for breeding resistance, what was?

The next “disaster” was the recognition of ozone depletion, caused by chlorofluorocarbons, which on photolysis in the upper atmosphere create free radicals that destroy ozone. The chlorofluorocarbons came from spray cans, essential for hair spray and graffiti. This problem appears to have been successfully solved, not by banning spray cans, nor by requesting restraint from users, but by replacing the chlorofluorocarbons with hydrocarbon propellants.

One problem we have not addressed, despite the fact that everyone knows it is there, is rubbish in the environment. What inspired this post was the announcement that rubbish has been found at the bottom of the Mariana Trench. Hardly surprising; heavy things sink. But some also floats. The amount of waste plastic in the oceans is simply horrendous, and far too much of it is killing fish and sea mammals. What starts off as a useful idea can end up generating a nightmare if people do not treat it properly. One example that might go this way comes from a news report this week: a new type of plastic bottle has been developed that is extremely slippery, so you can more easily get out the last bit of ketchup. Now, how will this be recycled? I once developed a reasonably sophisticated process for recycling plastics, and the major nightmare is multi-layered plastics with hopelessly incompatible properties. This material has at least three different components, and at least one of them appears to be just about totally incompatible with everything else, which is where the special slipperiness comes from. So, what will happen to all these bottles?

The last problem to be listed here is climate change. The problem is that some of the more important people, such as some politicians, do not believe in it sufficiently to do anything about it. The last thing a politician wants to do is irritate those who fund his election campaign. Accordingly, that problem may be insoluble in practice.

The common problem here is that things tend to get used without thinking of the consequences. Where things have gone wrong is people. The potential failure of antibiotics is simply due to greed from the agricultural sector; there was no need for their use as growth promoters when the downside is the return of bacterial dominance. The filling of the oceans with plastic bags is just sloth. Yes, the bag is useful, but the bag does not have to end up in the sea. Climate change is a bit more difficult, but again people are the problem, this time in voting for politicians who announce they don’t believe in it. If everybody agreed not to vote for anyone who refused to take action, I bet there would be action. But people don’t want to do that, because action would involve increased taxes and a requirement to be better citizens.

Which raises the question: do we need more science? In the most recent edition of Nature there was an interesting comment: people pay taxes for one of two reasons, namely they feel extremely generous and want to do good in the world, or they pay because they will go to jail if they don’t. This was followed by a comment addressed to scientists: do you feel your work is so important that someone should be thrown into jail if they don’t fund it? That puts things into perspective, doesn’t it? We might add: should they also be asked who the discovery will benefit?

Cancer: the problem.

I read an interesting blog recently entitled “The War on Cancer” (http://sten.astronomycafe.net/the-war-on-cancer/). Apparently, in the US a little under 600,000 people die of it each year. The author, Dr Sten Odenwald, set out to show that funding for cancer research is far too low. I think it was President Nixon who coined the phrase “war on cancer” and set it as an objective, in the same way Kennedy had set the Moon landing as an objective, but this was doomed to fail, at least in any spectacular way. The reason is the nature of cancer, which, as an aside, is not one disease. We have been trying to cure it for a very long time, with mixed results. Gaius Plinius Secundus recommended a poultice of broccoli for breast cancer, and asserted that it works. There are indeed agents in broccoli that will deal with some breast cancers, but by no means all, and even then, the cancer would need to be near the surface. There are at least twenty different types of breast cancer. Drugs like tamoxifen stop the growth of at least one type; monoclonal antibodies help in some others. So we have made some progress, but there are still severe problems, especially if the tumour metastasizes (dislodges cells to other parts of the body).

It is the nature of cancer that is the problem. Cells grow around nucleic acid, and nucleic acids reproduce by base pairing, then splitting, each strand now being the frame for the production of more nucleic acid. Thus after splitting, when a new double helix has finished being assembled, the amount of nucleic acid has doubled, so a new pair of cells is possible, the old cell having been destroyed. So what can go wrong? You will usually read that copying is incorrect, or something is added to the double helix, but I don’t believe that. It is the peculiar nature of the hydrogen bonding that either the correct nucleotide goes onto the growing strand or nothing does. That is why reproduction is so accurate. In the double helix, the reactive sites are protected, as they are in the interior of the helix, and the outside is the phosphate. A further substitution on the phosphate to make a tri-ester would be a nuisance, but it would not be very stable, and it would repair itself. Further, it would require a highly reactive reagent, as it is exceedingly difficult to make phosphate esters in cold water other than through enzymatic catalysis. No, I think the problem probably arises during the splitting stage, when the reactive sites become exposed. If something happens to the nitrogen functions, that will block the formation of the next double helix at that point.

At that stage, the body will attack the nucleic acid at that point, and the usual outcome will be that the various parts of the strand are degraded, and the bits reused or excreted. But if the problem occurred in certain places, what is left may be able to start reproducing. If that happens, you have something growing that has no function for you, BUT it looks like part of your body, because up to a point it is. The growth just keeps growing and reproducing itself. The reason there are so many different cancers is that there are so many places where a nucleic acid could go wrong, and each different place that can reproduce will lead to a growth slightly different from the others. Because it looks like part of your body, your natural defences ignore it.

So far, we have largely relied on surgery, radiation or drugs. So, how much progress have we made? In some cases, such as leukemia, progress is good, and it is often curable. In other cases, life can be extended, but the overall record is sobering: according to Wikipedia, since Nixon declared war on cancer, the US alone has spent $200 billion on research, yet between 1950 and 2005 the death rate, adjusted for population size and age, declined by only five per cent. On the other hand, many patients in remission have had their lives usefully extended.

However, we should ask, are we doing anything wrong? I think we are, and one problem relates to intellectual property rights. Here is an example of what I mean. In the 1980s I was involved in a project to extract an active material from a marine sponge. My company developed some scale-up technology and made a few grams of this material which, from reports I received, would make a solid tumour blister and die when the odd microgram was introduced to it, leaving well-repaired skin over wherever the organ was. These studies were limited to rats, probably with external carcinomas. Anyway, the company that hired us ran into difficulty with its source of funds and went bankrupt; however, somehow ownership of the intellectual property lived on. At the time, there was no known technique for introducing a material as reactive as this to internal tumours, nor did we know whether that would even be beneficial. Essentially, the project was at an early stage, and maybe the material would not be beneficial. Who knows? The problem is, now we don’t know, and nobody is likely to work further because the patents have expired: any company that took the work further would bear all the expense, and then somebody else could come in and take the benefits. In my opinion, this is not a desirable outcome. We should not have a situation where promising knowledge simply gets lost because of formal procedure.

Equally, we should not have a situation where drugs become ridiculously expensive. Why should the unfortunates who get a rather rare cancer have to pay the huge prices drug companies charge? I am not saying drug companies should not get a fair return, but I think society should pay for this. Think of it as compulsory insurance. The alternative is that a family might have to decide whether to bankrupt themselves and kill the grandchildren’s education prospects to buy a year or so for grandmother, or whether to just let her die. What sort of society is it that allows this?

Cancer is one of those diseases that everybody comes into contact with one way or another. In my case, my father died of pancreatic cancer, and I am a widower because of cancer. Yes, these things happen, but isn’t it in everybody’s interest to try to do what we can to at least minimize the harsh effects?

Evidence that the Standard Theory of Planetary Formation is Wrong

Every now and again, something happens that makes you feel both good and depressed at the same time. For me it was last week, when I looked up the then-latest edition of Nature. There were two papers (Nature, vol. 541: Dauphas, pp 521–524; Fischer-Gödde and Kleine, pp 525–527) that falsified two of the most important propositions in the standard theory of planetary formation. What we actually know is that stars accrete from a disk of gas and dust, the disk lingers on for between one million and 30 million years, depending on the star, and then the remaining dust and gas are cleared out. Somewhere in there, planets form. We can see evidence of gas giants growing, where gas is falling into the giant planet, but the process by which smaller planets or the cores of giants form is unobservable, because the bodies are too small and the dust too opaque. Accordingly, we can only form theories to fill in the intermediate process. The standard theory, also called oligarchic growth, explains planetary formation in terms of dust accreting into planetesimals by some unknown mechanism; these then collide to form embryos, which in turn form oligarchs or protoplanets (Mars-sized objects), and these collide to form planets. If this happened, they would do a lot of bouncing around and everything would get well mixed. Standard computer simulations argue that Earth would have formed from a distribution of matter extending from further out than Mars to inside Mercury’s orbit. Earth then gets its water from a “late veneer” of carbonaceous chondrites from the far side of the asteroid belt.

It is also well known that certain elements in bodies in the solar system have isotope ratios that vary with distance from the star. Thus meteorites from Mars have different isotope ratios from meteorites from the asteroid belt, and both in turn differ from rocks from the Earth and Moon. The cause of this isotope difference is unclear, but it is an established fact. This is where those two papers come in.

Dauphas showed that Earth accreted from a reasonably narrow zone throughout its entire accretion time. Furthermore, that zone was the same as the one that formed the enstatite chondrites, which appear to have originated from a region that was much hotter than the material that, say, formed Mars. Thus enstatite chondrites are reduced, which means their chemistry was such that there was less oxygen. Mars has only a small iron core, and most of its iron is present as iron oxide; enstatite chondrites contain iron as the free metal, and, of course, Earth has a very large iron core. Enstatite chondrites also contain silicates with less magnesium, which will occur when the temperatures were too hot to crystallize out forsterite. (Forsterite melts at 1890 degrees C, but it will also dissolve to some extent in silica melts at lower temperatures.) Enstatite chondrites are also amongst the driest, so they did not provide Earth’s water.

Fischer-Gödde and Kleine showed that most of Earth’s water did not come from carbonaceous chondrites. The reason is that, if it had, the non-water part would have added about 5% to the mass of Earth, and that last 5% is supposed to be the source of the bulk of the elements that dissolve in hot iron, since any arriving earlier would have dissolved in the iron and gone to the core. One of those elements is ruthenium, and the isotope ratios of Earth’s ruthenium rule out an origin from the asteroid belt.

Accordingly, this evidence rules out oligarchic growth. There used to be an alternative theory of planetary accretion called monarchic growth, but it was soon abandoned because it could not explain, first, why we have the number of planets we have, where they are, and second, where our water came from. Calculations show it is possible to have three to five planets in stable orbits between Earth and Mars, assuming none is larger than Earth, and more out to the asteroid belt. But they are not there, so the question is: if planets only grow from a narrow zone, why are these zones empty?

This is where I felt good. A few years ago I published an ebook called “Planetary Formation and Biogenesis”, and it required monarchic growth. It also required the planets in our solar system to be roughly where they are, at least until they got big enough to play gravitational billiards. The mechanism is that the planets accreted in zones where the chemistry of the matter permitted accretion, and that in turn was temperature dependent, so specific sorts of planets form in zones at specific distances from the star. Earth formed by the accretion of rocks produced during the hot stage, and, being in a zone near the one that formed the enstatite chondrites, its iron was present as the metal, which is why Earth has an iron core. The reason Earth has so much water is that accretion occurred from rocks that had been heat treated to about 1550 degrees C, in which case certain aluminosilicates phase-separated out. These, when they take up water, form a cement that binds other rocks into a concrete. As far as I am aware, my theory is the only current one that requires these results.

So, why do I feel depressed? My ebook contained a review of over 600 references from journals, covering work up to a few months before it was published. The problem is that these references, properly analysed, provided plenty of evidence that these two standard propositions were wrong, but in each case the papers’ conclusions were ignored. In particular, there was a more convincing paper back in 2002 (Drake and Righter, Nature 416: 39–44) that came to exactly the same conclusions. As an example, to eliminate carbonaceous chondrites as the source of water, instead of ruthenium isotopes it used osmium isotopes and other compositional data, but you see the point. So why was this earlier work ignored? I firmly believe that scientists prefer to ignore evidence that falsifies their cherished beliefs rather than change their minds. What I find worse is that neither of these new papers cited the Drake and Righter paper. Either they did not want to admit they were confirming a previous conclusion, or they were not interested in looking thoroughly at past work other than that which supported their procedures.

So, I doubt these two papers will change much either. I might be wrong, but I am not holding my breath waiting for someone with enough prestige to come out and say enough to change the paradigm.