The M 87 Black Hole

By now, unless you have been living under a flat rock somewhere, you have probably seen an image of a black hole. This image seems to be in just about every media outlet possible, so you know what the black hole and its environs look like, right? Not necessarily. But, you say, you have seen a photograph. Well, actually, no you haven’t. That system is so far away that to get the necessary resolution you need to gather light over a very wide array, so the image was obtained from a very large number of radio telescopes and reconstructed by a sequence of mathematical processes. Nevertheless, the black region and the ring represent fairly accurately part of what is there.

No radiation can escape from a black hole, so the black bit in the middle is fair; however, the image as presented gives no idea of its size. Its radius is about 19 billion km, a little over three times the average distance from the sun to Pluto. This is really a monster. Ever wondered what happens to photons that are emitted at right angles to the gravitational field? Well, at 28 billion km or thereabouts they go into orbit around the black hole, and would do so indefinitely unless absorbed by dust falling in. The bright stuff you see is outside the orbiting photons, and is travelling clockwise at about half light speed.

The light is obviously not orange, and the signals were received as radio waves, but when emitted they would be extremely high-energy photons. We see them as radio waves because they have lost that much energy climbing out of the black hole’s gravitational field. One way of looking at this is to think of light as a wave. The more energy the light has, the greater the frequency of wave crests passing by. As gravity drains the light’s energy, the wave gets “stretched” and fewer crests pass by per second. At the edge of the black hole the wave is so stretched it takes an infinite amount of time for a second crest to appear, which means no light can escape. Just outside the event horizon the gravity is not quite strong enough to stop it, but by the time a gamma ray wave emitted there reaches us, 100,000 of our years might pass before the next crest arrives. The wave is moving, but it is so red-shifted we could not see it. Further from the event horizon the light is a little less stretched, so we see it as radio waves, which is what we were looking at in this image, even though it started as gamma rays or X-rays.
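The stretching can be put in rough numbers. This is a minimal sketch using the standard Schwarzschild redshift formula for a static emitter; the real emitting plasma is orbiting at relativistic speed, so the figures are only indicative:

```python
import math

def redshift_factor(r_over_rs):
    """Wavelength stretch (1+z) for light emitted at radius r (in units of the
    Schwarzschild radius R_s) and received far away, static Schwarzschild case."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

# The closer to the horizon the light starts, the more it is stretched,
# diverging to infinity at the horizon itself:
for r in (2.0, 1.01, 1.0001):
    print(f"emitted at r = {r} R_s -> wavelength stretched {redshift_factor(r):.1f}x")
```

At r = R_s exactly, the formula diverges, which is the “infinite time for a second crest” statement above.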

It has amused me to see the hagiolatry bestowed upon Einstein regarding this image. One quote: “Albert Einstein’s towering genius is on display yet again.” As a comment, I am NOT trying to run down Einstein, but let us be consistent here. You may note Newton also predicted a mass at which light could not escape. In Newtonian mechanics the kinetic energy of light, treated as a corpuscle moving at speed c, would be E = mc^2/2, while the gravitational potential energy would be GMm/R. Equating the two lets us calculate a radius from which light cannot escape: R = 2GM/c^2, which happens to be exactly the same as the Schwarzschild radius from General Relativity.
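The calculation takes a couple of lines. (The commonly quoted mass of M87*, about 6.5 billion solar masses, is an assumed figure here, as are the rounded constants.)

```python
# Newtonian escape condition: (1/2) m c^2 = G M m / R  =>  R = 2 G M / c^2,
# identical to the Schwarzschild radius of General Relativity.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius (m) inside which even Newtonian light cannot escape."""
    return 2 * G * mass_kg / c**2

r_m87 = schwarzschild_radius(6.5e9 * M_SUN)   # assumed M87* mass
print(f"M87* Schwarzschild radius: {r_m87/1e12:.1f} billion km")
print(f"Photon sphere (1.5 R_s):  {1.5*r_m87/1e12:.1f} billion km")
```

The two printed figures reproduce the ~19 billion km radius and the ~28 billion km photon orbit mentioned earlier.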

Then we see statements such as “General relativity describes gravity as a consequence of the warping of space-time.” Yes, but that implies something that should not be there. General Relativity is a geometric theory, and describes the dynamics of particles in geometric terms. The phrase “as a consequence of” should be replaced with “in terms of”. The use of “consequence” implies cause, and this leads to statements involving cosmic fabric being bent, and you get images of something like a trampoline sheet, which is at best misleading. Here is another quote that annoys me: “Massive objects create a sort of dent or well in the cosmic fabric, which passing bodies fall into because they’re following curved contours (not as a result of some mysterious force at a distance, which had been the prevailing view before Einstein came along)”. No! Both theories are done a great disservice. Einstein gave a geometric description of how bodies move, but there is no physical cause, and his description has the same problem the Newtonian one had, only deeper, because you must then ask, how does one piece of space-time know exactly how much to distort? Meanwhile, Newton gave a description of the dynamics of particles essentially in terms of calculus. Whereas Einstein describes effects in terms of a number of tensors, which most people do not understand, Newton invented the term “force”.

Now you will often see the argument that light is bent around the sun, and that “proves” General Relativity is correct. Actually, Newtonian physics predicted the same effect, but General Relativity bends the light twice as much as Newtonian physics predicts, so yes, in that sense General Relativity is correct, because the bend is found to be twice that of Newton. You will then see statements along the lines that this proves the bent path is “due to the warping of spacetime”. That is, of course, nonsense. The reason for the factor of two is that in Einstein’s relativity E = mc^2, which is twice the Newtonian energy, as you can see from the above. The difference appears to trace back to the cosmic speed limit of light speed, which Newton may or may not have considered, but had no reason to pursue. Why do I say Newton might have considered it? Because, as a postulate, the fundamental nature of the speed of light goes all the way back to Empedocles. Of course, he did not make much of it.
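The two predictions are easy to put side by side. This sketch uses the standard deflection formulas for light grazing the sun, taking the impact parameter as the solar radius (the constants are rounded assumptions):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m, grazing-incidence impact parameter
ARCSEC_PER_RAD = 206265

def deflection_newton(mass, b):
    """Newtonian (corpuscular) deflection of light at impact parameter b, radians."""
    return 2 * G * mass / (c**2 * b)

def deflection_gr(mass, b):
    """General Relativity deflection: exactly twice the Newtonian value."""
    return 4 * G * mass / (c**2 * b)

newton = deflection_newton(M_SUN, R_SUN) * ARCSEC_PER_RAD
gr = deflection_gr(M_SUN, R_SUN) * ARCSEC_PER_RAD
print(f"Newton: {newton:.2f} arcsec, GR: {gr:.2f} arcsec")
```

The GR figure of about 1.75 arcseconds is what Eddington’s 1919 eclipse measurements were testing against.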

Finally, I saw one statement that “the circular nature of the black hole again confirms the correctness of Einstein’s theory of General Relativity.” Actually, Aristotle provided one of the first recorded arguments for why gravity leads to a sphere. Newton would certainly have predicted a basic sphere, and of course the algorithm used to make the image would not have led to any other result unless there were something dramatically non-spherical. The above is not intended to downplay Einstein, but I am not a fan of the hagiolatry that accompanies him either.

Some Shortcomings of Science

In a previous post, in reference to the blog repost, I stated I would show some of the shortcomings of science, so here goes.

One of the obvious failings is that people seem happy to ignore what should convince them. The first sign I saw of this type of problem was in my very early years as a scientist. Sir Richard Doll produced a report that convincingly (at least to me) linked smoking to cancer. Out came a number of papers rubbishing this, largely from people employed by the tobacco industry. Here we have a clear conflict, and while it is ethically correct to show that some hypothesis is wrong, it should be based on sound logic. Now, I believe that there are usually a very few results, maybe as few as one specific result, that make the conclusion unassailable. In this case, chemists isolated the constituents of cigarette smoke and found over 200 suspected carcinogens, and trials with some of these on lab rats were conclusive: as an example, one dab of pure 3,4-benzopyrene gave an almost 100% probability of inducing a tumour. Now that is a far greater concentration than any person will get from smoking, and people are not rats; nevertheless, this showed me that on any reasonable assessment, smoking is a bad idea. (It was also a bad idea for a young organic chemist: who needs an ignition source a few centimetres in front of the face when handling volatile solvents?) Yet fifty years or so later, people continue to smoke. It seems to be a Faustian attitude: the cancer will come decades later, or for some lucky ones, not at all, so ignore the warning.

A similar situation is occurring now with climate change. The critical piece of information for me is that during the 1990s and early 2000s (the period of the study) it was shown there is a net power input to the oceans of 0.64 W/m2. If there is a continuing net energy input to the oceans, they must be warming. Actually, the Tasman has been clearly warming, and the evidence from other oceans supports that. So the planet is heating. Yet there are a small number of “deniers” who put their heads in the sand and refuse to acknowledge this, as if by doing so, the problem goes away. Scientists seem unable to make people face up to the fact that the problem must be dealt with now but the price is not paid until much later. As an example, in 2014 US Senate Republican leader Mitch McConnell said: “I am not a scientist. I’m interested in protecting Kentucky’s economy.” He forgot to add, now.
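To see why 0.64 W/m2 matters, a back-of-envelope sum helps. The ocean area used below is an assumed round figure, so treat the result as order-of-magnitude only:

```python
# What does a net input of 0.64 W/m^2 over the oceans amount to per year?
flux = 0.64                 # net power input, W/m^2 (the figure cited above)
ocean_area = 3.6e14         # m^2, assumed round figure for total ocean surface
seconds_per_year = 3.156e7

joules_per_year = flux * ocean_area * seconds_per_year
print(f"Ocean heat uptake: {joules_per_year:.1e} J per year")
```

That is on the order of 7 × 10^21 joules a year, roughly ten times humanity’s total annual primary energy use, and it does not stop while we argue.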

The problem of ignoring what you do not like is general and pervasive, as I quickly learned while doing my PhD. My PhD was somewhat unusual in that I chose the topic and designed the project. No need for details here, but I knew the department, and my supervisor, had spent a lot of effort establishing constants for something called the Hammett equation. There was a great debate going on over whether the cyclopropane ring could delocalise electronic charge in the same way as a double bond, only more weakly. This equation could actually address that question. The very limited use of it by others at the start of my project was inconclusive, for reasons we need not go into here. Anyway, by the time I finished, my results showed quite conclusively that it did not, but the general consensus was that it did, based essentially on the observation that positive electric charge was strongly stabilised by the ring, and on molecular orbital theory (which assumes the delocalisation initially, so was hardly conclusive on this question). My supervisor made one really good suggestion as to what to do when I ran into trouble, and this was the part that showed the effect the most. But when it became clear that everyone else was agreeing the opposite, and he had moved to a new position, he refused to publish that part.

This was an example of what I believe is the biggest failing. The observation everyone clung to was unexpected and needed a new explanation, and what they came up with most certainly gave the right answer for that specific case. However, many times there is more than one possible explanation, and I came up with an alternative based on classical electric field theory that also predicted positive charge would be stabilised, and by how much, but it also predicted negative charge would be destabilised. The delocalisation concept required both to be stabilised. So there was a means of distinguishing them, and there was a very small amount of clear evidence that negative charge was destabilised. Why only a small amount of evidence? Well, most attempts at making such compounds failed outright, which is in accord with the compounds being unstable, but it is not definitive.

So what happened? A review came out that “convincingly showed” the answer was yes. The convincing part was that it cited a deluge of “me too” work on the stabilization of positive charge. It ignored my work, and as I later found out when I wrote a review, it ignored over 60 different types of evidence that showed results that contradicted the “yes” answer. My review was not published because it appears chemistry journals do not publish logic analyses. I could not be bothered rewriting, although the draft document is on the web if anyone is interested.

The point this shows is that once a paradigm is embedded, even if on shaky grounds, it is very hard to dislodge, in accord with what Thomas Kuhn noted in “The Structure of Scientific Revolutions”. One of the points Kuhn noted was that if the paradigm had evidence, scientists would rush to write papers confirming the paradigm by doing minor variations on what worked. That happened above: they were not interested in testing the hypothesis; they were interested in getting easy papers published to advance their careers. Kuhn also noted that observations that contradict the paradigm are ignored as long as they can be. Maybe over 60 different types of observations that contradict, or falsify, the paradigm is a record? I don’t know, but I suspect the chemical community will not be interested in finding out.

Processing Minerals in Space

I have seen some recent items on the web stating that asteroids are full of minerals and fortunes await. My warning is, look deeper. The reason is, most asteroids have impact craters, and from basic physics, via some rather difficult calculations, you can show these were formed from very energetic collisions. That the asteroid did not fly to bits indicates it is a solid with considerable mechanical strength. That implies the original dust either melted to form a solid, or a significant chemical reaction took place. Those who have read my “Planetary Formation and Biogenesis” will know why they melted, assuming I am right. So what has that got to do with things? Quite simply, leaving aside metals like gold, the metal oxides in molten silica form the olivine or pyroxene families, or aluminosilicates. That is, they form rocks. To give an example of the issue, I recently read a paper where various chondrites were analysed, and the method of analysis recorded the elements separately. The authors were making much of the fact that the chondrites contained 19% iron. Yikes! But wait. Fayalite contains almost 55% iron by weight, yet it is useless as an ore. The olivine and pyroxene structures have tetrahedral silicon oxides (the pyroxene as a strand polymer) in which the other valence of the oxygen is bound to a divalent cation, mostly magnesium, because magnesium is the most common divalent element in the supernova dust. What these authors had done was to analyse rock.
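The fayalite figure is simple stoichiometry. A quick sketch, using standard atomic masses:

```python
# Mass fraction of iron in fayalite (Fe2SiO4), a rock-forming olivine:
ATOMIC_MASS = {"Fe": 55.845, "Si": 28.086, "O": 15.999}

def mass_fraction(formula, element):
    """formula: dict mapping element symbol -> atom count."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return ATOMIC_MASS[element] * formula.get(element, 0) / total

fayalite = {"Fe": 2, "Si": 1, "O": 4}
print(f"Fe in fayalite: {mass_fraction(fayalite, 'Fe'):.1%}")
```

Nearly 55% iron by weight, and still just rock: elemental analysis alone tells you nothing about whether the iron is winnable.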

If you read my previous post you will see that I have uncovered yet another problem with science: the authors were very specialized but they went outside their sphere of competency, quite accidentally. They cited numbers because so much in science depends on numbers. But it is also imperative to know what the numbers mean.

On Earth, most of the metals we obtain come from ores, which have formed through various forms of geochemical processing. Thus to get iron, we usually process haematite, which is an iron oxide, but the iron almost certainly started as an average piece of basalt that got weathered. It is most unlikely that good deposits of haematite will be found on asteroids, although it is possible on Mars where small amounts have been found. If Mars is to be settled, processing rocks will be mandatory for survival but the problems are different from those of asteroids. For this post, I wish to restrict myself to discussing asteroids as a source of metals. Let us suppose an asteroid is collected and brought to a processing site, the question is, what next?

The first problem is size reduction, i.e. breaking it down to more manageable pieces. How do you do that? If you hit it with something, you and it immediately separate, following Newton’s third law. If you want to see the difficulty, stand on a small raft and try to keep hitting something. Ah, you say, anchor yourself. How? You have to put something like a piton into solid rock, and how do you do that without some sort of impact? Of course it can be done, but it is not easy. Now you start smashing it. What happens next is bits of asteroid fly off into space. Can you collect all of the pieces? If not, you are a menace, because the asteroid’s velocity v, which will be in the vicinity of 30 km/s if near Earth, has to be added to whatever is given to the fragments. Worse, they take on the asteroid’s eccentricity ε (how much difference there is between the closest and farthest distance from the sun) plus whatever eccentricity has been added by the fragmentation. This is important because the relative velocity of impact, assuming the target is on a circular orbit, is proportional to εv. Getting hit by a rock at these sorts of velocities is no joke.
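The εv rule of thumb can be put in code; the eccentricities below are hypothetical examples, not figures from any particular asteroid:

```python
# Rule of thumb from the text: a fragment on an orbit of eccentricity eps and
# heliocentric speed v hits a circular-orbit target at roughly eps * v.
def impact_speed(eps, v_km_s):
    return eps * v_km_s

v = 30.0  # km/s, typical heliocentric speed near Earth (from the text)
for eps in (0.05, 0.2, 0.5):
    print(f"eps = {eps}: impact at roughly {impact_speed(eps, v):.1f} km/s")
```

Even a modest eccentricity of 0.2 gives a 6 km/s impact, far beyond anything a spacecraft hull shrugs off.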

However, supposing you collect all the rock, you have two choices: you can process the rock as is, or you can try to refine it. If you adopt the latter idea, how do you do it? On Earth, such processing arises through millions of years of action with fluids, or through superheated fluids passing through high-temperature rock. That does not sound attractive. Now, some asteroids are argued to have iron cores, so the geochemical processing has been done for you. Of course you still have to work your way through the rock, and then you have to size-reduce the iron, which again raises the question: how? There is also some less welcome news awaiting you: iron cores are almost certainly not pure iron. The most likely composition is iron with iron silicide, iron phosphide, iron carbide and a lot of iron sulphide. There will also be some nickel, together with corresponding compounds, and (at last, joy?) certain high-value metals that dissolve in iron. So what do you do with this mess?

Then, supposing you separate out a pure chemical compound, how do you get the metal out? The energy input required can be very large. Currently, there is a lot of effort being put into removing CO2 from the atmosphere. The reason we do not pull it apart and dump the carbon is that all the energy liberated from burning it has to be replaced, i.e. a little under 400 kJ/mol, and that is a lot of energy. Consider that as a reference unit. It takes roughly two such units to get iron from iron oxide, although you do get two iron atoms. It takes about five units to break forsterite into two magnesium atoms and one silicon. It takes ten such units to break down kaolinite to get two aluminium atoms and two silicon atoms. Breaking down rock is very energy intensive.

People say, electrolysis. The problem with electrolysis is that the material has to dissolve in some sort of solvent and then be separated into ions. Thus when making aluminium, bauxite, an aluminium oxide, is used. Clays, which are aluminosilicates such as kaolinite or montmorillonite, are not used, despite being much cheaper and more easily obtained. In asteroids, any aluminium will almost certainly be in far more complicated aluminosilicates. Then there is the problem of finding a solvent for electrolysis. For the least active metals, such as copper, water is fine, but that will not work for the more active ones, such as aluminium. Titanium would be even more difficult to make, as it is made by the reduction of titanium tetrachloride with magnesium. You have to make all the starting materials!

On Earth, many oxides are reduced to metal by heating with carbon (usually very pure coal), allowing the carbon to take the oxygen and disappear as a gas. The problem with that, in space, is there is no readily available source of suitable carbon. Carbonaceous chondrites have quite complicated molecules. The ancients used charcoal, and while this is NOT pure carbon, it is satisfactory because the only other element present in volume tends to be oxygen. (Most charcoal is about 35% oxygen.) The iron in meteors could certainly be useful, but some other valuable elements, such as platinum, while possibly there as the element, will probably be scattered through the matrix and be very dilute.

Undoubtedly there will be ways to isolate such elements, but such methods will probably be somewhat different from what we use. In some of my novels I have had fusion power tear the molecules to atoms, ionise them, and separate out the elements in a similar way to how a mass spectrometer works; that is, they are accelerated and then bent with powerful electromagnetic fields. The “bend” in the subsequent trajectory depends on the mass of the ions, so each isotope is separated. Yes, that is fiction, but whatever is used would probably seem like fiction now. Care should be taken with any investment!
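The mass-spectrometer principle itself is standard physics, even if the fusion-powered version is fiction. A sketch, with entirely hypothetical beam parameters just to show the separation:

```python
# In a uniform magnetic field B, an ion of mass m, charge q, speed v follows
# a circle of radius r = m v / (q B); heavier isotopes bend less, so they separate.
E_CHARGE = 1.602e-19   # C, elementary charge
AMU = 1.661e-27        # kg, atomic mass unit

def bend_radius(mass_amu, v, B, charge=E_CHARGE):
    """Radius of curvature (m) of a singly charged ion."""
    return mass_amu * AMU * v / (charge * B)

v, B = 1.0e5, 0.5      # m/s and tesla -- hypothetical beam parameters
for isotope, m in (("Fe-54", 54), ("Fe-56", 56)):
    print(f"{isotope}: r = {bend_radius(m, v, B)*100:.2f} cm")
```

A few millimetres of difference in radius is plenty for a collector plate to sort the isotopes, which is why the technique works at all.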

Repost from Sabine Hossenfelder’s blog “Backreaction”

Two posts this week. The first is more for scientists, but I think it mentions points that people reading about science should recognise as potentially present. Sabine has been somewhat critical of some of modern science, and I feel she has a point. I shall do a post of my own on this topic soon, but it might be of interest to read the post following this to see what sort of things can go wrong.
Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.

As a scientist:

  • Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
  • Prevent social and cognitive biases: If you organize conferences, encourage speakers to not only list motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
  • Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
  • Build a culture of criticism: Ignoring bad ideas doesn’t make them go away, they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
  • Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.

As a higher ed administrator, science policy maker, journal editor, representative of funding body:

  • Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
  • Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem because this conflict of interest is presently everywhere. See next 3 points to alleviate it.
  • Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
  • Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support, 1-2 year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
  • Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
  • Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
  • Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
  • Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.

As a science writer or member of the public, ask questions:

  • You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
  • Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
  • And finally, like you are used to scrutinize statistics you should also ask whether the scientists have taken means to address their cognitive biases. Have they provided a balanced account of pros and cons or have they just advertised their own research?

You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.


Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.

Rocky Planets and their Atmospheres

The previous post outlined how I consider the rocky planets formed. The most important point was that Earth formed a little inside the zone where calcium aluminosilicates could melt and phase-separate while the star was accreting, because when the disk cooled down this would create a dust that, once reacted with the water vapour in the disk, would act as a cement. The concept is that this would bind basaltic rocks together, especially if the dust was formed in the collisions between the rocks. The collisions were, by and large, gentle at first, driven by the gas sweeping smaller material closer to bigger material. Within this proposed mechanism, because the planet grows by collisions with objects at low relative velocities, the planet starts with a rather porous structure. It gradually heats up due to gravitational potential energy being converted to heat as more material lands, and eventually, if it gets to about 1550 degrees Celsius, iron melts and runs down the pores towards the centre, while aluminosilicates, with densities about 0.4–1.2 g/cm3 less than basalt, move upwards. The water is driven from the cements and also rises through the porous rock to eventually form the sea. The aluminosilicates form the granitic/felsic continents upon which we live.

Earth had the best setting of aluminosilicates because after the accretion disk cooled, it was at a temperature where these absorbed water best. Venus is smaller because it was harder to get started, as the cement was sufficiently warm that water had trouble reacting, but once it got going the density of basaltic and iron-bearing rocks was greater. This predicts Venus will have small granitic/felsic cratons on its surface; we have yet to find them. Mercury probably formed simply by accreting silicates and iron during the stellar accretion stage. Mars did not have a good supply of separated aluminium oxides, so it is very short of granitic/felsic rock, although the surface of Syrtis Major appears to have a thin sheet of plagioclase. Because the iron did not melt at Mars, its outer rock would have contained a lot of iron dust or iron oxide. Reaction with water would have oxidised it subsequently. Most Martian rocks have roughly the same levels of calcium as Earth, about half the aluminium content, and about half as much again of iron oxide, which, as an aside, may be why Mars does not have plate tectonics: because of the iron levels it cannot make eclogite, which is necessary for pull subduction.

However, there is also a lot of chemistry going on in the stage 1 accretion disk in addition to what I have used to make the planets. In the vapour phase, carbon is mainly in the form of carbon monoxide in the rocky planet zone, but this can react catalytically with hydrogen to make methanol and hydrocarbons. These will have a very short lifetime and would be what chemists call reactive intermediates, but they would condense on silicates to make carbonaceous material, and they will react with oxides and metal vapour to make carbides. At the temperatures of at least the inner rocky planet zone, nitrogen reacts with oxides to make nitrides, and with carbides to make cyanamides, and some other materials.

Returning to the planet while it is heating up, the water coming off the cement should be quite reactive. If it meets iron dust it will oxidise it. If it meets a carbide there are options, although the metal will invariably become an oxide. If the carbide has the structure of calcium carbide, it will make acetylene. If the water oxidises anything, it will make hydrogen along with the oxide. For many carbides it may make methane and metal oxide, or carbon monoxide, and invariably some hydrogen. Carbon monoxide can be oxidised by water to carbon dioxide, making more hydrogen, while carbon monoxide and hydrogen make synthesis gas, from which a considerable variety of chemicals can be made, most of which are obvious contenders to help make life. Nitrides react with water largely to make ammonia, but ammonia is also reactive, and hydrogen cyanide and cyanoacetylene should be made. In the very early stages of biogenesis, hydrogen cyanide is an essential material, even though now it is poisonous.
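Two of the hydrolyses mentioned can be written out explicitly; these are standard textbook reactions, given here only to make the carbide and nitride cases concrete:

```latex
% Acetylene from a calcium-carbide-type carbide:
\mathrm{CaC_2 + 2\,H_2O \;\rightarrow\; Ca(OH)_2 + C_2H_2}
% Ammonia from a nitride:
\mathrm{Mg_3N_2 + 6\,H_2O \;\rightarrow\; 3\,Mg(OH)_2 + 2\,NH_3}
```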

This explains a little more of what we see in terms of percentage composition. Mars, as noted above, has extremely little felsic/granitic material, and has a much higher proportion of iron oxide. It has less carbon dioxide than expected, even after allowing for some having escaped to space, and that is because, since Mars was cooler, the high-temperature carbide formation was slower. It has less water because the calcium silicates absorb less, although there is an issue here of how much is buried under the surface. The nitrogen is a puzzle. Mars has extremely little nitrogen, and the question is, why not? One possibility is that the temperatures were too low for significant nitride production. The other possibility, which I proposed in my novel Red Gold, is that at least some nitrogen was there and was emitted as ammonia. If so, it solves another puzzle: Mars has clear signs of ancient river flows, but all evidence is it was too cold for ice to melt. However, ammonia dissolves in ice and melts it down to minus eighty degrees Centigrade. So, in my opinion, the river flows were ammonia/water solutions. The carbon would have been emitted as methane, but that oxidises to carbon dioxide in the presence of water vapour and UV light. Ammonia reacts with carbon dioxide, first to form ammonium carbonate (which will also lower the melting point of ice), then urea. If I am right, there will be buried deposits of urea, or whatever it converts to after billions of years, in selected places on Mars.

The experts argue that methane and ammonia would only survive for a few years because of UV radiation. However, smog would tend to protect them, and Titan still has methane. Liquid water also tends to protect ammonia. There are two samples from early Earth. One is of atmosphere encased in rock at Isua, Greenland; it contains methane (as well as some hydrogen). The other is from Barberton (South Africa), which contains samples of seawater trapped in rock. The concentration of ammonia in that seawater at 3.2 Gy BP was such that about 10% of the nitrogen currently in the planet's atmosphere was in the sea in the form of ammonia.

We finally get to the initial question: why is Venus so different? The answer is simple. Venus will have had a lower percentage of cement and a higher percentage of basalt simply because it formed in a hotter place. Accordingly, it would have had much less water than Earth. However, it would have had more carbides and nitrides, and that valuable water got used up making the atmosphere, and in oxidising sulphur to sulphates. That is why I expect Venus to have relatively small deposits of granite on the surface.

There is also the question of the deuterium to hydrogen ratio, which on Venus is at least a hundred times higher than the solar value. If the above mechanism is right, all the oxygen in the oxides, and all the nitrogen in the atmosphere, came from reacting water, and just about all the water was used up making the atmosphere, the sulphates, and so on. The initial reaction is of the sort:

R–X + H2O → R–OH + H–X

In this, one hydrogen atom has to transfer from the water to the X (from which it will later be dislodged and lost to space). If there is a choice, the atom that is most weakly bonded will move, and deuterium is bonded rather more strongly than hydrogen. The electronic binding is the same, but there are zero-point vibrations, and hydrogen, being lighter, carries more zero-point vibrational energy. In general chemistry the chemical isotope effect, as it is called, can make the hydrogen between four and twenty-five times more likely to move than the deuterium, depending on the activation energy. Venus did not need to lose a supply of water equivalent to Earth's oceans to get its high deuterium content; the chemical isotope effect is far more effective.
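The size of this effect can be estimated with the standard semi-classical zero-point-energy argument (an illustrative sketch using textbook values for an O–H stretch; this is the usual treatment, not a calculation taken from my own work):

```python
import math

# Semi-classical kinetic isotope effect: assume the rate difference comes
# entirely from losing the X-H stretch zero-point energy on the way to
# the transition state.

def kie(stretch_cm=3650.0, temp_k=298.0):
    """Ratio of H-transfer rate to D-transfer rate."""
    # For X-H vs X-D with a heavy X, the reduced mass roughly doubles,
    # so the D stretch frequency is about nu_H / sqrt(2).
    nu_h = stretch_cm
    nu_d = nu_h / math.sqrt(2.0)
    delta_zpe_cm = 0.5 * (nu_h - nu_d)   # ZPE = (1/2) h c nu, kept in cm^-1
    kT_cm = 0.695 * temp_k               # Boltzmann constant, cm^-1 per K
    return math.exp(delta_zpe_cm / kT_cm)

print(f"KIE at 298 K: {kie():.1f}")      # about 13, inside the 4-25 range quoted
```

A higher activation energy (more of the zero-point energy lost at the transition state) pushes the ratio toward the top of that four-to-twenty-five range; a colder surface does the same, since kT shrinks.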

Further details can be found in my ebook “Planetary Formation and Biogenesis”: http://www.amazon.com/dp/B007T0QE6I.

Science that does not make sense

Occasionally in science we see reports that do not make sense. The first to be mentioned here relates to Oumuamua, the “interstellar asteroid” mentioned in my previous post. In a paper (arXiv:1901.08704v3 [astro-ph.EP] 30 Jan 2019) Sekanina suggests the object was the debris of a dwarf interstellar comet that disintegrated before perihelion. One fact that Sekanina thought important is that no intrinsically faint long-period comet with a perihelion distance less than about 0.25 AU, which means it comes closer to the sun than about two-thirds of Mercury's distance, has ever been observed after perihelion. The reason is that if the comet gets that close to the star, the heat simply disintegrates it. Sekanina proposed that such an interstellar comet entered our system and disintegrated, leaving “a monstrous fluffy dust aggregate released in the recent explosive event, ‘Oumuamua should be of strongly irregular shape, tumbling, not outgassing, and subjected to effects of solar radiation pressure, consistent with observation.” Convinced? My problem: just because comets cannot survive close encounters with the sun does not mean a rock emerging from near the sun started as a comet. This is an unfortunately common logic problem. A statement of the form “if A, then B” means exactly what it says. It does NOT mean that because there is B, there must have been A.

At this point it is of interest to consider what comets are made of. The usual explanation is that they form from ices and dust accreting in the very outer solar system (e.g. the Oort cloud), with the ices sticking everything together. The ices include gases such as nitrogen and carbon monoxide, which are easily lost once they get hot; here, “hot” is still very cold. When the gases volatilise, they tend to blow off a lot of dust, and that dust is what we see as the tail, which is directed away from the star by radiation pressure and solar wind. The problem with Sekanina's interpretation is that the ice is what holds everything together. The paper concedes this when it calls the object a monstrous fluffy aggregate, but for me, as the ice vaporises it will push the dust apart, and even going around a star this will still happen progressively. The dust should spread out, as a comet tail. It did not for Oumuamua.

The second report was from Bonomo et al. in Nature Astronomy (doi.org/10.1038/s41550-018-0648-9). They claimed the Kepler 107 system provides evidence of giant collisions, as described in my previous post, and the sort of thing that might make an Oumuamua. What the paper claims is that there are two planets with radii about fifty per cent bigger than Earth's, and the outer planet is about twice as dense (density ~ 12.6 g/cm^3) as the inner one (density ~ 5.3 g/cm^3). The authors argue this provides evidence for a giant collision that stripped off much of the silicates from the outer planet, leaving more of an iron core. In this context, that is what some people think is the reason for Mercury's density approaching that of Earth, so the authors are simply tagging on to a common theme.

So why do I think this does not make sense? Basically because the density of iron is 7.87 g/cm^3. Even if this planet were pure iron, it could not have a density significantly greater than that. (There is an increase in density due to compression under gravity, but iron is not particularly compressible, so any gain will be small.) Even solid lead would not do. A mixture of silicates and gold would be OK, so maybe we should start a rumour? Raise money for an interstellar expedition to get rich quick (at least from the raised money!) However, from the point of view of the composition of the dust that forms planets, that is impossible, so maybe investors will see through this scam. Maybe.
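The objection can be put in numbers in a few lines (an illustrative sketch with rounded textbook densities, ignoring the small compression gain):

```python
import math

# Sanity check on the claimed density of the outer Kepler 107 planet:
# what mass does a 1.5 Earth-radius sphere need at 12.6 g/cm^3,
# and what would pure uncompressed iron give instead?

EARTH_RADIUS_CM = 6.371e8
EARTH_MASS_G = 5.972e27
IRON_DENSITY = 7.87      # g/cm^3, uncompressed
CLAIMED_DENSITY = 12.6   # g/cm^3, as reported

radius = 1.5 * EARTH_RADIUS_CM
volume = (4.0 / 3.0) * math.pi * radius ** 3

mass_claimed = CLAIMED_DENSITY * volume
mass_if_pure_iron = IRON_DENSITY * volume

print(f"Claimed mass:   {mass_claimed / EARTH_MASS_G:.1f} Earth masses")
print(f"Pure-iron mass: {mass_if_pure_iron / EARTH_MASS_G:.1f} Earth masses")
print(f"Claimed mass exceeds pure iron by {mass_claimed / mass_if_pure_iron:.1f}x")
```

The claimed planet has to carry about sixty per cent more mass in the same volume than a solid ball of iron could, which is the whole point: no plausible planet-forming material gets you there.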

So what do I think has happened? In two words: experimental error. The mass has to be determined from orbital interactions with something else. What the Kepler method does is determine the orbital characteristics by measuring the periodic times, i.e. the times between various occultations. The size is measured from the depth of the occultation signal and the slope of the signal at the beginning and the end. All of these have possible errors, and they include the size of the star and the assumed position of the transit relative to the equator of the star, so the question now is, how big are these errors? I am starting to suspect, very big.
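The error sensitivity is easy to see from the standard transit relations (my own illustrative numbers and a hypothetical 10% stellar-radius error, not the actual Kepler pipeline):

```python
# Transit photometry basics: the fractional dimming (depth) gives only the
# ratio of planet radius to stellar radius, so any error in the assumed
# stellar radius propagates straight into the planet's size and density.

def planet_radius(depth, stellar_radius):
    """Transit depth = (Rp/Rs)^2, so Rp = Rs * sqrt(depth)."""
    return stellar_radius * depth ** 0.5

SUN_RADIUS_KM = 696_000.0
EARTH_RADIUS_KM = 6_371.0

# A 1.5 Earth-radius planet crossing a sun-like star: a tiny dip.
rp = 1.5 * EARTH_RADIUS_KM
depth = (rp / SUN_RADIUS_KM) ** 2
print(f"transit depth = {depth * 1e6:.0f} ppm")

# If the stellar radius is overestimated by 10%, the inferred planet radius
# is 10% too big, its volume ~33% too big, and the inferred density ~25% low.
rp_wrong = planet_radius(depth, 1.1 * SUN_RADIUS_KM)
print(f"radius error: {rp_wrong / rp - 1:.0%}")
print(f"density error: {1 - (rp / rp_wrong) ** 3:.0%} too low")
```

Since density goes as the inverse cube of the radius, even modest errors in the stellar parameters or the fitted slope can move a planet's density by large amounts, which is exactly the sort of thing I suspect happened here.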

This is of interest to me since I wrote an ebook, “Planetary Formation and Biogenesis”. In it, I surveyed all the knowledge I could find up to the time of writing, and argued the standard theory was wrong. Why? It took several chapters to nail this, but the essence is that standard theory starts with a distribution of planetesimals and lets gravitational interactions join them up into planets. The basic problems I see with this are that collisions will lead to fragmentation, throwing bits of planet into deep space or into the star, and that nobody has any idea how such planetesimals form in the first place. I instead start by considering chemical interactions, and when I do that, after noting that what happens will depend on the temperatures where it happens (chemistry is often highly temperature dependent), you get very selective zones that differ from each other quite significantly. Our planets are in such zones (if you assume Jupiter formed at the “snow zone”) and have the required properties. Since I wrote that, I have been following the papers on the topic and nothing has been found that contradicts it, except, arguably, things like the Kepler 107 “extremely dense planet”. I argue that is impossible, and therefore the results are in error.

Should anyone be interested in this ebook, see http://www.amazon.com/dp/B007T0QE6I

Science in Action – or Not

For my first post in 2019, I wish everyone a happy and prosperous New Year. 2018 was a slightly different year than most for me, in that I finally completed and published my chemical bond theory as an ebook; that is something I had been putting off for a long time, largely because I had no clear idea what to do with the theory. There is something of a story behind this, so why not tell at least part of it in my first blog post for the year? The background to this illustrates why I think science has gone slightly off the rails over the last fifty years.

The usual way to get a scientific thought across is to write a scientific paper and publish it in a scientific journal. These tend to be fairly concise, and primarily present a set of data or make one point. One interesting point about science is that if a result is not in accord with what people expect, the odds are it will be ignored, or the journals will not even accept it. To be accepted, you have to add to what people already believe. As the great physicist Enrico Fermi once said, “Never underestimate the joy people derive from hearing something they already know.” Or at least think they know. The corollary is that you should never underestimate the urge people have to ignore anything that seems to contradict what they think they know.

My problem was I believed the general approach to chemical bond theory was wrong, in the sense that it was not useful. The basic equations could not be solved, and progress could only be made through computer modelling, together with what John Pople, in his Nobel lecture, called validation, which involved “the optimization of four parameters from 299 experimentally derived energies”. These validated parameters only worked for a very narrow range of molecules; if the molecules were too different, the validation process had to be repeated with a different set of reference molecules. My view of this followed another quote from Enrico Fermi: “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant and with five I can make him wiggle his trunk.” (I read that with the more modern density functional theory, there can be up to fifty adjustable parameters. If after using that many you cannot get agreement with observation, you should most certainly give up.)

Of course, when I started my career, the problem was just plain insoluble. If you remember the old computer print-out, there were sheets of paper about the size of US letter paper, and these would be folded in a heap. I had a friend doing such computations, and I saw him once with such a pile of computer paper many inches thick. This was the code, and he was frantic. He kept making alterations, but nothing worked – he always got one of two answers: zero and infinity. As I remarked, at least the truth was somewhere in between.

The first problem I attacked was the energy of the electrons in free atoms. In standard theory, when you solve the Schrödinger equation assuming an electron in a neutral atom sees a charge of one, the calculated binding energy is far too weak. This is “corrected” through a “screening constant”, and each situation has its own “constant”; that means each value is obtained by multiplying what you expect by whatever gives the right answer. Physically, this is explained by the electron penetrating the inner electron shells and experiencing a greater electric field.
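The scale of the discrepancy is easy to show with lithium's outer electron (a textbook illustration using standard ionisation data; the numbers are not from my own papers):

```python
# The "screening constant" fix in numbers: with a seen charge of one,
# the hydrogen-like energy badly underestimates the real binding energy,
# so an effective charge Z_eff is fitted to reproduce the observation.

RYDBERG_EV = 13.606

def hydrogen_like_energy(z_eff, n):
    """Binding energy (eV) of a hydrogen-like level: E = R * Z_eff^2 / n^2."""
    return RYDBERG_EV * z_eff ** 2 / n ** 2

def fitted_z_eff(observed_ev, n):
    """Work backwards: the Z_eff needed to match the measured energy."""
    return n * (observed_ev / RYDBERG_EV) ** 0.5

# Lithium's outer 2s electron has a measured ionisation energy of 5.39 eV.
naive = hydrogen_like_energy(1.0, 2)   # assume the electron sees a charge of one
z_eff = fitted_z_eff(5.39, 2)
print(f"naive prediction: {naive:.2f} eV vs observed 5.39 eV")
print(f"fitted Z_eff:     {z_eff:.2f} (screening constant {3 - z_eff:.2f})")
```

The fitted effective charge is not predicted by the theory; it is read off from the very measurement it is supposed to explain, which is the circularity I objected to.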

What I came up with is too complicated to go into here, but basically the concept was that since the Schrödinger equation (the basis of quantum mechanics) is a wave equation, assume there is a wave. That is at odds with standard quantum mechanics, but two men, Louis de Broglie and David Bohm, had argued there is a wave, which they called the pilot wave. (In a recent poll of physicists regarding which interpretation was most likely to be correct, the pilot wave got zero votes.) I adopted the concept (well before that poll), but I had two additional features, so I called mine the guidance wave.

For me, the atomic orbital was a linear sum of component waves, one of which was the usual hydrogen-like wave, plus a wave with zero nodes, and two additional waves to account for the Uncertainty Principle. It worked to a first order using only quantum numbers. I published it, and the scientific community ignored it.

When I used it for chemical bond calculations, the results are accurate generally to within a few kJ/mol, frequently a fraction of 1%. Boron, sodium and bismuth give worse results. A second-order term is necessary for atomic orbital energies, but it cancels in the chemical bond calculations. Its magnitude increases with distance from a full shell, and it oscillates in sign depending on whether the principal quantum number is odd or even, with the result that, going down a group of elements, the lines joining them zigzag.

Does it matter? Well, in my opinion, yes. First, it gives the main characteristics of the wave function in terms of quantum numbers alone, free of arbitrary parameters. More importantly, the main term differs depending on whether the electron is paired or not, and since chemical bonding requires the pairing of unpaired electrons, the function changes on forming bonds. That means there is a quantum effect that is overlooked in the standard calculations. But, you say, surely they would notice that? Recall what I said about assignable parameters? With four of them, von Neumann could use the data to calculate an elephant! Think of what you could do with fifty!

As a postscript, I recently saw a claim in a web discussion that some of the unusual properties of gold, such as its colour, arise through a relativistic effect. I entered the discussion and said that if my paper was correct, gold is reasonably well-behaved, and its energy levels are quite adequately calculated without needing relativity, as might be expected from the energies involved. This drew something close to derision: the paper was dated; an authority has spoken since then; a simple extrapolation from copper to silver to gold shows gold is anomalous; I should go read a tutorial. I offered the fact that all those energy levels require enhanced screening constants, therefore Maxwell's equations, the basic laws of electromagnetism, are not being followed. Derision again – someone must have worked that out. If so, what is the answer? As for the colour, copper is also coloured. As for the extrapolation, you should not simply keep drawing a zig to work out where the zag ends. The interesting point here was that this person was embedded in “standard theory”. Of course standard theory might be right, but whether it is depends on whether it explains nature properly, not on who the authority spouting it is.

Finally, a quote to end this post, again from Enrico Fermi. When asked what characteristics Nobel prize winners had in common: “I cannot think of a single one, not even intelligence.”