# Interpreting Observations

The ancients, with a few exceptions, thought the Earth was the centre of the Universe and that everything rotated around it, thus giving day and night. Contrary to what many people think, this was not simply stupid; they reasoned that the Earth could not be rotating. An obvious experiment, which Aristotle performed, was to throw a stone high into the air so that it reached its maximum height directly overhead. When it dropped, it landed directly underneath, and its path was vertical. Aristotle recognised that if the Earth were rotating, the stone at its greatest height would be moving eastwards a little faster than the ground below, so conservation of that momentum should carry it slightly to the east, but it did not. Aristotle was a clever reasoner, but he was a poor experimenter. He also failed to consider the consequences of some of his other reasoning. Thus he knew that the Earth was a sphere, and he had an estimate of its size (Eratosthenes later provided a fairly accurate value). He had reasoned correctly why that was: matter falls towards the centre. Accordingly, he should also have realised that his stone should fall slightly to the south. (He lived in Greece; if he had lived here it would move slightly northwards.) When he failed to notice that, he should have realised his technique was insufficiently accurate. What he failed to do was put numbers onto his reasoning, and this is an error in reasoning we see all the time these days from politicians. As an aside, this is a difficult experiment to do. If you don't believe me, try it. Exactly where is the point vertically below your drop point? You must not find it by dropping a stone!
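To put numbers on it: here is a minimal sketch of the eastward drift Aristotle should have looked for, using modern values. The drop height and latitude are illustrative assumptions, not anything he recorded.

```python
# Eastward deflection of a stone dropped from rest, to first order
# in Earth's rotation rate: d_east = (1/3) * omega * g * t^3 * cos(latitude).
# Height and latitude are illustrative assumptions.
import math

OMEGA = 7.292e-5   # Earth's rotation rate, rad/s
G = 9.81           # gravitational acceleration, m/s^2
LATITUDE = 38.0    # degrees, roughly that of Greece (assumed)
HEIGHT = 20.0      # metres, an ambitious throw (assumed)

t_fall = math.sqrt(2 * HEIGHT / G)
d_east = (OMEGA * G * t_fall**3 * math.cos(math.radians(LATITUDE))) / 3

print(f"fall time = {t_fall:.2f} s")
print(f"eastward drift = {d_east * 1000:.1f} mm")
```

The answer comes out at roughly a millimetre and a half: hopelessly below anything Aristotle could have noticed by eye, which is exactly the point about putting numbers on one's reasoning.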

He also reasoned that Earth could not orbit the sun, and there seemed to be plenty of evidence to show that it could not. First, there was the background. Put a stick in the ground and walk around it. What you see is that the background appears to move, and it moves more the bigger the radius of your circle, and less the further away the background object is. When Aristarchus proposed the heliocentric theory, all he could do was make the rather unconvincing bleat that the stars in the background must be an enormous distance away. As it happens, they are. This illustrates another problem with reasoning: if you assume a statement in the reasoning chain, the value of the reasoning is only as good as the truth of the assumption. A further example: Aristotle reasoned that if the Earth was rotating or orbiting the sun, because air rises, the Universe must be full of air, and therefore we should be afflicted by persistent easterly winds. It is interesting to note that had he lived in the trade wind zone he might have come to the correct conclusion for entirely the wrong reason.
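Aristarchus's "bleat" can be quantified with modern values, none of which the Greeks had. A small sketch comparing stellar parallax with what a naked eye can resolve (the one-arcminute limit is a common rule of thumb):

```python
# Annual parallax of the nearest star versus naked-eye resolution.
# Distances are modern values; the Greeks had neither.
import math

AU_M = 1.496e11                # Earth-sun distance, m
PROXIMA_M = 4.0e16             # distance to the nearest star, ~4.2 light years, m
NAKED_EYE_ARCSEC = 60.0        # ~1 arcminute, a typical naked-eye limit

# Half-angle subtended at the star by the radius of Earth's orbit
parallax_rad = AU_M / PROXIMA_M
parallax_arcsec = math.degrees(parallax_rad) * 3600

print(f"parallax of nearest star = {parallax_arcsec:.2f} arcsec")
print(f"factor below naked-eye limit = {NAKED_EYE_ARCSEC / parallax_arcsec:.0f}x")
```

Under an arcsecond, nearly two orders of magnitude below what any unaided observer could hope to see, so the absence of visible parallax proved nothing either way.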

But had he done so, he would have had a further problem, because he had shown that Earth could not orbit the sun through another line of reasoning. As was "well known", heavy things fall faster than light things, and orbiting involves an acceleration towards the centre. Therefore there should be a stream of light things hurtling off into space. There isn't, therefore Earth does not move. Further, you could see the tails of comets. They were moving, which proved the reasoning. Of course it did not, because a comet's tail always points away from the sun, and hence does not trail behind the motion at least half the time. This was a simple thing to check, and it could be checked far more easily than the other assumptions; unfortunately, who bothers to check things that are "well known"? This shows a further aspect: for a true proposition, everything that is relevant to it must be in accord with it. This is the basis of Popper's falsification concept.

One of the hold-ups involved a rather unusual aspect. If you watch a planet, say Mars, it seems to travel across the background, then slow down, then turn around and go the other way, then eventually return to its previous path. Claudius Ptolemy explained this in terms of epicycles, but it is easily understood in terms of both planets going around the sun, provided the outer one is going more slowly. That is obvious because while Earth takes a year to complete an orbit, it takes Mars over two years to complete a cycle. So we had two theories that both gave the correct answer, but one had two assignable constants to explain each observation, while the other relied on dynamical relationships that at the time were not understood. This shows another reasoning flaw: you should not reject a proposition simply because you are ignorant of how it could work.

I went into a lot more detail on this in my ebook "Athene's Prophecy", where for perfectly good plot reasons a young Roman was ordered to prove Aristotle wrong. The key to settling the argument (as explained in more detail in the following novel, "Legatus Legionis") is to prove the Earth moves. We can do this with the tides. The part of the ocean closest to the external source of gravity has its water fall sideways a little towards that source; the part furthest away experiences more centrifugal force, which tries to throw the water outwards. The ancients may not have understood the mechanics of that, but they did know about the sling. Aristotle could not detect this because the tides where he lived are minuscule, but in my ebook I had my Roman join the invasion of Britain, and hence he had to study the tides to know when to sail. There you can get quite massive tides. If you simply assume the tide is caused by the Moon pulling the water towards it and that the Earth is stationary, there would be only one tide per day; the fact that there are two is conclusive, even if you do not properly understand the mechanics.
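The "over two years" figure for the retrograde cycle follows directly from the two orbital periods. A quick sketch of the synodic period of Mars, the time between successive retrograde episodes:

```python
# Synodic period: how often Earth "laps" Mars, which is when the
# retrograde loop recurs. 1/S = 1/T_inner - 1/T_outer.
T_EARTH = 1.0    # years
T_MARS = 1.881   # years, Mars's sidereal orbital period

synodic = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"Mars synodic period = {synodic:.2f} years")
```

This gives a little over 2.1 years, matching the observed interval between retrograde loops without a single epicycle.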

# Venus with a Watery Past?

In a recent edition of Science magazine (372, p1136-7) there is an outline of two NASA probes intended to determine whether Venus had water. One argument is that Venus and Earth formed from the same material, so they should have started off very much the same, in which case Venus should have had about the same amount of water as Earth. That logic is false because it omits the issue of how planets get their water. However, the article argued that Venus would have had a serious climatic difference. A computer model showed that when planets rotate very slowly, the near absence of a Coriolis force would mean that winds would flow uniformly from equator to pole. On Earth, the Coriolis effect splits the lower atmosphere into three cells on each side of the equator: the tropical, subtropical and polar circulations. Venus would have had a more uniform wind pattern.
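For a feel for how near that "near absence" is, a rough sketch comparing rotation rates, which is what the Coriolis parameter scales with; Venus's ~243-day rotation period is the standard modern value:

```python
# The Coriolis parameter is proportional to the planet's rotation rate,
# so the ratio of rotation rates sets the ratio of Coriolis strengths.
import math

DAY_S = 86400.0
omega_earth = 2 * math.pi / DAY_S             # rad/s (solar day, close enough here)
omega_venus = 2 * math.pi / (243.0 * DAY_S)   # Venus rotates once per ~243 days

print(f"Earth: {omega_earth:.2e} rad/s")
print(f"Venus: {omega_venus:.2e} rad/s")
print(f"Coriolis effect roughly {omega_earth / omega_venus:.0f}x weaker on Venus")
```

A factor of a couple of hundred is why the modelled circulation collapses to a single equator-to-pole flow.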

A further model then argued that massive water clouds would form, blocking half the sunlight, and then "in the perpetual twilight, liquid water could have survived for billions of years." Since Venus gets about twice the light intensity that Earth does, Venusian "perpetual twilight" would be a good sunny day here. The next part of the argument was that since water is considered to lubricate plates, Venus could then have had plate tectonics. Thus NASA has a mission to map the surface in much greater detail. That, of course, is a legitimate mission irrespective of the issue of water.
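The "about twice" figure is simple inverse-square arithmetic, using Venus's orbital distance of roughly 0.723 AU:

```python
# Solar flux falls off as the inverse square of distance from the sun.
VENUS_AU = 0.723  # Venus's orbital radius in astronomical units

ratio = (1.0 / VENUS_AU) ** 2
print(f"Venus receives {ratio:.2f}x Earth's solar flux")
```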

A second aim of these missions is to search for reflectance spectra consistent with granite. Granite is thought to be accompanied by water, although that correlation could be suspect because it is based on Earth, the only planet where granite is known.

So what happened to the "vast oceans"? The argument is that massive volcanism liberated huge amounts of CO2 into the atmosphere, "causing a runaway greenhouse effect that boiled the planet dry." Ultraviolet light then broke down the water, leading to the production of hydrogen, which was lost to space. This is the conventional explanation for the very high ratio of deuterium to hydrogen in the atmosphere. The concept is that water containing deuterium is heavier and has a slightly higher boiling point, so it would be the least readily "boiled off". The effect is real, but it is a very small one, which is why a lot of water has to be postulated. The problem with this explanation is that while hydrogen easily gets lost to space, massive amounts of oxygen should have been retained. Where is it? Their answer: the oxygen would be "purged" by more ash. No mention of how.

In my ebook "Planetary Formation and Biogenesis" I proposed that Venus probably never had any liquid water on its surface. The rocky planets accreted their water by binding it to silicates, and in doing so the water helped cement aggregate together and got the planets growing. Earth accreted at a place that was hot enough during stellar accretion to form calcium aluminosilicates, which make very good cements and would have absorbed their water from the gas disk. Mars got less water because the material that formed Mars had been too cool to separate out aluminosilicates, so it had to settle for simple calcium silicate, which does not bind anywhere near as much water. Venus probably had the same aluminosilicates as Earth, but being closer to the star it was hotter, so less water bonded and consequently fewer aluminosilicate cements formed.

What about the deuterium enhancement? Surely that is evidence of a lot of water? Not necessarily. How did the gases accrete? My argument is that they would accrete as solids, such as carbides, nitrides, etc., and the gases would be liberated by reaction with water. Thus, on the road to making ammonia from a metal nitride:

M–N + H2O → M–OH + N–H; then M(OH)2 → MO + H2O, and this is repeated until ammonia is made. An important point is that one hydrogen atom is transferred from each molecule of water, while one is retained by the oxygen attached to the metal. Now, the bond between deuterium and oxygen is stronger than that between hydrogen and oxygen, the reason being that the hydrogen atom, being lighter, has its bond vibrate more vigorously and hence sit less deeply in the potential well. Therefore the deuterium is more likely to remain on the oxygen atom and end up in further water. This is known as the chemical isotope effect, and it is much more effective at concentrating deuterium than evaporative loss. Thus, as I see it, too much of the water was used up making gas, and eventually also making carbon dioxide. Venus may never have had much surface water.
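The vibrational argument can be sketched numerically with a simple harmonic-oscillator model. The force constant below is an approximate literature value for the O–H stretch, used purely for illustration:

```python
# Zero-point energies of O-H versus O-D in the same potential well:
# E0 = (1/2) * hbar * sqrt(k / mu). The heavier O-D oscillator sits
# deeper, so the O-D bond is effectively stronger and deuterium is
# preferentially retained. Force constant is approximate.
import math

H_BAR = 1.054571817e-34  # J*s
AMU = 1.66053907e-27     # kg
EV = 1.602176634e-19     # J
K_OH = 780.0             # N/m, approximate O-H stretch force constant (assumed)

def reduced_mass_kg(m1_amu, m2_amu):
    return (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU

def zero_point_energy(k, mu):
    return 0.5 * H_BAR * math.sqrt(k / mu)

mu_OH = reduced_mass_kg(16.0, 1.0)  # oxygen-hydrogen pair
mu_OD = reduced_mass_kg(16.0, 2.0)  # oxygen-deuterium pair

e_OH = zero_point_energy(K_OH, mu_OH)
e_OD = zero_point_energy(K_OH, mu_OD)

print(f"zero-point energy O-H = {e_OH / EV * 1000:.0f} meV")
print(f"zero-point energy O-D = {e_OD / EV * 1000:.0f} meV")
print(f"O-D/O-H frequency ratio = {math.sqrt(mu_OH / mu_OD):.3f}")
```

The O–D oscillator vibrates at about 73% of the O–H frequency, so it keeps markedly less zero-point energy, which is the whole basis of the chemical isotope effect.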

# A New Way of Mining?

One of the bigger problems our economies face is obtaining metals. Apparently the price of metals used in lithium-ion batteries is soaring because supply cannot expand sufficiently, and there appears to be no way current methodology can keep up.

Ores are obtained by physically removing them from the subsurface, and this tends to mean that huge volumes of overburden have to be removed. Global mining is estimated to produce 100 billion tonnes of overburden per year, and that usually has to be carted somewhere else and dumped. This can lead to major disasters, such as tailings dams collapsing: Brazil has had at least two such collapses, which set something like 140 million cubic metres of rubble moving and caused at least 256 deaths. The better ores are now worked out and we are resorting to poorer ones, most of which contain less than 1% of what you actually want. The rest, the gangue, is often environmentally toxic and is quite difficult to dispose of safely. The whole process is energy intensive; mining contributes about 10% of energy-related greenhouse gas emissions. Yet if we take copper alone, it is estimated that by 2050 demand will increase by up to 350%. The ores we know about are becoming progressively lower grade, and they are found at greater depths.

We have heard of the limits to growth. Well, mining is increasingly looking unsustainable, but there is always the possibility of new technology to extract value from increasingly difficult sources. One such technique involves first injecting an acid or other lixiviant into the rock to dissolve the target metal in the form of an ion, then using a targeted electric field to transport the metal-rich solution to the surface. This is a variant of a technique used to obtain metals from fly ash, sludge, etc.

The objective is to place an electrode either within or surrounding the ore; the acid is then introduced from an external reservoir. A second reservoir holds an electrode with charge opposite to that of the metal-bearing ion. In the textbooks the metal usually bears a positive charge, so you would make your reservoir electrode negative, but it is important to keep track of your chemistry. For example, if iron were dissolved in hydrochloric acid, the main ion would be FeCl4-, i.e. an anion.

Because transport occurs through electromigration, there is no need for permeability-enhancement techniques such as fracking. About 75% of copper ore reserves are copper sulphides that lie beneath the water table. The proposed technique was demonstrated on a laboratory scale with a mix of powdered chalcopyrite (CuFeS2) and quartz. A solution of ferric chloride was added, and a direct current at 7 V was applied to electrodes at opposite ends of a 0.57 m path, over which there was a potential drop of about 5 V, giving a maximal voltage gradient of 1.75 V/cm. The ferric chloride liberated copper as the cupric cation. The laboratory test extracted 57 weight per cent of the available copper from a 4 cm-wide sample over 94 days, although 80% of that was recovered in the first 50 days. The electric current decreased over the first ten days from 110 mA to 10 mA, suggestive of pore blocking. Computer simulations suggest that in the field about 70% of the metal in a sample accessed by the electrodes could be recovered over a three-year period. The process would have the odd hazard: in the simulation, a 5 metre spacing between electrodes employed a 500 V difference, and if the ore is several hundred metres down, this could require quite a voltage. Is this practical? I do not know, but it seems to me that at the moment the amount of dissolved material, the large voltages, the small areas and the time taken will count against it. On the other hand, the prices of metals are starting to rise dramatically. I doubt this will be a final solution, but it may be part of one.
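For a feel for the transport rates involved, here is a rough drift-velocity sketch. The ionic mobility is a standard dilute-solution value for the cupric ion, and the field is simply the average over the reported 0.57 m path; real transport through rock pores would be slower (tortuosity, pore blocking), so treat it as an upper bound:

```python
# Electromigration drift velocity: v = mobility * field.
# Mobility is a dilute aqueous-solution value; ignores pore effects.
MOBILITY_CU = 5.6e-8   # m^2/(V*s), approximate mobility of Cu2+ (assumed)
PATH_M = 0.57          # electrode separation in the lab test
DROP_V = 5.0           # reported potential drop over that path

field = DROP_V / PATH_M              # average field, V/m
velocity = MOBILITY_CU * field       # free-solution drift speed, m/s
cm_per_day = velocity * 86400 * 100

print(f"average field = {field:.1f} V/m")
print(f"free-solution drift = {cm_per_day:.1f} cm/day")
```

A few centimetres per day in free solution, which makes the 94-day timescale over a 4 cm sample look plausible once pore blocking is allowed for, and shows why field-scale recovery stretches to years.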

# Some Lesser Achievements of Science

Most people probably think that science is a rather dull quest for the truth, best left to the experts, who are all dispassionately out to find that truth. Well, not exactly. Here is a video link where Sean Carroll points out that most physicists are really uninterested in understanding what quantum mechanics is about: https://youtu.be/ZacggH9wB7Y

This is rather awkward because quantum mechanics is one of the two greatest scientific advances of the twentieth century, and here we find that all but a few of its exponents neither understand what is going on nor care. What happens is they have a procedure by which they can get answers, so that is all that matters, is it not? Not in my opinion. Many of these physicists are university teachers, and when they don't care, that attitude gets passed on to their students, so they don't care either. The system is degenerating.

But, you protest, we still get the right answers. That leaves open the question: do we really? From my experience in chemistry, we know that the only theories required to explain chemical observations (apart from maybe what atoms are made of) are electromagnetic theory and quantum mechanics. Those in the know will point out that there are floods of computational papers published, so surely we must understand? Not at all. Almost all the papers calculate something that is already known, and because integrating the differential equations means a number of constants are required, and because it is impossible to solve the equations analytically, the constants can be assigned so that the correct answers are obtained. Fortunately, for very similar problems the same constants will suffice. If you find that hard to believe, the process is called validation, and you can read about it in John Pople's Nobel Prize lecture. Actually, I believe all the computations are wrong except for those on the hydrogen molecule, because everybody uses the wrong wave functions, but that is another matter.

That scientists do not care about their most important theory is bad, but there is worse, as reported in Nature (https://doi.org/10.1038/d41586-021-01436-7). Apparently, in 2005 three PhD students wrote a computer program called SCIgen for amusement. What this program does is write "scientific papers". The research for them? Who needs that? It cobbles together words into random titles, text and charts, and the result is essentially nonsense. Anyone can generate one. (Declaration: I did not use this software for this or any other post!) While the original purpose was "maximum amusement" and the papers were generated for conferences, because the software is freely available various people have sent such papers to scientific journals; the peer review process failed to spot the gibberish, and the journals published them. There are apparently hundreds of these nonsensical papers floating around. Further, they can appear under the names of relatively "big names", because apparently an article can get through under someone's name without that someone knowing anything about it. Why give someone else an additional paper? A big name is more likely to get through peer review, and the writer needs the paper to get out there because it can carry genuine references, although of course these have no relevance to the submission itself. The reason for doing this is simple: it pads the citation counts of the cited authors, which makes their CVs look better and improves their chances when applying for funds. With money at stake, it is hardly surprising that this sort of fraud has crept in.

Another unsettling aspect of scientific funding has been uncovered (Nature 593: 490-491). Funding panels are more likely to award EU early-career grants to applicants connected to the panellists' institutions; in other words, the panellists have a tendency to give the money to "themselves". Oops. A study of the grants showed that "applicants who shared both a home and a host organization with one panellist or more received a grant 40% more often than average" and "the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but there seemed to be no discernible effect in physics and engineering." Here, physics is clean! One explanation might be that the best applicants want to go to the most prestigious institutions. Maybe, but would that not apply to physics too? An evaluation designed to test for such bias in the life sciences showed that "successful and connected applicants scored worse on these performance indicators than did funded applicants without such links, and even some unsuccessful applicants." You can draw your own conclusions, but they do not look good.

# Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us think. Thus the outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the dynamics expected. Further, if we look at gravitational lensing, where light is bent as it passes a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations. One is that there is additional matter we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on the large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each option, although modified gravity does have the advantage that one answer to its problems is that we might simply not yet understand how gravity should be modified.

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. So far, however, all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies. It is rather hard to do an experiment on a galaxy, so that is not exactly helpful. So what physicists have done is guess what it might be, and, not surprisingly, make the guess in a form they can do something about if they are correct. The problem is that we know it must have mass, because it exerts a gravitational effect, and we know it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will be no dark matter stars. That is not much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom, the target can get rid of that energy by giving off a photon. So one such detector had huge tanks containing 370 kg of liquid xenon. It was buried deep underground, and in theory massive particles of dark matter could be separated from occasional neutron events, because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not also give multiple events, so maybe they saw some and confused them with stray neutrons.
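For a feel for why such detectors hunt for tiny flashes, here is a sketch of the maximum elastic recoil a hypothetical heavy dark matter particle could give a xenon nucleus. The particle mass and velocity are illustrative assumptions only, not measured values:

```python
# Maximum elastic recoil energy: E_max = 2 * mu^2 * v^2 / m_target,
# where mu is the reduced mass. WIMP mass and speed are assumptions.
M_WIMP_GEV = 100.0              # hypothetical dark matter particle, GeV/c^2
M_XE_GEV = 131 * 0.9315         # xenon-131 nucleus, GeV/c^2
V_OVER_C = 230e3 / 3.0e8        # typical galactic-halo speed as fraction of c

mu = M_WIMP_GEV * M_XE_GEV / (M_WIMP_GEV + M_XE_GEV)  # reduced mass, GeV/c^2
e_max_kev = 2 * mu**2 * V_OVER_C**2 / M_XE_GEV * 1e6  # GeV -> keV

print(f"maximum recoil energy = {e_max_kev:.0f} keV")
```

Only a few tens of keV at best, which is why the experiments must be buried deep underground and shielded so carefully from ordinary radioactivity.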

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet, they deposit energy as they scatter, eventually annihilating within the planet. This additional energy would be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 times Jupiter's mass; temperatures above those otherwise expected could then be attributed to dark matter. Further, since the dark matter density should be higher near the galactic centre, and collisional velocities higher there, a difference in surface temperature between comparable planets may signal the detection of dark matter.

Can you see problems? To me, the flaw lies in "what is expected?" One issue is getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat, and once a planet gets to about 13 Jupiter masses it starts fusing deuterium. Another issue lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if it had accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way, irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), then this suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. No doubt, though, schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, since so much else has failed by assuming large particles, I would suggest one plausible approach is to look for small ones. Are there any unexplained momenta in collisions at the Large Hadron Collider? What most people overlook is that about 99% of the data generated there is trashed (because there is so much of it), but would it hurt to spend just a little effort examining the fine detail for things you do not expect to see?