About ianmillerblog

I am a semi-retired professional scientist who has taken up writing futuristic thrillers, which I publish myself as ebooks on Amazon, Smashwords, and a number of other sites. The intention is to publish a sequence of novels, each stand-alone, but which, taken together, tell a further story through their combined backgrounds. This blog will be largely about my views on science in fiction, and about the future, including what, in my opinion, we should be doing about it but are not. In the science area, I have been working on products from marine algae, and on biofuels. I also have an interest in scientific theory, usually alternative to what others think. This work is also being published as ebooks under the series "Elements of Theory".

Some Lesser Achievements of Science

Most people probably think that science is a rather dull quest for the truth, best left to the experts, who are all earnestly seeking it. Well, not exactly. Here is a video link in which Sean Carroll points out that most physicists are really uninterested in understanding what quantum mechanics is about: https://youtu.be/ZacggH9wB7Y

This is rather awkward because quantum mechanics is one of the two greatest scientific advances of the twentieth century, and here we find that all but a few of its exponents neither really understand what is going on nor care. They have a procedure by which they can get answers, so that is all that matters, is it not? Not in my opinion. Many of them are university teachers, and when they do not care, that attitude gets passed on to the students, so they do not care either. The system is degenerating.

But, you protest, we still get the right answers. That leaves open the question: do we really? From my experience in chemistry, the only theories required to explain chemical observations (apart from maybe what atoms are made of) are electromagnetic theory and quantum mechanics. Those in the know will point to the flood of computational papers published, so surely we must understand? Not at all. Almost all the papers calculate something that is already known, and because integrating the differential equations requires a number of constants, and because the equations cannot be solved analytically, the constants can be assigned so that the correct answers are obtained. Fortunately, for very similar problems the same constants will suffice. If you find that hard to believe, the process is called validation, and you can read about it in John Pople's Nobel Prize lecture. Actually, I believe all the computations except those for the hydrogen molecule are wrong because everybody uses the wrong wave functions, but that is another matter.

That scientists do not care about their most important theory is bad, but there is worse, as published in Nature (https://doi.org/10.1038/d41586-021-01436-7). Apparently, in 2005 three PhD students wrote a computer program called SCIgen for amusement. What this program does is write "scientific papers". The research for them? Who needs that? It cobbles together words under random titles, with text and charts, and is essentially nonsense. Anyone can write them. (Declaration: I did not use this software for this or any other post!) While the original purpose was "maximum amusement" and the papers were generated for conferences, because the software is freely available various people have sent them to scientific journals, the peer review process failed to spot the gibberish, and the journals published them. There are apparently hundreds of these nonsensical papers floating around. Further, they are sometimes attributed to relatively "big names", because apparently articles can get through under someone's name without that someone knowing anything about it. Why give someone else an additional paper? A big name is more likely to get through peer review, and the writer wants the paper out there because it can be published with genuine references, although of course these have no relevance to the submission. The reason for doing this is simple: it pads the citation counts of the cited authors, which makes their CVs look better and improves their chances when applying for funds. With money at stake, it is hardly surprising that this sort of fraud has crept in.

Another unsettling aspect of scientific funding has been uncovered (Nature 593: 490–491). Funding panels are more likely to give EU early-career grants to applicants connected to the panelists' institutions; in other words, the panelists have a tendency to give the money to "themselves". Oops. A study of the grants showed that "applicants who shared both a home and a host organization with one panellist or more received a grant 40% more often than average" and "the success rate for connected applicants was approximately 80% higher than average in the life sciences and 40% higher in the social sciences and humanities, but there seemed to be no discernible effect in physics and engineering." Here, physics is clean! One explanation might be that the best applicants want to go to the most prestigious institutions. Maybe, but would that not apply to physics? An evaluation to test such bias in the life sciences showed "successful and connected applicants scored worse on these performance indicators than did funded applicants without such links, and even some unsuccessful applicants." You can draw your own conclusions, but they do not look good.

Dark Matter Detection

Most people have heard of dark matter. Its existence is clear, or at least so many state. Actually, that is a bit of an exaggeration. All we know is that galaxies do not behave exactly as General Relativity would have us think. Thus the outer parts of galaxies orbit the centre faster than they should, and galaxy clusters do not have the dynamics expected. Further, if we look at gravitational lensing, where light is bent as it goes around a galaxy, it is bent as if there is additional mass there that we just cannot see. There are two possible explanations. One is that there is additional matter there we cannot see, which we call dark matter. The other is that our understanding of how gravity behaves is wrong on a large scale. We understand it very well on the scale of our solar system, but that is incredibly small compared with a galaxy, so it is possible we simply cannot detect such anomalies with our experiments. As it happens, there are awkward aspects to each, although modified gravity does have the advantage that its failures so far can be put down to our simply not yet understanding how gravity should be modified.
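To make the rotation-curve point concrete, here is a minimal sketch of my own (the visible mass is an assumed, purely illustrative figure, not taken from any paper) of what Newtonian gravity, a fine approximation to General Relativity here, predicts once an orbit lies outside most of the visible matter:

```python
# Illustrative only: if essentially all the visible mass M sits inside radius r,
# the predicted circular speed is v = sqrt(G*M/r), so v should fall as 1/sqrt(r).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1.0e41   # assumed visible mass, ~5e10 solar masses

def keplerian_speed(r_kpc):
    """Expected circular speed (km/s) at radius r_kpc if only M_VISIBLE acts."""
    r = r_kpc * 3.086e19  # kiloparsecs to metres
    return math.sqrt(G * M_VISIBLE / r) / 1000.0

for r_kpc in (5, 10, 20, 40):
    print(f"r = {r_kpc:2d} kpc: expected v ≈ {keplerian_speed(r_kpc):5.0f} km/s")

# The speed should drop by a factor of sqrt(2) each time r doubles; observed
# rotation curves instead stay roughly flat at large r, which is the discrepancy
# that dark matter (or modified gravity) is invoked to explain.
```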

One way of settling this dispute is to actually detect dark matter. If we detect it, case over. Well, maybe. However, so far all attempts to detect it have failed. That is not critical, because to detect something we have to know what it is, or at least what its properties are. So far all we can say about dark matter is that its gravity affects galaxies. It is rather hard to do an experiment on a galaxy, so that is not exactly helpful. So what physicists have done is to make a guess as to what it will be, and, not surprisingly, make the guess in a form they can do something about if they are correct. What we do know is that it has to have mass, because it exerts a gravitational effect, and that it cannot interact with electromagnetic fields, otherwise we would see it. We can also say it does not clump, because otherwise there would be observable effects on close stars; there will not be dark matter stars. That is not exactly much to work on, but the usual approach has been to try to detect collisions. If such a particle can transfer sufficient energy to a molecule or atom in a detector, the target can get rid of the energy by giving off a photon. So one such detector had a huge tank containing 370 kg of liquid xenon. It was buried deep underground, and in theory massive particles of dark matter could be separated from occasional neutron events because a neutron would give multiple events. In the end, they found nothing. On the other hand, it is far from clear to me why dark matter could not give multiple events, so maybe they saw some and confused it with stray neutrons.

On the basis that a bigger detector would help, one proposal (Leane and Smirnov, Physical Review Letters 126: 161101 (2021)) suggests using giant exoplanets. The idea is that as dark matter particles collide with the planet, they will deposit energy as they scatter and, once slowed and captured, eventually annihilate within the planet. This additional energy will be detected as heat. The point of using a giant is that its huge gravitational field will pull in extra dark matter.

Accordingly, they wish someone to measure the surface temperatures of old exoplanets with masses between that of Jupiter and 55 times Jupiter's mass; temperatures above those otherwise expected could then be attributed to dark matter. Further, since dark matter density should be higher near the galactic centre, and collisional velocities higher, the difference in surface temperatures between comparable planets may signal the detection of dark matter.
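As I read it, the logic is a simple energy balance: an old giant far from its star must radiate away whatever power is deposited in it. Here is a minimal sketch with purely illustrative numbers of my own; none of these figures come from Leane and Smirnov:

```python
# Illustrative only: a blackbody of radius R radiating total power P satisfies
# P = 4*pi*R^2 * sigma * T^4, so extra heating sets a floor on the temperature.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_JUP = 7.0e7      # Jupiter radius, m

def equilibrium_temperature(power_watts, radius=R_JUP):
    """Temperature (K) needed to radiate away power_watts."""
    return (power_watts / (4 * math.pi * radius**2 * SIGMA)) ** 0.25

def power_for_temperature(t_kelvin, radius=R_JUP):
    """Heating power (W) implied by holding the surface at t_kelvin."""
    return 4 * math.pi * radius**2 * SIGMA * t_kelvin**4

print(equilibrium_temperature(1e19))   # an assumed heating power gives ~230 K
print(power_for_temperature(250))      # power needed to hold a Jupiter at 250 K
```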

Can you see problems? To me, the flaw lies in "what is expected?" One problem is getting sufficient accuracy in the infrared detection. Gravitational collapse gives off excess heat, and once a planet gets to about 16 Jupiter masses it starts fusing deuterium. Another problem lies in estimating the heat given off by radioactive decay. That should be calculable from the age of the planet, but if it had accreted additional material from a later supernova the prediction could be wrong. However, for me the biggest assumption is that the dark matter will annihilate, as without this it is hard to see where sufficient energy would come from. If galaxies all behave the same way irrespective of age (and we see some galaxies from a great distance, which means we see them as they were a long time ago), that suggests the proposed dark matter does not annihilate. There is no reason why it should, and the fact that our detection method needs it to will be totally ignored by nature. However, no doubt schemes to detect dark matter will generate many scientific papers in the near future and consume very substantial research grants. As for me, I would suggest one plausible approach, since so much else has failed by assuming large particles, is to look for small ones. Are there any unexplained momenta in collisions from the Large Hadron Collider? What most people overlook is that about 99% of the data generated is trashed (because there is so much of it), but would it hurt to spend just a little effort examining, in fine detail, that which you do not expect to see?

Geoengineering – to do or not to do?

Climate change remains an uncomfortable topic. Politicians continue to state it is an important problem, and then fail to do anything sufficient to solve it. There seems to be an idea amongst politicians that if everyone drove electric vehicles, all would be well. Leaving aside the question of whether, over the life of the vehicle, an electric vehicle actually emits less greenhouse gas (and at best that is such a close call it has little hope of addressing the problem), as noted in my post https://wordpress.com/post/ianmillerblog.wordpress.com/885 the world cannot sustain the extractions necessary to make it even vaguely possible. If that is the solution, why waste the effort – we are doomed.

As an article in Nature (vol 593, p 167, 2021) noted, we need to evaluate all possible options. As I have remarked in previous posts, it is extremely unlikely there is a silver bullet. Fusion power would come rather close, but we would still have to do a number of other things, such as sorting out transport, and as yet we do not have fusion power. So what the Nature article said is that we should at least consider and properly analyse the consequences of geoengineering. The usual answer here is, horrors, we can't go around altering the planet's climate, but the fact is we already have. What do you think those greenhouse gases are doing?

The problem is that while the world has pledged to reduce emissions by 3 billion t of CO2 per year, even if this is achieved, and that is a big if, it remains far too little. Carbon capture will theoretically solve some of the problem, but it costs so much money for no benefit to the payer that you should not bet the house on it. The alternative, as the Nature article suggests, is geoengineering. The concept is to raise the albedo of the planet so that more sunlight is reflected back to space. The cooling effect is known: it happens after severe volcanic eruptions.
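For a sense of scale, the whole idea rests on a very simple energy balance. Here is a back-of-envelope sketch of my own, not a calculation from the Nature article, treating the planet as a single radiating body; the real surface response involves feedbacks and would differ:

```python
# Zero-dimensional energy balance: (1 - albedo) * S / 4 = sigma * T^4,
# where S is the solar constant and T is the effective radiating temperature.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2

def effective_temperature(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    return ((1.0 - albedo) * S / (4.0 * SIGMA)) ** 0.25

t_now = effective_temperature(0.30)   # ~255 K for roughly the present albedo
t_geo = effective_temperature(0.31)   # raise the albedo by one percentage point
print(f"Cooling of the effective temperature: {t_now - t_geo:.2f} K")  # ~0.9 K
```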

The basic concept of sending reflective stuff into the upper atmosphere is that it is short-term in nature, so if you get it wrong and there is an effect you don’t like, it does not last all that long. On the other hand, it is also a rapid fix and you get relatively quick results. That means provided you do things with some degree of care you can generate a short-term effect that is mild enough to see what happens, and if it works, you can later amplify it.

The biggest problem is the so-called ethical one: who decides how much cooling, and where do you cool? The article notes that some are vociferously opposed to geoengineering because "it could go awry in unpredictable ways". It could be unpredictable, but to my mind the more likely problem is that the effect would turn out to be too small. Another listed reason to oppose it was that it would detract from efforts to reduce greenhouse emissions. The problem here is that China and many other places are busy building new coal-fired electricity generation. Exactly how do you reduce emissions when so many places are busy increasing them? Then there is the question, how do you know what the effects will be? The answer is that you carry out short-term, mild experiments so you can find out without doing any serious damage.

The other side of the coin is that even if we stopped emissions right now, the existing levels would continue to heat the planet, and nobody knows by how much. The models are simply insufficiently definitive. All we know is that the ice sheets are melting, and when they go, much of our prime agricultural land goes with them. Then there is the question of governance. One proposal to run small tests in Scandinavia ran into opposition from a community that protested the experiments would be a distraction from emission-reduction efforts. Some people seem to think that with just a little effort this problem will go away. It won't. One of the stated reasons for obstructing the research was that the project would affect the whole planet. Yes, well, so does burning coal in thermal generators, but I have never heard of the rest of the planet being consulted on that.

Is it a solution? I don't know. It most definitely is not THE solution, but it may be the only one that acts quickly enough to compensate for a general inability to get moving, and in my opinion we badly need experiments to show what can be achieved. I understand there was once one such experiment, although not an intentional one. Following the grounding of aircraft over the US after the Twin Towers attack, I gather temperatures over the following two days rose by over a degree, because the ice particles from jet exhausts were no longer being generated. The advantage of an experiment using ice particles in the upper atmosphere is that you can measure what happens quickly, yet it quickly goes away, so there will be no long-term damage. So, is it possible? My guess is that technically it can be managed, but the practical issues of getting general consent to implement it will take so long that it becomes more or less irrelevant. You can always find someone who opposes anything.

Ebook discount: Athene’s Prophecy

From May 20 – May 27, Athene’s Prophecy will be discounted to 99c/99p on Amazon and Amazon UK. Science fiction with some science you can try your hand at. The young Roman Gaius Claudius Scaevola receives a vision from Pallas Athene, who asks him to do three things so he can save humanity from total destruction. He must learn the art of war, he must make a steam engine, and while using only knowledge available in the first century he must show the Earth goes around the sun. Can you do it? Try your luck. I suspect you will fail, and to stop cheating, the answer is in the following ebook. 

Scaevola is in Egypt for the anti-Jewish riots, then he is sent to Syria as tribunus laticlavius in the Fulminata. Soon he must prevent a rebellion when Caligulae orders a statue of himself to be placed in the temple of Jerusalem. You will get a different picture of Caligulae than the one you normally see, supported by a transcription of Philo of Alexandria's report of the critical meeting regarding the statue. (Fortunately, copyright has expired.) First of a series. http://www.amazon.com/dp/B00GYL4HGW

Fighting Global Warming Naturally?

In a recent edition of Nature (593, pp 191–194) it was argued that in combating global warming, the focus should not simply be on removing carbon from the atmosphere, which is often the main objective, but rather on how to lower global temperatures. They note that the goal of limiting warming since pre-industrial times to two degrees Celsius cannot be met solely through cuts to emissions, so carbon dioxide needs to be removed from the atmosphere. The rest of the article more or less focused on how nature could contribute to that, which was a little disappointing bearing in mind they had made the point that this was not the main objective. Anyway, they went on to claim that nature-based solutions could lower the temperature by a total of 0.4 degrees by 2100. Then came the caveats. If plants are to absorb carbon dioxide, they become less effective if temperatures rise too high for them.

Some statistics: 70% of Earth's land surface has been modified by humanity, and since 1960 we have modified 20% of it. Changes since 1960 include (in millions of square kilometers): urban area +0.26; cropland +1.0; pasture +0.9; forestry -0.8.

The proposal involves three routes. The first is to protect current ecosystems, which includes stopping deforestation. This is obvious, but the behaviour of current politicians, such as those in Brazil, suggests the appropriate comment is "Good luck with that one." It might be achieved in some Western countries, but then again they have largely cut their forests down already. If we are going to rely on this, we have a problem.

The second is to restore ecosystems so they can absorb more carbon. Restoration of forest cover is an obvious place to start. However, they claim plantation forests do not usually give the same benefits as natural forest. Natural forest has very dense undergrowth, and unless there are animals to eat it, you may end up generating fuel for forest fires, which wipe out all progress. Wetlands are particularly desirable because they are great at storing carbon, and once underway they act rather quickly. However, again there is a problem. Wetlands, once cleared and drained, are difficult to restore because the land has usually been altered specifically to stop it reverting, so besides halting the current use, those alterations have to be undone.

Notwithstanding the difficulties, there is also a strong reason to create reed beds that also grow algae: they take up the nitrogen and phosphate in water. Taking up ammoniacal wastes is particularly important because if part of the nitrogen waste is oxidized, or if nitrates have been used as fertilizer, ammonium nitrate will decompose to form nitrous oxide. That is a further greenhouse gas, and a particularly awkward one because it absorbs in quite a different part of the infrared spectrum and has no simple decay route, nor anything that will absorb it. Consequently, what we put in the air could be there for some time.

The third is to improve land management for timber, crops and grazing. Thus growing a forest along the borders of a stream should add carbon storage, but also reduce flooding and enhance fish life. An important point is that slowing runoff also helps prevent soil loss. All of this is obvious, except, it seems, to officials. In Chile, the government apparently gave out subsidies for pine and eucalyptus planting that led to 1.3 million hectares being planted. What actually happened was that the land so planted had previously been occupied by original forest, and it is estimated that the overall effect was to emit 0.05 million t of stored carbon rather than sequester the 5.6 million t claimed. A particular piece of "politically correct" stupidity occurred here in New Zealand. The Department of Conservation, being green inclined, sent an electric car for its people to drive around Stewart Island, the small southern island of New Zealand. The island has too small a population to warrant a cable connecting it to the national grid, so all the electricity there is generated by burning diesel!

The paper claims it is possible to remove 10 billion tonnes of CO2 fairly quickly, and 20 billion tonnes by 2055. However, my feeling is that this is a little like dreaming, because I assume it requires stopping the Amazon and similar burnoffs. On the other hand, even if it does not work out exactly right, it still has benefits. Stopping flooding and erosion while having a more pleasant environment and better farming practices might not save the planet, but it might make your local bit of planet better to live in.

How can we exist?

One of the more annoying questions in physics is: why are we here? Bear with me for a minute, as this is a real question. The Universe is supposed to have started with what Fred Hoyle called "The Big Bang". Fred was being derisory, but the name stuck. Anyway, what happened is that an extremely intense burst of energy began expanding, and as it did, the energy perforce became less dense. As that happened, elementary particles condensed out. On an extremely small scale, this happens in high-energy collisions, such as in the Large Hadron Collider. So we are reasonably convinced we know what happened up to this point, but there is a very big fly in the ointment. When such particles condense out we get equal amounts of matter and what we call antimatter. (In principle, we should get dark matter too, but since we do not know what that is, I shall leave it aside.)

Antimatter is, as you might guess, the opposite of matter. The most obvious example is the positron, which is exactly the same as the electron except that it has positive electric charge, so a positron and an electron attract each other. In principle, if they could fall onto each other as point charges they would release an infinite amount of energy, but nature hates the infinities that come out of our equations, so when they get close enough they annihilate each other and you get two gamma-ray photons that leave in opposite directions to conserve momentum. That is more or less what happens whenever antimatter meets matter – they annihilate each other, which is why, when we make antimatter in colliders, if we want to collect it we have to do so very carefully, with magnetic traps and in a vacuum.
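For the record, the energy bookkeeping of that annihilation is simple: each photon carries off the rest energy of one particle (ignoring whatever kinetic energy the pair had),

$$ E_\gamma = m_e c^2 \approx 0.511\ \text{MeV}, \qquad E_{\text{total}} = 2\,m_e c^2 \approx 1.022\ \text{MeV}. $$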

So now we get to the problem of why we are here: with all that antimatter made in equal proportion to matter, why is there so much matter left? As it happens, the symmetry between matter and antimatter is violated very slightly in kaon decay, but this is probably not particularly helpful because the effect is far too slight. In the previous post on muon decay I mentioned that it could be a clue that there is physics beyond the Standard Model to be unraveled. Right now, the fact that there is so much matter in the Universe should be a far stronger clue that something is wrong with the Standard Model.

Or is it? One observation that throws that into doubt was published in Physical Review D 103, 083016, in April this year. But before coming to that, some background. A little over ten years ago, colliding heavy ions made a small amount of antihelium-3, and a little later, antihelium-4. The antihelium has two antiprotons and one or two antineutrons; the problem in making it is to get enough antiprotons and antineutrons close enough together. To give some idea of the trouble, a billion collisions of gold ions at energies of two hundred billion and sixty-two billion electron volts produced 18 nuclei of antihelium-4, each with a mass of 3.73 billion electron volts. Such a collision requires a temperature of over 250,000 times that of the sun's core.

Such antihelium can be detected through the gamma-ray frequencies emitted when the antinuclei annihilate on striking matter, and apparently also through the Alpha Magnetic Spectrometer on the International Space Station, which tracks cosmic rays. The important point is that antihelium-4 behaves exactly like an alpha particle, except that, because the antiprotons have negative charge, its trajectory bends in the opposite direction to that of an ordinary nucleus. These antinuclei can be made by the energies of cosmic rays hitting something; however, it has been calculated that the amount of antihelium-3 detected so far is 50 times too great to be explained by cosmic rays, and the amount of antihelium-4 is 100,000 times too great.

How can this be? The simple answer is that the antihelium is being made by antistars. If you accept that, gamma-ray detection indicates 5,787 sources, and it has been proposed that at least fourteen of these are antistars. If we look at the oldest stars near the centre of the galaxy, estimates suggest up to a fifth of the stars there could be antistars, possibly with antiplanets. If there were people on these, giving them a hug would be outright disastrous for both of you. Of course, caution is required here. It is always possible that this antihelium was made in some more mundane way that we do not yet understand. On the other hand, if there are antistars, a huge problem is solved automatically, even if it creates a bigger one: how did the matter and antimatter separate? As is often the case in science, solving one problem creates even bigger ones. However, real antistars would alter our view of the universe, and as long as the antimatter is at a good distance, we can accept them.

Much Ado About Muons

You may or may not have heard that the Standard Model, which explains "all of particle physics", is in trouble and "new physics" may be around the corner. All of this arises from a troublesome result from the muon, a particle that is very similar to an electron except that it is about 207 times more massive and has a mean lifetime of 2.2 microseconds. If you think that is not very important for your current lifestyle (and it isn't), wait, there's more. Like the electron it has a charge of -1 and a spin of ½, which means it acts like a small magnet. Now, if the particle is placed in a strong magnetic field, the direction of the spin wobbles (technically, precesses), and the strength of this interaction is described by a number called the g-factor, which the simplest (Dirac) theory puts at exactly 2. Needless to say, in the quantum world that is not quite right. For the electron it is roughly 2.002 319 304 362, the numbers stopping where the uncertainty starts. If nothing else, this shows the remarkable precision achieved by experimental physicists. Why is it not 2? The basic reason is that the particle interacts with the vacuum, which is not quite "nothing". Quantum electrodynamics has pinned this number down rather precisely, and quantum electrodynamics, which is part of the Standard Model, is considered to give the most accurate agreement between calculation and observation ever achieved. All was well, until this wretched muon misbehaved.

Now, the Standard Model pictures the vacuum as a "quantum foam" of virtual particles popping in and out of existence, and these short-lived particles affect the g-factor, causing the muon's wobble to speed up or slow down very slightly, which in turn leads to what is called an "anomalous magnetic moment". The Standard Model should calculate this to the same sort of accuracy as for the electron, and the calculations give:

  • g-factor: 2.00233183620
  • anomalous magnetic moment: 0.00116591810
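(In both lists the two numbers carry the same information: the anomalous magnetic moment $a$ is defined from the g-factor by

$$ a = \frac{g-2}{2}, $$

which you can check against both sets of values.)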

The experimental values announced by Fermilab and Brookhaven are:

  • g-factor: 2.00233184122(82)
  • anomalous magnetic moment: 0.00116592061(41)

The brackets indicate the uncertainty. Notice a difference? Would you say it is striking? Apparently there is only a one in 40,000 chance that it is a statistical fluke. Nevertheless, they will apparently keep the experiment running at Fermilab for another two years to firm it up. That is persistence, if nothing else.
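To see roughly where the "one in 40,000" figure comes from, here is a short sketch of my own. The theoretical uncertainty used below is an assumed input (roughly the value given by the Muon g-2 Theory Initiative); the post only quotes the experimental one:

```python
# My own arithmetic; the theory uncertainty of ~43e-11 is an assumption,
# not a number quoted in the post.
import math

a_exp, sigma_exp = 0.00116592061, 0.00000000041   # combined Fermilab + Brookhaven
a_thy, sigma_thy = 0.00116591810, 0.00000000043   # Standard Model value, assumed error

delta = a_exp - a_thy                     # size of the discrepancy
sigma = math.hypot(sigma_exp, sigma_thy)  # combine uncertainties in quadrature
z = delta / sigma                         # ~4.2 standard deviations
p = math.erfc(z / math.sqrt(2))           # two-sided probability of a fluke

print(f"discrepancy = {delta:.2e}, significance = {z:.1f} sigma, "
      f"odds of a statistical fluke ~ 1 in {1/p:,.0f}")
```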

This result is what has excited a lot of physicists, because it means the calculation of how this particle interacts with the vacuum has underestimated the actual effect for the muon. That suggests physics beyond the Standard Model and, in particular, that a new particle may be the cause of the additional effect. Of course, there has to be a fly in the ointment. One rather fearsome calculation claims to get a lot closer to the observational value. To me the real problem is: how can the same theory come up with two different answers when there is no arithmetical mistake?

Anyway, if the second calculation is right, is the problem gone? Again, not necessarily. At the Large Hadron Collider they have looked at B meson decay. This can produce electrons and positrons, or muons and antimuons. According to the Standard Model, these two sorts of particle are identical other than in mass, which means the rates of production should be identical, but they aren't quite. Again, it appears we are looking at small deviations. The problem then is that hypothetical particles that might explain one experiment fail for the other. Worse, the calculations are fearsome, and can take years. The Standard Model has 19 parameters that have to be obtained from experiment, so the errors can mount up, and if you wish to give the three neutrinos mass, in come another eight parameters. If we introduce yet another particle, in comes at least one more parameter, and probably more. Which raises the question: since adding a new assignable parameter will always answer one problem, how do we know we are even on the right track?

All of which raises the question: is the Standard Model, which is built on quantum field theory, itself too complicated, and maybe not on the right path? You might say, how could I possibly question quantum field theory, which gives such agreeable results for the electron magnetic moment, admittedly after including a long series of interactions? The answer is that it also gives the world's worst agreement for the cosmological constant. When you sum the effects of all these virtual particles over the cosmos, the prediction for the expansion of the Universe is wrong by a factor of about 10^120, that is, 1 followed by 120 zeros. Not exceptionally good agreement. To get the agreement it gets, something must be right, but as I see it, to get such a howling error, something must also be wrong. The problem is, what?

How Many Tyrannosaurs Were There?

Suppose you were transported back to the late Cretaceous; what is the probability that you would see a Tyrannosaurus? That depends on a large number of factors, and to simplify, I shall limit myself to T. Rex. There were various tyrannosaurs, but probably in different times and different places. As far as we know, T. Rex was limited to what was effectively an island land mass known as Laramidia, which now survives as part of western North America. In a recent edition of Science a calculation was made, and it starts with the premise, known as "Damuth's Law", that population density is negatively correlated with body mass through a power law involving two assignable constants plus the body mass. What does that mean? It is an empirical relationship that says the bigger the animal, the fewer will be found in a given area. The reason is obvious: the bigger the animal, the more it eats, and a given area has only so much food. Apparently one of the empirical constants has been assigned a value of 0.75, more or less, so now we are down to one assignable constant.
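Written out, Damuth's Law is just a power law; with the exponent fixed at about 0.75, only one constant is left to assign:

$$ D = a\,M^{-0.75}, $$

where $D$ is the population density (individuals per unit area), $M$ the body mass, and $a$ the remaining empirical constant, which for a carnivore hides all the food-supply considerations discussed below.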

If we concentrate on the food requirement, then it depends on what the animal eats, and what it does with it. To explain the last point, carnivores kill prey, so there has to be enough prey to supply the food AND to keep reproducing; there has to be a stable population of prey, otherwise the food runs out and everyone dies. The bigger the animal, the more food it needs to generate body mass and provide the energy to move; however, mammals have a further requirement over animals like snakes: they burn food to provide body heat, so mammals need more food per unit mass. It also depends on how specialized the food is. Thus pandas, specializing in eating bamboo, depend on bamboo growth rates (which happen to be fast) and on something else not destroying the bamboo. Tyrannosaurs presumably concentrated on eating large animals. Anything only a few centimeters high would probably be safe, apart from being accidentally stood on, because the Tyrannosaur could not get its head down low enough, and keep it there long enough, to catch it. The smaller raptors were also probably safe because they could run faster. So now the problem is: how many large animals were there, and was there a restriction? My guess is it would take on any large herbivore. In terms of the probability of meeting one, it also depends on how they hunted. If they hunted in packs, as is sometimes postulated, you are less likely to meet them, but you are in more trouble if you do.

That now gets back to how many large herbivores would be in a given area, and that in turn depends on the amount of vegetation and its food value. We have to make guesses about that. We also have to decide whether the Tyrannosaur generated its own heat. We cannot tell exactly, but the evidence does seem to support the idea that it was concerned about heat, as it probably had feathers. The article assumed that the dinosaur was about half-way between mammals and large lizards as far as heat generation goes. Provided the temperatures were warm, something as large as a Tyrannosaur could probably retain much of its own heat, as its surface area is a smaller fraction of its volume than for small animals.

The next problem is assigning body mass, which is reasonably straightforward for a given skeleton, but each animal starts out as an egg. How many juveniles were there? This is important because juveniles have different food requirements; they eat smaller herbivores. The authors took a distribution somewhat similar to that for tigers. If so, an area the size of California could support 3,800 T. Rex. We now need the area over which they roamed, and with a considerable possible error range, and limiting ourselves to land that is above sea level now, they settled on 2.3 ± 0.88 million square kilometers, which at any one time would support about 20,000 individuals. If we take a mid-estimate of how long they roamed, which is 2.4 million years, we get, with a very large error range, that the total number of T. Rex that ever lived was about 2.5 billion individuals. Currently, there are 32 individual fossils (essentially all partial), which shows how difficult fossilization really is. Part of this, of course, arises because fossilization depends on appropriate geology and conditions. So there we are: more useless information, almost certainly erroneous, but fun to speculate on.
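For what it is worth, the arithmetic can be reconstructed roughly as follows. This is my own sketch, and the generation time in it is an assumption (a figure I recall from the Science paper) rather than something quoted above:

```python
# Rough reconstruction of the numbers in the post; treat the inputs marked
# "assumed" as assumptions, not quoted values.
AREA_CALIFORNIA_KM2 = 424_000   # rough area of California, km^2
N_CALIFORNIA = 3_800            # standing population quoted for that area
RANGE_KM2 = 2.3e6               # mid-estimate of the T. Rex range, km^2
DURATION_YEARS = 2.4e6          # mid-estimate of how long the species existed
GENERATION_YEARS = 19           # assumed mean generation time

density = N_CALIFORNIA / AREA_CALIFORNIA_KM2      # ~0.009 individuals per km^2
standing_population = density * RANGE_KM2         # ~20,000 alive at any one time
generations = DURATION_YEARS / GENERATION_YEARS   # ~126,000 generations
total_ever = standing_population * generations    # ~2.5 billion individuals

print(f"density ≈ {density:.4f} per km^2")
print(f"standing population ≈ {standing_population:,.0f}")
print(f"total that ever lived ≈ {total_ever:,.0f}")
```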

Microplastics

You may have heard that the ocean is full of plastics, and while "full" is an excessive word, there are huge amounts of plastics there, thanks to humans' inability to look after some things when they have finished using them. Homo litterus is what we are. You may even have heard that these plastics degrade in light and form microscopic particles that are having an adverse effect on the fish population. If that is it, as they say, "You ain't heard nothin' yet."

According to an article in the Proceedings of the National Academy of Sciences, there are roughly 1,100 tons of microplastics in the air over the western US, and presumably there are corresponding amounts elsewhere. When you go for a walk in the wilderness to take in the fresh air, well, you also breathe in microplastics. 84% of that in the western US comes from roads outside the major cities, and 11% appears to be blowing in from the oceans. The particles stay airborne for about a week, and eventually settle somewhere. As to the source, plastic bags and bottles photodegrade and break down into ever-smaller fragments. When you put clothes made from synthetic fibers into your washing machine, tiny microfibers get sloughed off and end up wherever the wastewater ends up. The microplastics end up in the sludge, and if that is sold off as fertilizer, they end up in the soil; otherwise, they end up in the sea. The fragments of plastic get smaller, but they stay more or less as polymers, although nylons and polyesters will presumably hydrolyse eventually. However, at present there are so many plastics in the oceans that there may even be as much microplastic blowing out of the sea as plastic going in.

When waves crash and winds scour the seas, they launch seawater droplets into the air. If the water evaporates before the drop falls back, which happens for the smaller drops, you are left with an aerosol that contains salts from the sea, organic matter, microalgae, and now microplastics.

Agricultural dust provided 5% of the microplastics, and these are effectively recycled, while cities only provided 0.4%. The rest mainly comes from roads outside cities. When a car rolls down a road, tiny flecks come off the tyres, and tyre particles are counted among the microplastics because at that size the difference between a plastic and an elastomer is trivial. Road traffic in cities does produce a huge amount of such microplastics, but these did not affect this study because in the city, buildings shield the wind and the particles do not get lifted into the higher atmosphere. They simply pollute the citizens' air locally, so city dwellers merely get theirs "fresher". Also, the argument goes, cars moving at 100 km/h impart a lot of energy, but in cities cars drive much more slowly. I am not sure how they counted freeways/motorways/etc. that go through cities. They are hardly rural, although around here at rush hour they can sometimes look like they think they ought to be parking lots.

Another reason for counting tyre particles as microplastics is that apparently all the sources are so thoroughly mixed up it is impossible to differentiate them. The situation may be worse in Europe, because there waste plastics are disposed of by incorporating them in road-surfacing material, and hence as the surface wears, recycled waste particles get into the air.

Which raises the question: what to do? Option one is to do nothing and hope we can live with these microplastics. You can form your own ideas on this. The second is to ban them from certain uses. In New Zealand we have banned supermarket plastic bags, and when I go shopping I have reusable bags that are made out of, er, plastics, but of course they don't get thrown away or dumped in the rubbish. The third option is to destroy the used plastics.

I happen to favour the third option, because it is the only way to get rid of the polymers. The first step in such a system would be to size-reduce the objects and separate those that float on water from those that do not. Those that float can be pyrolysed to form hydrocarbon fuels that, with a little hydrotreating, can make good diesel or petrol, while those that sink can be broken down by hydrothermal pyrolysis to give much the same result. Hydrothermal treatment of wastewater sludge also makes fuel, and the residues, essentially carbonaceous solids, can be buried to return carbon to the ground. Such polymers will no longer exist as polymers. However, whatever we do, all that will happen is that we limit the load. The question then is, how harmless are they? Given we have yet to notice effects, they cannot be too hazardous, but what is acceptable?

A Discovery on Mars

Our space programs now seem to be focusing on ever lower concentrations and ever more obscure events, as if this will tell us something special. Recall that earlier there was the supposed finding of phosphine in the Venusian atmosphere. Nothing like stirring up controversy, because this was taken as a sign of life. As an aside, I wonder how many people have ever actually noticed phosphine anywhere? I have made it in the lab, but that hardly counts. It is not a very common material, and the signal in the Venusian atmosphere was almost certainly due to sulphur dioxide. That in itself is interesting when you ask how it would get there. The answer is surprisingly simple: sulphuric acid is known to be there, and being denser it might form a fog or even rain, but as it falls it reaches the hotter regions near the surface and pyrolyses to form sulphur dioxide, oxygen and water. These rise, the oxygen reacts with sulphur dioxide to make sulphur trioxide (probably helped by solar radiation), which then reacts with water to reform sulphuric acid, which is why the acid stays in the atmosphere. Things that have a stable level on a planet often have a cycle.
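Schematically, the cycle just described (my own summary of the prose above, not a scheme lifted from any paper) is:

$$ \mathrm{H_2SO_4} \xrightarrow{\ \text{heat near surface}\ } \mathrm{SO_2} + \mathrm{H_2O} + \tfrac{1}{2}\mathrm{O_2}, \qquad \mathrm{SO_2} + \tfrac{1}{2}\mathrm{O_2} \xrightarrow{\ h\nu\ } \mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \longrightarrow \mathrm{H_2SO_4}. $$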

In February this year, as reported in Physics World, a Russian instrument on a Mars orbiter detected hydrogen chloride in the atmosphere of Mars after a dust storm. This was done with a spectrometer that looked at sunlight as it passed through the atmosphere; a material such as hydrogen chloride shows up as darkened lines at the frequencies of its bond vibration in the infrared part of the spectrum. The band, spread out by the molecule's rotational transitions, would be fairly conclusive. I found the article interesting for all sorts of reasons, one of which was its habit of stating the obvious. Thus it stated that dust density was amplified in the atmosphere during a global dust storm. Who would have guessed that?

Then, with no further explanation, it was suggested the hydrogen chloride could be generated by water vapour interacting with the dust grains. Really? As a chemist, my guess would be that the dust had wet salt on it. UV radiation and atmospheric water vapour would oxidise that, making at first sodium hypochlorite (like domestic bleach) plus hydrogen. From the general acidity we would then get hydrogen chloride, and probably sodium carbonate dust. They were then puzzled as to how the hydrogen chloride disappeared. The obvious answer is that hydrogen chloride strongly attracts water, forming hydrochloric acid, and that would react with any oxide or carbonate in the dust to make chloride salts. If that sounds circular, yes it is, but there is a net degradation of water: oxygen or oxides would be formed, and hydrogen would be lost to space. The loss would not be very great, of course, because we are talking about parts per billion in a highly rarefied upper atmosphere, and only during a dust storm.

Hydrogen chloride would also be emitted during volcanic eruptions, but that can probably be ruled out here because Mars no longer has volcanic eruptions. Fumarole emissions would be too wet to get to the upper atmosphere, and if they occurred, and there is no evidence they still do, any hydrochloric acid would be expected to react rather quickly with oxides, such as the iron oxide that makes Mars look red. So the unfortunate conclusion is that the space program is running up against the law of diminishing returns. We are getting more and more information of ever-decreasing importance. Rutherford once claimed that physics was the only science – the rest was stamp collecting. Well, he can turn in his grave, because to me this is rather expensive stamp collecting.