Science in Action – or Not

For my first post in 2019, I wish everyone a happy and prosperous New Year. 2018 was a slightly different year than most for me, in that I finally completed and published my chemical bond theory as an ebook; that is something I had been putting off for a long time, largely because I had no clear idea what to do with the theory. There is something of a story behind this, so why not tell at least part of it in my first blog post for the year? The background to this illustrates why I think science has gone slightly off the rails over the last fifty years.

The usual way to get a scientific thought over is to write a scientific paper and publish it in a scientific journal. These tend to be fairly concise, and primarily present a set of data or make one point. One interesting point about science is that if a paper is not in accord with what people expect, the odds are it will be ignored, or the journals will not even accept it. You have to add to what people already believe to be accepted. As the great physicist Enrico Fermi once said, “Never underestimate the joy people derive from hearing something they already know.” Or at least think they know. The corollary is that you should never underestimate the urge people have to ignore anything that seems to contradict what they think they know.

My problem was I believed the general approach to chemical bond theory was wrong in the sense that it was not useful. The basic equations could not be solved, and progress could only be made through computer modelling, together with what John Pople, in his Nobel lecture, called validation, which involved “the optimization of four parameters from 299 experimentally derived energies”. These validated parameters only worked for a very narrow range of molecules; if the molecules were too different, the validation process had to be repeated with a different set of reference molecules. My view of this followed another quote from Enrico Fermi: I remember my friend Johnny von Neumann used to say, “with four parameters I can fit an elephant and with five I can make him wiggle his trunk.” (I read that with the more modern density functional theory, there could be up to fifty adjustable parameters. If after using that many you cannot get agreement with observation, you should most certainly give up.)
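
To illustrate von Neumann’s point, here is a minimal sketch of my own (nothing to do with Pople’s actual procedure, and the numbers are invented purely for illustration): with as many adjustable parameters as reference values, a fit reproduces the data exactly whatever the underlying physics, which is why a small validation set says little about predictive power.

```python
import numpy as np

# Five made-up "reference" energies (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, -0.4, 2.7, 5.9, 1.2])

# A five-parameter model (quartic polynomial) fitted to five points
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)

print(np.max(np.abs(fitted - y)))   # essentially zero: a "perfect" fit
print(np.polyval(coeffs, 6.0))      # yet this prediction outside the fitted range means nothing
```

The fit error is essentially zero, but the value predicted outside the fitted range is governed entirely by the parameters, not by any underlying relationship.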

Of course, when I started my career, the problem was just plain insoluble. If you remember the old computer print-out, there were sheets of paper about the size of US letter paper, and these would be folded in a heap. I had a friend doing such computations, and I saw him once with such a pile of computer paper many inches thick. This was the code, and he was frantic. He kept making alterations, but nothing worked – he always got one of two answers: zero and infinity. As I remarked, at least the truth was somewhere in between.

The first problem I attacked was the energy of electrons in the free atoms. In standard theory, if you assume in the Schrödinger equation that an electron in a neutral atom sees a net charge of one, the calculated binding energy is far too weak. This is “corrected” through a “screening constant”, and each situation has its own “constant”. That means each value was obtained by multiplying what you expect by something to give the right answer. Physically, this is explained by the electron penetrating the inner electron shells and experiencing a greater electric field.
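
As a rough indication of the size of the discrepancy (my own illustrative numbers, not from the post): the hydrogen-like formula gives a binding energy of 13.6 × Z_eff²/n² eV, so for the sodium 3s electron a naive Z_eff of 1 gives about 1.5 eV, whereas the measured ionization energy is about 5.1 eV, and the “screening constant” has to be chosen so that Z_eff comes out near 1.8.

```python
# Hydrogen-like binding energy with an effective nuclear charge (in eV)
RYDBERG_EV = 13.606

def binding_energy(z_eff: float, n: int) -> float:
    return RYDBERG_EV * z_eff**2 / n**2

# Sodium 3s electron: Z_eff = 1 badly underestimates the ~5.14 eV ionization energy
print(binding_energy(1.0, 3))    # ~1.51 eV, far too weak
print(binding_energy(1.84, 3))   # ~5.1 eV, after choosing Z_eff to fit the observation
```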

What I came up with is too complicated to go into here, but basically the concept was that since the Schrödinger equation (the basis of quantum mechanics) is a wave equation, assume there was a wave. That is at odds with standard quantum mechanics, but there were two men, Louis de Broglie and David Bohm, who had argued there was a wave that they called the pilot wave. (In a recent poll of physicists regarding which interpretation was most likely to be correct, the pilot wave got zero votes.) I adopted the concept (well before that poll) but I had two additional features, so I called mine the guidance wave.

For me, the atomic orbital was a linear sum of component waves: the usual hydrogen-like wave, a wave with zero nodes, and two additional waves to account for the Uncertainty Principle. It worked to a first order using only quantum numbers. I published it, and the scientific community ignored it.

When I used it for chemical bond calculations, the results are generally accurate to within a few kJ/mol, which is frequently a fraction of 1%. Boron, sodium and bismuth give worse results. A second-order term is necessary for atomic orbital energies, but it cancels in the chemical bond calculations. Its magnitude increases with distance from a full shell, and it oscillates in sign depending on whether the principal quantum number is odd or even, with the result that, going down a group of elements, the lines joining them zigzag.

Does it matter? Well, in my opinion, yes. The reason is that, first, it gives the main characteristics of the wave function in terms only of quantum numbers, free of arbitrary parameters. More importantly, the main term differs depending on whether the electron is paired or not, and since chemical bonding requires the pairing of unpaired electrons, the function changes on forming bonds. That means there is a quantum effect that is overlooked in the standard calculations. But, you say, surely they would notice that? Recall what I said about assignable parameters? With four of them, von Neumann could use the data to calculate an elephant! Think of what you could do with fifty!

As a postscript, I recently saw a claim in a web discussion that some of the unusual properties of gold, such as its colour, arise through a relativistic effect. I entered the discussion and said that if my paper was correct, gold is reasonably well-behaved, and its energy levels are quite adequately calculated without needing relativity, as might be expected from the energies involved. This drew little but derision: the paper was dated; an authority had spoken since then; a simple extrapolation from copper to silver to gold shows gold is anomalous; I should go and read a tutorial. I offered the fact that all energy levels require enhanced screening constants, therefore Maxwell’s equations, the basic laws of electromagnetism, are not being followed. Derision again – someone must have worked that out. If so, what is the answer? As for the colour, copper is also coloured. As for the extrapolation, you should not simply keep drawing a zig to work out where the zag ends. The interesting point here was that this person was embedded in “standard theory”. Of course standard theory might be right, but whether it is depends on whether it explains nature properly, and not on who the authority spouting it is.

Finally, a quote to end this post, again from Enrico Fermi. When asked what characteristics Nobel prize winners had in common: “I cannot think of a single one, not even intelligence.”


Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions, or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime?? Anyway, they then observed that metals did something when heated in air:

M + air + heat → M(calx) ± ???  (A calx was what we call an oxide.)

They deduced there had to be a metallic principle that gives the metallic properties, such as ductility, lustre, malleability, etc., but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture; thus rust, unlike iron, was not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint, you get green and they cannot readily be unmixed, nevertheless it is a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials. Even more important was the work by Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalized that acid plus metal or acid plus metal calx gave salts, and each salt was specifically different, and depended only on the acid and metal used. He also recognized that what we call chemical compounds were individual entities, that could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (but some did not) and the calx was usually heavier than the metal. The theory was, the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was lighter than the metal. For consistency, they asserted it should have gained weight, but as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have gained weight was correct, and indeed it did, but this avoiding of the “data you don’t like” leads to many problems, not the least of which is “inventing” reasons why observations do not fit the theory without taking the trouble to abandon the theory. This time they were right, but that only encourages the practice. As to why there was a problem, zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?

Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases, could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back from where it came that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of this theory. He then went wrong by arguing the more phlogiston, the more flammable the body, which is odd, because if he said some but not all such materials could release phlogiston, he might have thought that some might release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He also realized that respiration was essentially the same process, and fire or breathing releases phlogiston, to make phlogisticated air, and he also realized that plants absorbed such phlogiston, to make dephlogisticated air.

For those that know, this is all reasonable, but happens to be a strange mix of good and bad conclusions. The big problem for Stahl was he did not know that “air” was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships for which numerical values can be attached, and at least some of these are known in advance. Stahl’s theory was interesting because it got chemistry going in a systemic way, but because we don’t believe in phlogiston, Stahl is essentially forgotten.

People have blind spots. Priestley also carried out Lavoisier’s experiment: 2HgO + heat ⇌ 2Hg + O2, and found that mercury was lighter than the calx, so argued phlogiston was lighter than air. He knew there was a gas there, but the fact it must also have weight eluded him. Lavoisier’s explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion, the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get “exactly”. Measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions, various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded chemistry.

So, looking back, did phlogiston have merit as a concept? Most certainly! “The metal gives off a weightless substance that sticks to a particular gas” can be replaced with “the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion.” Opposite charges attract and try to bind together. This is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could determine the weight difference between a metal and a metal less one electron, even if they could have worked out how to make it. Of course the next step is to say that the phlogiston is a discrete particle, and now valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. There are very few ever who are comparable to Newton.

Is Science in as Good a Place as it Might Be?

Most people probably think that science progresses through all scientists diligently seeking the truth, but that illusion was shattered when Thomas Kuhn published “The Structure of Scientific Revolutions.” Two quotes:

(a) “Under normal conditions the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition.”

(b) “Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are the men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

Is that true, and if so, why? I think it follows from the way science is learned and then funded. In general, scientists gain their expertise by learning from a mentor, and if you do a PhD, you work for several years in a very narrow field, and most of the time the student follows the instructions of the supervisor. He will, of course, discuss issues with the supervisor, but basically the young scientist will have acquired a range of techniques when finished. He will then go on a series of post-doctoral fellowships, generally in the same area, because he has to persuade the new team leaders he is sufficiently skilled to be worth hiring. So he gains more skill in the same area, but invariably he also becomes more deeply submerged in the standard paradigm. At this stage of his life, it is extremely unusual for the young scientist to question whether the foundations of what he is doing are right, and since most continue in this field, they have the various mentors’ paradigm well ingrained. To continue, either they find a company or other organization to provide an income, or they stay in a research organization, where they need funding. When they apply for it they keep well within the paradigm; first, it is the easiest route to success, and also boat rockers generally get sunk right then. To get funding, you have to show you have been successful; success is measured mainly by the number of scientific papers and the number of citations. Accordingly, you choose projects that you know will work and should not upset any apple-carts. You cite those close to you, and they will cite you; accuse them of being wrong and you will be ignored, and with no funding, tough. What all this means is that the system seems to have been designed to generate papers that confirm what you already suspect. There will be exceptions, such as “discovering dark matter”, but all that has done so far is to provide a parking place for what we do not understand. Because we do not understand, all we can do is make guesses as to what it is, and the guesses are guided by our current paradigm, and so far our guesses are wrong.

One small example follows to show what I mean. By itself, it may not seem important, and perhaps it isn’t. There is an emerging area of chemistry called molecular dynamics. What this tries to do is work out how energy is distributed in molecules, as this distribution alters chemical reaction rates, and this can be important for some biological processes. One such feature is to try to relate how molecules, especially polymers, can bend in solution. I once went to hear a conference presentation where this was discussed, and the form of the bending vibrations was assumed to be simple harmonic, because for that the maths are simple, and anything wrong gets buried in various “constants”. All question time was taken up by patsy questions from friends, but I got hold of the speaker later, and pointed out that I had published a paper a long time previously that showed the vibrations were not simple harmonic, although that was a good approximation for small vibrations. The problem is that small vibrations are irrelevant if you want to see significant chemical effects; those come from large vibrations. Now the “errors” can be fixed with a sequence of anharmonicity terms, each with their own constant, and each constant is worked around until the desired answer is obtained. In short, you get the answer you need by adjusting the constants.
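
A minimal sketch of the point about large vibrations (my own illustration, using a Morse potential simply as a stand-in for a more realistic bond or bending potential, with arbitrary parameters): near the minimum the harmonic approximation is excellent, but at larger displacements the two curves diverge badly, and that divergence is exactly where the chemically interesting behaviour lies.

```python
import numpy as np

# Assumed, purely illustrative parameters (not from any specific molecule)
D_e = 4.5            # well depth (arbitrary energy units)
a = 1.0              # width parameter (1/length)
k = 2 * D_e * a**2   # harmonic force constant matching the Morse curvature at the minimum

def morse(x):
    return D_e * (1 - np.exp(-a * x))**2

def harmonic(x):
    return 0.5 * k * x**2

for x in (0.05, 0.2, 1.0, 2.0):
    print(f"x = {x:4.2f}  Morse = {morse(x):6.3f}  harmonic = {harmonic(x):6.3f}")
# Small x: the two agree closely; large x: the harmonic potential keeps rising
# while the Morse potential flattens out towards dissociation.
```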

The net result is, it is claimed that good agreement with observation is found once the “constants” are found for the given situation. The “constants” appear to be constant only for a given situation, so arguably they are not constants, and worse, it can be near impossible to find out what they are from the average paper. Now, there is nothing wrong with using empirical relationships, since if they work, they make it a lot easier to carry out calculations. The problem starts when, because you do not know why it works, you use it under circumstances where it no longer works.

Now, before you say that surely scientists want to understand, consider the problem for the scientist: maybe there is a better relationship, but to change to use it would involve re-writing a huge amount of computer code. That may take a year or so, in which time no publications are generated, and when the time for applications for further funding comes up, besides having to explain the inactivity, you have to explain why you were wrong before. Who is going to do that? Better to keep cranking the handle because nobody is going to know the difference. Does this matter? In most cases, no, because most science involves making something or measuring something, and most of the time it makes no difference, and also most of the time the underpinning theory is actually well established. The NASA rockets that go to Mars very successfully go exactly where planned using nothing but good old Newtonian dynamics, some established chemistry, some established structural and material properties, and established electromagnetism. Your pharmaceuticals work because they have been empirically tested and found to work (at least most of the time).

The point I am making is that nobody has time to go back and check whether anything is wrong at the fundamental level. Over history, science has been marked by a number of debates, and a number of treasured ideas overthrown. As far as I can make out, since 1970, far more scientific output has been made than in all previous history, yet there have been no fundamental ideas generated during this period that have been accepted, nor have any older ones been overturned. Either we have reached a stage of perfection, or we have ceased looking for flaws. Guess which!

Fuel for Legacy Vehicles in a “Carbon-free” Environment

Electric vehicles will not solve our emissions problem: there are over a billion petroleum-driven vehicles, and they will not go away any time soon. Additionally, people have a current investment, and while billionaires might throw away their vehicles, most ordinary people will not change unless they can sell what they have, which in turn means someone else is using it. This suggests the combustion motor is not yet finished, and the CO2 emissions will continue for a long time yet. That gives us a rather awkward problem, and as noted in the previous posts on global warming, there is no quick fix. One of the more obvious contributions could be biofuels. Yes, you still burn carbon, but the carbon came from the atmosphere. There will also be processing energy, but often that can come from the byproducts of the process. At this point I should add a caveat: I have spent quite a bit of my professional life researching this route so perhaps I have a degree of bias.

The first point is that it would be wrong to take grain and make alcohol for fuel, other than as a way of getting rid of spare or spoiled grain. The world will also have a food shortage, especially if the sea levels start rising, because much of the most productive land is low-lying. If we want to grow enough biomass for fuel, we need an area of land roughly equivalent to the area used for food production, and that land is not there. There are wastelands, but they tend to be non-productive. However, that does not mean we cannot grow biomass for fuel; it merely means there is nowhere near enough land to do it all that way. Again, there is no single fix.

What you get depends critically on how you do it, and what your biomass is. Of the various processes, I prefer hydrothermal processing, which involves heating the biomass in water up to supercritical temperatures, with some additional conditions. In effect, this greatly accelerates the processes that formed oil naturally. Corresponding pyrolysis will break down plastics, and in general high quality fuel is obtainable. The organic fraction of municipal refuse could also be used to make fuel, and in my ebook “Biofuel” I calculated that refuse could produce roughly seven litres per week per person. Not huge, but still a contribution, and it helps solve the landfill problem. However, the best options that I can think of include macroalgae and microalgae. Macroalgae would have to be cultivated, but in the 1970s the US Navy carried out an exercise that grew macroalgae on “submerged rafts” in the open Pacific, with nutrients from the sea floor brought up by wind and wave action. Currently there is work being carried out growing microalgae in tanks, etc., in various parts of the world. In principle, microalgae could be grown in the open ocean, if we knew how to harvest it.

I was involved in one project that used microalgae grown in sewage treatment plants. Here there should have been a double benefit – sewage has to be treated so the ponds are already there, and the process cleans up the nitrogen and phosphate that would otherwise be dumped into the sea, thus polluting it. The process could also use sewage sludge, and the phosphate, in principle, was recoverable. A downside was that the system would need more area than the average treatment plant because the residence time is somewhat longer than the current time, which seems designed to remove the worst of the oxygen demand then chuck everything out to sea, or wherever. This process went nowhere; the venture needed to refinance and unfortunately they left it too late, namely shortly after the Lehman collapse.

From the technical point of view, this hydrothermal technology is rather immature. What you get can depend critically on exactly how you do it. You end up with a thick brown fluid, from which you can obtain a number of products. Your petrol fraction is generally light aromatics, with a research octane number (RON) of about 140, and the diesel fraction can have a cetane number approaching 100 (because the main components are straight-chain C15 or C17 saturated hydrocarbons; cetane is the C16 equivalent). These are superb fuels; however, while current motors would run very well on them, the motors are not optimized to take advantage of them.

We can consider ethanol as an example. It has an RON somewhere in the vicinity of 120 – 130. People say ethanol is not much of a fuel because its energy content is significantly lower than that of hydrocarbons, and that is correct, but energy is not the whole story, because efficiency also counts. The average petrol motor is rather inefficient, and most of the energy comes out as heat. The work you can get out depends on the change of pressure times volume, so the efficiency can be significantly improved by increasing the compression ratio. However, if the compression is too great, you get pre-ignition. The modern motor is designed to run well with an octane number of about 91, with some a bit higher. That is because they are designed to use most of the distillate from crude oil. Another advantage of ethanol is you can blend some water into it, which absorbs heat and dramatically increases the pressure. So ethanol and oxygenates can be used.
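
As a rough indication of why the compression ratio matters (a textbook idealization, not my own engine data): the ideal air-standard Otto-cycle efficiency is 1 − r^(1−γ), so going from a compression ratio of about 8, typical of a 91-octane engine, to something like 13, which a much higher-octane fuel could tolerate, gains several percentage points of thermal efficiency before any real-engine losses are counted.

```python
# Ideal air-standard Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)
GAMMA = 1.4  # ratio of specific heats for air

def otto_efficiency(compression_ratio: float) -> float:
    return 1.0 - compression_ratio ** (1.0 - GAMMA)

for r in (8, 10, 13):
    print(f"r = {r:2d}  ideal efficiency = {otto_efficiency(r):.3f}")
# r = 8 gives ~0.565, r = 13 gives ~0.642: the higher compression ratio that a
# high-octane fuel permits buys a substantial gain, at least in the ideal limit.
```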

So the story with biofuels is very similar to the problems with electric vehicles; the best options badly need more research and development. At present, it looks as if they will not get it in time. Once you have your process, it usually takes at least ten years to get a demonstration plant operating. Not a good thought, is it?

Is science being carried out properly?

How do scientists carry out science, and how should they? These are questions that have been raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is “resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence.” What does that mean? In my opinion, it is along the lines of: to those who have, much will be given. “Much” here refers to much of what is available. Government funding can be tight, and in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards something that is difficult to fail, except maybe for some very large projects, like the Large Hadron Collider. The most important thing required is that something is measured, and that that something is more or less understandable and acceptable by a scientific journal, for that is a successful result. In some cases, the question “Why was that measured?” would best be answered, “Because it was easy.” Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson, and supersymmetry particles. They found the first, and I suppose when the question of building the collider arose, the reference (totally not apt) to the “God Particle” did not hurt.

However, while getting research funding for things to be measured is difficult, getting money for analyzing what we know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, and particularly for analyzing what we know. We are in this quite strange position that while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten.

Theory is thus stagnating. With the exception of cosmic inflation, there have been no new major theories that have taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course this may merely be due to the fact that new theories have been proposed, but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Not very encouraging. Another reason may be that a new theory failed to draw attention to itself. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you define carefully what you do not know exists?

Further information from this Science article provided some interest. An investigation led to what non-scientists might consider a highly odd result, namely that for a scientific paper to be a hit, usually at least 90 per cent of what is written has to be well established. Novelty might be prized, but unless well mixed with the familiar, nobody will read it, or even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach, but rather anything new will be a tweak on what is established. To add to this, a study of “star” scientists who had premature deaths led to an interesting observation: the output of their collaborators fell away, which indicates that only the “star” was contributing much intellectual effort, and probably actively squashing dissenting views, whereas new entrants to the field who were starting to shine tended not to have done much in that field before the “star” died.

A different reviewer noticed that many scientists put in very little effort to cite past discoveries, and when literature is cited, most of it is no more than about five years old. There will be exceptions, usually through citing papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any subject illumination. Another reviewer noted that scientists appeared to be narrowly channeled in their research by the need to get recognition, which requires work familiar to the readers and reviewers, particularly those who review funding applications. The important thing is to keep up an output of “good work”, and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn’s picture of science, where the new activities are generally puzzles that are to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming that which would have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Have you got what it takes to form a scientific theory?

Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data and the interpretation of what everyone thinks that set of data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests it should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form a rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which part of the allegedly established interpretation is in fact wrong. It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. Why I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don’t, I like to think there is still an interesting story there.

From September 13 – 20, my novel Athene’s Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier as regards the actual story, rather than this problem. In this, my fictional character, Gaius Claudius Scaevola, is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from “fake news” and a bad subsequent press. First the nickname: no Roman would call him Caligula, because even his worst enemies would recognize he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great grandfather etc. of the one you recognize. Caligulae was also Gaius Julius Caesar. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means “lefty”. It is less clear what Caesar means, because in Latin the “ar” ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means “hairy”. Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation, the Latin “C” is the equivalent of the Greek gamma, so it is pronounced as a “G” or “K” – the difference is small and we have no way of knowing. “ae” is pronounced as in “pie”. So Caesar is pronounced something like the German Kaiser.

Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was only personally responsible for thirteen executions, and he had three failed coup attempts on his life, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this is prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators.

He is accused of making a fatuous invasion of Germany. Actually, the evidence is he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is actually a text-book account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes described as decrepit ships. Whatever the reason, Caligulae gave them the choice between “conquering Neptune” by collecting a mass of sea shells, invading Britain, or facing decimation. They collected sea shells. The exercise was not madness: it was a total humiliation for the two legions to have to carry these through Rome in the form of a “triumph”. This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact he despised many senators inevitably led to a bad press, because it was the senatorial class that wrote the histories, but like a certain president, he seemed to go out of his way to encourage the bad press. However, he was not seen as a tyrant by the masses. When he died, the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency, and, of course, to report it in English.

As for showing how scientific theory can be developed, in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when theory is applied, all major theories start with relatively simple concepts. Take quantum mechanics as an example of a reasonably difficult piece of theoretical physics. To get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. Now, the mathematician Hamilton had already shown you can manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) for a wavelength was constant, specifically in units of h (Planck’s quantum of action). All Schrödinger had to do was to manipulate Hamilton’s waves and ensure that the action came in units of h per wavelength. That may seem easy, but everything was present for some time before Schrödinger put it together. Coming up with an original concept is not at all easy.
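
For the record, the two relations being combined can be written compactly (these are the standard textbook forms, not taken from anything in this post): de Broglie’s condition that the action per wavelength is h, and the time-independent Schrödinger equation that results when a wave with that wavelength is fed into a Hamilton-style wave equation.

```latex
% de Broglie: action per wavelength equals Planck's constant
p\,\lambda = h \quad\Longleftrightarrow\quad \lambda = \frac{h}{p}

% Time-independent Schr\"odinger equation for a particle of mass m in a potential V
-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi, \qquad \hbar = \frac{h}{2\pi}
```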

Anyway, in the novel, Scaevola has to prove the Earth goes around the sun, with what was available then. (No telescopes that helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage that you know it does. (And no, you do not have to invent calculus or Newtonian mechanics.)

The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots in Egypt, then the religious problems of Judea as Christianity starts.

Monarchic Growth of Giant Planets

In the previous post, I outlined the basic mechanism of how I thought the giant planets formed, and how their mechanism of formation put them at certain distances from the sun. Like everyone else, I assign Jupiter to the snow point, in which case the other planets are where they ought to be. But that raises the question: why only one planet in each zone? Let’s take a closer look at this mechanism.

In the standard mechanism, dust accretes into objects by some unknown mechanism, and does this essentially based on collision probability, so the disk progresses with a distribution of roughly equal-sized objects that collide under the same rules, and these eventually become what are called planetesimals, which are about the size of the classical asteroid. (I say classical because, as we get better at this, we are discovering a huge number of much smaller “asteroids”, and we have the problem of what the word asteroid actually means.) This process continues, and eventually we get Mars-sized objects called oligarchs, or embryos, then these collide to give planets. The size of the planet depends on how many oligarchs collide; thus fewer collided to make Venus than Earth, and Mars is just one oligarch. I believe this is wrong for four reasons: first, giants cannot grow fast enough; second, the dust is still there in 30 My old disks; third, the collision energies should break up the bodies at any given size, because collisions form craters, not hills; and fourth, the system should be totally mixed up, but isotope evidence shows that bodies seem to have accreted solely from the material at roughly their own distance from the sun.

There is an alternative called monarchic growth, in which, if one body can get a hundred times bigger than any of the others, it alone grows by devouring the others. For this to work, we need initial accretion to be possible, but not extremely probable from dust collisions. Given that we see disks by their dust that are estimated to be up to 30 My old, that seems a reasonable condition. Then, once it starts, we need a mechanism that makes further accretion inevitable, that is, when dust collides, it sticks. The mechanism I consider to be most likely (caveat – I developed it so I am biased) is as follows.

As dust comes into an appropriate temperature zone, collisions transfer kinetic energy into heat that melts an ice at the point of contact, and when it quickly refreezes, the dust particles are fused to the larger body. So accretion occurs a little below the melting temperature, and the probability of sticking falls off as the distance from that appropriate zone increases, but there is no sharp boundary. The biggest body will be in the appropriate zone, because most collisions there will lead to sticking, and once the body gets to an appropriate size, maybe as small as a metre across, it goes into a Keplerian orbit. The gas and dust are going slower, due to gas drag (which is why the star is accreting), so the body in the optimal zone accretes all the dust and larger objects it collides with. Until the body gets sufficiently large gravitationally, collisions have low relative velocity, so the impact energy is modest.
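
A back-of-envelope check on the contact-melting idea (my own illustrative numbers, not from the post, with low collision speeds simply assumed for the pre-gravitational stage): the kinetic energy per unit mass at such speeds is tiny compared with the latent heat of fusion of water ice, so only a very small region right at the point of contact can melt and refreeze, consistent with gentle fusing rather than destructive impacts.

```python
# Kinetic energy per kg of impactor vs. latent heat of fusion of water ice
LATENT_HEAT_ICE = 3.34e5   # J/kg, heat needed to melt ice at its melting point

def specific_kinetic_energy(v: float) -> float:
    """Kinetic energy per kilogram for a relative speed v (m/s)."""
    return 0.5 * v**2

for v in (1.0, 10.0, 50.0):   # assumed low pre-gravitational collision speeds
    fraction = specific_kinetic_energy(v) / LATENT_HEAT_ICE
    print(f"v = {v:5.1f} m/s  KE/kg = {specific_kinetic_energy(v):8.1f} J  "
          f"could melt at most ~{fraction:.2e} of the impactor's mass of ice")
# Even at 50 m/s only ~0.4% of the mass could melt if all the energy went into
# melting, so in practice only the contact region is affected.
```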

Once it gets gravitationally bigger, it will accrete the other bodies that are at a similar radial distance. The reason is that if everything is in circular orbits, orbits slightly further from the star have longer periodic times, in part because those bodies move slightly slower, and in part because they have slightly further to go, so the larger body catches up with them and its gravity pulls the smaller body in. Unless the smaller body is at exactly the same radial distance from the star, the two will pass very closely, and if one has enough gravity to attract the other, they will collide. Suppose there are two bodies at the same radial distance. That too is gravitationally unstable once they get sufficiently large. Not all interactions lead to collisions, and it is possible that one can be thrown inwards while the other goes outwards, and the one going in may circularise somewhere else closer to the star. In this instance, Ceres has a density very similar to the moons of Jupiter, and it is possible that it started life in the Jovian region, came inwards, and then finished accreting material from its new zone.
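
To put a number on the catch-up effect (a simple Kepler’s-third-law sketch with distances I have assumed for illustration, not anything from the post): two bodies on circular orbits separated by a small radial offset have slightly different periods, and the synodic period tells you how long the inner one takes to lap the outer one and get another close encounter.

```python
# Kepler's third law for circular orbits around the Sun: T^2 is proportional to a^3,
# with T in years when a is in astronomical units.
def period_years(a_au: float) -> float:
    return a_au ** 1.5

a_inner = 5.2   # assumed: roughly Jupiter's distance (AU)
a_outer = 5.3   # assumed: a nearby smaller body, 0.1 AU further out

t_inner = period_years(a_inner)
t_outer = period_years(a_outer)

# Synodic period: time between successive close approaches of the two bodies
synodic = 1.0 / (1.0 / t_inner - 1.0 / t_outer)
print(f"Inner period: {t_inner:.2f} yr, outer period: {t_outer:.2f} yr")
print(f"The inner body laps the outer one roughly every {synodic:.0f} years")
```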

The net result of this is that a major body grows, while smaller bodies form further away, trailing off with distance; then there is a zone where nothing accretes, until further out there is the next accretion zone. Such zones get further apart as you get further from the star, because the temperature gradient decreases. That is partly why Neptune has a Kuiper Belt outside it. The giants further in do not, because with a giant on each side, the gravity causes those regions to be cleaned out. This means that after the system becomes settled, a lot of residues start bombarding the planets. This amounts to what could be called a “Great Bombardment”, but it means each planet gets a bombardment mainly of its own composition, and there could be no significant bombardment by bodies from another zone. This means the bombardment would have the same chemical composition as the planet itself.

Accordingly, we have a prediction. Is it right? It is hard to tell on Earth because while Earth almost certainly had such a bombardment, plate tectonics has altered the surface so much. Nevertheless, the fact the Moon has the same isotopes as Earth, and Earth has been churned but the Moon has not, is at least minor support. There is, of course, a second prediction. There seem to be many who assume the interior of the Jovian satellites will have much nitrogen. I predict very little. There will be some through adsorption of ammonia onto dust, and since ammonia binds more strongly than neon, then perhaps there will be very modest levels, but the absence of such material in the atmosphere convinces me it will be very modest.