Some Shortcomings of Science

In a previous post, in reference to the blog repost, I stated I would show some of the shortcomings of science, so here goes.

One of the obvious failings is that people seem happy to ignore what should convince them. The first sign I saw of this type of problem was in my very early years as a scientist. Sir Richard Doll produced a report that convincingly (at least to me) linked smoking to cancer. Out came a number of papers rubbishing this, largely from people employed by the tobacco industry. Here we have a clear conflict, and while it is ethically correct to try to show that some hypothesis is wrong, the attempt should be based on sound logic. Now I believe that there are usually a very few results, maybe as few as one specific result, that make the conclusion unassailable. In this case, chemists isolated the constituents of cigarette smoke and found over 200 suspected carcinogens, and trials with some of these on lab rats were conclusive: as an example, one dab of pure 3,4-benzopyrene gave an almost 100% probability of inducing a tumour. Now that is a far greater concentration than any person will get from smoking, and people are not rats; nevertheless, this showed me that on any reasonable assessment, smoking is a bad idea. (It was also a bad idea for a young organic chemist: who needs an ignition source a few centimetres in front of the face when handling volatile solvents?) Yet fifty years or so later, people continue to smoke. It seems to be a Faustian attitude: the cancer will come decades later, or for some lucky ones, not at all, so ignore the warning.

A similar situation is occurring now with climate change. The critical piece of information for me is that during the 1990s and early 2000s (the period of the study) it was shown there is a net power input to the oceans of 0.64 W/m². If there is a continuing net energy input to the oceans, they must be warming. Actually, the Tasman Sea has been clearly warming, and the evidence from other oceans supports that. So the planet is heating. Yet there are a small number of “deniers” who put their heads in the sand and refuse to acknowledge this, as if by doing so the problem goes away. Scientists seem unable to make people face up to the fact that the problem must be dealt with now, while the price is not paid until much later. As an example, in 2014 US Senate Republican leader Mitch McConnell said: “I am not a scientist. I’m interested in protecting Kentucky’s economy.” He forgot to add, now.
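To put that number in perspective, here is a rough back-of-the-envelope calculation of my own, using a standard approximate figure of 3.6 × 10^14 m² for the ocean surface area (nothing here comes from the study itself):

```python
# Back-of-the-envelope check of what 0.64 W/m^2 into the oceans amounts to.
ocean_area = 3.6e14          # m^2, approximate total ocean surface area
net_flux = 0.64              # W/m^2, the figure quoted above
seconds_per_year = 3.156e7

power = net_flux * ocean_area               # watts
energy_per_year = power * seconds_per_year  # joules per year
print(f"Net power into the oceans: {power:.2e} W")
print(f"Energy gained per year:    {energy_per_year:.2e} J")
# Roughly 2.3e14 W, i.e. about 7e21 J every year - a large, steady heat gain.
```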

The problem of ignoring what you do not like is general and pervasive, as I quickly learned while doing my PhD. My PhD was somewhat unusual in that I chose the topic and designed the project. No need for details here, but I knew the department, and my supervisor, had spent a lot of effort establishing constants for something called the Hammett equation. There was a great debate going on over whether the cyclopropane ring could delocalise electronic charge in the same way as a double bond, only more weakly, and this equation could actually address that question. The very limited use of it by others at the start of my project was inconclusive, for reasons we need not go into here. Anyway, by the time I finished, my results showed quite conclusively that it did not. The general consensus, however, based essentially on the observation that positive electric charge was strongly stabilised by the ring, and on molecular orbital theory (which assumes the delocalisation initially, so was hardly conclusive on this question), was that it did. My supervisor made one really good suggestion as to what to do when I ran into trouble, and this was the part that showed the effect the most. But when it became clear that everyone else was agreeing on the opposite and he had moved to a new position, he refused to publish that part.
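For readers who have not met it, the Hammett equation is a simple linear free-energy relationship; this is the standard textbook form, not anything specific to my project:

```latex
% Hammett equation: relates rate or equilibrium constants of substituted
% benzene derivatives to those of the unsubstituted parent compound.
\[
  \log\frac{K}{K_0} = \sigma\rho
\]
% K_0: constant for the parent compound; K: for the substituted compound;
% \sigma: tabulated substituent constant; \rho: reaction constant, which
% measures how sensitive the reaction is to electron supply or withdrawal,
% and hence can probe whether a group such as cyclopropyl delocalises charge.
```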

This was an example of what I believe is the biggest failing. The observation everyone clung to was unexpected and needed a new explanation, and what they came up with most certainly gave the right answer for that specific case. However, many times there is more than one possible explanation, and I came up with an alternative based on classical electric field theory that also predicted positive charge would be stabilised, and by how much, but it further predicted that negative charge would be destabilised. The delocalisation concept required both to be stabilised. So there was a means of distinguishing them, and there was a very small amount of clear evidence that negative charge was destabilised. Why only a small amount of evidence? Well, most attempts at making such compounds failed outright, which is in accord with the compounds being unstable, but it is not definitive.

So what happened? A review came out that “convincingly showed” the answer was yes. The convincing part was that it cited a deluge of “me too” work on the stabilisation of positive charge. It ignored my work, and, as I later found out when I wrote a review of my own, it also ignored over 60 different types of evidence that contradicted the “yes” answer. My review was not published, because it appears chemistry journals do not publish logic analyses. I could not be bothered rewriting it, although the draft document is on the web if anyone is interested.

The point this shows is that once a paradigm is embedded, even if on shaky grounds, it is very hard to dislodge, in accord with what Thomas Kuhn noted in “The Structure of Scientific Revolutions”. One of the points Kuhn made was that if the paradigm had evidence behind it, scientists would rush to write papers confirming the paradigm by doing minor variations on what worked. That happened above: they were not interested in testing the hypothesis; they were interested in getting easy papers published to advance their careers. Kuhn also noted that observations that contradict the paradigm are ignored as long as they can be. Maybe over 60 different types of observations that contradict, or falsify, the paradigm is a record? I don’t know, but I suspect the chemical community will not be interested in finding out.


Repost from Sabine Hossenfelder’s blog “Backreaction”

Two posts this week. The first is more for scientists, but I think it raises points that people reading about science should recognise as possibly present. Sabine has been somewhat critical of some of modern science, and I feel she has a point. I shall do a post of my own on this topic soon, but it might be of interest to read the post following this to see what sort of things can go wrong.
Both bottom-up and top-down measures are necessary to improve the current situation. This is an interdisciplinary problem whose solution requires input from the sociology of science, philosophy, psychology, and – most importantly – the practicing scientists themselves. Details differ by research area. One size does not fit all. Here is what you can do to help.

As a scientist:

  • Learn about social and cognitive biases: Become aware of what they are and under which circumstances they are likely to occur. Tell your colleagues.
  • Prevent social and cognitive biases: If you organize conferences, encourage speakers to not only list motivations but also shortcomings. Don’t forget to discuss “known problems.” Invite researchers from competing programs. If you review papers, make sure open questions are adequately mentioned and discussed. Flag marketing as scientifically inadequate. Don’t discount research just because it’s not presented excitingly enough or because few people work on it.
  • Beware the influence of media and social networks: What you read and what your friends talk about affects your interests. Be careful what you let into your head. If you consider a topic for future research, factor in that you might have been influenced by how often you have heard others speak about it positively.
  • Build a culture of criticism: Ignoring bad ideas doesn’t make them go away, they will still eat up funding. Read other researchers’ work and make your criticism publicly available. Don’t chide colleagues for criticizing others or think of them as unproductive or aggressive. Killing ideas is a necessary part of science. Think of it as community service.
  • Say no: If a policy affects your objectivity, for example because it makes continued funding dependent on the popularity of your research results, point out that it interferes with good scientific conduct and should be amended. If your university praises its productivity by paper counts and you feel that this promotes quantity over quality, say that you disapprove of such statements.

As a higher ed administrator, science policy maker, journal editor, representative of funding body:

  • Do your own thing: Don’t export decisions to others. Don’t judge scientists by how many grants they won or how popular their research is – these are judgements by others who themselves relied on others. Make up your own mind, carry responsibility. If you must use measures, create your own. Better still, ask scientists to come up with their own measures.
  • Use clear guidelines: If you have to rely on external reviewers, formulate recommendations for how to counteract biases to the extent possible. Reviewers should not base their judgment on the popularity of a research area or the person. If a reviewer’s continued funding depends on the well-being of a certain research area, they have a conflict of interest and should not review papers in their own area. That will be a problem because this conflict of interest is presently everywhere. See next 3 points to alleviate it.
  • Make commitments: You have to get over the idea that all science can be done by postdocs on 2-year fellowships. Tenure was institutionalized for a reason and that reason is still valid. If that means fewer people, then so be it. You can either produce loads of papers that nobody will care about 10 years from now, or you can be the seed of ideas that will still be talked about in 1000 years. Take your pick. Short-term funding means short-term thinking.
  • Encourage a change of field: Scientists have a natural tendency to stick to what they know already. If the promise of a research area declines, they need a way to get out, otherwise you’ll end up investing money into dying fields. Therefore, offer reeducation support, 1-2 year grants that allow scientists to learn the basics of a new field and to establish contacts. During that period they should not be expected to produce papers or give conference talks.
  • Hire full-time reviewers: Create safe positions for scientists specialized in providing objective reviews in certain fields. These reviewers should not themselves work in the field and have no personal incentive to take sides. Try to reach agreements with other institutions on the number of such positions.
  • Support the publication of criticism and negative results: Criticism of other people’s work or negative results are presently underappreciated. But these contributions are absolutely essential for the scientific method to work. Find ways to encourage the publication of such communication, for example by dedicated special issues.
  • Offer courses on social and cognitive biases: This should be mandatory for anybody who works in academic research. We are part of communities and we have to learn about the associated pitfalls. Sit together with people from the social sciences, psychology, and the philosophy of science, and come up with proposals for lectures on the topic.
  • Allow a division of labor by specialization in task: Nobody is good at everything, so don’t expect scientists to be. Some are good reviewers, some are good mentors, some are good leaders, and some are skilled at science communication. Allow them to shine in what they’re good at and make best use of it, but don’t require the person who spends their evenings in student Q&A to also bring in loads of grant money. Offer them specific titles, degrees, or honors.

As a science writer or member of the public, ask questions:

  • You’re used to asking about conflicts of interest due to funding from industry. But you should also ask about conflicts of interest due to short-term grants or employment. Does the scientists’ future funding depend on producing the results they just told you about?
  • Likewise, you should ask if the scientists’ chance of continuing their research depends on their work being popular among their colleagues. Does their present position offer adequate protection from peer pressure?
  • And finally, like you are used to scrutinize statistics you should also ask whether the scientists have taken means to address their cognitive biases. Have they provided a balanced account of pros and cons or have they just advertised their own research?

You will find that for almost all research in the foundations of physics the answer to at least one of these questions is no. This means you can’t trust these scientists’ conclusions. Sad but true.


Reprinted from Lost In Math by Sabine Hossenfelder. Copyright © 2018. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.

Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions, or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime?? Anyway, they then observed that metals did something when heated in air:

M + air + heat  ⇒  M(calx) + ???  (A calx was what we call an oxide.)

They deduced there had to be a metallic principle that gives the metallic properties, such as ductility, lustre, malleability, etc., but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture; thus rust, unlike iron, is not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint you get green, and they cannot readily be unmixed, nevertheless it is a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials. Even more important was the work of Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalised that acid plus metal, or acid plus metal calx, gave salts, and each salt was specifically different, depending only on the acid and metal used. He also recognised that what we call chemical compounds were individual entities, which could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (but some did not), and the calx was usually heavier than the metal. The theory was that the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was apparently lighter than the metal. For consistency, they asserted it should have gained weight, but as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have gained was correct, and indeed it did, but this avoidance of the “data you don’t like” leads to many problems, not the least of which is “inventing” reasons why observations do not fit the theory without taking the trouble to abandon the theory. This time they were right, but that only encourages the practice. As to why there was a problem at all, zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?

Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases, could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back from where it came that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of this theory. He then went wrong by arguing the more phlogiston, the more flammable the body, which is odd, because if he said some but not all such materials could release phlogiston, he might have thought that some might release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He also realized that respiration was essentially the same process, and fire or breathing releases phlogiston, to make phlogisticated air, and he also realized that plants absorbed such phlogiston, to make dephlogisticated air.

For those that know, this is all reasonable, but it happens to be a strange mix of good and bad conclusions. The big problem for Stahl was that he did not know “air” was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships for which numerical values can be attached, and at least some of these are known in advance. Stahl’s theory was interesting because it got chemistry going in a systematic way, but because we don’t believe in phlogiston, Stahl is essentially forgotten.

People have blind spots. Priestley also carried out Lavoisier’s experiment: 2HgO + heat ⇌ 2Hg + O2, and found that mercury was lighter than the calx, so he argued phlogiston was lighter than air. He knew there was a gas there, but the fact it must also have weight eluded him. Lavoisier’s explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion, the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get “exactly”. Measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded it.
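With modern molar masses, which neither Priestley nor Lavoisier had, the mass balance of that decomposition can be written out explicitly; it is simply an illustration of the conservation Lavoisier asserted:

```latex
% Decomposition of mercuric oxide, with modern molar masses (g/mol):
\[
  2\,\mathrm{HgO} \;\rightarrow\; 2\,\mathrm{Hg} + \mathrm{O_2}
\]
% 2 x 216.59 = 433.18 g of HgO gives 2 x 200.59 = 401.18 g of Hg plus
% 32.00 g of O2, and 401.18 + 32.00 = 433.18 g: mass is conserved, and the
% weight lost by the solid is exactly the weight of the gas released.
```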

So, looking back, did phlogiston have merit as a concept? Most certainly! “The metal gives off a weightless substance that sticks to a particular gas” can be replaced with “the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion; opposite charges attract and try to bind together.” This is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could determine the weight difference between a metal and a metal less one electron, even if they could have worked out how to make it. Of course the next step is to say that the phlogiston is a discrete particle, and now valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. There are very few ever that are comparable to Newton.

Is Science in as Good a Place as it Might Be?

Most people probably think that science progresses through all scientists diligently seeking the truth, but that illusion was shattered when Thomas Kuhn published “The Structure of Scientific Revolutions.” Two quotes:

(a) “Under normal conditions the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition.”

(b) “Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change. And perhaps that point need not have been made explicit, for obviously these are the men who, being little committed by prior practice to the traditional rules of normal science, are particularly likely to see that those rules no longer define a playable game and to conceive another set that can replace them.”

Is that true, and if so, why? I think it follows from the way science is learned and then funded. In general, scientists gain their expertise by learning from a mentor, and if you do a PhD, you work for several years in a very narrow field, and most of the time the student follows the instructions of the supervisor. He will, of course, discuss issues with the supervisor, but basically the young scientist will have acquired a range of techniques when finished. He will then go on a series of post-doctoral fellowships, generally in the same area, because he has to persuade the new team leaders he is sufficiently skilled to be worth hiring. So he gains more skill in the same area, but invariably he also becomes more deeply submerged in the standard paradigm. At this stage of his life, it is extremely unusual for the young scientist to question whether the foundations of what he is doing are right, and since most continue in this field, they have the various mentors’ paradigm well ingrained. To continue, either they find a position in a company or other organization to get an income, or they stay in a research organization, where they need funding. When they apply for it they keep well within the paradigm; first, it is the easiest way to succeed, and also boat rockers generally get sunk right then. To get funding, you have to show you have been successful; success is measured mainly by the number of scientific papers and the number of citations. Accordingly, you choose projects that you know will work and should not upset any apple-carts. You cite those close to you, and they will cite you; accuse them of being wrong and you will be ignored, and with no funding, tough. What all this means is that the system seems to have been designed to generate papers that confirm what you already suspect. There will be exceptions, such as “discovering dark matter”, but all that has done so far is to provide a parking place for what we do not understand. Because we do not understand, all we can do is make guesses as to what it is, and the guesses are guided by our current paradigm, and so far our guesses are wrong.

One small example follows to show what I mean. By itself, it may not seem important, and perhaps it isn’t. There is an emerging area of chemistry called molecular dynamics. What this tries to do is work out how energy is distributed in molecules, as this distribution alters chemical reaction rates, and that can be important for some biological processes. One such feature is to try to relate how molecules, especially polymers, can bend in solution. I once went to hear a conference presentation where this was discussed, and the form of the bending vibrations was assumed to be simple harmonic, because for that the maths are simple, and anything wrong gets buried in various “constants”. All question time was taken up by patsy questions from friends, but I got hold of the speaker later and pointed out that I had published a paper a long time previously showing the vibrations were not simple harmonic, although that was a good approximation for small vibrations. The problem is that small vibrations are irrelevant if you want to see significant chemical effects; those come from large vibrations. Now, the “errors” can be fixed with a sequence of anharmonicity terms, each with their own constant, and each constant is worked around until the desired answer is obtained. In short, you get the answer you need by adjusting the constants.

The net result is, it is claimed that good agreement with observation is found once the “constants” are found for the given situation. The “constants” appear to be constant only for a given situation, so arguably they are not constants at all, and worse, it can be near impossible to find out what they are from the average paper. Now, there is nothing wrong with using empirical relationships, since if they work, they make it a lot easier to carry out calculations. The problem starts when, if you do not know why a relationship works, you may use it under circumstances where it no longer works.
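As an illustration of the general point (not of the specific work discussed at that conference), here is a minimal sketch comparing a harmonic potential with a Morse potential, a common anharmonic model; the well depth, width and bond length below are made-up values chosen only to show the behaviour:

```python
# Compare a harmonic fit with a Morse potential to see where the
# simple-harmonic approximation breaks down at large displacement.
import numpy as np

D_e = 4.5   # hypothetical well depth (eV)
a = 1.9     # hypothetical Morse width parameter (1/angstrom)
r_e = 1.0   # hypothetical equilibrium bond length (angstrom)

def morse(r):
    """Morse potential energy relative to the minimum."""
    return D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

def harmonic(r):
    """Harmonic approximation with the same curvature at the minimum (k = 2*D_e*a**2)."""
    k = 2.0 * D_e * a ** 2
    return 0.5 * k * (r - r_e) ** 2

for dr in (0.05, 0.1, 0.3, 0.5):
    r = r_e + dr
    print(f"dr = {dr:4.2f} A: Morse = {morse(r):6.3f} eV, harmonic = {harmonic(r):6.3f} eV")

# The two agree closely for small displacements and diverge badly for large
# ones - exactly the regime the text says matters for chemical effects.
```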

Now, before you say that surely scientists want to understand, consider the problem for the scientist: maybe there is a better relationship, but changing to it would involve re-writing a huge amount of computer code. That may take a year or so, in which time no publications are generated, and when the time for applications for further funding comes up, besides having to explain the inactivity, you have to explain why you were wrong before. Who is going to do that? Better to keep cranking the handle, because nobody is going to know the difference. Does this matter? In most cases, no, because most science involves making something or measuring something, and most of the time it makes no difference, and also most of the time the underpinning theory is actually well established. The NASA rockets that go to Mars very successfully go exactly where planned using nothing but good old Newtonian dynamics, some established chemistry, some established structural and material properties, and established electromagnetism. Your pharmaceuticals work because they have been empirically tested and found to work (at least most of the time).

The point I am making is that nobody has time to go back and check whether anything is wrong at the fundamental level. Over history, science has been marked by a number of debates, and a number of treasured ideas overthrown. As far as I can make out, since 1970, far more scientific output has been made than in all previous history, yet there have been no fundamental ideas generated during this period that have been accepted, nor have any older ones been overturned. Either we have reached a stage of perfection, or we have ceased looking for flaws. Guess which!

Is science being carried out properly?

How do scientists carry out science, and how should they? These are questions that have been raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is “resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence.” What does that mean? In my opinion, it is along the lines of: to those who have, much will be given. “Much” here refers to much of what is available. Government funding can be tight. And in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards projects that are difficult to fail, except maybe for some very large projects, like the Large Hadron Collider. The most important thing required is that something is measured, and that that something is more or less understandable and acceptable to a scientific journal, for that is a successful result. In some cases the question, “Why was that measured?” would best be answered, “Because it was easy.” Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson, and supersymmetry particles. They found the first, and I suppose when the question of building the collider came up, the reference (totally not apt) to the “God Particle” did not hurt.

However, while getting research funding for things to be measured is difficult, getting money for analyzing what we know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, and particularly for analyzing what we know. We are in this quite strange position that while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten.

Theory is thus stagnating. With the exception of cosmic inflation, there have been no new major theories that have taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course this may merely be due to the fact that new theories have been proposed, but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Not very encouraging. Another reason may be that such theories failed to draw attention to themselves. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you define carefully what you do not know exists?

Further information from this Science article provided some interest. An investigation led to what non-scientists might consider a highly odd result, namely that for scientific papers to be a hit, usually at least 90 per cent of what is written has to be well established. Novelty might be prized, but unless well mixed with the familiar, nobody will read it, or even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach, but rather anything new will be a tweak on what is established. To add to this, a study of “star” scientists who had premature deaths led to an interesting observation: the output of their collaborators fell away, which indicates that only the “star” was contributing much intellectual effort, and probably actively squashing dissenting views, whereas the new entrants to the field who started to shine after the “star” died tended not to have done much in that field beforehand.

A different reviewer noticed that many scientists put in very little effort to cite past discoveries, and when they do cite the literature, the most important citations are to papers about five years old. There will be exceptions, usually through citing papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any subject illumination. Another reviewer noted that scientists appeared to be narrowly channelled in their research by the need to get recognition, which requires work familiar to the readers and reviewers, particularly those who review funding applications. The important thing is to keep up an output of “good work”, and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn’s picture of science, where the new activities are generally puzzles to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming what would have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Have you got what it takes to form a scientific theory?

Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data and the interpretation of what everyone thinks that set of data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests the data should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form the rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which allegedly established interpretation is in fact wrong. It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. Why I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don’t, I like to think there is still an interesting story there.

From September 13 – 20, my novel Athene’s Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier as regards the actual story, rather than this problem. In it, my fictional character, Gaius Claudius Scaevola, is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from “fake news” and a bad subsequent press. First the nickname: no Roman would call him Caligula, because even his worst enemies would recognise he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great grandfather etc. of the one you recognise. Caligulae was also Gaius Julius Caesar. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means “lefty”. It is less clear what Caesar means, because in Latin the “ar” ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means “hairy”. Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation, the Latin “C” is the equivalent of the Greek gamma, so it is pronounced as a “G” or “K” – the difference is small and we have no way of knowing which. “ae” is pronounced as in “pie”. So Caesar is pronounced something like the German Kaiser.

Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was only personally responsible for thirteen executions, and he survived three failed coup attempts on his life, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this is prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators.

He is accused of making a fatuous invasion of Germany. Actually, the evidence is he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is actually a text-book account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes described as decrepit ships. Whatever the reason, Caligulae gave them the choice between “conquering Neptune” by collecting a mass of sea shells, invading Britain, or facing decimation. They collected sea shells. The exercise was not madness: it was a total humiliation for the two legions to have to carry these through Rome in the form of a “triumph”. This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact he despised many senators inevitably led to a bad press, because it was the senatorial class that wrote the histories, but like a certain president, he seemed to go out of his way to encourage the bad press. However, he was not seen as a tyrant by the masses. When he died, the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency and, of course, to report it in English.

As for showing how scientific theory can be developed, in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when theory is applied, all major theories start with relatively simple concepts. Take quantum mechanics as an example of a reasonably difficult piece of theoretical physics. To get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. The mathematician Hamilton had already shown you can manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) for a wavelength was constant, specifically in units of h (Planck’s quantum of action). All that Schrödinger had to do was to manipulate Hamilton’s waves and ensure that the action came in units of h per wavelength. That may seem easy, but everything had been present for some time before Schrödinger put it together. Coming up with an original concept is not at all easy.
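For those who want the step written out, here is a compressed sketch in modern notation; it is not Schrödinger’s actual route, which went through the Hamilton-Jacobi formalism, just the kernel of the idea:

```latex
% de Broglie: one quantum of action h per wavelength,
\[
  p\,\lambda = h \quad\Longleftrightarrow\quad p = \frac{h}{\lambda} = \hbar k .
\]
% Insert that momentum into the classical energy E = p^2/2m + V and
% represent the particle by a wave \psi; the result is the
% time-independent Schroedinger equation:
\[
  -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi = E\psi .
\]
```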

Anyway, in the novel, Scaevola has to prove the Earth goes around the sun with what was available then. (No telescopes, which helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage of knowing it does. (And no, you do not have to invent calculus or Newtonian mechanics.)

The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots there, and then the religious problems of Judea as Christianity starts.

Origin of the Rocky Planet Water, Carbon and Nitrogen

The most basic requirement for life to start is a supply of the necessary chemicals, mainly water, reduced carbon and reduced nitrogen, on a planet suitable for life. The word reduced means the elements are at least partly bound to hydrogen. Methane and ammonia are reduced, but so are hydrocarbons, and amino acids are at least partly reduced. The standard theory of planetary formation has it (wrongly, in my opinion) that none of these are found on a rocky planet as it forms, and so they have to come from either comets or carbonaceous asteroids. So, why am I certain this is wrong? There are four requirements that must be met: first, the material delivered must be the same as the proposed source; second, it must come in the same proportions as the source; third, the delivery method must leave the solar system as it is now; and fourth, other things that should have happened must have happened.

As it happens, oxygen, carbon, hydrogen and nitrogen are not the same throughout the solar system. Each exists in more than one isotope (different isotopes have different numbers of neutrons), and the mix of isotopes in an element varies with radial distance from the star. Thus comets from beyond Neptune have far too much deuterium compared with hydrogen. There are mechanisms by which you can enhance the D/H ratio, such as UV radiation breaking bonds involving hydrogen, with the hydrogen escaping to space. Bonds to deuterium are effectively several kJ/mol stronger than bonds to hydrogen; the electronic bond strength is actually the same, but the lighter hydrogen has more zero-point energy, so its bonds break more easily and it is preferentially lost to space. So while you can increase the deuterium to hydrogen ratio, there is no known natural way to decrease it. The comets around Jupiter also have more deuterium than our water, so they cannot be the source. The chondrites have the same D/H ratio as our water, which has encouraged people to believe that is where our water came from, but the nitrogen in the chondrites has too much 15N, so they cannot be the source of our nitrogen. Further, the isotope ratios of certain heavy elements such as osmium do not match those on Earth. Interestingly, it has been argued that if the material was subducted and mixed in the mantle, a match would be just possible. Given that the mantle mixes very poorly and the main sources of osmium now come from very ancient plutonic intrusions, I have doubts about that.
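To make the zero-point energy point concrete, here is a minimal sketch in the harmonic approximation, assuming an X–H bond in which X is much heavier than hydrogen or deuterium:

```latex
% Zero-point energy of a bond treated as a harmonic oscillator:
\[
  E_{\mathrm{ZPE}} = \tfrac{1}{2}\hbar\omega, \qquad
  \omega = \sqrt{k/\mu}, \qquad
  \mu_{\mathrm{D}} \approx 2\,\mu_{\mathrm{H}}
  \;\Rightarrow\; \omega_{\mathrm{D}} \approx \omega_{\mathrm{H}}/\sqrt{2}.
\]
% The electronic well depth D_e is identical for X-H and X-D, but the
% observed bond (dissociation) energies D_0 = D_e - E_ZPE differ:
\[
  D_0(\mathrm{X{-}D}) - D_0(\mathrm{X{-}H})
    = \tfrac{1}{2}\hbar\omega_{\mathrm{H}}\bigl(1 - \tfrac{1}{\sqrt{2}}\bigr),
\]
% which comes to a few kJ/mol for typical X-H stretching frequencies, so the
% hydrogen-bearing molecule is the one that breaks and escapes more readily.
```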

If we look at the proportions: if comets delivered the water or carbon, we should have five times more nitrogen, and twenty thousand times more argon. Comets from the Jupiter zone get around this excess by having no significant nitrogen or argon, and insufficient carbon. For chondrites, there should be four times as much carbon and nitrogen to account for the hydrogen and chlorine on Earth. If these volatiles did come from chondrites, Earth has to have been struck by at least 10^23 kg of material (that is, one followed by 23 zeros). Now, if we accept that these chondrites did not have some steering system, then based on area the Moon should have been struck by about 7×10^21 kg, which is approximately 9.5% of the Moon’s mass. The Moon does not subduct such material, and the moon rocks we have found have exactly the same isotope ratios as Earth. That mass of material is just not there. Further, the lunar anorthosite is magmatic in origin and hence primordial for the Moon, and would retain its original isotope ratios, which should give a set of isotopes that do not involve the late veneer, if it occurred at all.
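Those numbers can be checked with a quick back-of-the-envelope calculation, using simple geometric cross-sections and ignoring gravitational focusing; the radii and lunar mass below are standard values, not from any particular paper:

```python
# Rough cross-section scaling to check the figures above.
R_earth = 6371e3   # m
R_moon = 1737e3    # m
M_moon = 7.35e22   # kg

late_veneer_earth = 1e23  # kg of chondritic material striking Earth (as quoted)

area_ratio = (R_earth / R_moon) ** 2          # ~13.5
moon_share = late_veneer_earth / area_ratio   # ~7e21 kg
print(f"Earth/Moon cross-section ratio: {area_ratio:.1f}")
print(f"Implied mass striking the Moon: {moon_share:.1e} kg "
      f"({100 * moon_share / M_moon:.0f}% of the Moon's mass)")
# This reproduces the ~7 x 10^21 kg, roughly a tenth of the lunar mass.
```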

The third problem is that we are asked to believe that there was a narrow zone in the asteroid belt that showered a deluge of asteroids onto the rocky planets, but for no good reason they did not accrete into anything there, and while this was going on they did not disturb the asteroids that remain, nor did they disturb or collide with the asteroids closer to the star, which is where most of them now are. The hypothesis requires a huge number of asteroids to have formed in a narrow region for no good reason. Some argue the gravitational effect of Jupiter dislodged them, but the orbits of such asteroids ARE stable. Gravitational acceleration is independent of the body’s mass, and the remaining asteroids are quite untroubled. (The Equivalence Principle – all bodies fall at the same rate, other than when air resistance applies.)
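The point in the parentheses can be written in one line; this is ordinary Newtonian gravity, with nothing specific to the asteroid belt assumed:

```latex
% Gravitational force on a small body of mass m at distance r from a large
% mass M (the Sun, or Jupiter):
\[
  F = \frac{GMm}{r^{2}}
  \quad\Rightarrow\quad
  a = \frac{F}{m} = \frac{GM}{r^{2}} ,
\]
% so the acceleration does not depend on m: a large asteroid and a small one
% on the same orbit respond to Jupiter in exactly the same way.
```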

Associated with this problem is the fact that there are a number of elements, like tungsten, that dissolve in liquid iron. The justification for this huge barrage of asteroids (called the late veneer) is that when Earth differentiated, the iron would have dissolved these elements and taken them to the core. However, they, and iron, are here, so it is argued something must have brought them later. But wait. For the isotope ratios, this asteroid material has to be subducted; for the elements to be on the continents, it must not be subducted. We need to be self-consistent.

Finally, what should have happened? If all the volatiles came from these carbonaceous chondrites, the various planets should have the same ratio of volatiles, should they not? However, the water/carbon ratio of Earth appears to be more than two orders of magnitude greater than that originally on Venus, while the original water/carbon ratio of Mars is unclear, as neither is fully accounted for. The N/C ratios of Earth and Venus are about 1% and 3.5% respectively, while that of Mars is roughly two orders of magnitude lower. Thus if the atmospheres came from carbonaceous chondrites:

  • Only the Earth is struck by large wet planetesimals;
  • Venus is struck by asteroidal bodies or chondrites that are rich in C, especially rich in N, and approximately three orders of magnitude drier than the large wet planetesimals;
  • Either Earth is struck by a low proportion of relatively dry asteroidal bodies or chondrites that are rich in C and especially rich in N, plus large wet planetesimals having moderate levels of C and essentially no N, or the very large wet planetesimals have moderate amounts of carbon and lower amounts of nitrogen than the dry asteroidal bodies or chondrites, and Earth is not struck by the bodies that struck Venus;
  • Mars is struck only infrequently by a third type of asteroidal body or chondrite that is relatively wet but very nitrogen deficient, and this type does not strike the other bodies in significant amounts;
  • The Moon is struck by nothing.

See why I find this hard to swallow? Of course, these elements had to come from somewhere, so where? That is for a later post. In the meantime, see why I think science has at times lost hold of its methodology? It is almost as if people are too afraid to go against the establishment.