Voting for Aleppo

With the US elections coming up, and enduring chaos in the Middle East, we have to ask: which is the better candidate to sort that out? I am not exactly enthralled by either of them. First, the scenario. As is well known, the US has supported some “moderate” rebels who have the aim of ousting Assad in Syria, and is supporting the retaking of Mosul, apparently with air power. Russia is supporting Assad in Syria. Turkey wants to deal to ISIS in Mosul. This should have a straightforward ending, right?

First, look at religion. Assad has support from the Shia, including Hezbollah and Iran. The rebels include some moderates, but the important ones are associated with al Qaeda. So here, Russia is supporting the Shia; the US is trying to protect Sunnis, and with them, terrorists. Assad himself operated a secular government, as did Saddam, so an alien observer would presumably conclude that the US is against secular governments, which makes little sense. In Iraq, the US is supporting the Shia government to take out the fanatical ISIS, and will use air power to bomb the terrorists in Mosul. Or at least, that is a somewhat oversimplified account of what might happen. So, what do the candidates say?

Trump seemingly has no clearly defined policy here. That is not necessarily a bad thing. Leaving the place to sort itself out is a legitimate policy, and may be as good as any. The problem, though, is ISIS and al Qaeda. Left to their own devices, they are hardly likely to be beneficial. A second problem is that it appears Trump has not thought about this at all.

Clinton, however, is in my mind just outright dangerous. She has announced she will have a no-fly zone over Aleppo. That raises many questions. First, why? Does she want to nourish al Qaeda, who, as an aside, have killed far more innocent Western citizens than ISIS has? The nominal reason is to protect innocent civilians, but there are more of them in Mosul, which the US intends to bomb, and, of course, there was “shock and awe”, which led to a very large number of dead Iraqis.

However, killing some innocent civilians, bad as that most certainly is, is not my worst worry. What does she do if the Russian air force continues bombing? Does she order the shooting down of Russian warplanes, which happen to be over the territory of, and at the invitation of, a sovereign government? If so, what is the justification? Because we can? That is a rather slippery slope. The US is actually one of the most warlike countries on the planet, and has been at war most of the time since 1890. It has been able to do this because the war is always “elsewhere”, and the enemy cannot do any damage to the US itself. The result is that much of the population is unaffected by such wars. That does not give the US the right to shoot anyone it feels like, though. Russia has two choices: bow down before America, or ignore the threat.

If Russia bows down, then that establishes a precedent. Everybody expects that to happen again, and that (in Russian eyes) encourages America to do more or less what it likes. It can order Russia out of the Crimea. Now what? Bow down again? Where does it stop?

If the Russians keep bombing, and the US shoots down at least one Russian warplane, now what? Russia can either bow down, or fight back. We don’t know the Russian capability, but we do know they have some ability, so unlike other recent opposition, Russia might shoot down an American plane, or attack an American base nearby. If the American aircraft came off a carrier, what happens if the Russians sink the carrier? The Russians have a carrier somewhere nearby, so the US sinks it. Now what?

Suppose the aircraft came from Turkey. Can Russia accept a border across which America can come and shoot them down at will, with no downside? It would be strange if they did. But if Russia attacks the base, that is an attack on a NATO country, which America could use to activate the NATO alliance. At this point we note that Russia could do something more constructive earlier: when Clinton announces the no-fly zone, Russia should announce that any attack on Russia from a NATO country, made as a consequence of Clinton’s announcement, will be taken as a declaration of war by that country on Russia. If the country is the aggressor, that should not trigger the NATO commitment. At the very least, some of the more nervous countries might decide that getting out of NATO is the more desirable option. Maybe not, but Russia should offer the option.

Now, suppose some missiles are launched from the Baltic states, and assume they are conventionally tipped. Now what? The least we can expect is that Russia sends all its motor rifle divisions westwards. Now what? Contrary to what some people think, my guess is that NATO would offer only moderate resistance, and indeed some of the countries would pull out of NATO on the grounds that it was not their fault that Clinton started all this. It is one thing to defend against an attack on one of their allies, but something else to get their ally out of a mess of its own making. Further, unless Russia makes good progress and has the ability to walk away from this with its head held high, there is little room for later negotiation.

It is hard to see such a war remaining conventional. The US simply does not have a big enough army to conquer Russia, which is a very big place, and it is far from clear that American soldiers want to fight multi-year wars in some of the world’s worst climate. One of the two will sooner or later resort to the nuclear option and a lot of us will be turned to ash.

Maybe there are other futures following such an edict. The Russians may surrender and comply with Clinton. After a brief skirmish, both parties might see sense, but do we want to bank on that? World War I was started almost accidentally. Do we want to start WW III?


The start of my career

Following on from last week’s post, getting my PhD did not go quite the way I had envisaged. The night before the oral exam, my fiancée terminated the engagement. Just before the oral exam I had arranged to get a smallpox injection, and I was not prepared for the rapidity of what came next. At the end of the exam I was sweating, and not because of the questions. I went to my car to find my left arm all but paralysed, and I needed it to change gears. Then I found out my mother was going into hospital for breast cancer. (The good news: the treatment worked, she lived for quite a lot longer, and she did not die of cancer.)

So, when I went to North America for a post-doc, being lonely, I had time to work on a paper on strained molecules, and to explain exactly what I thought was going on. Because this was my first scientific paper, I got it wrong. No, not the work, but the presentation. Unless you were a big name, there were strict page limits on what a “newbie” could expect to get published. What I aimed for was to present a general method for calculating the properties of strained molecules that were dependent on the strain. In the associated post, for those interested, I shall give a description of the science as I see it, but for those not so interested, basically what I wanted to present was a means not only of accounting for what we already knew, but also an approach that would make predictions for molecules that were yet to be measured, or even made.

The first draft was too long. It involved four parts: a general discussion of strain and the philosophy of the approach, the calculating method, a new proposed method for estimating the strain in a molecule, then the results of the calculations in terms of the ring bending strain in molecules and the resultant dipole moments. I had to shorten the paper so what I did was to cut out most of the first part, because it was “obvious”. It was obvious to me, but it appears nobody else could see it. What I should have done was to cut out the method for estimating the strain in molecules, and submit that as a separate paper because it could stand on its own and it had merit outside the paper I was writing. As I recall, the peer reviewers let it through without comment.

I wrote a series of further papers, and these showed that the properties of strained systems adjacent to charge or unsaturation were properly explained simply through standard electromagnetic theory; no special quantum effects were required. More importantly, the predictions of parts of the electromagnetic explanation were opposite to those of the quantum delocalization explanation. Thus, while both theories predicted that adjacent positive charge would be stabilized, only the electromagnetic explanation predicted that negative charge would be destabilized, so there was a clear and distinct difference between the two possible explanations. A limited number of spectral transitions could be used to separate the two theories, and where electrons moved towards the strain in the excited state, I predicted a shift in the opposite direction to that required by delocalization. Even more importantly, experimental results showed I had calculated the shift almost exactly. Standard theory could not even get the direction of the shift right, so in principle the delocalization theory was falsified.

So, what happened? Two reviews came out, asserting the delocalization theory was right. End of story. How did that happen? First there was a review restricted to spectra. It simply dismissed the few examples, such as those I had identified, that gave opposite effects for the two theories, on the grounds that these compounds were not very important!! Real science in action??? There was no mention of my paper on the subject, although it is possible that when that author submitted his review my paper was not yet in print.

The second “authoritative review” was much worse because, for no good reason, it ignored all my work. Worse than that, much later I tried to write a counter-review, carrying out a logic analysis of all the data then available. What I found was that there were about sixty different types of observations that were not in accord with the delocalization theory as generally presented. Now, some of those might have been explained away, as there are always “special circumstances”, but sixty is an awful lot. That review could not get published. Reason: the journals “did not publish logic analyses”! So, what did this “authoritative review” do about such papers? Simple. It ignored those as well. Basically, it found what it thought agreed with what it wanted (and some of that was debatable) and ignored what it did not want. That, to me, is not real science.

The clincher, in the “authoritative review”, was that molecular orbital computations proved the cyclopropyl ring did delocalize! Quantum mechanics is obviously right, so this must be right. The problem here, of course, is that quantum mechanics produces equations that cannot be solved for systems like this, so all sorts of approximations have to be made. Such computations prove nothing; they might predict something, and in this case they accounted for what we knew, but how? As an aside, calculations from exactly the same school of computations (i.e., the same programs were used) proved the stunning additional stability of polywater.

Never heard of polywater? That was a blot on science. Tiny amounts of water were collected by distilling them through microfine quartz capillaries, and the water had a much higher boiling point, about 30% higher density, and a much higher viscosity. (That sample had dissolved silica.) It was shown to have a different infrared spectrum from that of water (later found to be the spectrum of sweat – another lowlight!)

To illustrate why I distrust computational chemistry, much later a new form of delocalization was proposed. The first such paper (Dewar, M. J. S. J. Am. Chem. Soc. 1984, 106, 669-682) argued that if bond bending is simple harmonic then U(bend) = kθ², with k the carbon-carbon bond bending force constant and θ the angle of deformation from the tetrahedral angle, in which case the strain energy of cyclopropane should be about 437 kJ/mol. It is actually about 120, so he calculated, using molecular orbital theory, a proposed σ-aromaticity to correct this, and came up with almost exact agreement with observation. Shortly after, there was a counter (Cremer, D.; Kraka, E. J. Am. Chem. Soc. 1985, 107, 3800-3810). This paper argued the force constant used was wrong, and the strain should be 313 kJ/mol. Then, using exactly the same variant of molecular orbital theory, they calculated a σ-aromaticity of 200 kJ/mol, again claiming exact agreement with observation. It is impossible (at least for me) to work out where the difference lay in the two sets of computations. (My equation for calculating the strain had no excess strain, and the function was proportional to sin θ. There is a big difference in functionality between sin θ and θ².)
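To get a feel for the disagreement, here is a back-of-envelope sketch in Python. The 437 and 120 kJ/mol figures are the ones quoted above; the 5-degree reference point is an arbitrary illustrative choice, and none of the papers' actual normalization constants are reproduced, so this only illustrates why the choice between θ² and sin θ matters at large deformations:

```python
import math

# Numbers quoted above: Dewar's harmonic (k*theta^2) extrapolation gives
# ~437 kJ/mol for cyclopropane; the observed strain energy is ~120 kJ/mol.
U_harmonic = 437.0
U_observed = 120.0
overshoot = U_harmonic / U_observed
print(f"harmonic estimate overshoots observation by {overshoot:.1f}x")

# Functional-form comparison: scale a theta^2 form and a sin(theta) form
# to agree at a small (5 degree) deformation, then compare how much each
# has grown at cyclopropane's ~49.5 degree deformation from tetrahedral.
theta = math.radians(109.47 - 60.0)
small = math.radians(5.0)
growth_harmonic = theta**2 / small**2
growth_sine = math.sin(theta) / math.sin(small)
print(f"theta^2 grows {growth_harmonic / growth_sine:.1f}x faster than sin(theta)")
```

The point is simply that a quadratic form extrapolated from small-angle force constants blows up far faster than a sine form over a 49.5-degree deformation, which is why the two camps could reach such different "excess strain" figures before any molecular orbital correction was applied.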

So, am I a sore loser? Should I just go away and rejoin the mainstream? Well, I am not entirely alone in this. One of the best current mathematicians and theoretical physicists, Roger Penrose, has just written a book called “Fashion, Faith, and Fantasy in the New Physics of the Universe”. For him, the success of quantum mechanics makes physicists insensitive to the theory’s conceptual problems and generates an unjustified degree of faith in its so-called “basic principles”. Interestingly enough, he cites chemistry as an example of quantum mechanics’ impressive record. One of the problems with computational chemistry is that the equations are insoluble. In his Nobel lecture, Pople introduced the process of validation, which involved “the optimization of four parameters from 299 experimentally derived energies”. This sets the parameters to be used on similar molecules, and as far as I am concerned is not much better than an empirical relationship. What happens if the nature of the molecules changes? It appears they re-validate. When Moran et al. (J. Am. Chem. Soc. 2006, 128, 9342-9343) used some computational programs then commonly in place and available as packages for the general chemist, they found that when they applied them to molecules that had been satisfactorily computed earlier, they got wrong answers. The re-validation had lost the ability to calculate the earlier molecules, and, as an aside, the errors were dramatic: benzene was no longer planar.

That is why I am less than impressed with modern science. It is too ridden with fashion. The reason? I believe it is the funding mechanism. Nobody wants to go back over accepted material. If the material were yours, you would have trouble getting funding next time. If it were someone else’s, that person had better not be a peer reviewer. Better to go with the flow. My trouble was, I could not get myself to do that.

At the time of the cyclopropane issue, there was another young chemist who carried out an experiment that came to a similar conclusion: there was no delocalization. As things settled down, he abandoned the area, got into a “hot” area, and eventually became very prestigious. I asked him about that paper once, and why he left it alone. “Oh that,” he said, with the air of someone who wished I would go away. “That was just . . . ” and he gave a dismissive shrug. Even if you are right, it is better career-wise to forget it and get back into the flow. Unfortunately for me, that was not the science I signed up for so long ago.

Chemical effects from strained molecules

The major evidence supporting the proposition that cyclopropane permits electron delocalization was that, like ethylene, it stabilizes adjacent positive charge, and it stabilizes the excited states of many molecules when the cyclopropane ring is adjacent to the unsaturation. My argument was that the same conclusions arise from standard electromagnetic theory.

Why is the ring strained (i.e. at a higher energy)? Either the molecule is based on distorted ordinary C–C bonds (the “strain” model) or it involves a totally new form of bonding (required for delocalization). If we assume the strain model, the orbitals around carbon are supposed to be at 109.5 degrees to each other, but the angle between nuclei is 60 degrees. The orbitals try to follow the reduced angle, but as they move inwards, there is increased electron-electron repulsion, and that is the source of the strain energy. That repulsion “lifts” the electrons up from the line between the nuclei to form a “banana” shaped bond. Of the three ring atoms, each with two such orbitals, four of those orbitals come closer to a substituent when the bonds are bent, while the two on the atom to which the substituent is attached maintain a more or less similar distance, because their movement is more rotational.

If so, the strained system should be stabilized by adjacent positive charge. Those four orbitals are destabilized by the electron repulsion from other electrons in the ring; the positive charge gives the opposite effect by reducing the repulsion energy. Alternatively, if four orbitals move towards a substituent carrying positive charge, then as they come closer to a point, the negative electric field is stronger at that point, in which case positive charge is stabilized. The problem is to put numbers on such a theory.

My idea was simple. The energy of such an interaction is stored in the electric field, and therefore it is the same for any given change of electric field, irrespective of how the change of field is generated. Suppose you were sitting on the substituent with a means of measuring the electric field, and the electrons were on the other side of a wall. You see an increase in electric field, but what generates it? It could be that the electrons have moved closer, and work is done by their doing so (because the change of field strength requires a change of energy), OR the electrons could have stayed in the same place while charge was added, in which case the work corresponding to the strain energy would be done by adding the charge. There is, of course, no added charge, BUT if you pretend there is, it makes the calculation that relates the strain energy to the effects on adjacent substituents a lot simpler. The concept is a bit like using centrifugal force in an orbital calculation. Strictly speaking, there is no such force – if you use it, it is called a pseudoforce – but it makes the calculations a lot easier. The same applies here: if the change of electric field is represented as due to a pseudocharge, there is an analytic solution to the integration. One constant still has to be fixed, but fix it for one molecule and it applies to all the others. This gave an alternative reason why adjacent positive charge is stabilized, and my calculation came very close to the experimental value. So far, so good.

The UV spectra could also be easily explained. From Maxwell’s electromagnetic theory, to absorb a photon and form an excited state, there has to be a change of dipole moment, so as long as the positive end of the dipole can be closer to the cyclopropane ring than the negative end, the excited state is stabilized. More importantly, when this effect was applied to various systems, the changes due to different strained rings were proportional to my calculated changes in electric field at substituents. Very good news.

If positive charge were stabilized due to delocalization, so should negative charge be, but if it were due to my proposed change of electric field, then negative charge should be destabilized. This is where the wheels fell off, because a big name published a paper asserting negative charge was stabilized (Maerker, A.; Roberts, J. D. J. Am. Chem. Soc. 1966, 88, 1742-1759). They reported numerous experiments in which they tried to make the required anion, and they all failed. Not exactly a great sign of stabilization. When they used a method that could not fail, the resultant anion rearranged. That is also not a great sign of stabilization, but equally it does not necessarily show destabilization, because stabilization could be there and yet the anion changes to something even more stable.

Their idea of a clinching experiment was to make an anion adjacent to a cyclopropane ring and two benzene rings. The anion could be made provided potassium was the counterion. How that got through peer review I have no idea, because that anion would be far less stable than the same anion without the cyclopropane ring. Even one benzene ring adjacent to an anion is well known to stabilize it. The reason why potassium was required was that the large cation could not get near the nominal carbon atom carrying the charge, and that allowed the negative charge to be delocalized away from the cyclopropane. If lithium were used, it would get closer, and focus the negative charge closer to the cyclopropane ring. This was a case of a big name being able to publish just about anything, and everyone believed him because they wanted to.

Which is all very well, but it is one thing to argue an experiment could have been interpreted some other way; that alone is hardly conclusive. However, there was an “out”. The very lowest frequency ultraviolet spectral absorptions of carbonyl compounds were found to involve charge moving from the oxygen towards the carbon atom, and the electric moment of the transition had been measured for formaldehyde. My theory could now make a prediction: strained systems should move the transition to higher frequency, whereas if delocalization were applicable, it should move to lower frequency. My calculations got the change of frequency for cyclopropane as a substituent correct to within 1 nm, whereas the delocalization argument could not even get the direction of the shift correct. It also explained another oddity: if there were a highly strained system such as bicyclobutyl as a substituent, you did not see this transition at all. My reason was simple: the signal moved to such a high frequency that it was buried in another signal. So, I was elated.

When my publications came out, however, there was silence. Nobody seemed to understand, or care, about what I had done. The issue was settled; no need to look further. So much for Popper’s philosophy. And this is one of the reasons I am less than enthused at the way alternative theories to the mainstream are considered. However, there is a reason why this is so. Besides the occasional good theory, there is a lot of quite spurious stuff circulating. It is easy to understand why nobody wants to divert their attention from the work required for them to get more funding. Self-interest triumphs.

Does it matter? It does if you want to devise new catalysts, or understand how enzymes work.

A Tale of Two Supervisors

Something scary happened to me the other day. It occurred to me that about now is the fiftieth anniversary of my submitting my PhD thesis, as well as that of my friend, whom I shall call John, in part because that is his name. The two of us had started our PhDs at about the same time, and we finished at about the same time. But the effects were not the same. PhD projects tend to involve somewhat unexciting work, largely because when you set out you have to nominate the title, and the final thesis work has to reflect that title. Accordingly, supervisors tend to come up with projects that cannot fail to end up somewhere, but somewhere is usually very unexciting. John elected to work on chemical transformations of steroids, which, at the time, were considered very important. A steroid is a molecule with four carbon rings fused in a specific configuration (see Wikipedia for a diagram), and steroids were becoming important because they include the sex hormones, and the contraceptive pill was just making its appearance. The different steroids differ in substitution, so a lot of chemistry could be done to see what sort of new steroids could be made. (Much later, this seems to have continued with effort going into anabolic steroids for athletes wanting to cheat, but at the time that was more or less unheard of.)

So John worked on a “hot” topic, and he got lucky. Something quite unusual happened – the so-called “backbone rearrangement” was discovered. This led to a good number of papers being published, and John’s supervisor worked this to the hilt, with good use of the dreaded “letter”, a short paper that effectively lets the same work be reported twice: first in parts, in sketches, and later in detail. John was well on the way to a sound scientific career, with lots of publications in a “hot” area.

At the time of thesis submission, I was highly excited because, while I was not going to get many publications, in many ways against the odds I thought I was making a real contribution to science. My project had a rocky start. I had selected my supervisor-to-be, he had given me a project, and I started a review of what had previously been done. Three and a half weeks later, and a few days before Christmas, I gave him the answer to the project: it had just turned up in the latest edition of the Journal of the American Chemical Society. It was a good project, but we had been scooped. (Actually, this was good. It would have been awful if it had turned up, say, two years into the project, particularly as it then turned out my supervisor had no fall-back position.) So, after a couple of days, he gave me the choice of two projects, and we all went off for Christmas, which is in the summer in New Zealand.

I came back reasonably promptly, and the literature review of these projects was anything but encouraging. One involved a nightmare synthesis that required heating stuff with about 2/3 of a kg of a substance that was highly explosive, the explosion being triggerable by scratches on glass. Then, very small amounts of the desired material could be separated, in principle, from a little under 4 kg of tar. And that only gave the starting material for some rather long and horrible syntheses to follow, which might or might not have worked. The second project was simpler: to measure the rates of chemical reaction of a class of compounds that were at least tolerably easy to make. That looked good until I found a reference showing that the rate was zero. The reaction simply did not go. My supervisor was still on holiday, so when the Head of Department saw me in a somewhat depressed state, he suggested I find a project. That was probably to get rid of me, but I took up the challenge. There was a debate underway in the chemical literature, and I elected to join in. So, that started my PhD. The project was to decide whether the electrons in a cyclopropane ring could delocalize, and I intended to be the first (I wasn’t, as it turned out) to use the Hammett equation to actually solve a problem.

I soon found out the compounds were a lot harder to make than I expected, and I also found out my supervisor was not going to be much help with advice. I had another problem. One of the syntheses was outlined in one of those dreaded “letters”. My problem was, the method did not work on my compounds. Since the original publication was on quite different types of molecules, I thought that was just life. Several years later, the full paper came out and the reason was obvious: in the letter a very key condition was omitted. I did get another quick method to work, but to be useful I needed to order a number of compounds from the US. Eventually, they turned up – three weeks after my thesis was submitted, and apparently after having gone via Hong Kong, Singapore, and spent time in warehouses in each place! So I was back to my rather tortuous fall-back synthesis route.

Shortly after the end of year 1, my supervisor took a year’s sabbatical in the US, so I had to do all my thinking myself. Meanwhile, the debate was settling down, and most had come to the conclusion that there was delocalization; more particularly, some big names were coming down on that side. Somewhere in this period two publications came out relating to the use of the Hammett equation on the carboxylic acids I was making (scooped!). One showed there was no delocalization; the other claimed to show there was. In my view, neither was likely to be correct. In the first there was a huge scatter in the data fit, and I knew the acids were barely soluble in the solvent used. In my opinion, the second contained a logic error (see the explanatory post if interested). Since I intended to work on amines, which were more sensitive to this equation, I was still in business.

That was until I tried to measure my equilibrium constants. Everything went like a charm until I had to use the trans 2-(p-nitrophenyl)cyclopropylamine. I introduced this as the hydrochloride (because hydrochlorides were easier to purify) but when I introduced the buffer to generate the equilibrium, the solution gradually darkened. I got a fast result before the darkening became significant; technically it was consistent with no delocalization, but I did not trust it. Unfortunately, this was the key compound. I was out of business.

It was now that my supervisor made a brief appearance before heading back to North America to try to get a better-paid position. He made the suggestion of reacting the acids with a diazo compound. His stated reason for the suggestion: I needed to pad out the thesis. Nevertheless, this turned out to be a great suggestion because the required low dielectric solvent had the effect of amplifying changes between compounds. And there it was: I had an unambiguous result lying well outside any experimental uncertainty. The answer was no, the cyclopropane ring did not delocalize. Pity that by this time everyone else was starting to believe it did because the big names said it did. I saw my supervisor once more, briefly, then he was off for good. No advice on writing the thesis from him.

Needless to say, I was excited because I had settled a dispute (or so I thought), and more to the point, I had worked out what was really going on. The writing of that in the thesis was a bit obscure, first because I wanted the work to stand on the results, and secondly because I had decided that if my supervisor wanted to get any benefit from my theoretical work, he would have to make some progress himself, or at least talk to me. One of the jobs of a supervisor is to write up scientific papers based on the thesis results; after all, the supervisor puts his name on those papers, so he should do something. (A lot of supervisors, like John’s, do a lot more; this one did not.) After some considerable delay he wrote up one paper, relating to the work on the amines. This was not really conclusive, but I had made some compounds for the first time. What really annoyed me, though, was that he never wrote up the one part to which he had contributed, and which had unambiguously proven the absence of such delocalization. Why not? As far as I could tell, either he did not understand it, or he was too afraid to stick his head above the parapet and challenge the big names.

Needless to say, there is more to this story, but that can wait for another day. Meanwhile, in the post below, I shall try to explain the chemistry for those who are interested.

Chemical Reactivity and the Hammett Equation

This post is being presented as a background explanation for the post above. If you have no interest in chemistry, ignore this post.

The Hammett equation is an empirical relationship that relates the effect of a distant substituent in a molecule on a reactive centre. Such a centre might react with something, or be in equilibrium with another form. An example of the latter could be an acid or an amine, which could be in equilibrium with its ionized form, thus

X–H ⇋ X⁻ + H⁺

Now, further suppose X is part of a molecular structure where some distance away there is a substituent Y, in which case

Y—-X–H ⇋ Y—-X⁻ + H⁺

and —- is the hydrocarbon structure separating X and Y. Now it is observed that different Y can alter the equilibrium position or rate of reaction of the function X, and the reason is, the substituent alters the potential energy of electrons around wherever it is attached, and that alters to a lesser degree the potential energy around the next carbon atom, and so on. Thus the effect attenuates with distance. There are two complicating factors.

The first is what is called (in my opinion, misleadingly) electron delocalization. It is well known that carbon–carbon double bonds can “delocalize”. What that means is that if you have a molecule with a structure C=C–C(+/−), where (+/−) means there is either a positive or a negative electric charge (or the start of another double bond) on the third carbon atom, then the molecule behaves as if it were C–C–C with two additional electrons that make the bonds effectively 1.5 bonds each, with half the charge at each end. That particular system is called allyl.

The Hammett equation

log(K/Ko) = ρσ

relates the effect of distant substituents to a reactive site. Here K is a rate or equilibrium constant, Ko is a reference constant (to make the bracketed part a pure number – you cannot take the logarithm of apples!), and a plot of the logarithm against σ should give a straight line. ρ is the slope of that line, and it attenuates as the path between substituent and site lengthens, provided we assume each intervening chemical bond is localized. If so, σ is a specific value for each substituent. The straight line results because the values of σ are assigned to the substituents precisely so that a straight line is obtained, provided the attenuation depends only on the substituents keeping their assigned values of σ. What you are doing is empirically relating the change in electric potential caused by the substituent at one point to what remains of it by the time it reaches the reactive site. Of course, there is always scatter, but it should be random.
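To make the fitting procedure concrete, here is a minimal sketch in Python. The σ values are approximate textbook-style numbers and the “measured” constants are synthetic, generated from an assumed ρ of 1.0, so this only illustrates the mechanics of extracting ρ from data:

```python
# Illustrative fit of the Hammett equation log10(K/Ko) = rho * sigma.
# The sigma values below are approximate and the rate constants are
# synthetic (generated from an assumed rho), purely for demonstration.
import math

sigma = [-0.27, -0.17, 0.0, 0.23, 0.78]  # e.g. OMe, Me, H, Cl, NO2 (approximate)
Ko = 6.3e-5                              # reference (unsubstituted) constant
rho_true = 1.0
K = [Ko * 10 ** (rho_true * s) for s in sigma]  # synthetic "measurements"

# least-squares slope through the origin: rho = sum(x*y) / sum(x*x)
y = [math.log10(k / Ko) for k in K]
rho = sum(s * yi for s, yi in zip(sigma, y)) / sum(s * s for s in sigma)
print(round(rho, 3))  # -> 1.0, recovering the slope used to generate the data
```

With real measurements the fitted ρ would not reproduce an assumed value, of course; the point is that ρ is nothing more than the slope of log(K/Ko) against σ, and systematic curvature or outliers in that plot are what signal something unusual, such as delocalization.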

To understand why delocalization becomes relevant, we have to consider what is actually meant. In chemistry textbooks you will often see mechanisms postulated to explain what is going on, with electron pairs moving about. Electrons do not hold hands and move about, and they are never “localised” inasmuch as they can be anywhere in the region of the molecule at any given instant. The erroneous concept comes from the Copenhagen Interpretation of quantum mechanics, whereby the intensity of the wave function gives the probability of finding charge. The chemical bond arises from the interference between two wave functions, and the interference zone has two electrons associated with it or, if you agree with my Guidance Wave Interpretation, one electron for each half of the periodic time (because a wave needs a crest and a trough per period, and in the absence of a nodal surface, which is only generated in the so-called antibonds, you need two electrons to provide both within one cycle). What I consider to be localised is not the electrons but the wave interference zone. If you follow the Copenhagen Interpretation, such an interference zone represents a region of enhanced electron density. If the wave interference zone is restricted to a certain volume of space, that characteristic space conveys characteristic properties to the molecule, because there is enhanced electron density at a lower potential within that region.

Why does it become localised at all, given that waves can go on forever? The simplest answer lies in molecular structure: the carbon atom has four orbitals directed towards the corners of a tetrahedron, because that is the optimal distribution to minimize electron repulsion between the four carbon electrons. Interference to create single bonds is “end-on”, in which case for the wave to proceed around a corner its axis has to turn, and it cannot do that without a change of refractive index, which requires a change of total energy. However, the allyl system, and a number of others, can delocalize because the axes of the orbitals are normal to any change of direction, and the orbitals can interfere sideways (i.e. normal to the orbital axes), as opposed to the end-on interference in single bonds. So, to get delocalization, the bonding must involve sideways interference of atomic orbitals, while single bonds are invariably end-on. The reason cyclopropane was of interest is that if the atomic waves have axes directed towards the corners of a tetrahedron, with an angle of 109.4 degrees between them, while the structure of cyclopropane perforce has angles of 60 degrees between the interatomic axes, then either there is partial sideways interference, or the bonds are “bent”. The first should permit delocalization; the second is ambiguous.

If we now reconsider the Hammett equation, we see why it is a test for delocalization. First, if there is delocalization, the value of ρ increases because there is no attenuation over the delocalized zone (i.e. overall, the path has fewer links in which to attenuate). There is, of course, a base value of how much change a substituent can cause anyway. Now, in the cyclopropyl systems I discussed in the previous post, the cyclopropane ring gave a value of ρ about 30% higher than that of a C–C link. My argument was that this is expected if there is no delocalization, because there are two routes around the ring, and the final effect should be the sum of the two routes, which is what was found.
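The arithmetic of that two-route argument can be sketched by assuming a simple multiplicative attenuation factor per localized C–C bond. The factor 0.3 used here is illustrative only, not a measured value; the point is that summing a one-bond route and a two-bond route gives a transmission a factor (1 + f) larger than a single link:

```python
# Sketch of the "two routes" argument for a cyclopropane ring, assuming
# a multiplicative attenuation factor f per localized C-C bond.
# f = 0.3 is an illustrative value, chosen only to show the arithmetic.
f = 0.3

one_link = f        # transmission through a single C-C link (one bond)
ring = f + f * f    # direct route (1 bond) plus the back route (2 bonds)

increase = ring / one_link - 1  # fractional increase over a single link; equals f
print(round(increase, 2))       # -> 0.3, i.e. about 30% higher effective rho
```

So an enhancement of roughly 30% over a single C–C link is exactly what additive transmission along two localized routes predicts, with no delocalization required.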

The value of σ also changes with delocalization for a limited number of substituents, namely those that can delocalize and amplify a certain effect on demand. For example, if the reactive site generates a demand for more electron charge, a substituent such as methoxyl will supply extra by delocalizing the lone pairs on its oxygen; alternatively, if the demand is to disperse negative charge, a nitro group will behave as if it takes on more. Thus a limited number of substituents can address the question of whether there is delocalization. The saddest part of the exercise outlined in the previous post is that the first time this tool was deployed to answer a proper question, those who used it did not, on the whole, seem to appreciate the subtleties available to them. For the ionization of the 2-phenylcyclopropane carboxylic acids, the results obtained in water were too erratic, thanks to solubility problems. The results in ethanol gave an acceptable value of ρ, but the authors overlooked the effect of the two routes, and did not bother to examine the values of σ.

The last we see of comet 67P

This week marked the end of the Rosetta spacecraft, sent by the European Space Agency (ESA) to uncover what it could about a comet, specifically 67P/Churyumov-Gerasimenko. Rosetta’s purpose was to orbit the comet, send back information from the Philae lander, take images, and analyze the gases in the comet’s tail. Now it was time to die, and ESA crashed the spacecraft into the comet. The reason: Rosetta’s activities were powered by electricity from solar panels, but the comet was getting sufficiently far from the sun that solar power would not suffice much longer, so if there was anything left to do, now was the time to do it. Over the course of this visit, Rosetta sent back a huge amount of information, allegedly enough to take decades to analyze.

So, what have we learned so far? First, this may not be an “average” comet, because it apparently originated from the region around Jupiter, as opposed to much further out. Second, we got some idea of how a comet gets its tail. One characteristic of this comet is that it is covered with pits. What appears to happen is that gases below the surface are heated by the sun, the pressure breaks the surface, and the gases are ejected. Pits in the same region tend to be the same size, mainly because the size depends on the strength of the surface covering; for this comet, about a million tonnes of matter come from each pit. This suggests the volatile material is not uniformly distributed, but that during comet formation some sort of separation of volatiles from solids such as silicates went on.

One fact I found interesting was that the emitted gases were mainly water, carbon monoxide and carbon dioxide. The comet apparently had very little nitrogen or other volatiles in it. To me, that is important, because in my ebook “Planetary Formation and Biogenesis” I pointed out that if my mechanism of the accretion of bodies is correct, there should be very little nitrogen, or ammonia, in the Jupiter region, because it is too close to the star, and hence too warm, for them to accrete as such. That is one of the reasons I assert there can be no life on Europa. In that context, more sodium than nitrogen has been detected in the very wispy atmosphere of Europa. There are very small amounts of nitrogenous material in the comet, though, such as isocyanates. On the other hand, I would not have expected carbon monoxide either, unless carbon dioxide was reduced subsequent to cometary formation.

There were also significant amounts of silicates, mainly as finely divided material. This is consistent with the concept that the original dust in the accretion disk contained such finely divided silicates, and in all probability, the dust acted as nuclei for ice condensation. Generally speaking, when something crystallizes out from another phase, it needs something in the other phase to get started. It is sometimes quite easy to make supersaturated solutions of something, and these solutions refuse to crystallize, and then when a suitable piece of dust or seed is added, it all simply crashes out of solution.

One of the other things I found of great interest was the shape of the nucleus of the comet, because it shows (as far as I am concerned) how accretion might have progressed. In my ebook, I proposed that small particles would impact the surface of a growing body, and one of two things would happen. The first was that nothing would happen, and gas would eventually abrade the particle off the surface (at least until the object became big enough for gravity to hold it).

The second option was that, at a certain temperature, an ice within the object would absorb the energy of impact, melt, then cool and re-freeze, thus melt-welding the two bodies together. That may well have happened on a somewhat larger scale with this comet, which has the appearance of two large bodies having collided and stuck together. (There are further, smaller examples of seemingly attached roundish objects, not visible in the given image.)


Comet 67P – image supplied by ESA

As an aside, that would be a mechanism by which volatiles might separate and concentrate in the small areas that would later generate the pits. For a collision to result in no subsequent separation, the collision had to be inelastic. To be inelastic, all the kinetic energy of the collision has to be absorbed by the objects as heat, and to keep the bodies together, rather than have them fly off again through centrifugal force as the body rotates, something has to hold them together. Ice melting and re-freezing looks a good option to me, but then I am biased. It is also interesting that there are no pits on the face facing the junction. Localised heat may have blown such gas away at the time of collision.

In my opinion, this was a great technical achievement, and ESA should be complimented. It was a truly complicated procedure to get the vehicle to orbit the comet, because the comet’s gravitational field is not exactly strong. Everything had to be done exactly right, and it was. We must now await further results.