What is nothing?

Shakespeare had it right – there has been much ado about nothing, at least in the scientific world. In some of my previous posts I have advocated the use of the scientific method on more general topics, such as politics. That method involves the rigorous evaluation of evidence, the making of propositions in accord with that evidence, and, most importantly, the rejection of those that are clearly false. It may appear that for ordinary people that might be too hard, but at least that method would be followed by scientists, right? Er, not necessarily. In 1962 Thomas Kuhn published a work, “The Structure of Scientific Revolutions”, in which he argued that science itself has a very high level of conservatism. It is extremely difficult to change a current paradigm. If evidence is found that would do so, it is more likely to be secreted away in the bottom drawer, included in a scientific paper in a place where it is most likely to be ignored, or, if it is published, ignored anyway and put in the bottom drawer of the mind. The problem seems to be that there is a roadblock against accepting that something not in accord with expectations might be significant. With that in mind, what is nothing?

An obvious answer to the title question is that a vacuum is nothing. It is what is left when all the “somethings” are removed. But is there “nothing” anywhere? The ancient Greek philosophers argued about the void, and the issue was “settled” by Aristotle, who argued in his Physica that there could not be a void, because if there were, anything that moved in it would suffer no resistance, and hence would continue moving indefinitely. Having reasoned that well, he then, for some reason, refused to accept that the planets were moving essentially indefinitely, hence could be moving through a void, and that if they were moving, they had to be moving around the sun. Success was at hand, especially had he realized that feathers do not fall as fast as stones because of air resistance, but having made such a spectacular start, he fell by the wayside, sticking to his long-held prejudices. That raises the question: are such prejudices still around?

The usual concept of “nothing” is a vacuum, but what is a vacuum? Some figures from Wikipedia may help. A standard cubic centimetre of atmosphere has 2.5 x 10^19 molecules in it. That’s plenty. For those not used to “big figures”, 10^19 means the number you get when you write down 1 and follow it with 19 zeros, or when you multiply 10 by itself nineteen times. Our vacuum cleaner gets the concentration of molecules down to about 10^19, that is, the pressure inside the cleaner is about two and a half times lower. The Moon’s “atmosphere” has 4 x 10^5 molecules per cubic centimetre, so the Moon is not exactly in vacuum. Interplanetary space has 11 molecules per cubic centimetre, interstellar space has 1 molecule per cubic centimetre, and in the best vacuum, intergalactic space, you need a million cubic centimetres to find one molecule.
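If it helps to see those powers of ten side by side, here is a minimal sketch (Python used purely for illustration; the figures are simply the ones quoted above):

```python
import math

air = 2.5e19   # molecules per cubic centimetre at sea level (figure quoted above)
densities = {
    "vacuum cleaner": 1e19,
    "Moon 'atmosphere'": 4e5,
    "interplanetary space": 11,
    "interstellar space": 1,
    "intergalactic space": 1e-6,   # one molecule per million cubic centimetres
}

for place, n in densities.items():
    # How many powers of ten below ordinary air each "vacuum" lies.
    print(f"{place:22s} {n:10.3g} molecules/cm^3  "
          f"(10^{math.log10(air / n):.1f} below sea-level air)")
```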

The top of the Earth’s atmosphere, the thermosphere, goes from about 10^14 down to 10^7 molecules per cubic centimetre. The figure at the top is a little suspect, because you would expect the density to fall away gradually to that of interplanetary space. The reason a boundary is quoted is not that there is a sharp boundary, but rather that this is the point where the gas pressure is more or less matched by solar radiation pressure and the pressure of the solar wind, so it is difficult to make firm statements about greater distances. Nevertheless, we know there is atmosphere out to a few hundred kilometres, because there is a small drag on satellites.

So, intergalactic space is most certainly almost devoid of matter, but not quite. However, even without that, we are still not quite there with “nothing”. If nothing else, we know there are streams of photons going through it, probably a lot of cosmic rays (which are very rapidly moving atomic nuclei, usually stripped of some of their electrons, and accelerated by some extreme cosmic event), and possibly dark matter and dark energy. No doubt you have heard of dark matter and dark energy, but you have no idea what they are. Well, join the club. Nobody knows what either of them is, and it is just possible that neither actually exists. This is not the place to go into that, so I merely note that our “nothing” is not only difficult to find, but there may be mysterious stuff spoiling even what little there is.

However, to totally spoil our concept of nothing, we need to consider quantum field theory. This is something of a mathematical nightmare; nevertheless, conceptually it postulates that the Universe is full of fields, and particles are excitations of these fields. Now, a field at its most basic level is merely something to which you can attach a value at various coordinates. Thus a gravitational field is an expression such that if you know where you are, and you know what else is around you, you also know the force you will feel from it. However, in quantum field theory there are a number of additional fields: thus there is a field for electrons, and actual electrons are excitations of that field. While at this point the concept may seem harmless, if overly complicated, there is a problem. To explain how force fields behave, there need to be force carriers. If we take the electric field as an example, the force carriers are sometimes called virtual photons, and these “carry” the force so that the required action occurs. If you have such force carriers, the Uncertainty Principle requires the vacuum to have an associated zero point energy. Thus a quantum system cannot be at rest, but must always be in motion, and that includes any possible discrete units within the field. Again according to Wikipedia, Richard Feynman and John Wheeler calculated there was enough zero point energy inside a light bulb to boil off all the water in the oceans. Of course, such energy cannot be used; to use energy you have to transfer it from a higher level to a lower level, whereupon you get access to the difference. Zero point energy is already at the lowest possible level.
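For readers who want the formula behind that statement, the textbook result (quoted here only as background; it is not specific to this post) is that a quantum oscillator of frequency ν cannot have less energy than

E₀ = ½ hν

where h is Planck’s constant. Quantum field theory treats every mode of every field as such an oscillator, and these irreducible half-quanta, summed over all the modes, make up the zero point energy of the vacuum referred to above.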

But there is a catch. Recall Einstein’s E = mc^2, or equivalently m = E/c^2? That means, according to Einstein, all this zero point energy has an equivalent mass, and hence gravitational effects. If so, then the gravity from all the zero point energy in the vacuum can be calculated, and we can predict whether the Universe should be expanding or contracting. The answer is, if quantum field theory is correct, the Universe should have collapsed long ago. The discrepancy between prediction and observation is a factor of about 10^120, that is, a one followed by 120 zeros, and it is the worst disagreement between prediction and observation known to science. Even worse, some have argued the prediction was not done right, and that if it had been done “properly” the discrepancy could be manipulated down to 10^40. That is still a terrible error, but to me what is worse is that what is supposed to be the most accurate theory ever is suddenly capable of turning up answers that differ by 10^80, which is roughly the number of atoms in the known Universe.
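As a rough illustration of where a number like 10^120 comes from, here is a sketch of the usual back-of-the-envelope estimate (my own, not taken from the post; it assumes a Planck-scale cutoff and an observed dark-energy density of roughly 6 x 10^-10 J/m^3, and the exact power you get depends on those assumptions):

```python
import math

# Physical constants (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Naive quantum field theory estimate: one Planck energy per Planck volume.
E_planck = math.sqrt(hbar * c**5 / G)     # ~2e9 J
l_planck = math.sqrt(hbar * G / c**3)     # ~1.6e-35 m
rho_qft  = E_planck / l_planck**3         # predicted vacuum energy density, J/m^3

rho_obs = 6e-10   # observed dark-energy density, J/m^3 (approximate)

print(f"predicted ~ {rho_qft:.1e} J/m^3")
print(f"observed  ~ {rho_obs:.1e} J/m^3")
print(f"ratio     ~ 10^{math.log10(rho_qft / rho_obs):.0f}")
```

With these inputs the ratio comes out near 10^123, in the neighbourhood of the oft-quoted 10^120.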

Some might say that surely this indicates there is something wrong with the theory, and start looking elsewhere. Seemingly not. Quantum field theory is still regarded as the supreme theory, and such a disagreement is simply placed in the bottom drawer of the mind. After all, the mathematics are so elegant, or difficult, depending on your point of view. Can’t let observed facts get in the road of elegant mathematics!

A Further Example of Theory Development

In the previous post I discussed some of what is required to form a theory, and I proposed a theory at odds with everyone else as to how the Martian rivers flowed. One advantage of that theory is that provided the conditions hold, it at least explains what it set out to do. However, the real test of a theory is that it then either predicts something, or at least explains something else it was not designed to do.

Currently there is no real theory that explains Martian river flow if you accept the standard assumption that the initial atmosphere was full of carbon dioxide. To explore possible explanations, the obvious next step is to discard that assumption. The principle is that whenever you form theories, you should look at the premises and ask: if not, what?

The reason everyone thinks that the original gases were mainly carbon dioxide appears to be that volcanoes on Earth largely give off carbon dioxide. There can be two reasons for that. The first is that most volcanoes actually reprocess subducted material, which includes carbonates such as lime. The few that do not may behave that way because the crust has used up its ability to turn CO2 into hydrocarbons. That reaction depends on Fe(II) also converting to Fe(III), and it can only do that once. Further, there are many silicates with Fe(II) that cannot do it because the structure is too tightly bound, and the water and CO2 cannot get at the iron atoms. Then, even if that had not happened, would methane be detected? Any methane mixed with the red-hot lava would burn on contact with air, and samples are never taken that close to the origin. (As an aside, hydrocarbons have been found, especially where the eruptions are under water.)

Also, on the early planet, iron dust will have accreted, as will other reducing agents, but the point about such agents is that they, too, can only be used once. What happens now will be very different from what happened then. Finally, according to my theory, the materials were already reduced. In this context we know that there are samples of meteorites that contain strongly reduced material, such as phosphides, nitrides and carbides (which I argue should have been present in the accreting material), and even silicides.

There is also a practical point. We have one sample of Earth’s sea/ocean from over three billion years ago. There were quite high levels of ammonia in it. Interestingly, when that was found, the information ended up as an aside in a scientific paper. Because it was inexplicable to the authors, it appears they said the least they could.

Now, if this seems too much, bear with me, because I am shortly going to get to the point of this. But first, a little chemistry, where I look at the mechanism of making these reduced gases. For simplicity, consider the single bond between a metal M and, say, a nitrogen atom N in a nitride. Call that M–N. Now, let it be attacked by water. (The diagram I tried to include refused to cooperate. Sorry.) The water attacks the metal, and because the number of bonds around the metal stays the same, a hydrogen atom has to become attached to N, so we get M–OH + N–H. Do this three times and we have ammonia, together with three hydroxide groups on metal atoms. Eventually, two hydroxides will convert to one oxide and one molecule of water will be regenerated; the hydroxides do not have to be on the same metal to form water.
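Written out schematically, the sequence just described is (this is only my shorthand for the steps in the paragraph above, not a balanced equation for any particular mineral):

M–N + H2O → M–OH + N–H (one hydrogen transferred per attack)

repeated for each of the three M–N bonds around the nitrogen, giving NH3 and three M–OH groups in total, followed by

2 M–OH → M–O–M + H2O

which regenerates a molecule of water.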

Now, the important thing is that only one hydrogen gets transferred per attacking water molecule. Suppose that water molecule carries one hydrogen atom and one deuterium atom. The one that is preferentially transferred is the one that is easier to transfer, and since the ease of transfer depends on the bond strength, the deuterium will preferentially stay on the oxygen. While the strength of a chemical bond starts out depending only on the electromagnetic forces, which are the same for hydrogen and deuterium, that strength is reduced by the zero point vibrational energy, which is required by quantum mechanics. There is something called the Uncertainty Principle that says two objects at the quantum level cannot sit at an exact distance from each other, because then they would have exact position and exact (zero) momentum. Accordingly, the bonds have to vibrate, and the energy of that vibration depends on the mass of the atoms. The bond to hydrogen vibrates the fastest, so less energy is subtracted for deuterium. That means deuterium is more likely to remain on the regenerated water molecule. This is an example of the chemical isotope effect.
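To put rough numbers on that, here is an illustrative harmonic-oscillator estimate (my own sketch; the O–H stretching frequency of about 3700 cm^-1 is a typical textbook value, not something taken from the post):

```python
import math

# Zero point energy difference between an O-H and an O-D bond,
# treating each as a harmonic oscillator with the same force constant.
h = 6.62607015e-34      # Planck constant, J s
c_cm = 2.99792458e10    # speed of light, cm/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

nu_OH = 3700.0                       # typical O-H stretch, cm^-1 (assumed)
mu_OH = 16.0 * 1.0 / (16.0 + 1.0)    # reduced mass of O-H, amu
mu_OD = 16.0 * 2.0 / (16.0 + 2.0)    # reduced mass of O-D, amu

# Frequency scales as 1/sqrt(reduced mass) for the same force constant.
nu_OD = nu_OH * math.sqrt(mu_OH / mu_OD)

# Zero point energy is (1/2) h c nu for each oscillator.
zpe_diff = 0.5 * h * c_cm * (nu_OH - nu_OD) * N_A / 1000.0   # kJ/mol

print(f"O-D stretch ~ {nu_OD:.0f} cm^-1")
print(f"O-H zero point energy exceeds O-D by ~ {zpe_diff:.1f} kJ/mol")
```

The O–H bond is thus effectively a few kJ/mol weaker than the O–D bond, which is why the lighter isotope is the one preferentially transferred.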

There are other ways of enriching deuterium from water. The one usually considered for planetary bodies is that as water vapour rises, the solar wind blows off some water, or UV radiation breaks an oxygen–hydrogen bond and knocks the hydrogen atom into space. Since deuterium is heavier, it is slightly less likely to get to the top. The problem with this is that the evidence does not back up the solar wind concept (it does happen, but not enough), and if the UV splitting of water is the reason, then there should be an excess of oxygen on the planet. That could work for Earth, but Earth has the least deuterium enrichment of the rocky planets. If it were the way Venus got its huge deuterium enhancement, there had to be a huge ocean initially, and if that is used to explain why there is so much deuterium, then where is the oxygen?

Suppose the deuterium level in a planet’s hydrogen supply is primarily due to the chemical isotope effect; what would you expect? If the model of atmospheric formation noted in the previous post is correct, the enrichment would depend on the gas to water ratio. The planet with the lowest ratio, i.e. minimal gas per unit water, would have the least enrichment, and vice versa. Earth has the least enrichment. The planet with the highest ratio, i.e. the least water left over after making gas, would have the greatest enrichment, and here we see that Venus has a huge deuterium enrichment and very little water (what little there is, is bound up in sulphuric acid in the atmosphere). It is quite comforting when a theory predicts something it was not intended to. If this is correct, Venus never had much water on the surface, because what it accreted in its hotter zone was used to make its greater atmosphere.

Chemical Reactivity and the Hammett Equation

This post is being presented as a background explanation for the post above. If you have no interest in chemistry, ignore this post.

The Hammett equation is an empirical relationship that relates the effect of a distant substituent in a molecule to what happens at the reactive centre. Such a centre might react with something, or be in equilibrium with another form. An example of the latter could be an acid or an amine, which could be in equilibrium with its ionized form, thus

X–H ⇋ X⁻ + H⁺

Now, further suppose X is part of a molecular structure where some distance away there is a substituent Y, in which case

Y—-X–H ⇋ Y—-X⁻ + H⁺

where —- is the hydrocarbon structure separating X and Y. Now, it is observed that different Y can alter the equilibrium position, or the rate of reaction, of the function X. The reason is that the substituent alters the potential energy of the electrons around wherever it is attached, and that in turn alters, to a lesser degree, the potential energy around the next carbon atom, and so on. Thus the effect attenuates with distance. There are two complicating factors.

The first is what is called (in my opinion, misleadingly) electron delocalization. It is well known that carbon-carbon double bonds can “delocalize”. What that means is, if you have a molecule with a structure C=C-C(+/-) , where (+/-) means there is either a positive or negative electric charge (or the start of another double bond) on the third carbon atom, then the molecule behaves as if it were C-C-C with two additional electrons that make the bonds effectively 1.5 bonds, and half the charge is at each end. That particular system is called allyl.

The Hammett equation

log(K/K₀) = ρσ

relates the effect of distant substituents to what happens at a reactive site. Here K is a rate or equilibrium constant, K₀ is a reference constant (to make the bracketed part a pure number – you cannot take a logarithm of apples!), and the plot of the logarithm against σ should give a straight line. ρ is the slope of that line, and it attenuates as the path between substituent and site lengthens, provided we assume each intervening chemical bond is localized; σ is a value specific to the substituent. The straight line results because the values of σ are assigned to each substituent precisely so that a reference reaction gives a straight line, and the relationship then holds for other reactions provided each substituent always exerts the same relative effect. What you are doing is empirically measuring how much the change in electric potential caused by the substituent has attenuated by the time it reaches the reactive site. Of course, there is always scatter, but it should be random.
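As a concrete, if artificial, illustration of how the equation is used in practice, here is a minimal sketch (the substituent constants and measurements below are invented for the example; real σ values are taken from published tables):

```python
import numpy as np

# Hypothetical data: substituent sigma values and measured log(K/K0)
# for some reaction series (all numbers invented for illustration).
sigma       = np.array([-0.27, -0.17, 0.00, 0.23, 0.78])
log_K_ratio = np.array([-0.30, -0.19, 0.00, 0.26, 0.85])

# Fit log(K/K0) = rho * sigma; rho is the slope, and the intercept
# should come out near zero if the series is well behaved.
rho, intercept = np.polyfit(sigma, log_K_ratio, 1)

print(f"rho ~ {rho:.2f}, intercept ~ {intercept:.2f}")
# A larger |rho| means the reactive site feels the substituent more strongly,
# i.e. there is less attenuation along the path between them.
```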

To understand why delocalization becomes relevant, we have to consider what is actually meant. In chemistry textbooks you will often see mechanisms postulated to explain what is going on, and you will see electron pairs moving about. Electrons do not hold hands and move about, and they are never “localised” in as much as they can be anywhere in the region of the molecule at any given instant. The erroneous concept comes from the Copenhagen Interpretation of Quantum Mechanics, whereby the intensity of the wave function gives the probability of finding charge. The chemical bond arises from the interference between two wave functions, and the interference zone has two electrons associated with it – or, if you agree with my Guidance Wave Interpretation, each electron is associated with it for half the periodic time (because a wave needs a crest and a trough per period, and in the absence of a nodal surface, which is only generated in the so-called antibonds, you need two electrons to provide both in one cycle). What I consider to be localised is not the electrons but the wave interference zone. If you follow the Copenhagen Interpretation, such an interference zone represents a region of enhanced electron density. If the wave interference zone is restricted to a certain volume of space, that characteristic space conveys characteristic properties to the molecule, because there is enhanced electron density at a lower potential within that region.

Why does it become localised at all? After all, waves can go on forever. The simplest answer lies in molecular structure: the carbon atom has four orbitals directed towards the corners of a tetrahedron, because that is the optimal distribution to minimize electron repulsion between the four carbon electrons. Interference to create single bonds is “end-on”, in which case for the wave to proceed its axis has to turn a corner, and it cannot do that without a change of refractive index, which requires a change of total energy. However, the allyl system, and a number of others, can delocalize because the axes of the orbitals are normal to any change of direction, and the orbitals can interfere sideways (i.e. normal to the orbital axes) as opposed to the end-on interference in single bonds. So, to get delocalization, the bonding must involve sideways interference of atomic orbitals, while single bonds are invariably end-on. The reason why cyclopropane was of interest is that the atomic waves have axes directed towards the corners of a tetrahedron, with an angle of 109.4 degrees between them, while the structure of cyclopropane perforce has angles of 60 degrees between the inter-atomic axes; therefore either there is partial sideways interference, or the bonds are “bent”. The first should permit delocalization; the second is ambiguous.

If we now reconsider the Hammett equation, we see why it is a test for delocalization. First, if there is delocalization, the value of ρ increases because there is no attenuation over the delocalized zone (i.e. overall, the path has fewer links in which to attenuate). There is, of course, the base value of how much change a substituent can cause anyway. Now, in the cyclopropyl systems I discussed in the previous post, the cyclopropane ring gave a value of ρ about 30% higher than a single C–C link. My argument was that this is expected if there is no delocalization, because there are two routes around the ring, and the final effect should be the sum of the two routes, which is what was found.
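To see where a figure like 30% can come from, here is a back-of-the-envelope version of that argument (the numbers are mine, purely for illustration): suppose each intervening C–C bond attenuates the transmitted effect by a factor f. In a three-membered ring, one route from the substituted carbon to the reactive carbon crosses one bond and the other route crosses two, so the ring transmits roughly

f + f² = f(1 + f)

compared with f for a single C–C link, i.e. a factor (1 + f) more. Taking f ≈ 0.3, for example, reproduces a ρ about 30% higher than that for a single link, with no delocalization invoked.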

The value of σ also changes with delocalization for a limited number of substituents, namely those that can delocalize and amplify a certain effect if it is demanded. For example, if the reactive site generates a demand for more electron charge, a substituent such as methoxyl will supply extra by delocalizing the lone pairs on its oxygen, while if the demand is to disperse negative charge, a nitro group will behave as if it takes on more. Thus the behaviour of a limited number of substituents can address the question of whether there is delocalization. The saddest part of the exercise outlined in the previous post is that the first time it was ever deployed to answer a proper question, those who used it did not, on the whole, seem to appreciate the subtleties available to them. For the ionization of the 2-phenylcyclopropane carboxylic acids, the results obtained in water were too erratic, thanks to solubility problems. The results in ethanol gave an acceptable enough fit to extract a value of ρ, but the authors overlooked the effect of the two routes, and did not bother to examine the values of σ.

Are Bell’s Inequalities really violated by rotating polarizer experiments?

In a previous post on Bell’s Inequalities, I argued that the inequalities could be derived from two premises:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C.

Since a violation of a mathematical relationship falsifies at least one of the premises from which it was derived, and since tests on entangled particles are alleged to comply with these two premises yet the inequalities were violated, either one of these premises was violated, or a new mathematical relationship is required. In this post I shall present one argument that the experiments involving rotating polarizing detectors, with the classic experiment of Aspect et al. (Phys. Rev. Lett. 49, 91-94, 1982) as an example, did not, as claimed, show violations of the inequality.

Before proceeding, I stress that this argument does not in any way deny entanglement, nor does it say anything about locality or non-locality. I am merely arguing that there is a logic mistake in what is generally accepted. To proceed, I should clarify my definition of entanglement:

An entangled pair of particles is such that certain properties are connected by some rule such that when you know the value of a discrete property of one particle, you know the value of the other particle, even though you have not measured it.

Thus if one particle is found to have a clockwise spin, the spin of its entangled partner MUST be either clockwise or anticlockwise, depending on the rule; otherwise the particles are not entangled. That there are only two discrete values for properties such as spin or polarization means you can apply Bell’s Inequality, which, for the purposes of illustration, we shall test in the form

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

The Aspect experiment tested this by making a pair of entangled photons, and the way the experiment was set up, the rule was, each photon in an entangled pair would have the same polarization. The reason why they had entangled polarization was that the photons were generated from an excited 4P spin-paired state of calcium, which in sequence decayed to the spin-paired 4S state. The polarization arose from the fact that when decaying from a P state to an S state, each electron loses one quantum of action associated with angular momentum, and since angular momentum must be conserved, the photon associated with each decay must carry it away, and that is observed as polarization. There is nothing magic here; all allowed emissions of photons from electronic states involve a change of angular momentum, and usually of one quantum of action.

What Aspect did was to assign a position for the polarization detectors such that A was vertical for the (+) measurement, say 12 o’clock, and horizontal, say 3 o’clock, for the (-) measurement. For position B the detectors were rotated clockwise by 22.5°, and for C by 45°. There is nothing magic about these rotations, but they are chosen to maximise the chances of seeing the effects. So you do the experiment, and what happens? All detectors count the same number of photons. The reason is that the calcium atoms have no particular orientation, so pairs of photons are emitted with all polarizations. What has happened is that the first detector has detected half the entangled pairs, and the second the other half. We want only the second photon of each entangled pair whose first photon was detected by the first detector, so instead, at the (-) detector, we count only photons that arrive within 19 ns of a photon registered at the (+) detector. Then we find, as required, that if the first detector is at A(+), no photons come through at A(-). That was very nearly the case.

Given that we count only the photons selected this way, the law of probability requires A(+) = 1; A(-) = 0. The same will happen at B, and at C. (Some count all photons, in which case their probabilities are the ones that follow, divided by two.) There would be the occasional equipment failure, but for the present let us assume all went ideally. The zero arises because, if we apply the Malus law to polarized photons, then with the two filters at right angles, everything working ideally, the two photons having the same polarization, and only photons entangled with those registered at the first detector being counted at the second, zero photons go through the second filter. What is so special about the Malus law? It is a statement of the law of conservation of energy for polarized light, or of the conservation of probability at 1 per event.

Now, let us put this into Bell’s Inequality, using three independent measurements. Because the minus determinations are all zero, {A(+).B(-) + B(+).C(-)} = 0 + 0, while A(+)C(-) = 0. We now have 0 ≥ 0, in accord with Bell’s Inequality.

What Aspect did, however, was to argue that we can do joint tests and measure A(+) and B(-) on the same set of entangled pairs. The proposition is that if we leave the first polarizing detector at A(+) but rotate the second, we can score B(-) at the same time. Let the difference in clockwise rotations of the detectors be θ; in this example θ = 22.5 degrees. Following some turgid manipulations with the state vector formalism, or by simply applying the Malus law, if A(+) = 1, then B(-) = sin²θ, and if we do the same for the others, we find

{A(+).B(-) + B(+).C(-)} = 0.146 + 0.146 = 0.292, while A(+)C(-) = 0.5. Oops! Since 0.292 is not greater than or equal to 0.5, Bell’s Inequality appears to be violated. At this point, I believe we should carefully re-examine what the various terms mean. In one of Bell’s examples (washing socks!) the socks undergoing the tests at A, B and C were completely separate subsets of all the socks, and if we label these as a, b and c respectively, we can write {a} = ~{b, c}; {b} = ~{a, c}; {c} = ~{a, b}, where the brackets {} indicate sets. What Bell did with the sock washing was to take the result A(+) from the subset {a}, B(-) from the subset {b}, and so on. But that is not what happened in the Aspect experiment, because, as seen above, when we do that we get 0 + 0 ≥ 0. So, does this variation have an effect? In my opinion, clearly yes.
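Just to make the arithmetic above easy to check, here is a minimal sketch (the angles and the sin²θ rule are as stated in the post; the code itself is merely my illustration):

```python
import math

theta = math.radians(22.5)   # rotation between successive detector settings

# Malus-law probabilities as used above: with A(+) = 1, the "minus" result
# at a detector rotated by an angle d away is sin^2(d).
A_plus_B_minus = math.sin(theta) ** 2        # settings 22.5 degrees apart
B_plus_C_minus = math.sin(theta) ** 2        # also 22.5 degrees apart
A_plus_C_minus = math.sin(2 * theta) ** 2    # settings 45 degrees apart

lhs = A_plus_B_minus + B_plus_C_minus
rhs = A_plus_C_minus
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, inequality holds: {lhs >= rhs}")
# Prints lhs = 0.293, rhs = 0.500 - the apparent violation discussed above.
```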

My first criticism is that the photons that give the B(-) determination are not those entangled with the B(+) determination. By manipulating things this way, B(+) + B(-) > 1. Previously, we decided that the 1 represented the fact that an entangled pair is detected only once during a B(+) + B(-) determination, because the minus indicates “photons not detected”, but that total has grown by rotating the B(-) filter. If we recall the derivation, we used the fact that B(+) + B(-) = 1. The experiment has not merely violated Bell’s Inequality; it has violated a premise used in our derivation of it.

Let us return to the initial position: the first detector vertical, the second horizontal, and we interpret that as A(-) = 0. That means no photons entangled with those assigned as A(+) are recorded there, and all photons actually recorded there are the other half of the photons, i.e. ¬A(+), or 0*A(+). Now rotate the second detector by 90°. Now it records all the photons that are entangled with the selection chosen by A(+). It is nothing more than an extension of the first detector, or a part of the first detector translated in space, but chosen to detect the second photon. Its value is equivalent to that of A(+), or 0*A(-). Because the second photon to be detected is chosen as only those entangled with the photons detected at A(+), surely what is detected is still in the subset A, and what Aspect labelled as B(-) should more correctly be labelled 0.146*A(+), while what was actually counted includes 0.854*A(-), in accord with the Malus law. What the first detector does is to select a subset of half the available photons, which means A(+) is not a variable, because its value is set by how you account for the selection. The second detector merely applies the Malus law to that selection.

Now, if that is not bad enough, consider that the B(+).C(-) determination is an exact replica of the A(+).B(-) determination, merely rotated by 22.5 degrees. You cannot prove 2[A(+)B(-)] ≧ A(+)C(-), so how can you justify simply rotating the experiment? The rotational symmetry of space says that simply rotating an experiment does not change anything. This fact is, from Noether’s theorem, the source of the law of conservation of angular momentum, and conservation laws in general arise from such symmetries. Thus the law of conservation of energy depends on the fact that if I do an experiment today I should get the same result if I do it tomorrow. The law of conservation of momentum depends on the fact that if I move the experiment to the other end of the lab, or to another town, the same result arises; moving the experiment somewhere else does not change anything physically. The law of conservation of angular momentum depends on the fact that if I orient the experiment some other way, I still get the same result. Therefore just rotating the experiment does not generate new variables. So we have the rather peculiar situation that it is because of the rotational symmetry of space that we get the law of conservation of angular momentum, which is why we assert that the photons are entangled, and we then reject that symmetry in order to generate the required number of variables to get what we want.

Suppose there were no rotational symmetry? This happens with experiments involving compass needles, where the Earth’s magnetic field orients the needle. Now further assume that energy is conserved and the Malus law applies. If a thought experiment is carried out on a polarized source, and we correctly count the emitted photons, we now have the required number of variables, but we find, surprise, that Bell’s Inequality is followed. Try it and see.
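Taking up that invitation, here is a minimal sketch of one way such a thought experiment might be set up (this construction is mine, not the author’s: a source fixed by an external field so every photon is polarized vertically, detectors A, B and C at 0°, 22.5° and 45°, and pass/fail probabilities given by the Malus law):

```python
import math

def probabilities(detector_angle_deg, source_angle_deg=0.0):
    """Malus-law probability of a (+) or (-) result at a polarizer."""
    d = math.radians(detector_angle_deg - source_angle_deg)
    p_plus = math.cos(d) ** 2
    return p_plus, 1.0 - p_plus          # (+) passes, (-) is blocked

A_plus, A_minus = probabilities(0.0)     # detector A aligned with the source
B_plus, B_minus = probabilities(22.5)    # detector B rotated 22.5 degrees
C_plus, C_minus = probabilities(45.0)    # detector C rotated 45 degrees

lhs = A_plus * B_minus + B_plus * C_minus
rhs = A_plus * C_minus
print(f"A(+)B(-) + B(+)C(-) = {lhs:.3f}  >=  A(+)C(-) = {rhs:.3f}  ({lhs >= rhs})")
# With these numbers: 0.146 + 0.427 = 0.573 >= 0.500, so the inequality holds.
```

Here the three settings all refer to the same fixed source, each (+)/(-) pair still sums to one, and the inequality survives.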

My argument is quite simple: Bell’s Inequality is not violated in rotating polarizer experiments, but logic is. People wanted a weird result and they got it, by hook or by crook.

A Challenge! How can Entangled Particles violate Bell’s Inequalities?

The role of mathematics in physics is interesting. Originally, mathematical relationships were used to summarise a myriad of observations; thus, from Newtonian gravity and mechanics, it is possible to know where the Moon will be in the sky at any time. But somewhere around the beginning of the twentieth century an odd thing happened: the mathematics of General Relativity became so complicated that many, if not most, physicists could not use it. Then came the state vector formalism for quantum mechanics, a procedure that, strictly speaking, allowed people to come up with an answer without really understanding why. Then, as the twentieth century proceeded, something further developed: a belief that mathematics is the basis of nature. Theory started with equations, not observations. An equation, of course, is a statement; thus “A equals B” can be written with an equal sign instead of words. Now we have string theory, where a number of physicists have been working for decades without coming up with anything that can be tested. Nevertheless, most physicists would agree that if observation falsifies a mathematical relationship, then something has gone wrong with the mathematics, and the problem is usually a false premise. With Bell’s Inequalities, however, it seems logic goes out the window.

Bell’s inequalities are applicable only when the following premises are satisfied:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C. When we do this, the results between tests have to be comparable, and the simplest way of doing this is to represent the probability of a positive result at A as A(+). The reason for this is that if we did 10 tests at A, 10 at B, and 500 at C, we cannot properly compare the results simply by totalling results.

Premise 1 is reasonably easily met. John Bell used washing socks as an example. The socks would either pass a test (e.g. they are clean) or fail it (i.e. they need rewashing). In quantum mechanics there are good examples of suitable candidates, e.g. a spin can be either clockwise or anticlockwise, but not both. Further, all particles of a given kind must have the same magnitude of spin; this is imposed by quantum mechanics. Thus an electron has a spin of either +1/2 or -1/2.

Premises 1 and 2 can be combined. By working with probabilities, we can say that each particle must register once, one way or the other (or each sock is tested once), which gives us

A(+) + A(-) = 1; B(+) + B(-) = 1; C(+) + C(-) = 1

i.e. the probability of one particle tested once and giving one of the two results is 1. At this point we neglect experimental error, such as a particle failing to register.

Now, let us do a little algebra/set theory by combining probabilities from more than one determination. To combine, we might take two pieces of apparatus, and with one determine the (+) result under condition A, and with the other the (-) result under condition B. If so, we take the product of these, because probabilities are multiplicative. We can then write

A(+) B(-) = A(+) B(-) [C(+) + C(-)]

because the bracketed term [C(+) + C(-)] equals 1, the sum of the probabilities of results that occurred under conditions C.

Similarly

B(+)C(-) = [A(+) + A(-)] B(+)C(-)

By adding and expanding

A(+) B(-) + B(+)C(-) = A(+) B(-) C(+) + A(+) B(-) C(-) + A(+) B(+)C(-) + A(-)B(+)C(-)

= A(+)C(-) [B(+) + B(-)] + A(+)B(-)C(+) + A(-)B(+)C(-)

Since the bracketed term [B(+) + B(-)] equals 1 and the last two terms are positive numbers, or at least zero, we have

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

This is the simplest form of a Bell inequality. In Bell’s sock-washing example, he showed how socks washed at three different temperatures had to comply.
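If you want to convince yourself that this algebra holds for any probabilities satisfying the premises, a quick numerical check is easy to write (my own sketch: it simply samples A(+), B(+) and C(+) at random, sets each minus value as the complement, and tests the inequality):

```python
import random

random.seed(0)
worst_margin = float("inf")

for _ in range(100_000):
    # Premise: each test gives (+) or (-), so each pair of probabilities sums to 1.
    a_plus = random.random()
    b_plus = random.random()
    c_plus = random.random()
    a_minus, b_minus, c_minus = 1.0 - a_plus, 1.0 - b_plus, 1.0 - c_plus

    lhs = a_plus * b_minus + b_plus * c_minus
    rhs = a_plus * c_minus
    worst_margin = min(worst_margin, lhs - rhs)

print(f"smallest value of lhs - rhs found: {worst_margin:.6f}")
# The margin never goes negative: the inequality follows from the premises alone.
```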

An important point is that provided the samples in the tests give only one result out of only two possible results, and provided the tests are applied under three sets of conditions, the mathematics say the results must comply with the inequality. Further, only premise 1 relates to the physics of the samples tested; the second is merely a requirement that the tests are done competently. The problem is, modern physicists say entangled particles violate the inequality. How can this be?

Non-compliance by entangled particles is usually considered a consequence of the entanglement being non-local, but that makes no sense, because locality is never mentioned in the above derivation. All that is required is that premise 1 holds, i.e. measuring the spin of one particle, say, means the spin of the other is known without measurement. So the entangled particles have properties that fulfil premise 1. Thus violation of the inequality means either one of the premises is false, or the elementary set algebra used in the derivation is false, which would mean all mathematics are invalid.

So my challenge is to produce a mathematical relationship that shows how these violations could conceivably occur. You must come up with a mathematical relationship or a logic statement that falsifies the above inequality, and it must include a term that specifies when the inequality is violated. So, any takers? My answer will be in my next Monday post.