Is the Earth’s Core Younger than the Crust?

There was a rather interesting announcement recently: three Danes calculated that the centre of the earth is 2.5 years younger than the crust (U. I. Uggerhøj et al., The young centre of the Earth, European Journal of Physics (2016). DOI: 10.1088/0143-0807/37/3/035602). The concept is that from general relativity, the gravitational field of the earth warps the fabric of space-time, thus slowing down time. This asserts that space-time is something more than a calculating aid, and it brings up a certain logic problem. First, what is time and how do we measure it? The usual answer to the question of measurement is that we use a clock, and a clock is anything that changes over a predictable period of time, as determined by some reference clock. One entity that can be used as a clock is radioactive decay, and according to general relativity, that clock at the core would be 2.5 years younger than a clock on the surface; another is the orbit of the Earth around the Sun, and here the core has carried out precisely the same number of orbits as the crust. Where this becomes relevant is that according to relativity all clocks must behave the same way with respect to velocity, otherwise you could take your rocket ship and, by comparing two different types of clocks, measure your absolute velocity. So, does that mean gravitational time dilation is conceptually different from velocity time dilation? I believe this matters because it brings into question exactly what space-time is.
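To put a rough number on the claim, here is a back-of-envelope sketch (my own, not the paper's calculation) that assumes a uniform-density Earth. The realistic density profile used by Uggerhøj et al. deepens the central potential, which is why they get about 2.5 years rather than the ~1.6 years this crude estimate gives.

```python
# Rough estimate of how much a clock at the Earth's centre lags one at the
# surface, assuming (crudely) a uniform-density Earth.  The fractional rate
# difference is approximately (Phi_surface - Phi_centre)/c^2 = GM/(2*R*c^2).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # mean radius of the Earth, m
c = 2.998e8          # speed of light, m/s
age_years = 4.5e9    # approximate age of the Earth

fractional_lag = G * M / (2 * R * c**2)      # ~3.5e-10
lag_years = fractional_lag * age_years

print(f"fractional rate difference: {fractional_lag:.2e}")
print(f"core lags the crust by roughly {lag_years:.1f} years")
# Prints about 1.6 years; the published figure of ~2.5 years comes from the
# fact that the real Earth is much denser toward the centre.
```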

The above does not mean that time dilation does not occur. It is unambiguous. Thus we know that the muon travelling at relativistic velocities has its lifetime extended relative to a stationary one. If we assume that the process of decay is unaffected by the velocity, then the passage of time has to have slowed down. But that raises the question, is the assumption valid? An analogy might be, suppose I have a clock that is powered by a battery, and as the voltage drops, the clock slows. I would argue this is because the lower voltage is inadequate to keep the mechanism going at its previous rate, and not that time itself has slowed down.

Now, consider the mechanism of muon decay. If apparent mass increases with velocity, why should the rate of decay of a muon not slow down? After all, it has accumulated more mass/energy, so it is not the same entity. Is the accretion of mass equivalent to a change of gravitational potential?

Perhaps what relativity tells us is that the rate at which clocks run alters the scale on which we record the passage of time, rather than time itself slowing down. By that, I mean that when a clock hand completes one period, we say an hour has passed, but at relativistic speeds it might be that γ hours have passed per clock period, where γ = 1/√(1 − v²/c²). In terms of gravitational fields, it is not that time slows down, but rather clocks do, together with the rate of physical processes affected by the gravitational field.
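As a purely illustrative sketch of that rescaling, here is the γ factor applied to the muon example above; the 2.2 μs proper lifetime is the standard value, and the speed of 0.995c is an arbitrary choice of mine.

```python
import math

# gamma = 1/sqrt(1 - v^2/c^2), the rescaling factor discussed in the text.
def gamma(v, c=2.998e8):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

proper_lifetime = 2.2e-6     # mean muon lifetime at rest, seconds
v = 0.995 * 2.998e8          # an arbitrary relativistic speed, 0.995c

g = gamma(v)
print(f"gamma at 0.995c:           {g:.1f}")                                        # ~10
print(f"lifetime seen by observer: {g * proper_lifetime * 1e6:.0f} microseconds")   # ~22
# Whether one says time has dilated or the decay process has slowed, the
# observed number is the same: the clock period is rescaled by gamma.
```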

That suggests we take our concept over to inertial motion. If a body travels near the velocity of light, then our equations tell us that time appears to dilate, but has time really slowed, or is it the process that leads to the decay that has slowed? Does it matter? In my opinion, yes, because it is through understanding that we are more likely to make progress into new areas.

The reason it is asserted that it is time itself that slows down comes from the principle of relativity, first (as far as I can tell) loosely stated by Galileo, used as the basis of his first law by Newton, and perhaps more clearly stated by Poincaré: the laws of physical phenomena must be the same for a fixed observer as for an observer who has uniform translational motion relative to him, so that we have not, nor can we possibly have, any means of discerning whether or not we are carried along in such motion. When added to the requirement from Maxwell that the velocity of light is a constant, we end up with Einstein’s relativity.

The question is, is the principle correct? It has to be in Galilean relativity, as it is the basis of Newtonian dynamics. If velocities are added vectorially, there is no option. But does it translate over into Einstein’s dynamics?

My argument is that it does not. There is an external fixed background, and that is the cosmic microwave background. The microwave energy comes almost uniformly from all directions, and through the Doppler shift one can detect an absolute velocity relative to it. (The accuracy of such determinations at present is not exactly high, but that is beside the point.) Very specifically, in 1977 our solar system was found to be travelling with respect to this black-body radiation at 390 ± 60 km/s in the direction 11.0 ± 0.6 h right ascension and 6° ± 10° declination (Smoot et al., 1977, Phys. Rev. Lett. 39, 898–901). So we DO have the means of discerning whether or not we are carried along with such motion.
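For a sense of scale, that velocity shows up as a dipole in the apparent CMB temperature; to first order the amplitude is ΔT ≈ T·v/c. The sketch below uses the velocity quoted from Smoot et al. and assumes the modern CMB temperature of 2.725 K.

```python
# First-order Doppler dipole: moving at velocity v relative to the CMB makes
# the sky appear warmer by roughly dT = T*v/c in the direction of motion.
T_cmb = 2.725     # CMB temperature, kelvin (assumed modern value)
v = 390e3         # solar-system velocity from Smoot et al. (1977), m/s
c = 2.998e8       # speed of light, m/s

dT = T_cmb * v / c
print(f"dipole amplitude: about {dT * 1000:.1f} millikelvin")
# A few millikelvin against a 2.7 K background -- small, but measurable,
# which is how the quoted absolute velocity was obtained.
```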

If we can measure an absolute velocity, it follows there is an absolute time, and as I have noted before, we can always measure when we are by determining the age of the Universe. Therefore I am reasonably confident in saying that the core of the Earth has aged at precisely the same rate as the crust once the Earth formed, and since there has not been complete mixing, it is more likely the core is older, as on average it would have accreted first. On the other hand, isotope decay there should have been held back by about two and a half years.


A disastrous example of free market economics

Do we see crises coming, and if so, are we in the habit of preventing their arrival? Is our free market system of economics capable of preventing their arrival? In answer to the first question, I think some of us do. As to the second, no, especially if it means we do not make so much money so fast. Climate change is an example. The scientific community has made it fairly clear that our addition of infrared-absorbing molecules to the atmosphere is causing the planet to warm. The politicians, or at least some of them, wave their arms and say we have to burn less carbon, but who says we have to stop using spray cans? A device that used air to create the spray would be fine, but hydrocarbon and fluorocarbon propellants are not. How about stopping the manufacture of sulphur hexafluoride? Or reducing the level of application of nitrates to the soil?

So, we are at best quarter-hearted about climate change, but what about other impending problems? It is here I think the answer to my third question is no, and in fact the free market is more than just a part of the problem. One such problem that I think needs more thought is the question of antibiotic resistance. How does this come about? Basically because when antibiotics are used, the surviving bacteria are more likely to be resistant; after all, how else did they survive? This is evolution at work: the survival of the adequate, and being adequate to survive in the presence of antibiotics means developing resistance to them. And the problem is, the resistance can be transferred to other bacteria.

So, how does that come about? The most obvious example comes from agriculture, where antibiotics at low levels are used to promote growth. This helps the farmer's and the drug company's profits. The object is not to kill off all the bacteria, but rather to reduce their number, hence the low levels. (If you kill off the lot, digestion is impeded.) So, we have a little fermentation pot where resistant strains can develop, and then be transferred to the general environment. Why is this permitted? Because there is more money to be made by the companies, and a bit more by the farmers. Up to 80% of antibiotic usage in the US has apparently gone into agriculture, and the big pharmaceutical companies are not going to give away that market. The chances of the farming sector giving up a quicker route to market for its stock are somewhat slight. Some do not use them, but only because they can then sell meat that can be advertised as "grown antibiotic-free". So, maybe the consumer is at fault. Are we prepared to pay a bit more to prevent antibiotics being used this way?

Does it matter? I think so. If antibiotics no longer work, or if there is a reasonable risk they will not work, then medicine goes back a hundred years. The more advanced surgery developed during that period may well have to be abandoned. Surgery in the late 19th century was not something many would want to see their family undertake, let alone themselves. Additionally, many cancer treatments seriously suppress the immune system, and antibiotics are needed to deal with adventitious infection.

Now, for the moment we still have a slate of antibiotics, and while resistance is growing, it is rare to get superbugs resistant to just about all of them. Accordingly, our society is responding to this problem in its usual way: we ignore it, and assume we can find a way around it. The way around it is to have a “last resort” antibiotic, or preferably, more than one. The problem is, what used to be the antibiotics reserved for the most serious problems are now being used loosely and widely. But we can discover more, can’t we? Well, probably not. The first problem is, who is going to do the discovering?

The usual answer would be, big pharma. Nevertheless, success there is somewhat unlikely, because by and large big pharma is not looking. The problem is, drug discovery has become hideously expensive, and if a new antibiotic were discovered and put away as a drug of last resort, usage would be incredibly small compared with the cost of getting it, because to prevent resistance developing to it, it too would be used very rarely. The company would never get its money back. Big pharma wants drugs that treat chronic conditions.

There is another problem. One drug that has had a dramatic increase of sales to the agricultural sector is tylosin, and it, by and large, has little use in human medicine, but it is in the class known as macrolides, and if resistance develops to tylosin, it is quite plausible that resistance will develop to all those in the macrolide class. The use of third- and fourth-generation cephalosporins in animals has jumped seriously, and these are essential to human medicine. Why are they used? Almost certainly because of direct marketing. These drugs are convenient to use, but they are by no means the most suitable. There have been large increases in the use of tetracyclines and aminoglycosides in the agricultural sector; the latter class includes streptomycin. This report shocks me because of direct experience. When she was about 40, my wife got severe brucellosis, and the only cure then was serious doses of both tetracyclines and streptomycin. At the time it was a close call as to which would die first: Claire or the bacteria. Fortunately the bacteria did, but brucella live in animals, and I would hate to see them become resistant.

Will the worst-case scenario actually happen? I don't know, and hopefully it won't. Nevertheless, from a strategic point of view, don't we want to optimize our chances of avoiding disaster? And it is here that the problem is most apparent, because the sufferers of the disaster scenario are not the current beneficiaries. We have an economic model that is almost designed to maximize the chances of disaster. It is not time to panic, but equally, it is not the time to continue being stupid. If we want to ensure our medicine does not descend into a state where serious surgery is to be avoided, should we not be cautious and defend what we have? Or do we say, let the corporations make what they can now, not worry, and hope we never need the antibiotics?

The day after first posting this, the Huffington Post reported: ‘THE END OF THE ROAD FOR ANTIBIOTICS’

Free enterprise at work: the vulture fund.

We have all heard of financial goings-on in Russia that make people turn up their noses in disgust. There is no doubt that huge amounts of wealth have been accrued there by a very limited number of favoured people for no good reason; the wealth was accrued through political favours and gifts on a huge scale. Everyone blames Putin, and while I feel he has hardly shone in cleaning this mess up, I still put the majority of the blame on Yeltsin. Nevertheless, there are many who seem to think that is a sign of the general immorality of Russians. That could not happen in the West. If you think that, think again. A recent item on the Huffington Post introduces the reader to the vulture fund. http://www.huffingtonpost.com/2016/05/12/vulture-fund-lobbying_n_9954612.html

A vulture fund is one that picks up some debt-ridden entity that everyone else has written off, and perforce gets it for very little. The vulture part comes from the fact that they are feeding off what is generally recognized as expired. You may think that is fair enough; they are taking a huge risk, and they get it cheap because it is worth essentially nothing. If that were all there is to it, I would agree wholeheartedly, but it is not. What these funds do is buy the scrip extremely cheaply, and then spend very large amounts of money to make the scrip valuable, at which point they sell at a huge profit. Nothing wrong with that if the money was spent on the entity, to sort it out, stop aberrant behaviour, etc. An example of such virtuous vulturing is what Steve Jobs did for Apple. Apple was heading for disaster at an astonishing rate before he returned, and with considerable effort he turned the carcass into one of the most valuable companies in the world. That is great.

But that is not what I am referring to. The bad vultures do not spend their money on making the entity work well again. They spend their money on influencing the decisions that will be made as to its future, and in particular to bailouts. So when a bailout comes, the vultures pass go and bank a huge profit by selling. The bailouts tend to be direct gifts to the vultures.

How well this will work will depend on the nature of the asset. If the asset is inherently healthy, they may present publicity that disses it, which may lead the market to get out of the asset, and their short position rolls in the loot. This particular method presents an interesting position for me, because I am currently writing a thriller-type novel in which something similar happens, and here I read an article that shows what I was thinking is actually happening. (Except my method of "dissing" is a bit more criminal. If you are writing a thriller, there is no point in everybody being legal.) So much for my imagining that you can be original when the objective is to gather loot. I am afraid I must consign myself to the barely amateur class.

Now, suppose the vultures somehow access an asset, directly or indirectly, that the public does not want to die, such as something related to health that in the end cannot be left to rot. The vultures then lobby for political action to prevent death, and then, for the politicians to come out of it without egg on face, they have the debts cleared, to the fund, of course. Paid for by, you guessed it, the taxpayer. At least that is the plan. According to that article, billions in profits have been made. The pressure will vary, but the key element in most such transactions is that they would not work unless the government helped.

Also of interest is the fact that hedge funds and their like appear to get special tax privileges. This has the effect of giving them a huge advantage over others who might compete with them, which raises two questions. The first is, why the special privileges? What do they do for society that is so valuable? Sucking up money and making a few very, very rich is not an adequate answer for me. The second question is, how did they get these tremendous tax advantages? Now, surely you do not suspect this to have arisen as a consequence of political lobbying, do you? That would be, well, up there with what you are accusing Putin of doing, and that could not happen in the West, could it?

Of course the hedge funds do not always get their own way. The article cited has an interesting few sentences on the Puerto Rican debt issue. Now Puerto Rico is a US territory, so you might think that being a territory of the richest country on the planet might mean a good infrastructure. The article cites the Eleanor Roosevelt Elementary School, a good solid US name, in which the electricity supply is such that they cannot run two computers at the same time and there are no lights when it rains. You don’t want to hear about the major trauma centre. Nevertheless, some hedge funds will probably take a bath. Why? What has happened is that two groups of hedge funds have apparently taken opposite bets, and thanks to all the lobbying and counter-lobbying, nothing is happening. The situation for the Puerto Ricans is not helped by the fact that the issue is tied down by a Constitutional argument in the Republican majority Congress. Paul Ryan apparently wants to help Puerto Rico, on the basis that Article IV of the Constitution stipulates Congress has power “to dispose of and make all needful rules and regulations respecting the territory.” Those on the right argue the financial problems are their own, and, moreover, what is “the territory”? They argue there is nothing in the Constitution that specifies Puerto Rico, so don’t do anything. Presumably the argument is, leave it to the market, i.e. the hedge funds, and all will be, if not well, at least elsewhere.

This is a clear reason why you cannot leave everything to the market: too much power gets concentrated in few hands, and those few hands have only one objective: to make more money. That does not mean Communism is the answer. What it does mean is that the government ought to act as a counterbalance and stop excessive misuse of money.

Are Bell’s Inequalities really violated by rotating polarizer experiments?

In a previous post on Bell’s Inequalities, I argued that the inequalities could be derived from two premises:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C.

Since a violation of a mathematical relationship falsifies it, and since tests on entangled particles are alleged to comply with these two premises yet the inequalities were violated, either one of these premises was violated, or a new mathematical relationship is required. In this post I shall present one argument that the experiments that involve rotating polarizing detectors, with the classic experiment of Aspect et al. (Phys. Rev. Lett. 49, 91–94, 1982) as an example, did not, as claimed, show violations of the inequality.

Before proceeding, I note this argument does not in any way deny entanglement, nor does it say anything about locality/non-locality. I am merely arguing there is a logic mistake in the way the results are generally interpreted. To proceed, let me clarify my definition of entanglement:

An entangled pair of particles is such that certain properties are connected by some rule such that when you know the value of a discrete property of one particle, you know the value of the other particle, even though you have not measured it.

Thus if one particle is found to have a clockwise spin, the spin of its entangled partner MUST be either clockwise or anticlockwise, depending on the rule; otherwise the particles are not entangled. That there are only two discrete values for properties such as spin or polarization means you can apply Bell's Inequality, which for purposes of illustration we shall test in the form

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

The Aspect experiment tested this by making a pair of entangled photons, and the way the experiment was set up, the rule was, each photon in an entangled pair would have the same polarization. The reason why they had entangled polarization was that the photons were generated from an excited 4P spin-paired state of calcium, which in sequence decayed to the spin-paired 4S state. The polarization arose from the fact that when decaying from a P state to an S state, each electron loses one quantum of action associated with angular momentum, and since angular momentum must be conserved, the photon associated with each decay must carry it away, and that is observed as polarization. There is nothing magic here; all allowed emissions of photons from electronic states involve a change of angular momentum, and usually of one quantum of action.

What Aspect did was to assign a position for the polarization detectors such that A was assigned to be vertical for the + measurement, say, 12 o'clock, and horizontal, say 3 o'clock, for the – measurement. The position B was to rotate the detectors clockwise by 22.5°, and for C, to rotate by 45°. There is nothing magic about these rotations, but they are chosen to maximise the chances of seeing the effects. So you do the experiment and what happens? All detectors count the same number of photons. The reason is, the calcium atoms have no particular orientation, so pairs of photons are emitted in all polarizations. What has happened is the first detector has detected half the entangled pairs, and the second the other half. We want only those second photons whose entangled partners were detected by the first detector, so at the (-) detector we count only photons that arrive within 19 ns of a photon registered at the (+) detector. Then we find, as required, that if the first detector is at A(+), no photons come through at A(-). That was very nearly the case.

Given that we count only measured photons, the law of probability requires A(+) = 1; A(-) = 0. The same will happen at B, and at C. (Some count all photons, so their probabilities are the ones that follow, divided by two.) There would be the occasional equipment failure, but for the present let's assume all went ideally. This occurs because, if we apply the Malus law to polarized photons, with the two filters at right angles, the equipment working ideally, the two photons having the same polarization, and only those photons at the second detector that are entangled with those at the first being counted, then zero photons go through the second filter. What is so special about the Malus law? It is a statement of the law of conservation of energy for polarized light, or the conservation of probability at 1 per event.

Now, let us put this into Bell's Inequality, from three independent measurements, because the minus determinations are all zero: {A(+).B(-) + B(+).C(-)} = 0 + 0, while A(+)C(-) = 0. We now have 0 + 0 ≧ 0, in accord with Bell's inequality.

What Aspect did, however, was to argue that we can do joint tests and measure A(+) and B(-) on a set of entangled pairs. The proposition is, if we leave the first polarizing detector at A(+), but rotate the second, we can score B(-) at the same time. Let the difference in clockwise rotations of the detectors be θ, thus in this example θ = 22.5°. Following some turgid manipulations with the state vector formalism, or by simply applying the Malus law, if A(+) = 1, then B(-) = sin²θ, and if we do the same for the others, we find,

{A(+).B(-) + B(+).C(-)} = 0.146 + 0.146, while A(+)C(-) = 0.5. Oops! Since 0.292 is not greater than or equal to 0.5, Bell's inequality appears to be violated. At this point, I believe we should carefully re-examine what the various terms mean. In one of Bell's examples (washing socks!) the socks undergoing the tests at A, B and C were completely separate subsets of all socks, and if we label these as a, b and c respectively, we can write {a} = ~{b, c}; {b} = ~{a, c}; {c} = ~{a, b}, where the brackets {} indicate sets. What Bell did with the sock washing was to take the result A(+) from the subset {a} and B(-) from the subset {b}, and so on. But that is not what happened in the Aspect experiment, because as seen above, when we do that we have the result 0 + 0 ≧ 0. So, does this variation have an effect? In my opinion, clearly yes.
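The 0.146 and 0.5 figures above come straight from the Malus law, as a quick check (my own, simply reproducing the arithmetic) confirms:

```python
import math

# Joint probabilities quoted in the text, from the Malus law: with the first
# detector fixed and the second rotated by theta, the cross term is sin^2(theta).
def sin2(deg):
    return math.sin(math.radians(deg)) ** 2

AB = sin2(22.5)   # A(+)B(-), detectors 22.5 degrees apart
BC = sin2(22.5)   # B(+)C(-), the same geometry rotated by another 22.5 degrees
AC = sin2(45.0)   # A(+)C(-), detectors 45 degrees apart

print(f"A(+)B(-) + B(+)C(-) = {AB + BC:.3f}")      # about 0.29
print(f"A(+)C(-)            = {AC:.3f}")           # 0.500
print("left side >= right side?", AB + BC >= AC)   # False: the claimed violation
```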

My first criticism of this is that the photons that give the B(-) determination are not those entangled with the B(+) determination. By manipulating things this way, B(+) + B(-) > 1. Previously, we decided that 1 represented the fact that an entangled pair was detected only once during a B(+) + B(-) determination, because the minus indicates "photons not detected", but the sum has grown beyond 1 through rotating the B(-) filter. If we recall the derivation, we used the fact that B(+) + B(-) = 1. Our experiment has not only violated Bell's Inequality, it has violated our derivation of it.

Let us return to the initial position: the first detector vertical, the second horizontal, and we interpret that as A(-) = 0. That means that no photons entangled with those assigned as A(+) are recorded, and all photons actually recorded there are the other half of the photons, i.e. ¬A(+), or 0*A(+). Now, rotate the second detector by 90°. Now it records all the photons that are entangled with the selection chosen by A(+). It is nothing more than an extension of the first detector, or part of the first detector translated in space, but chosen to detect the second photon. Its value is equivalent to that of A(+), or 0*A(-). Because the second photon to be detected is chosen as only those entangled with those detected at A(+), surely what is detected is still in the subset A, and what Aspect labelled as B(-) should more correctly be labelled 0.146*A(+), and what was actually counted includes 0.854*A(-), in accord with the Malus law. What the first detector does is to select a subset of half the available photons, which means A(+) is not a variable, because its value is set by how you account for the selection. The second detector applies the Malus law to that selection.

Now, if that is not bad enough, consider that the B(+).C(-) determination is an exact replica of the A(+).B(-) determination, but rotated by 22.5°. Now, you cannot prove 2[A(+)B(-)] ≧ A(+)C(-), so how can you justify simply rotating the experiment? The rotational symmetry of space says that simply rotating the experiment does not change anything. This fact is, from Noether's theorem, the source of the law of conservation of angular momentum; conservation laws arise from such symmetries. Thus the law of conservation of energy depends on the fact that if I do an experiment today I should get the same result if I do it tomorrow. The law of conservation of momentum depends on the fact that if I move the experiment to the other end of the lab, or to another town, the same result arises; moving the experiment somewhere else does not change anything physically. The law of conservation of angular momentum depends on the fact that if I orient the experiment some other way, I still get the same result. Therefore just rotating the experiment does not generate new variables. So, we have the rather peculiar fact that it is because of the rotational symmetry of space that we get the law of conservation of angular momentum, and that is why we assert that the photons are entangled. We then reject that symmetry in order to generate the required number of variables to get what we want.

Suppose there were no rotational symmetry? This happens with experiments involving compass needles, where the Earth’s magnetic field orients the needle. Now, further assume energy is conserved and the Malus law applies. If a thought experiment is carried out on a polarized source and we correctly measure the number of emitted photons, now we have the required number of variables, but we find, surprise, Bell’s Inequality is followed. Try it and see.

My argument is quite simple: Bell’s Inequality is not violated in rotating polarizer experiments, but logic is. People wanted a weird result and they got it, by hook or by crook.

Infested with Panamanian Rorts!

By now, most people in the reasonably developed world will have heard of Mossack Fonseca, the Panamanian law firm whose activities have led to New Zealand being called a tax haven. Well, right now it doesn’t feel like it to me, as it is time for me to start preparing my tax return. However, the history of this is quite interesting. After the first release of information, our Prime Minister, John Key, did his usual performance when something is not going his way: he smiled and said sort of derisively that NZ is not a tax haven, and there was nothing in this. His standard operating protocol is to dismiss the accusations and say not much more, on the basis that if he says nothing, it all runs out of steam. This time, however, it did not.

The next step was that the reason for the accusation became public. Part of New Zealand tax law is that if you do not earn anything in New Zealand and you do not reside here, you have no tax obligation. That seems reasonable. But what has happened, apparently, is that thanks to Fonseca and an Auckland law firm, Bentleys, a number of foreign trusts have been set up here. Now, since they have no tax obligations, they do not have to file a return, and that means there is a small black hole between the money and the owners of said money, and Fonseca has complicated the issue by having the New Zealand entity owning trusts elsewhere, and so on. What it does is make it very difficult to track down who the rich are that are using such mechanisms to evade tax. It is also important to note there are sometimes good reasons for foreign trusts. Those who live in a country where dictators are likely to confiscate everything if their political views are wrong have good reason to put their money offshore.

The answer to this, of course, is reasonably simple: the trusts should have all their owners and their activity declared. Total transparency does not worry those acting legally (given that tax authorities must maintain confidentiality, other than for prosecuting tax evasion). If there is a chain of trusts, the chain should be explicitly declared. So, why has this not been done?

Here comes the problem for Key. Apparently two or so years ago, our IRD started to consider what it should do about such trusts, as they were blossoming. A certain Mr Whitney, Key's personal lawyer (at least that was how it was described in the media initially, though there seem to be accusations that he is no longer a lawyer), approached Key and asked about the issue. (Nothing wrong there.) What Key replied is unclear; he says there were no current plans as far as he knew and Whitney should see the appropriate Minister. Fair enough. Then it is unclear what happened, because there is some accusation that Whitney asserted to the Minister that there should be no action. We have no real idea what happened, but the next thing is that the Minister stopped the IRD from future action on these trusts. Whitney, it appears, was another lawyer running such trusts.

Then, suddenly, Key bounces back, triumphant. Some of these trusts that people are complaining about, says he, have the Red Cross, or Greenpeace, or Amnesty International as beneficiaries. In an attempt to dispose of another problem with the same brush, namely the argument that Auckland's property boom is being driven in part by foreign trusts, Key announces that only 3% of sales are to foreigners. However, perhaps he should have thought before putting mouth into action. It turns out that the Red Cross, Greenpeace, Amnesty, etc, know nothing about these trusts. What the likes of Fonseca, or whoever draws up the trusts, have done is to name beneficiaries on these trusts to allay the suspicions of the tax authorities, but the designations are fraudulent. As for the property sales, it turns out that foreign trusts were excluded from the statistics. Why do house sales matter? Well, it appears people of unknown origin have turned up with suitcases of hundred-dollar notes to pay. Soon after, the house is sold, and since there is a bubble progressing, at a profit. Now, New Zealand has an excellent banking system, so what's the betting this is money laundering?

Obviously, we need more action from the government to make all this transparent. Tax evasion by the very rich is bad; the laundering of drug money is certainly worse. At this point I should emphasize that I am not in any way accusing Key of corruption. On the other hand, he made a personal fortune as a trader in one of those banks, so he is hardly likely to mount a crusade against such activities. And herein lies the problem. Sloth and indolence from those whom we vote in to make life fairer for all is just as bad as, if not worse than, actual involvement. A further curse of what we call democracy (but is not) is that politicians want to stay in power, and they have a vested interest in not annoying the people who give their party the big donations to mount election campaigns.

Of course, on the world stage, so far New Zealand’s role is rather small as far as New Zealand citizens go, at least as far as we know. But the impact of slothful regulation might be huge. As Ellen Zimiles, a former New York federal prosecutor noted, fraudsters like offshore because the lack of transparency makes it very difficult for investigators to get at the ultimate beneficiary. And, of course, New Zealand is hardly the only guilty party.

A Challenge! How can Entangled Particles violate Bell’s Inequalities?

The role of mathematics in physics is interesting. Originally, mathematical relationships were used to summarise a myriad of observations; thus from Newtonian gravity and mechanics it is possible to know where the moon will be in the sky at any time. But somewhere around the beginning of the twentieth century, an odd thing happened: the mathematics of General Relativity became so complicated that many, if not most, physicists could not use it. Then came the state vector formalism for quantum mechanics, a procedure that strictly speaking allowed people to come up with an answer without really understanding why. Then, as the twentieth century proceeded, something further developed: a belief that mathematics was the basis of nature. Theory started with equations, not observations. An equation, of course, is a statement: "A equals B" can be written with an equals sign instead of words. Now we have string theory, where a number of physicists have been working for decades without coming up with anything that can be tested. Nevertheless, most physicists would agree that if observation falsifies a mathematical relationship, then something has gone wrong with the mathematics, and the problem is usually a false premise. With Bell's Inequalities, however, it seems logic goes out the window.

Bell’s inequalities are applicable only when the following premises are satisfied:

Premise 1: One can devise a test that will give one of two discrete results. For simplicity we label these (+) and (-).

Premise 2: We can carry out such a test under three different sets of conditions, which we label A, B and C. When we do this, the results between tests have to be comparable, and the simplest way of doing this is to represent the probability of a positive result at A as A(+). The reason for this is that if we did 10 tests at A, 10 at B, and 500 at C, we cannot properly compare the results simply by totalling results.

Premise 1 is reasonably easily met. John Bell used washing socks as an example. The socks would either pass a test (e.g. they are clean) or fail (i.e. they need rewashing). In quantum mechanics there are good examples of suitable candidates, e.g. a spin can be either clockwise or counterclockwise, but not both. Further, all particles of a given type must have the same magnitude of spin; this is imposed by quantum mechanics. Thus an electron has a spin of either +1/2 or -1/2.

Premises 1 and 2 can be combined. By working with probabilities, we can say that each particle must register once, one way or the other (or each sock is tested once), which gives us

A(+) + A(-) = 1; B(+) + B(-) = 1;   C(+) + C(-) = 1

i.e. the probability of one particle tested once and giving one of the two results is 1. At this point we neglect experimental error, such as a particle failing to register.

Now, let us do a little algebra/set theory by combining probabilities from more than one determination. By combining, we might take two pieces of apparatus, and with one determine the (+) result under condition A, and with the other the (-) result under condition B. We take the product of these, because probabilities are multiplicative, so we can write

A(+) B(-) = A(+) B(-) [C(+) + C(-)]

because the bracketed term [C(+) + C(-)] equals 1, the sum of the probabilities of results that occurred under conditions C.

Similarly

B(+)C(-)   = [A(+) + A(-)] B(+)C(-)

By adding and expanding

A(+) B(-) + B(+)C(-) = A(+) B(-) C(+) + A(+) B(-) C(-) + A(+) B(+)C(-) + A(-)B(+)C(-)

= A(+)C(-)[B(+) + B(-)] + A(+)B(-)C(+) + A(-)B(+)C(-)

Since the bracketed term [B(+) + B(-)] equals 1 and the last two terms are positive numbers, or at least zero, we have

A(+) B(-) + B(+)C(-) ≧ A(+)C(-)

This is the simplest form of a Bell inequality. In Bell’s sock-washing example, he showed how socks washed at three different temperatures had to comply.
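As a minimal sketch of the counting version of this inequality (my own construction, not Bell's), the code below assigns definite (+)/(-) values under A, B and C to a population of items, Bell's socks in effect, and confirms by brute force that the count of (A+, B-) plus the count of (B+, C-) can never fall below the count of (A+, C-).

```python
import itertools, random

# Counting form of the inequality: for items carrying definite two-valued
# properties a, b, c, it must hold that N(A+, B-) + N(B+, C-) >= N(A+, C-).
def counts(population):
    n_ab = sum(1 for a, b, c in population if a == '+' and b == '-')
    n_bc = sum(1 for a, b, c in population if b == '+' and c == '-')
    n_ac = sum(1 for a, b, c in population if a == '+' and c == '-')
    return n_ab, n_bc, n_ac

random.seed(0)
outcomes = list(itertools.product('+-', repeat=3))   # the 8 possible items
for trial in range(10_000):
    population = [random.choice(outcomes) for _ in range(100)]
    n_ab, n_bc, n_ac = counts(population)
    assert n_ab + n_bc >= n_ac   # never fails

print("no violation found in 10,000 random populations")
# Any item counted in (A+, C-) has either b = '+' or b = '-', so it is also
# counted in one of the two left-hand terms; hence the inequality must hold.
```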

An important point is that, provided the samples in the tests give only one of two possible results, and provided the tests are applied under three sets of conditions, the mathematics says the results must comply with the inequality. Further, only premise 1 relates to the physics of the samples tested; the second is merely a requirement that the tests are done competently. The problem is, modern physicists say entangled particles violate the inequality. How can this be?

Non-compliance by entangled particles is usually considered a consequence of the entanglement being non-local, but that makes no sense because in the above derivation, locality is not mentioned. All that is required is that premise 1 holds, i.e. measuring the spin of one particle, say, means the other is known without measurement. So, the entangled particles have properties that fulfil premise 1. Thus violation of the inequality means either one of the premises is false, or the associative law of sets, used in the derivation, is false, which would mean all mathematics are invalid.

So my challenge is to produce a mathematical relationship that shows how these violations could conceivably occur. You must come up with a mathematical relationship or a logic statement that falsifies the above inequality, and it must include a term that specifies when the inequality is violated. So, any takers? My answer in my next Monday post.

Substitutes for fossil fuel

In my previous two posts I have discussed how we could counter climate change by reflecting light back to space, and some ways to take carbon dioxide from the atmosphere. However, there is another important option: stop burning fossil fuels, and to do that either we need replacement sources of energy, or we need to stop using energy. In practice, reducing energy usage and replacing the rest would seem optimal. We already have some options, such as solar power and wind power. New Zealand currently gets about 80% of its electricity from natural sources, the two main ones being hydro and geothermal, with wind power a more distant third. However, that won't work for many countries. Nuclear power is one option, and would be a much better one if we could develop a thorium cycle, because thorium reactors do not go critical, you cannot make bombs from the wastes, and the nuclear waste is a lot safer to handle, as the bulk of the radioactive wastes have very short half-lives. Thermonuclear power would be a simple answer, but there is a standard joke about that, which I might as well include:

A Princeton plasma physicist is at the beach when he discovers an ancient looking oil lantern sticking out of the sand. He rubs the sand off with a towel and a genie pops out. The genie offers to grant him one wish. The physicist retrieves a map of the world from his car, circles the Middle East and tells the genie, ‘I wish you to bring peace in this region’.

 After 10 long minutes of deliberation, the genie replies, ‘Gee, there are lots of problems there with Lebanon, Iraq, Israel, and all those other places. This is awfully embarrassing. I’ve never had to do this before, but I’m just going to have to ask you for another wish. This one is just too much for me’.

Taken aback, the physicist thinks a bit and asks, ‘I wish that the Princeton tokamak would achieve scientific fusion energy break-even.’

After another deliberation the genie asks, ‘Could I see that map again?’

So, although there is a lot of work to be done, the generation of electricity is manageable, so let's move on to transport. Electricity is great for trains, for vehicles that can draw power from a mains source, and for short-distance travel, but there is a severe problem for vehicles that store their electricity and have to do a lot of work between charges. Essentially, current batteries and fuel cells are too heavy and voluminous for the amount of charge they hold. There may be improvements, but most of the contenders have problems of either price or performance, or both. In my novel Red Gold, set during a future colonization of Mars, I used thermonuclear power as the primary source of electricity, and for transport I used an aluminium–chlorine fuel cell. That does not exist as yet, but I chose it because for power density per unit weight aluminium is probably optimal, and chlorine is a good choice of oxidizing agent because it would be a liquid on Mars and, under my refining scheme, there would be an excess of it. Chlorine has the added advantage that it reacts well with aluminium, and the aluminium chloride will contribute to the electrolyte. As it happens, since then someone has demonstrated an Al/Cl battery that works very well, so it might even be plausible, but not on Earth. One basic problem with such batteries is an odd one: the ions that have to move in the electrolyte usually interact strongly with any oxygen atoms in the electrolyte, thus slowing down and reducing the possible power output. That is another reason why I chose a chloride mechanism; it might be fiction, but I try to make the speculative science behind it at least based on some correct physics and chemistry.
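For what it is worth, a back-of-envelope calculation (my own rough figures, not a cell design) shows why the aluminium–chlorine couple is attractive on a weight basis; the number doing the work is an assumed Gibbs energy of formation of solid AlCl₃ of roughly −630 kJ/mol, and a real cell would deliver well under this thermodynamic ceiling.

```python
# Thermodynamic ceiling for the Al/Cl2 couple:  Al + 3/2 Cl2 -> AlCl3,
# taking delta_G_f(AlCl3, solid) ~ -630 kJ/mol as a rough assumed value.
dG = 630e3                    # usable energy per mole of AlCl3 formed, J
m_Al = 26.98e-3               # kg of aluminium per mole
m_Cl2 = 1.5 * 70.90e-3        # kg of chlorine per mole of AlCl3

per_kg_Al = dG / m_Al
per_kg_both = dG / (m_Al + m_Cl2)

print(f"~{per_kg_Al/1e6:.0f} MJ per kg of aluminium (~{per_kg_Al/3.6e6:.1f} kWh/kg)")
print(f"~{per_kg_both/1e6:.1f} MJ per kg of total reactants (~{per_kg_both/3.6e6:.1f} kWh/kg)")
# Roughly 23 MJ/kg on the aluminium alone and ~4.7 MJ/kg counting the chlorine,
# well above what today's commercial rechargeable batteries actually deliver.
```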

So, in the absence of very heavy-duty batteries, liquid fuels are very desirable. As it happens, I have worked in the area of biofuels (and summarised my basic thoughts in an ebook, Biofuels), and a little basic arithmetic shows that to replace our current usage of oil, even assuming the most optimistic technology, we would need additional productive land roughly equal to our total arable farmland, and that is simply not going to happen. That does not mean that biofuels cannot contribute, but it does mean we need to reduce the load.
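Here is the sort of arithmetic that claim rests on, with my own ballpark inputs clearly marked as assumptions rather than the book's figures: global oil use of about 95 million barrels a day at roughly 6 GJ per barrel, a very optimistic biofuel yield of about 150 GJ per hectare per year, and roughly 1.4 billion hectares of arable land worldwide.

```python
# Back-of-envelope land requirement for replacing oil with biofuels.
# All inputs below are rough assumptions for illustration only.
oil_barrels_per_day = 95e6     # global oil consumption, barrels/day
energy_per_barrel = 6.1e9      # J per barrel of oil
biofuel_yield = 150e9          # J per hectare per year (very optimistic)
arable_land_ha = 1.4e9         # global arable land, hectares

oil_energy_per_year = oil_barrels_per_day * energy_per_barrel * 365
land_needed_ha = oil_energy_per_year / biofuel_yield

print(f"oil energy used per year: {oil_energy_per_year/1e18:.0f} EJ")
print(f"land needed for biofuels: {land_needed_ha/1e9:.1f} billion hectares")
print(f"fraction of current arable land: {land_needed_ha/arable_land_ha:.1f}")
# Roughly one full extra 'arable planet' of productive land -- which is the
# point being made in the text.
```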

There is more than one way to do that. In one of my novels I came up with the answer of having everyone live closer to work. Where I live, during the rush hours there are streams of cars going in opposite directions. If they all lived closer to work, this would be unnecessary. Everyone says, use public transport, except that if you do, you see the trains are choked at that time of day. Such an option would require a lot of social engineering, because the bosses want work done at centres where they think it should be done, while the workers cannot afford to live anywhere even vaguely nearby. People tend to object to social engineering, and politicians will not impose it on the bosses.

As mentioned in my last post, a slightly better option is to grow algae. Some of these are the fastest-growing plants on the planet, and of course as far as area is concerned, the oceans are unlimited, at least at present. Accordingly, it should be possible in theory to solve this energy problem. The problem, though, is that the technologies I have recommended here all require serious development. We know in principle how they all should work, except possibly nuclear fusion, but we do not know how to put the technology into a useful form. Meanwhile, with the low price of oil there is no incentive. Here, the answer is clear: a serious carbon tax is required on fossil fuels. I would like to see the resultant money at least in part spent on developing potential technologies. Maybe this is my personal bias coming through – the promising algal technology I was working on collapsed when fund-raising was scheduled for the end of 2007, and thanks to Lehmans, that was not going to succeed. I am not alone. I am familiar with at least three other technologies in which I had no involvement but which looked extremely promising, and they ran out of funding. As a society, can we afford the waste?