Martian Fluvial Flows, Placid and Catastrophic

Although, apart from localized dusty surfaces in summer, the surface of Mars has had average temperatures that never exceeded about minus 50 degrees C over its lifetime, it has nevertheless hosted some quite unexpected fluid systems. One of the longest river systems starts in several places at approximately 60 degrees south in the highlands, nominally one of the coldest spots on Mars, drains into Argyre, thence through the Holden and Ladon Valles, then stops, apparently dropping massive amounts of ice in the Margaritifer Valles, which are at considerably lower altitude and just north of the equator. Why does a river start at one of the coldest places on Mars and freeze out at one of the warmest? There is evidence of ice having been in the fluid, which means the fluid must have been water. (Water is extremely unusual in that the solid, ice, floats on the liquid.) These fluid systems flowed, although not necessarily continuously, for a period of about 300 million years, then stopped entirely, although there are other regions where fluid flows probably occurred later. To the northeast of Hellas (the deepest impact basin on Mars) the Dao and Harmakhis Valles change from prominent, sharp channels to diminished, muted flows at an altitude of minus 5.8 km that resemble terrestrial marine channels beyond river mouths.

So, how did the ice melt? For the Dao and Harmakhis, the Hadriaca Patera (a volcano) was active at the time, so some volcanic heat was probably available, but that would not apply to the systems starting in the southern highlands.

After a prolonged period in which nothing much happened, there were catastrophic flows that ran for up to 2000 km, forming channels up to 200 km wide, which would require discharges of approximately 100,000,000 cubic meters per second. For most of those flows, there is no obvious source of heat. Only ice could provide the volume, but how could so much ice melt with no significant heat source, be held without re-freezing, then be released suddenly and explosively? There is no sign of significant volcanic activity, although minor activity would not be seen. Where would the water come from? Many of the catastrophic flows start from the Margaritifer Chaos, so the source of the water could reasonably be the earlier river flows.
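
To put a rough scale on that figure, a channel discharge is just width times depth times speed. The width comes from the channels themselves; the depth and speed below are merely plausible round numbers for illustration, not measured values:

```python
# Back-of-envelope discharge check; depth and speed are assumed values.
width_m = 200_000      # channels up to 200 km wide (from the post)
depth_m = 50           # assumed effective flow depth
speed_m_s = 10         # assumed mean flow speed
discharge_m3_s = width_m * depth_m * speed_m_s
print(f"{discharge_m3_s:.0e} m^3/s")  # 1e+08, the order of magnitude quoted
```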

There was plenty of volcanic activity about four billion years ago. Water and gases would be thrown into the atmosphere, and the water would snow or ice out predominantly in the coldest regions. That gets water to the southern highlands, and to the highlands east of Hellas. There may also be geologic deposits of water. The key now is the atmosphere. What was it? Most people say it was carbon dioxide and water, because that is what modern volcanoes on Earth give off, but the mechanism I suggested in my “Planetary Formation and Biogenesis” was that the gases would originally be reduced, that is, mainly methane and ammonia. The methane would provide some sort of greenhouse effect, but ammonia, on contact with ice at minus 80 degrees C or above, dissolves in the ice and makes an ammonia/water solution. This, I propose, was the fluid. As the fluid goes north, winds and warmer temperatures would drive off some of the ammonia, so, oddly enough, as the fluid gets warmer, ice starts to freeze. Ammonia in the air will go on to melt more snow. (This is not all that happens, but it should happen.) Eventually, the ammonia has gone, and the water sinks into the ground, where it freezes out into a massive buried ice sheet.
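
The direction of the effect can be sketched with a toy freezing curve. The numbers below are purely illustrative placeholders, not real phase data (the actual ammonia/water system has a eutectic somewhere near minus 100 degrees C); the point is only the trend: strip out ammonia and the freezing point rises, so the fluid freezes even as it moves to warmer ground.

```python
# Toy model of the ammonia/water proposal. The linear freezing curve is
# a placeholder; it only captures "less ammonia -> higher freezing point".
def freezing_point_c(nh3_wt_fraction):
    """Very rough freezing point (deg C) of an ammonia/water solution."""
    return -300.0 * min(nh3_wt_fraction, 0.33)

ambient_c = -60.0  # assumed local temperature as the fluid moves north
for wt in (0.30, 0.20, 0.10, 0.05):
    state = "still liquid" if freezing_point_c(wt) < ambient_c else "ice freezes out"
    print(f"{wt:.0%} NH3 -> freezes at {freezing_point_c(wt):6.1f} C: {state}")
```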

If so, we can now see where the catastrophic flows came from. We have the ice deposits where required. We now require at least fumaroles to be generated underneath the ice. The Margaritifer Chaos is within plausible distance of major volcanism, and of tectonic activity (near the mouth of the Valles Marineris system). Now, let us suppose the gases emerge. Methane immediately forms clathrates with the ice (it enters the ice structure and sits there) because of the pressure. The ammonia dissolves ice and forms a small puddle below. This keeps going over time, but as it does, the amount of water increases and the amount of ice decreases. Eventually there comes a point where there is insufficient ice to hold the methane, and pressure builds up until the whole system ruptures and the mass of fluid pours out. With the pressure gone, the remaining clathrates start breaking up explosively. Erosion is caused not only by the fluid, but by exploding ice.
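
A crude mass balance shows why the rupture would be sudden rather than gradual. All rates and the storage ratio below are hypothetical; the only point is that methane accumulates while its clathrate storage (which needs ice) shrinks, so the two curves must eventually cross.

```python
# Hypothetical rates; only the crossing behaviour matters.
ice, water, methane = 1000.0, 0.0, 0.0
MELT_RATE = 1.0    # ice converted to ammonia/water per step
CH4_RATE = 0.1     # methane arriving from fumaroles per step
STORAGE = 0.2      # methane one unit of ice can lock up as clathrate

step = 0
while methane <= STORAGE * ice:   # clathrates can still hold the gas
    ice -= MELT_RATE
    water += MELT_RATE
    methane += CH4_RATE
    step += 1

print(f"rupture at step {step}: water={water:.0f}, ice left={ice:.0f}")
```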

The point then is, is there any evidence for this? The answer is, so far, no. However, if this mechanism is correct, there is more to the story. The methane will be oxidised in the atmosphere to carbon dioxide by solar radiation and water. Ammonia and carbon dioxide will combine and form ammonium carbonate, then urea. So if this is true, we expect to find, buried where there had been water, deposits of urea, or whatever it converted to over three billion years. (Very slow chemical reactions are essentially unknown; chemists do not have the patience to do experiments over millions of years, let alone billions!) There is one further possibility. Certain metal ions complex with ammonia to form ammines, which dissolve in water or ammonia fluid. These would sink underground, and if the metal ions were there, the remains of the ammines might still be there now. So we have to go to Mars and dig.

Book Discount

From January 23 – 30, my thriller, The Manganese Dilemma, will be discounted to 99c/99p on Amazon. 

The Russians did it; everyone is convinced of that. But just exactly what did they do? Charles Burrowes, a master hacker, is thrown into a ‘black op’ with the curvaceous Svetlana for company, to validate new super stealth technology she has brought to the West. Some believe there is nothing there, since their surveillance technology cannot show any evidence of it, but then it is “super stealth”, so just maybe . . . Also, Svetlana’s father was shot dead as they made their escape. Can Burrowes provide what the CIA needs before Russian counterintelligence or a local criminal conspiracy blows the whole operation out of the water? The lives of many CIA agents in Russia will depend on how successful he is.

Scientific Journals Accused of Behaving Badly

I discussed peer review in my previous post, and immediately came across an article on “Predatory Publishing” (Science 367, p 129). It reports that six out of ten articles published in a sample of what it calls “predatory” journals received no citations, i.e. nobody published a further paper referring to the work in those articles. The only reasonable inference to be drawn from what followed was that this work was very much worse than that published in more established journals. So, first, what are “predatory” journals? Are they inherently bad? Is the work described there seriously worse than in the established ones, or is this criticism more a defence of the elite positions of some? I must immediately state I don’t know, because the article did not give specific examples that I could analyse, of either the science or the journals, although the chances are I would not have been able to read such journals anyway. There are so many journals out there that libraries are generally restricted by finance in what they purchase.

Which gets to the first response: maybe there are no citations because nobody is reading the articles, because libraries do not buy the journals. There can, of course, be other good reasons why a paper is not cited, in that the subject may be of very narrow interest, but it was published to archive a fact. I have some papers that fit that description. For a while I had a contract to establish the chemical structures of polysaccharides from some New Zealand seaweeds and publish the results. If the end result is clearly correct, and if the polysaccharide was unique to a seaweed found only in New Zealand and had no immediate use, why would anyone reference it? One can argue that the work ended up being not that interesting, but before starting I did not know that; after completion I did, and by publishing, so will everyone else. If the results never have any use, well, at least we know why. From my point of view, the papers were useful; I had a contract and I fulfilled it. When you are independent and do not have such things as a secure salary, contracts are valuable.

The article defined a “predatory” journal as (a) one that charges to publish (page charges were well established in the mainstream journals); (b) one that uses aggressive marketing tactics (so do the mainstream journals); and (c) one that offers “little or no peer review” (I have no idea how they reached this conclusion, because peer review is not open to examination). As an aside, the marketing tactics of the big conglomerates are not pretty either, but they have the advantage of being established, and libraries cannot usually bring themselves to stop using them, as the subscription is “all or nothing” with a lot of journals involved, at least one or two of which are essential for a university.

The next criticism was that these upstarts were getting too much attention. And, horrors, 40% of the articles drew at least one citation. You can’t win against this sort of critic: it is bad because articles are not cited, and bad because they are. I find citations a bad indication of importance. Many scientists in the West cite their friends frequently, irrespective of whether the article cited has any relevance, because they know nobody checks, and the number of citations is important in the West for getting grants. You cite them, they cite you, everybody wins, except those not in the loop. It is a self-help game.

The next criticism is that there are too many of them. Actually, the same could be said of mainstream journals; take a look at the number of journals from Elsevier. Even worse, many of these upstarts come from Africa and Asia. How dare they challenge our established superiority! Another criticism: the articles are not cited in Wikipedia. As if citations in Wikipedia were important. So why do scientists in Africa and Asia publish in such journals? The article suggests an answer: publication is faster. Hmm, fancy going for better performance! If that is a problem, the answer would surely be to fix it in the “approved” journals, but that is not going to happen any time soon. Also, from the Africans’ perspective, their papers may well be more likely to be rejected in the peer review of Western journals because they are not using the most modern equipment, in part because they can’t afford it. The work may be less interesting to Western eyes, but is that relevant if it is interesting in Africa? I can’t help but think this article was more a sign of “protecting their turf” than of trying to improve the situation.

Peer Review – a Flawed Process for Science

Back from a Christmas break, and I hope all my readers had a happy and satisfying one. 2020 has arrived, more with a bang than a whimper, and while languishing in what has started off as a dreadful summer here, thanks to Australia (the heat over Central Australia has stirred up the Southern Ocean to give us cold air, while the bush fires have given us smoky air, even though we are about two-thirds of the width of the Atlantic away), I have been thinking about how science progresses, or doesn’t. One of the thoughts that crossed my mind was the assertion that we must believe climate change is real because the data are published in peer-reviewed journals. Climate change is certainly real, and Australia is certainly on fire, but what do you make of the reference to peer-reviewed journals? Does such publication mean the work is right, and that peer review is some sort of gold standard?

Unfortunately, that is not necessarily so, and while the process filters out some abysmal rubbish, it also lets through some fairly mediocre stuff, although we can live with that. If the work reports experimental observations, we should have more faith in it, right? After all, it will have been looked at by experts in the field who use the same techniques, and they will filter out errors. There are two reasons why that is not so. The first is that the modern scientific paper, written to save space, usually gives insufficient evidence to tell. The second is illustrated by climate change: there are a few outlets populated solely by deniers, in which one denier reviews another’s work; in other words, prejudice rules.

Chemistry World reported a study carried out by the Royal Society of Chemistry that reviewed the performance of peer review and came to the conclusion that peer review is sexist. Females as corresponding authors made up 23.9% of submissions, but 25.6% of the rejections without peer review, and only 22.9% of the papers accepted after peer review came from female corresponding authors. Female corresponding authors are less likely to receive an immediate “accept” or “accept with minor revisions”, though interestingly, if the reviewer is female, males are less likely to receive one. These figures come from 700,000 submissions, so although the percentage differences are not very big, the question remains: are they meaningful, and if so, what do they mean?
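
For what it is worth, with samples that size the gap is certainly not chance. The study’s count of desk rejections was not quoted, so the figure below is an assumed round number, but the conclusion is insensitive to it:

```python
# Crude one-sample check: is the female share of desk rejections (25.6%)
# compatible with the female share of submissions (23.9%)?
# n_rejections is assumed; only the 700,000 total was quoted.
from math import sqrt

p_subs = 0.239           # female share of submissions (quoted)
p_rej = 0.256            # female share of desk rejections (quoted)
n_rejections = 100_000   # assumed round figure

se = sqrt(p_subs * (1 - p_subs) / n_rejections)
print(f"z = {(p_rej - p_subs) / se:.1f}")  # ~12.6 standard errors
```

Statistical significance at this scale is close to automatic; whether the difference indicates bias is exactly the open question.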

There is a danger in drawing conclusions from statistics, because correlation does not mean cause. It may be nothing more than that women are more likely to be younger, and hence, being early in their careers, more likely to need help, or more likely to have sent the paper to a less than appropriate journal, since journals tend to publish only in very narrow fields. It could also indicate that style is more important than substance, because the only conceivable difference with a gender bias is the style used in presentation. It would be of greater interest to check how status affects the decision. Is a paper from Harvard, say, more likely to be accepted than a paper from a minor college, or from somewhere non-academic, such as a patent office?

One of my Post-doc supervisors once advised me that a new idea will not get published, but publication is straightforward if you report the melting point of a new compound. Maybe he was a little bitter, but it raises the question, does peer review filter out critical material because it does not conform to the required degree of comfort and compliance with standard thinking? Is important material rejected simply because of the prejudices or incompetence of the reviewer? What happens if the reviewer is not a true peer? Irrespective of what the editor tells the author, is a paper that criticizes the current paradigm rejected on that ground? I have had some rather pathetic experiences, and I expect a number of others have too, but the counter to that is, maybe the papers had insufficient merit. That is the simple out, after all, who am I?

Accordingly, I shall end by citing someone else. This relates to a paper about spacetime, which at its minimum is a useful trick for solving the equations of General Relativity. However, for some people spacetime is actually a “thing”; you hear about the “fabric of spacetime”, and in an attempt to quantize it, scientists have postulated that it exists in tiny lumps. In 1952 an essay was written against the prevailing view that spacetime is filled with fields that froth with “virtual particles”. I don’t know whether it was right or not, because nobody would publish it, so it is not to be discussed in polite scientific society. It was probably rejected because it went totally against the prevailing view, and we must not challenge that. And no, it was not written by an ignorant fool, although it should have been judged on content and not on the status of the person. The author was Albert Einstein, who could legitimately argue that he knew a thing or two about General Relativity, so nobody is immune to such rejection. If you want to see such prejudice in action, try arguing that quantum field theory is flawed in front of an advocate. You will be sent to the corner wearing a conical hat. The advocate will argue that the theory has calculated the magnetic moment of the electron, and this is the most accurate calculation in physics. The counter is: yes, but only through some rather questionable mathematics (like cancelling out infinities), while the same sort of calculation applied to the vacuum energy gives an error in the cosmological constant of about 120 orders of magnitude (1 followed by 120 zeros), the worst error in all of physics. Oops!

A Different Way of Applying Quantum Mechanics to Chemistry

Time to explain what I presented at the conference, and to start, there are interpretive issues to sort out. The first issue is whether there is a physical wave or there is not. While there is no definitive answer to this, I argue there is, because something has to go through both slits in the two-slit experiment, and the evidence is that the particle always goes through only one slit. That means something else should be there.

I differ from the pilot wave theory in two ways. The first is mathematical. The wave is taken as complex because its phase is. (Here, complex means a value includes the square root of minus one, sometimes called an imaginary number for obvious reasons.) However, Euler, the mathematician who really developed complex numbers, showed that if the phase evolves, as waves always do by the definition of an oscillation and as action does with time, there are two points per cycle that are real, and these are at the antinodes of the wave. That means the amplitude of the quantal matter wave should have real values there. The second difference is that if the particle is governed by a wave pulse, the two must travel at the same velocity. If so, and if the standard quantum equations apply, there is a further energy, equal in magnitude to the kinetic energy of the particle. This could be regarded as equivalent to Bohm’s quantum potential, except there is a clear difference: this one has a definite value. It is a hidden variable in that you cannot measure it, but then again, so is potential energy; you have never actually measured that.
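
For those who like symbols, the Euler point is compact. Writing the wave in the standard form with the phase given by the action, the exponential is purely real twice per cycle:

```latex
\[
\psi = A \exp\!\left(\frac{i S}{\hbar}\right), \qquad
e^{i\theta} = \cos\theta + i\sin\theta, \qquad
e^{i\theta} = \pm 1 \ \text{ when } \theta = n\pi .
\]
```

At those two points per cycle, the antinodes on this reading, the amplitude A takes a real value.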

This gives a rather unexpected simplification: the wave behaves exactly as a classical wave, in which the square of the amplitude, a real number, gives the energy of the particle. This is a big advance over the probabilistic interpretation because, while that may be correct, the standard wave theory would say that A^2 = 1 per particle; that is, the probability of one particle being somewhere is 1, which is not particularly informative. The difference now is that for something like the chemical bond, the probabilistic interpretation requires all values of ψ1 to interact with all values of ψ2; the guidance wave method merely needs the magnitude of the combined amplitude. Further, the wave refers only to a particle’s motion, and not to a state (although the particle motion may define a state). This gives a remarkable simplification for the stationary state, and in particular the chemical bond. The usual way of calculating this involves calculating the probabilities of where all the electrons will be, then calculating the interactions between them, recalculating their positions with the new interactions, recalculating the new interactions, and so on; throughout this procedure, getting the positional probabilities requires double integrations because forces give accelerations, which means constants have to be assigned. This is often done, following Pople, by making the calculations fit similar molecules and transferring the constants, which to me is essentially an empirical assignment, albeit disguised with some fearsome mathematics.
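
The contrast, written out side by side: for a classical wave the squared amplitude is an energy, while the Born interpretation normalises it to a probability:

```latex
\[
E \propto A^2 \ \text{(classical wave)}, \qquad
\int \psi^* \psi \, d\tau = 1 \ \text{(Born: one particle is somewhere)} .
\]
```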

The approach I introduced was to ignore the position of the electrons and concentrate on the waves. The waves add linearly, but you introduce two new interactions that effectively require two new wave components. They are restricted to the region between the nuclei, the reason being that the force on nucleus 1 from atom 2 must be zero, otherwise it would accelerate and there would be no bond. The energy of an interaction is the energy added to the antinode. For a hydrogen molecule, the electrons are indistinguishable, so the energies of the two new interactions are equivalent to the energies of the two from the linear addition; therefore the bond energy of H2 is 1/3 the Rydberg energy. This is out by about 0.3%. We get better agreement using potential energy, and introducing electric field renormalisation to comply with Maxwell’s equations. Needless to say, this did not generate much excitement from the conference audience. You cannot dazzle with obscure mathematics doing this.
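
The arithmetic is easy to check. The reference bond energy used below (about 4.52 eV) is simply the figure implied by the 0.3% discrepancy quoted above, not an independently asserted measurement:

```python
# Check of the 1/3-Rydberg bond energy figure for H2. The reference
# value is back-derived from the ~0.3% quoted above.
RYDBERG_EV = 13.6057              # hydrogen ionisation energy, eV
predicted = RYDBERG_EV / 3        # one third of the Rydberg energy
reference = 4.52                  # eV, implied by the quoted ~0.3%
error_pct = 100 * (predicted - reference) / reference
print(f"predicted {predicted:.3f} eV, discrepancy {error_pct:.2f}%")
```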

The first shock for the audience came when I introduced my Certainty Principle, which, roughly stated, is that the action of a stationary state must be quantized, and further, because of the force relationship I introduced above, the action must be that of the sum of the participating atoms. Further, the periodic time of the initial electron interactions must be constant (because their waves add linearly, a basic wave property). The reason you get bonding from electron pairing is that the two electrons halve the periodic time in that zone, which permits twice the energy at constant action, or the addition of the two extras as shown for hydrogen. That also contracts the distance to the antinode, in which case the appropriate energy can only arise because the electron-electron repulsion compensates. This action relationship is also the first time a real cause has been proposed for there being a bond radius that is constant across a number of bonds. The repulsion energy, which is such a problem if you consider it from the point of view of electron position, is self-correcting if you consider the wave aspect: the waves cannot add linearly and maintain constant action unless the electrons provide exactly the correct compensation, and the proposition is that the guidance waves guide them into doing that.
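
One way to render the pairing argument in symbols, on the reading that the action of a cycle is the energy-period product (as it is for a harmonic oscillator, where E = nhν and the period is 1/ν):

```latex
\[
S = E\,\tau = n h, \qquad
\tau \to \frac{\tau}{2} \ \text{at constant } S
\ \Longrightarrow\ E \to 2E .
\]
```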

The next piece of “shock” came with other atoms. The standard theory says their wave functions correspond to the excited states of hydrogen, because those are the only solutions of the Schrödinger equation. However, if you do that, you find that the calculated values are too small; by caesium, the calculated energy is out by an order of magnitude. The standard answer is that the outer electrons penetrate the space occupied by the inner electrons and experience a stronger electric field. If you accept the guidance wave principle, that cannot be, because the energy is determined at the major antinode. (If you accept Maxwell’s equations, I argue it cannot be either, but that is too complicated to put here.) So my answer is that the wave is actually a sum of component waves that have different nodal structures, and an expression for the so-called screening constant (which is anything but constant, and varies according to circumstances) is actually a function of quantum numbers, and I produced tables showing that the functions give good agreement for the various groups, and for excited states.

Now, the next shock. Standard theory argues that heavy elements like gold have unusual properties because these “penetrations” lead to significant relativistic corrections. My guidance wave theory requires such relativistic effects to be very minor, and I provided evidence showing that the properties of elements like francium, gold, thallium, and lead fitted my functions as well as or better than those of any of the other elements.

What I did not tell them was that the tables of data I showed had been published in a peer-reviewed journal over thirty years before, but nobody took any notice. As someone said to me when I mentioned this as an aside, “If it isn’t published in the Physical Review or J. Chem. Phys., nobody reads it.” So much for publishing in academic journals. As to why I skipped those two: I published that while I was starting my company, and $US 1,000 per page did not strike me as a good investment.

So, if you have got this far, I appreciate your persistence. I wish you a very Merry Christmas and the best for 2020. I shall stop posting until a Thursday in mid-January, as this is the summer holiday season here.

Ebook Discount

From December 25 – January 1, my ebooks at Smashwords will be significantly discounted. The fictional ebooks include:

Puppeteer: A technothriller where governance is breaking down due to government debt, and where a terrorist attack threatens to kill tens to hundreds of millions of people and destroy billions of dollars’ worth of infrastructure.

http://www.smashwords.com/books/view/69696

‘Bot War: A technothriller set about 8 years later, in which a more concerted series of terrorist attacks made by stolen drones leads to a partial breakdown of governance.

Smashwords    https://www.smashwords.com/books/view/677836

Troubles: Dystopian, set about 10 years later still. The world is emerging from anarchy, and there is a scramble to control the assets. Some are just plain greedy, some think corporate efficiency should rule, some think the individual should have the right to thrive, some think democracy should prevail as long as they can rig it, while the gun is the final arbiter.

https://www.smashwords.com/books/view/174203

There is also the non-fictional “Biofuels”. This gives an overview of the issues involved in biofuels having an impact on climate change. Given that electric vehicles, over their lifetime, probably have an environmental impact equivalent to or greater than that of the combustion motor, given that we might want to continue to fly, and given that the carbon from a combustion exhaust offers no increase in atmospheric carbon levels if it came from biofuel, you might be interested to see what potential this has. The author was involved in research on this intermittently (i.e. when there was a crisis and funding was available) for over thirty years. https://www.smashwords.com/books/view/454344

What does Quantum Mechanics Mean?

Patrice Ayme gave a long comment on my previous post that effectively asked me to explain in some detail the significance of some of my comments on my conference talk involving quantum mechanics. But before that, I should explain why there is even a problem, and I apologise if the following potted history seems a little turgid. Unfortunately, the background situation is important.

First, we are familiar with classical mechanics where, given all the necessary conditions, exact values of the position and momentum of something can be calculated for any future time, and thanks to Newton and Leibniz we do this through differential equations involving familiar concepts such as force, time, and position. Thus suppose we shoot an arrow into the air, ignore friction, and want to know where it is, and when. Velocity is the differential of position with respect to time, so we take the velocity and integrate it. However, to get an answer, because there are two degrees of freedom (assuming we know which direction it was shot), we get two constants from the two integrations. In classical mechanics these are easily assigned: the horizontal constant depends on where the arrow was fired from, and the other comes from the angle of elevation.
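
The arrow example, written out. Integrating the accelerations gives the trajectory, with the constants of integration fixed by the firing point and the elevation:

```latex
\[
\ddot{x} = 0, \quad \ddot{y} = -g
\ \Longrightarrow\
x(t) = x_0 + (v\cos\alpha)\, t, \qquad
y(t) = y_0 + (v\sin\alpha)\, t - \tfrac{1}{2} g t^2 .
\]
```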

Classical mechanics reached a mathematical peak through Lagrange and Hamilton. Lagrange introduced a term that is usually the difference between the kinetic and potential energy, and thus converted the problem from one of forces to one of energy. Hamilton and Jacobi converted the problem to one involving action, which is the time integral of the Lagrangian. The significance of this is that, in one sense, action summarises everything involved in our particle going from A to B. All of these variations are equivalent and merely reflect alternative ways of going about the problem; however, the Hamilton-Jacobi equation is of special significance because it can be transformed into a mathematical wave expression. When Hamilton did this, there were undoubtedly a lot of yawns. Only an abstract mathematician would want to represent a cannonball as a wave.
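
In symbols, the chain runs from the Lagrangian through the action to the Hamilton-Jacobi equation:

```latex
\[
L = T - V, \qquad S = \int L \, dt, \qquad
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}\right) = 0 .
\]
```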

So what is a wave? While energy can be transmitted by moving particles (like a cannonball), waves transmit energy without moving matter, apart from a small local oscillation. Thus if you place a cork on the sea far from land, the cork basically goes around in a circle but on average stays in the same place. If there is an ocean current, that will be superimposed on the circular motion without affecting it. A wave requires two terms to describe it: an amplitude (how big is the oscillation?) and a phase (where on the circle is it?).
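
The standard one-dimensional form carries exactly those two descriptors, A the amplitude and φ the phase:

```latex
\[
y(x,t) = A \sin(k x - \omega t + \phi) .
\]
```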

Then at the end of the 19th century, classical mechanics suddenly gave wrong answers for what was occurring at the atomic level. As a hot body cools, it should give radiation from all possible oscillators, and it does not. To explain this, Planck assumed radiation was given off in discrete packets, and introduced the quantum of action h. Einstein, recognizing that the Principle of Microscopic Reversibility should apply, argued that light should be absorbed in discrete packets as well, which solved the problem of the photoelectric effect. A big problem arose with atoms, which have a positively charged nucleus with electrons moving around it. To move in an orbit, electrons must accelerate, and hence should radiate energy and spiral into the nucleus. They don’t. Bohr “solved” this problem with the ad hoc assumption that angular momentum was quantised; nevertheless his circular orbits (like planetary orbits) are wrong. For example, if they occurred, hydrogen would be a powerful magnet, and it isn’t. Oops. Undeterred, Sommerfeld recognised that angular momentum is dimensionally equivalent to action, and he reworked the theory in terms of action integrals. So near, but so far.
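
The three quantisation steps mentioned, in their usual forms:

```latex
\[
E = h\nu \ \text{(Planck, Einstein)}, \qquad
L = n\hbar \ \text{(Bohr)}, \qquad
\oint p \, dq = n h \ \text{(Sommerfeld)} .
\]
```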

The next step involved the French physicist de Broglie. With a little algebra and a bit more inspiration, he represented the motion in terms of momentum and a wavelength, linked by the quantum of action. At this point, it was noted that if you fired very few electrons through two slits an appropriate distance apart and let them travel to a screen, each electron was registered as a point, but if you kept going, the points started to form a diffraction pattern, the characteristic of waves. The way to resolve this was to take Hamilton’s wave approach, do a couple of pages of algebra, and quantise the period by making the phase complex and proportional to the action divided by ħ (to be dimensionally correct, because the phase must be a number); you then arrive at the Schrödinger equation, which is a partial differential equation and thus fiendishly difficult to solve. About the same time, Heisenberg introduced what we call the Uncertainty Principle, which usually states that you cannot know the product of the uncertainties in position and momentum to better than h/2π. Mathematicians then formulated the Schrödinger equation into what we call the state vector formalism, in part to ensure that there are no cunning tricks to get around the Uncertainty Principle.
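
In symbols: de Broglie’s relation, the phase built from the action, and the resulting equation:

```latex
\[
\lambda = \frac{h}{p}, \qquad
\psi = A \exp\!\left(\frac{i S}{\hbar}\right), \qquad
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi .
\]
```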

The Schrödinger equation expresses the energy in terms of a wave function ψ. That immediately raised the question: what does ψ mean? The square of a wave amplitude usually indicates the energy transmitted by the wave. Because ψ is complex, Born interpreted ψ.ψ* as indicating the probability that you would find the particle at the nominated point. The state vector formalism then proposed that ψ.ψ* indicates the probability that a state will have certain properties at that point. There was an immediate problem: no experiment could detect the wave. Either there is a wave or there is not. De Broglie and Bohm assumed there was, and developed what we call the pilot wave theory, but almost all physicists assume that, because you cannot detect it, there is no actual wave.
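
Born’s rule, as usually written:

```latex
\[
P(\mathbf{r})\, d^3r = \psi^*(\mathbf{r})\,\psi(\mathbf{r})\, d^3r .
\]
```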

What do we know happens? First, the particle is always detected as a point, and it is the sum of the points that gives the diffraction pattern characteristic of waves. You never see half a particle. This becomes significant because you can get this diffraction pattern using molecules made from 60 carbon atoms. In the two-slit experiment, what are called weak measurements have shown that the particle always goes through only one slit, and not only that, the particles do so with exactly the pattern predicted by David Bohm. That triumph appears to be ignored. Another odd feature is that while momentum and energy are part of uncertainty relationships, unlike the random variation in something like Brownian motion, the uncertainty never grows.

Now for the problems. The state vector formalism considers ψ to represent states. Further, because waves add linearly, the state may be a linear superposition of possibilities. If this merely meant that the probabilities represented what you do not know, there would be no problem, but instead there is a near-mystical assertion that all probabilities are present until the subject is observed, at which point the state collapses to what you see. Schrödinger could not tolerate this, not least because the derivation of his equation is incompatible with this interpretation, and he presented his famous cat paradox, in which a cat is neither dead nor alive but in some sort of quantum superposition until observed. The result was the opposite of what he expected: this ridiculous outcome was asserted to be true, and we have the peculiar logic that you cannot prove it is not true (because the state collapses if you try to observe the cat). Equally, you cannot prove it is true, but that does not deter the mystics.

However, there is worse. Recall I noted that when we integrate we have to assign the necessary constants. When all positions are uncertain, and when we are merely dealing with probabilities in superposition, how do you do this? As John Pople stated in his Nobel lecture, for the chemical bonds of hydrocarbons he assigned values to the constants by validating them against over two hundred reference compounds. But suppose there is something fundamentally wrong? You can always get the right answer if you have enough assignable constants.

The same logic applies to the two-slit experiment. Because the particle could go through either slit and the wave must go through both to get the diffraction pattern, when you assume there is no wave it is argued that the particle goes through both slits as a superposition of the possibilities. This is asserted even though it has clearly been demonstrated that it does not. There is another problem: the assertion that the wave function collapses on observation, with all other probabilities lost, actually lies outside the theory. How does that happen? That is called the measurement problem, and as far as I am aware nobody has an answer, although the obvious answer, that the probabilities merely reflected possibilities and the system was always in just one of them but we did not know which, is always rejected. Confused? You should be. Next week I shall get around to some of the material from my conference talk that caused stunned concern in the audience.

The Sociodynamics of Science

The title is a bit of an exaggeration as to the importance of this post; nevertheless, since I was at what was probably my last scientific conference (NZ Institute of Chemistry, at Christchurch), I could not resist looking around at the behaviour as well as the science. I also gave two presentations. Speaking to an audience gives the speaker an opportunity to order the presentation so as to give the most force to its surprising parts, not that many took advantage of this. Overall, very few, if any (apart from yours truly), seemed to want to provide their audience with something that might be uncomfortable for their preconceived notions.

First, the general part provided great support for Thomas Kuhn’s analysis. I found most of the invited and keynote speakers to illustrate an interesting question: why are they speaking? Very few actually wished to educate or convince anyone of anything in particular, and personally, I found the few that did by far the most interesting. Most of the presentations from academics could be summarised as, “I have a huge number of research students and here is what they have done.” What then followed was a very large amount of results, but seldom an interesting unifying principle. Chemistry tends to be susceptible to this, as a very common student research program is to try to make a variety of related compounds. This may well be very useful, but if we do not see why the approach was taken, it tends to feel like filling up some compendium of compounds or, as Rutherford put it rather acidly, “stamp collecting”. These types of talks are characterised by the speaker trying to get in as many compounds as they can, so they keep talking and use up the allocated question time. I suspect that one of the purposes of these presentations is to say, “Look at what we have done. This has given our graduate students a good number of scientific publications, so if you are thinking of being a grad student, why not come here?” I can readily understand that line of thinking, but its relevance for older scientists is questionable. There were a few presentations where the output was of more general interest, though. I found the odd presentation that showed how to do something new, with quite wide potential applications, to be of particular interest.

Now to the personal. My first presentation was a summary of my biogenesis approach. It may have had too much information across too wide a field, but the interesting point was that it generated a discussion at the end relating to my concept of how homochirality was generated. My argument is that reproduction depends on it, because the geometry prevents the formation of a second strand if the first strand is not either entirely left-handed or entirely right-handed in its pitch. So the issue then was that it was pure chance that helices containing D-ribose predominated, in part because the chance of getting a long-enough homochiral strand is very remote, and when one arises, it takes up all the resources and predominates. The legitimate question then is: why doesn’t the other-handed helix eventually arise? It may be slower to do so, but it is not necessarily impossible. My partial answer is that the mer units are also used to bind to some other units important for life, to give them solubility, and the wrong sort gets used up and does not build up concentration. Maybe that is so, but there is no evidence.

It was my second presentation that would be controversial, and it was interesting to watch the expressions. Part of the problem for me was that it was the last such presentation (there were some closing speakers after me, and after morning tea), and there is something about the end of a conference: everyone is busy thinking about how to get to the airport, so they tend to lose concentration. My first slide put up three propositions: the wave functions everyone uses for atomic orbitals are wrong; because of that, the calculation of the chemical bond requires the use of a hitherto unrecognised quantum effect (a very specific expression involving only universally recognised quantum numbers); and finally, the commonly held belief that relativistic effects on the inner electrons have a major effect on the valence electrons of the heaviest elements is wrong.

As you might expect, this was greeted initially with yawns and disinterest: this was going to be wrong. At least, that seemed to be written over their faces. I then diverted to explain my guidance wave interpretation, which is essentially the de Broglie pilot wave concept, but with two additions: an application of Euler’s complex number theory that everyone seems to have missed, and secondly, the argument that if the wave really causes diffraction in the two-slit-type experiment, it has to travel at the same speed as the particle. These two points lead to serious simplifications in the calculation of the properties of chemical bonds. The next step was to put up a lot of evidence for the different wave functions, with about 70 data points spanning a selection of atoms, of which about twenty supported the absence of any significant relativistic effect. (This does not say relativity is wrong, but merely that its effects on valence electrons are too small to be noticed at this level of analysis.) What this was effectively saying was that most of the current calculations only give agreement with observation when liberal use is made of assignable constants, which conveniently can be adjusted so you get the “right” answer.

So, question time. One question surprised me: does my new approach do anything new? I argued that the facts that everyone is using the wrong wave functions, that there is a quantum effect nobody has recognised, and that everyone is wrong about those relativistic effects could be considered new. Yes, but have you got a prediction? This was someone difficult to satisfy. Well, if you have access to a good physics lab, I suggested, here is where you can show that, assuming my theory is correct, if you make an adjustment to the delayed choice quantum eraser experiment (and I outlined the simple change) then you will reach the opposite conclusion. If you don’t agree with me, then you should do the experiment to prove I am wrong. The stunned expressions were worth the cost of going to the conference. Not that anyone will do the experiment. That would show an interest in finding the truth, and in fairness, it is more a job for a physicist.