Ossified Science

There was an interesting paper in the Proceedings of the National Academy of Sciences (118,  e2021636118,  https://doi.org/10.1073/pnas.2021636118 ) which argued that science is becoming ossified, and new ideas simply cannot emerge. My question is, why has it taken them this long to recognize it? That may seem a strange thing to say, but over the life of my career I have seen no radically new ideas get acknowledgement.

The argument in the paper basically came down to one simple fact: over this period there has been a huge expansion in the number of scientists, the amount of research funding, and the number of publications. Progress in a scientist's career depends on the number of papers produced. However, the more papers produced, the more likely the science is to stagnate, because nobody has the time to read everything. People pick and choose what to read, the selection biased by the need not to omit people who may review your funding application. Reading is thus focused on established thinking. As the number of papers increases, citations flow increasingly towards the already well-cited papers. Lesser-known authors are unlikely ever to become highly cited; if they do, it is not through a cumulative process of analysis. New material is extremely unlikely to disrupt existing work, with the result that progress in large established scientific fields may be trapped in the existing canon. That is fairly stern stuff.
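The rich-get-richer dynamic described above can be sketched in a few lines of code. This is only a toy simulation, not the PNAS paper's model; the starting pool of ten papers, the three citations per new paper, and the rule of picking targets with probability proportional to (citations + 1) are all illustrative assumptions.

```python
import random

random.seed(42)

# Start with a handful of uncited papers.
citations = [0] * 10

# Each new paper cites 3 existing papers, chosen with probability
# proportional to (citations + 1): preferential attachment.
for _ in range(2000):
    weights = [c + 1 for c in citations]
    for target in random.choices(range(len(citations)), weights=weights, k=3):
        citations[target] += 1
    citations.append(0)  # the new paper itself enters uncited

ranked = sorted(citations, reverse=True)
top_decile = ranked[: len(ranked) // 10]
share = sum(top_decile) / sum(citations)
print(f"Top 10% of papers hold {share:.0%} of all citations")
```

Even with this crude rule, the best-cited tenth of papers ends up holding far more than a tenth of the citations, while latecomers almost never catch up.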

It is important to note there are at least three major objectives relating to science. The first is developing methods to gain information, or, if you prefer, developing new experimental or observational techniques. The second is using those techniques to record more facts. The more scientists there are, the better these two objectives are served, and over the period in question we have most certainly been successful at them. The rapid provision of new vaccines for SARS-CoV-2 shows that when pushed, we find ways to do it. When I started my career, a very large, clunky computer that was incredibly slow and had internal memory measured in bytes occupied a room. Now we have memory that stores terabytes in something you can hold in your hand. So yes, we have learned how to do it, and we have acquired a huge amount of knowledge. There is a glut of facts available.

The third objective is to analyse those facts and derive theories so we can understand nature, and do not have to examine that mountain of data for any reason other than to verify that we are on the right track. That is where little has happened.

As the PNAS paper points out, policy reflects the “more is better” approach. Rewards are for the number of articles, with citations taken to reflect their quality. The number of publications is easily counted, but citations are more problematic. To get the numbers up, people carry out the work most likely to reach a fast result. The citations that accrue are the ones most easily found, which means papers that get a good start gather citations like crazy. There are also “citation games”: you cite mine, I’ll cite yours. Such citations may add nothing in terms of science or logic, but they do add to career prospects.

What happens when a paper is published? As the PNAS paper says, “cognitively overloaded reviewers and readers process new work only in relationship to existing exemplars”. If a new paper does not fit the existing dynamic, it will be ignored. If the young researcher wants to advance, he or she must avoid rocking the boat. You may feel that the authors are overplaying a non-problem. Not so. One example shows how the scientific hierarchy thinks. One of the two major advances in theoretical physics in the twentieth century was quantum mechanics. Basically, all our advanced electronic technology depends on that theory, and in turn the theory is based on one equation published by Erwin Schrödinger. This equation is effectively a statement that energy is conserved, and that the energy is determined by a wave function ψ. It is too much to go into here, but the immediate consequence was the problem: what exactly does ψ represent?
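For the record, and purely as a reminder of its form, the equation in question can be written in its time-dependent version as

            iħ ∂ψ/∂t = Ĥψ

where Ĥ is the Hamiltonian operator, the sum of the kinetic and potential energies. That the total energy on the right governs how ψ evolves on the left is where the energy statement comes in.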

Louis de Broglie was the first to propose that quantum motion was represented by a wave, and he came up with a different equation which stated that the product of the momentum and the wavelength is Planck’s constant, the quantum of action. De Broglie then proposed that ψ was a physical wave, which he called the pilot wave. This was promptly ignored in favour of a far more complicated mathematical procedure that we can ignore for the present. Then, in the early 1950s, David Bohm came up with more or less the same idea as de Broglie, which was quite different from the standard paradigm. So how was that received? I found a 1953 quote from J. R. Oppenheimer: “We consider it juvenile deviationism … we don’t waste our time … [by] actually read[ing] the paper. If we cannot disprove Bohm, then we must agree to ignore him.” So much for rational analysis.
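De Broglie's relation (momentum times wavelength equals Planck's constant) is easy to put numbers to. A quick sketch using the standard constants; the electron speed of a million metres per second is simply an illustrative choice:

```python
# de Broglie relation: p * lambda = h, so lambda = h / (m * v)
h = 6.626e-34      # Planck's constant, J s
m_e = 9.109e-31    # electron rest mass, kg
v = 1.0e6          # electron speed, m/s (illustrative, non-relativistic)

wavelength = h / (m_e * v)
print(f"de Broglie wavelength: {wavelength:.2e} m")  # ~7.3e-10 m
```

A wavelength comparable to atomic spacings is precisely why electrons diffract through crystals and through slits.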

The standard theory states that if an electron is fired at two slits it goes through BOTH of them and then gives an interference pattern. The pilot wave says the particle has a trajectory and goes through one slit only; while it forms the same interference pattern, a particle going through the left slit never ends up in the right-hand part of the pattern. Observations have shown this to be correct (Kocsis, S., and 6 others, 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 332: 1170–1173). Does that change anyone’s mind? Actually, no. The pilot wave is totally ignored, except by the odd character like me, although my version is a little different (called a guidance wave) and it is ignored even more.

Ebook Discount

From October 21–28, “Red Gold” will be discounted to 99c/99p.

Mars is to be colonized. The hype is huge, the suckers will line up, and we will control the floats. There is money to be made, and the beauty is, nobody on Earth can check what is really going on on Mars. Meanwhile, on Mars we shall be the only ones with guns. This can’t lose.

Except that there is one person who will try to stop the fraud. Which means he cannot be allowed to live. Partly inspired by the 1988 crash, Red Gold shows the anatomy of one sort of fraud. Then there is the problem that fraudsters with guns cannot permit anyone to expose them. One side must kill the other.

If you liked The Martian, where science allowed one person to survive, then Red Gold is a thriller with a touch of romance, a little economics, and enough science to show how Mars might be colonised and the colonists survive indefinitely. http://www.amazon.com/dp/B009U0458Y

Nanotech Antivirals

Most by now will have heard of nanotechnology, although probably rather few actually understand its full implications. There are strict limits to what can be done, and the so-called nanotech that was supposedly put into vaccines to allow Bill Gates to know where you were is simply not on. (Why anyone would think Bill Gates would care where you are also eludes me.) However, nanotechnology has some interesting uses in the fight against viruses. The Pfizer and Moderna vaccines use messenger RNA to develop cell resistance, but the RNA is delivered to the cells encased in lipid nanoparticles. Lipids are technically any substances from living organisms that are soluble in organic solvents and insoluble in water, but they are often just fats. The lipid breaks open the cell wall, allowing the messenger RNA to get in; of course, that is the method of the virus as well: it is RNA encased in lipid. This technique can be used in other ways, and such nanoparticles are showing promise as delivery vehicles for other drugs and vaccines.

However, there may be an even more interesting use, as outlined in Nature Biotechnology 39: 1172–1174. The idea is that such nanomaterials could engage with viruses directly, either disrupting them or binding them. A route to disruption may involve nothing more than breaking apart the virus’ outer membrane. The binding approach works because many viruses rely on glycoproteins on their surface to bind to host cells. Anything that can mimic these cellular attachment points can bind the virus, acting as a “nanosponge” for mopping them up. One way to make such “sponges” is to take something like red blood cells, remove their contents, then break the remaining membrane into thousands of tiny vesicles about 100 nanometers wide. These vesicles are then made to encase a biocompatible and biodegradable polymer, with the result that each such piece of polymer is coated with genuine cell membrane. Viruses recognize the cell membrane, attach, and try to enter the cell, but for them the contents are something of a disappointment, and they cannot get back out.

Such membranes, obtained from human lung epithelial type II cells or from human macrophages, carry angiotensin-converting enzyme 2 (ACE2) and CD147, both of which SARS-CoV-2 binds to. Potentially we have a treatment that will clean up a Covid-19 infection. In a study with mice it “showed efficacy” against the virus and showed no evidence of toxicity. Of course, there remains a way to go.

A different approach that shows promise is to construct nano-sized particles coated with something that will bind the virus. One example, delivered as a nasal spray, led to a 99% reduction in viral load in mice exposed to SARS-CoV-2-laden air. It is claimed the particles are not absorbed by the body, although so far the clinical study has not been peer reviewed. The advantage of this approach is that it can in principle be applied to a reasonably wide range of viruses. A further approach is to make “shells” out of DNA and coat the inner side with something that will bind viruses. With several attachment sites, the virus cannot get out, and because of the bulk of the shell it cannot bind to a cell and hence cannot infect. In this context, it is not clear whether viruses bound by the other approaches can still infect by attacking from their free side.

Many viruses have an outer membrane that is a phospholipid bilayer, and this is essential for the virus to be able to fuse with cell membranes. A further approach is therefore to disrupt the viral membrane and thus stop the fusion. One example is to form a nano-sized spherical surfactant particle and coat it with entities, such as peptides, that bind to viral glycoproteins. The virus attaches, then the surfactant simply destroys the viral membrane. As can be seen, there is a wide range of possible approaches. Unfortunately, they are as yet at the preliminary stages, and while efficacy has been shown in vitro and in mice, it is unclear what the long-term effects will be. Of course, if the patient is dying of a viral attack, long-term problems are not on his or her mind. One of the side-effects of SARS-CoV-2 may be that it has stimulated genuine research into the topic: the Biden administration is directing $3 billion at such research. It is a pity it takes a pandemic to get us into action, though.

Plastics and Rubbish

In the current “atmosphere” of climate change, politicians are taking more notice of the environment, although as a sceptic I notice they are not prepared to do a lot about it. Part of the problem is that, following the “swing to the right” in the 1980s, politicians have taken notice of Reagan’s assertion that government is the problem, so they have all settled down to not doing very much, and they have shown some skill at doing very little. “Leave it to the market” has a complication: the market is there to facilitate trade, in which all the participants offer something that customers want and make a profit while doing it. The environment is not a customer in the usual sense and it does not pay, so “the market” has no direct interest in it.

There is no one answer to any of these problems; there is no silver bullet. What we have to do is chip away at them, and one problem that indicates their nature is plastics. In New Zealand the government has decided that plastic bags are bad for the environment, so single-use bags are no longer used in supermarkets. One can argue whether that is good for the environment, but it is clear that the wilful throwing away of plastics and their subsequent degradation is bad for it. And while the disposable bag has been banned here, rubbish still contains a lot of plastics, and that will continue to degrade. If it were buried deep in some mine it probably would not matter, but it is not. So why don’t we recycle them?

The first reason is that there are so many varieties of plastic, and they do not dissolve in each other. You can emulsify a mix, but the material has poor strength because there is very little binding at the interface of the tiny droplets. That is because they have smooth surfaces, like the interface between oil and water. If the object is big enough this does not matter so much; thus you can make reasonable fence posts out of recycled plastics, but there really is a limit to the market for fence posts.

The reason they do not dissolve in each other comes from thermodynamics. For something to happen, such as polymer A dissolving in polymer B, the change (indicated by the symbol Δ) in what is called the free energy, ΔG, has to be negative. (That it must be negative is a convention; that it is called “free” has nothing to do with price – it is not free in that sense.) To account for the process, we use the equation

            ΔG = ΔH − TΔS

ΔH reflects the change in energy between each molecule sitting among its own kind and in solution in the other material. As a general rule, molecules favour having their own kind nearby, and the effect strengthens with chain length: for molecules of the same material the interactions per atom remain constant however long the chains are, but molecules of a different material do not pack as well. Thinking of oil and water, the big problem for solution is that water, the solvent, has hydrogen bonds that make water molecules stick together, and a single polymer molecule has to dislodge a very large number of solvent molecules. ΔS is the entropy, which increases as the degree of randomness increases. Solution is more random per molecule, so whether something dissolves is a battle over whether the randomness gained per molecule can overcome the attraction between molecules of the same kind. The longer the polymer, the less randomness is introduced per unit of material and the greater any difference in energy between like and unlike contacts. So the longer the polymers, the less likely they are to dissolve in each other, which, as an aside, is why you get so much variety in minerals: long-chain silicates that can exchange their associated cations like to phase-separate.
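This battle between entropy and chain length can be made concrete with the standard Flory-Huggins expression for the free energy of mixing per lattice site. The formula is textbook material, but the interaction parameter χ = 0.1 and the chain lengths below are arbitrary illustrative choices:

```python
from math import log

def mixing_free_energy(phi, n_a, n_b, chi, kT=1.0):
    """Flory-Huggins free energy of mixing per lattice site, in units of kT.

    phi: volume fraction of polymer A; n_a, n_b: chain lengths;
    chi: interaction parameter (chi > 0 means unlike contacts cost energy,
    the rough analogue of ΔH above). The log terms are the entropy of mixing,
    divided by chain length because a whole chain moves as one unit.
    """
    entropy = (phi / n_a) * log(phi) + ((1 - phi) / n_b) * log(1 - phi)
    enthalpy = chi * phi * (1 - phi)
    return kT * (entropy + enthalpy)

# Small molecules: entropy wins, ΔG < 0, they mix.
small = mixing_free_energy(0.5, 1, 1, chi=0.1)

# Long chains: entropy per site collapses, ΔG > 0, they phase-separate.
long_chains = mixing_free_energy(0.5, 1000, 1000, chi=0.1)

print(f"small molecules: {small:+.4f} kT, long chains: {long_chains:+.4f} kT")
```

The same modest χ that is easily overwhelmed by entropy for small molecules is fatal for chains a thousand units long, which is the whole recycling problem in one sign change.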

So we cannot recycle them, and they are useless? Well, no. At the very least we can use them for energy. My preference is to turn them, and all the organic material in municipal refuse, into hydrocarbons. During the 1970s oil crises the engineering was completed to build a demonstration plant for the city of Worcester in Massachusetts. It never went ahead because the cartel broke ranks, oil prices dropped, and converting wastes to hydrocarbon fuels no longer made economic sense. However, if we want to reduce the use of fossil fuels, it makes a lot of sense for the environment, IF we are prepared to pay the extra price. Every litre of fuel we make from waste is a litre of refined crude we do not have to use, and we will have to keep our vehicle fleet going for quite some time. The basic problem is that we have to redevelop the technology, because the engineering data for that previous attempt is presumably lost, and in any case that was for a demonstration plant, which is always built on the basis that more engineering questions remain. As an aside, water at about 360 degrees Celsius has lost its hydrogen-bonding preference, and at that temperature oil dissolves in water.

The alternative is to burn the waste and make electricity. I am less keen on this, even though we can purchase plants to do it right now. The reason is simple: combustion releases more gases into the atmosphere. The CO2 is not the point of difference, as both approaches release it, but the liquefaction approach sends nitrogen-containing material out as water-soluble compounds which could, if the liquids were treated appropriately, be used as fertilizer, whereas in combustion the nitrogen goes out the chimney as nitric oxide or, even worse, as cyanides. But it is still better to do something with the waste than simply fill up local valleys.

One final point. I saw an item where an environmentalist condemned a UK thermal plant that used biomass, arguing it put out MORE CO2 per MWh of electricity than coal. That may be the case, because you can make coal burn hotter, and the second law of thermodynamics means you can then extract more of the energy as work. (Mind you, I have my doubts, since the electricity is generated from steam in both cases.) However, the criticism shows an inability to understand calculus. What is important is not the emissions right now, but the emissions integrated over time. The biomass got its carbon from the atmosphere, say, forty years ago, and if you wish to sustain the exercise you plant trees that recover that CO2 over the next forty years. Burn coal and you are burning carbon that has been locked away for millions of years.
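The point about integrating over time can be sketched numerically. The numbers here are illustrative assumptions only (one unit of CO2 per year from coal, 10% more from biomass for the same electricity, and replanted trees reabsorbing each year's biomass emission linearly over forty years), not measured values:

```python
def cumulative_net_emissions(years, annual_emission, reabsorb_over=None):
    """Cumulative net CO2 left in the atmosphere after `years` of operation.

    If reabsorb_over is set, regrowth removes each year's emission
    linearly over that many years (the biomass case); otherwise nothing
    is ever recovered (the coal case).
    """
    net = 0.0
    for start in range(years):
        emitted = annual_emission
        if reabsorb_over is not None:
            elapsed = years - start  # years since this emission occurred
            emitted -= emitted * min(1.0, elapsed / reabsorb_over)
        net += emitted
    return net

coal = cumulative_net_emissions(40, annual_emission=1.0)
biomass = cumulative_net_emissions(40, annual_emission=1.1, reabsorb_over=40)
print(f"after 40 years, coal: {coal:.1f} units, biomass: {biomass:.1f} units")
```

Run it for longer horizons and the contrast sharpens: the biomass figure plateaus (only the last forty years' emissions are ever outstanding), while the coal figure grows without bound. The instantaneous rate favours coal; the integral does not.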

Interstellar Travel Opportunities

As you may have heard, stars move. The only reason we cannot see this is that they are so far away, and it takes so long for the motion to make a difference. Currently, the closest star to us is Proxima Centauri, which is part of the Alpha Centauri grouping. It is 4.2 light years away, and if you think that is attractive for an interstellar voyage, just wait a bit: in 28,700 years it will be a whole light year closer. That is a clear saving in travelling time, especially if you do not travel close to light speed.
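The saving is simple arithmetic. At, say, a tenth of light speed (an arbitrary figure for illustration, ignoring acceleration and relativistic effects):

```python
def travel_years(distance_ly, speed_fraction_of_c):
    """Travel time in years for a given distance in light years."""
    return distance_ly / speed_fraction_of_c

now = travel_years(4.2, 0.1)    # Proxima Centauri today: 42 years
later = travel_years(3.2, 0.1)  # after it closes by a light year: 32 years
print(f"{now:.0f} years now vs {later:.0f} years then: a saving of {now - later:.0f}")
```

Of course, a 28,700-year wait to save ten years of travel is the sort of bargain only an astronomer could love.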

However, there have been closer encounters. Scholz’s Star, a binary comprising a squib of a red dwarf plus a brown dwarf, came within 0.82 light years of us 78,000 years ago. Our stone-age ancestors would probably have been unaware of it, because it is so dim that even at that distance it was still a hundred times too faint to be seen by the naked eye. There is one possible exception: red dwarfs periodically emit extremely bright flares, so maybe our ancestors saw a star appear from nowhere, then gradually disappear. Such an event might go down in their stories, particularly if something dramatic happened. There is one further possible downside for them: although it is unclear whether such a squib of a star was big enough, it might have exerted a gravitational effect on the Oort cloud, thus generating a flux of comets coming inwards. That might have been the dramatic event.

That star was too small to disrupt our solar system, but much closer encounters in other solar systems could cause all sorts of chaos, including stealing a planet, or having one stolen. They could certainly disrupt a solar system, and it is possible that some of the so-called star-burning giants were formed in the expected places and were dislodged inwards by such a star. That happens when the dislodged planet acquires a very elliptical orbit that takes it close to the star, where tidal effects circularise the orbit. That did not happen in our solar system. Of course, it does not take a passing star to do this; if the planets get too big and too close, their own gravity can do it.

It is possible that a modestly close encounter with a star did have an effect on the outer Kuiper Belt, where objects like Eris seem to be obvious Kuiper Belt Objects, yet are rather far out and have very elliptical orbits. Such orbits would be expected to arise from one or more significant gravitational interactions.

The question then is, if a star passed closely, should people take advantage and colonise the new system? Alternatively, would life forms there have the same idea if they were technically advanced? If you had the technology to do this, presumably you would also have the technology to know what was there, and it is not as if you get no warning. For example, if you are around in 1.4 million years, Gliese 710 will pass within 10,000 AU of the sun, well within the so-called Oort Cloud. Gliese 710 is about 60% the mass of the sun, which means its gravity could really stir up the comets in the Oort cloud, and our star will do exactly the same for the corresponding cloud of comets in its system. In a really close encounter it is not beyond the bounds of possibility that planetary bodies could be exchanged. If they were, the exchange would almost certainly leave a very elliptical orbit, probably at a great distance. You may have heard of the possibility of a “Planet 9” at a considerable distance whose elliptical orbit has caused highly elliptical orbits in some trans-Neptunian objects. Either that planet, if it exists at all, or the elliptical orbits of bodies like Sedna, could well have arisen from a previous close stellar encounter.

As far as I know, we have not detected planets around this star. That does not mean there are none: if we do not lie close to the equatorial plane of that star, we would not see much from transit observations (and remember, Kepler only looked at a very small section of the sky, and Gliese 710 is not in the original area examined), and at that distance any astronomer there with our technology would not see us. Which raises the question: if there were planets there, would we want to swap systems? If you accept the mechanism of planetary formation in my ebook “Planetary Formation and Biogenesis”, and if the rates of accretion, after adjusting for stellar mass, were the same for both stars, then any rocky planet in the habitable zone is likely to be the Mars equivalent. It would be much warmer and may well be much bigger than our Mars, but it would not have plate tectonics, because its composition would not permit eclogite to form, which is necessary for pull subduction. With that knowledge, would you go?