Science Communication and the 2018 Australasian Astrobiology Meeting

Earlier this week I presented a talk at the 2018 Australasian Astrobiology Meeting, with the objective of showing where life might be found elsewhere in the Universe, and as a consequence I shall write a number of posts here to expand on what I took from this meeting. One presentation that made me think about how to start this series actually came near the end, and its topic included the question of why scientists write blogs like this for the general public. I thought about this a little, and I think at least part of the answer, at least for me, is to show how science works, and how scientists think. The fact of the matter is that there are a number of topics where the gap between what scientists think and what the general public thinks is very large. An obvious one is climate change: the presenter quoted a figure that something like 50% of the general public do not think carbon dioxide is responsible for climate change, while I think the figures she showed were that 98% of scientists are convinced it is. So why is there a difference, and what should be done about it?

In my opinion, there are two major ways to go wrong. The first is to simply take someone else’s word for it. These days you can find someone who will say anything. The problem is that while it is all very well to say “look at the evidence”, most of the time the evidence is inaccessible, and even if you overcome that, the average person cannot make head or tail of it. Accordingly, you have to trust someone to interpret it for you. The second way to go wrong is to get swamped with information. The data can be confusing, but the key is to find the critical data. This means that when deciding what causes what, you put aside facts that could mean a lot of different things, and concentrate on those that have, at best, one explanation. The average person cannot recognize such a fact, but they can recognize whether the “expert” recognizes it. As an example of a critical fact, back to climate change. The fact that I regard as critical is that a long-term series of measurements showed the world’s oceans were receiving a net power input of 0.6 watts per square meter. That may not sound like much, but multiply it over the Earth’s ocean area and it is a rather awful lot of heat.
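To give a feel for just how much heat, here is a rough back-of-the-envelope check. The 0.6 W/m² figure is the one quoted above; the ocean area and seconds-per-year values are standard approximations I have supplied, not numbers from the measurement series itself.

```python
# Back-of-the-envelope check of the ocean heat uptake figure quoted above.
# The ocean area and seconds-per-year values are standard approximations,
# not numbers from the measurement series itself.
net_flux_w_per_m2 = 0.6        # net power input quoted in the post (W/m^2)
ocean_area_m2 = 3.6e14         # approximate global ocean surface area (m^2)
seconds_per_year = 3.15e7      # approximate number of seconds in a year

total_power_w = net_flux_w_per_m2 * ocean_area_m2
energy_per_year_j = total_power_w * seconds_per_year

print(f"Net power input: {total_power_w:.1e} W")              # roughly 2 x 10^14 W
print(f"Energy gained per year: {energy_per_year_j:.1e} J")   # roughly 7 x 10^21 J
```

Several thousand billion billion joules every year, continuously, is not something the planet shrugs off.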

Another difficulty is that for any given piece of information, either there may be several interpretations of what caused it, or there may be issues assigning significance to it. As a specific example from the conference, try to answer the question, “Are we alone?” The answer from Seth Shostak, of SETI, is, so far, yes, at least to the extent that we have no evidence to the contrary, although of course if you were looking for radio transmissions, Earth itself would have shown no signs until about a hundred years ago. There were a number of other reasons given, but one of the points Seth made was that a civilization at even a modest distance would have to devote a few hundred MW of power to send us a signal. Why would they do that? This reminds me of what I wrote in one of my SF novels: the exercise is a waste of time because listening is cheap, so everyone is listening, but sending is expensive, so nobody is sending, and simple economics kills the scheme.

As Seth showed, there are an awful lot of reasons why SETI is not finding anything, and that proves nothing. Absence of evidence is not evidence of absence, but merely evidence that you haven’t hit the magic button yet. Which gets me back to scientific arguments. You will hear people say science cannot prove anything. That is rubbish. The second law of thermodynamics proves conclusively that if you put your dinner on the table it won’t spontaneously drop a couple of degrees in temperature as it shoots upwards and smears itself over the ceiling.

As an example of the problems involved in conveying such information, consider what it takes to get a proof. Basically, a theory starts with a statement. There are several forms of this, but the one I prefer goes: “If theory A is correct, and I do a set of experiments B under conditions C, and if B and C are very large sets, then theory A will predict a set of results R.” You do the experiments and collect a large set of observations O. Now, if there is no element of O that is not an element of R, that is, if every observation lies within the predictions, then your theory is plausible. If the sets are large enough, it is very plausible, but you still have to be careful that you have covered an adequate range of conditions. Thus Newtonian mechanics is correct within a useful range of conditions, but expand that range enough and you need either relativity or quantum mechanics. You can, however, prove a theory if you can replace the “if” in the above with “if and only if”.
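As a toy illustration of that test (entirely my own construction, with invented sets, not anything from the talk), the check amounts to asking whether the observations form a subset of the predictions:

```python
# Toy illustration of the "no observation outside the predictions" test.
# The sets below are invented purely for illustration.
predicted_results = {"R1", "R2", "R3", "R4"}   # R: what theory A predicts for experiments B under conditions C
observed_results = {"R1", "R3"}                # O: what the experiments actually produced

# Theory A remains plausible only if O is a subset of R,
# i.e. no observation falls outside what the theory predicts.
print(observed_results.issubset(predicted_results))   # True: nothing contradicts the theory

# A single observation outside R is enough to rule the theory out.
observed_results.add("R9")
print(observed_results.issubset(predicted_results))   # False: the theory fails
```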

Of course, that could be said more simply. You could say a theory is plausible if, every time you use it, what you see complies with the theory’s predictions, and you can prove a theory if you can show there is no alternative, although that is usually very difficult. So why do scientists not write in the simpler form? The answer is precision. The example I used above is general, so it can be reduced to a simpler form, but sometimes the statements only apply under very special circumstances, and then the qualifiers can make for very turgid prose. The takeaway message is that while a scientist likes to write in the way that is more precise, if you want notice taken of what you say, you have to be somewhat less formal. What do you think? Is that right?

Back to the conference, and to SETI. Seth will not be proven wrong, ever, because the hypothesis that there are civilizations out there but they are not broadcasting to us in a way we can detect cannot be faulted. So for the next few weeks I shall look more at what I gathered from this conference.


The Fermi Paradox and Are We Alone in the Universe?

The Fermi paradox goes something like this. The Universe is enormous, and there are an astronomical number of planets. Accordingly, the potential for intelligent life somewhere should be enormous, yet we find no evidence of anything. The SETI program has been searching for decades and has found nothing. So where are these aliens?

What is fascinating about this is an argument from Daniel Whitmire, who teaches mathematics at the University of Arkansas and has published a paper in the International Journal of Astrobiology (doi:10.1017/S1473550417000271). In it, he concludes that technological societies rapidly exterminate themselves. So how does he come to this conclusion? The argument is a fascinating illustration of the power of mathematics, and particularly of statistics, to show or to mislead.

He first resorts to a statistical concept called the Principle of Mediocrity, which states that, in the absence of any evidence to the contrary, any observation should be regarded as typical. If so, we observe our own presence. If we assume we are typical, and we have been technological for 100 years (he defines being technological as using electricity, but you can change this), then our being average implies that after a further 200 years or so we will no longer be technological. We can extend this to about 500 years on the basis that, in terms of age, the bell curve is skewed (you cannot have a negative age). To become non-technological we have to exterminate ourselves, therefore he concludes that technological societies exterminate themselves rather quickly. We may scoff at that, but then again, watching the antics over North Korea, can we be sure?
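To show the style of reasoning, here is a small Monte Carlo sketch. Everything in it is my own toy construction, not Whitmire’s calculation; the assumption that we observe ourselves at a uniformly random point in the technological lifetime is mine, and it is used only to show why a 100-year observation ends up pointing at a total lifetime of a few hundred years.

```python
# Monte Carlo sketch of mediocrity-style reasoning; a toy of my own, not
# Whitmire's calculation. The point is only that a "typical" observer's
# current age pins down the order of magnitude of the total lifetime.
import random

random.seed(1)
true_lifetime = 300          # hypothetical total technological lifetime (years)
observations = [random.uniform(0, true_lifetime) for _ in range(100_000)]

mean_observed_age = sum(observations) / len(observations)
print(round(mean_observed_age))        # ~150: a typical observer sees about half the lifetime
print(round(2 * mean_observed_age))    # ~300: doubling a typical age recovers the lifetime's order of magnitude
```

So if our 100 years really is typical, lifetimes of the order of a couple of hundred years, stretched to a few hundred by the skewed distribution, are what this sort of argument delivers.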

He makes a further conclusion: since we are the first technological species on our planet, other civilizations should also be the first on theirs. I really don’t follow this, because he has also calculated that there could be up to 23 opportunities for further species to develop technology here once we are gone, and surely the same would follow elsewhere. It seems to me a rather mediocre use of this Principle of Mediocrity.

Now, at this point, I shall diverge and consider the German tank problem, because this shows what you can do with statistics. The Allies wanted to know the production rate of German tanks, and they got it from a simple formula applied to the serial numbers of captured or destroyed tanks. The formula is

N = m + m/n – 1

where N is the number you are seeking, m is the highest sampled serial number, and n is the sample size (the number of tanks whose serial numbers were recorded). Apparently this was highly successful, and the estimates were far superior to those from intelligence gathering, which always seriously overestimated.
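For concreteness, here is a minimal sketch of that estimator; the serial numbers in the example are invented purely to show the arithmetic.

```python
# Minimal sketch of the serial-number estimator quoted above:
# N = m + m/n - 1, where m is the highest serial number seen and n the sample size.
def estimate_total(serial_numbers):
    m = max(serial_numbers)   # highest sampled serial number
    n = len(serial_numbers)   # sample size
    return m + m / n - 1

captured = [19, 40, 42, 60]           # hypothetical serial numbers from captured tanks
print(estimate_total(captured))       # 60 + 60/4 - 1 = 74.0
```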

That leaves the question of whether that success means anything for the current problem. The first thing to note is that the Germans conveniently numbered their tanks, and in sequence; the sample size was a tolerable fraction of the required answer (about 5%); and it was known that the Germans were making tanks and sending them to the front as regularly as they could manage. There were no causative aspects that would modify the results. With Whitmire’s analysis, there is a very bad aspect to the reasoning: the question of whether we are alone is raised as soon as we have some capability to answer it. Thus we ask it within fifty years of having reasonable electronics; for all we know we may still be asking it a million years in the future, so the age of the technological society on which the lifetime reasoning is based goes into the equation the moment the question is asked. That means it is not a random sample, but a causative sample. On top of that, we have a sample of one, which is not exactly a good statistical sample. Of course, if there were more samples than one, the question would answer itself and there would be no need for statistics. In this case, statistics are being used precisely where they should not be used.

So what do I make of that? For me, there is a lack of logic. By definition, to publish original work you have to be the first to do it, so any statistical conclusion drawn from being the first to ask the question is ridiculous, because by definition it is not a random sample; it is the first. It is like trying to estimate German tank production from a sample of one tank, when that tank had the serial number 1. So, is there anything we can take from this?

In my opinion, the first thing we could argue from this Principle of Mediocrity is that the odds of finding aliens are strongest on Earth-sized planets around G-type stars at about this distance from the star, simply because we know it is at least possible there. Further, we can argue the star should be at least about 4.5 billion years old, to give evolution time to generate such technological life; we are reasonably sure it could not have happened much earlier on Earth. One of my science fiction novels is based on the concept that Cretaceous raptors could have managed it, given time, but that still only buys a few tens of millions of years, and we don’t know how long they would have taken, had they been able. They had to evolve considerably larger brains, and who knows how long that would take? Possibly almost as long as the mammals took.

Since there are older stars out there, why haven’t we found evidence? That question should be rephrased as: how would we? The SETI program assumes that aliens would try to send us messages, but why would they? Unless the signals were directed, sending meaningful signals over such huge distances would require immense energy expenditure. And why would they direct signals here? They could have tried 2,000 years ago, persisted for a few hundred years, and given us up. Alternatively, it is cheaper to listen. As I noted in a different novel, the concept falls down on economic grounds because everyone is listening and nobody is sending. And, of course, for strategic reasons, why tell more powerful aliens where you live? For me, the so-called Fermi paradox is no paradox at all; if there are aliens out there, they will be following their own logical best interests, and those interests do not include us. Another thing it tells me is that this is evidence you can indeed “prove” anything with statistics, if nobody is thinking.