Yes, finally Ofcom has spoken. Not very loudly, it seems. It’s really just a rap on the knuckles for “The Great Global Warming Swindle”, largely because:
“…whilst Ofcom is required by the 2003 Act to set standards to ensure that news programmes are reported with ‘due accuracy’ there is no such requirement for other types of programming, including factual programmes of this type.”
Unbelievable. What planet are they (or rather the legislators responsible for this insanity) on? One that is going to get a hell of a lot warmer, it seems, if we can’t work out how to make rational, science-based decisions. How can the category “factual programmes” even exist without “standards [of] due accuracy”? Has anyone thought about what the word “factual” actually means??
Remind me if I don’t return to this argument later, but to state the thesis briefly: in complex domains, problems – whether big ones (like GW itself) or small ones, like “Swindle” – almost always have many causes. Dealing just with the immediate cause may be futile. In the case of “Swindle” it may be more effective to put effort into changing the rules of the media game than to engage in trench warfare. Because, if the ultimate arbiter of truth is not factual accuracy, then we just end up with a popularity contest. Hey, why not incorporate audience votes in science programmes? Phone in to vote for your favourite theory of gravity!
Luckily, in the case of “The Great Global Warming Swindle”, the programme:
“…broke rules on impartiality and misrepresented the views of the government’s former chief scientist…” even though it “was ‘on balance’ cleared of ‘materially misleading the audience so as to cause harm or offence’”. (Quotes from the Guardian’s news story on the findings).
But what if they hadn’t broken any rules?
And at least in this case George Monbiot got his retaliation in first, with a Comment (and CiF) piece in today’s Guardian, as well as an essay in G2. [Illustrated with the usual photographs, incidentally: someone should devise a market instrument for investors in pictures of power stations, melting ice and – my personal tip – solar panels and photogenic children in Africa. Oh, sorry, it slipped my mind for the moment that markets are in the dog-house right now.]
George does an excellent job, as usual, in his forensic G2 piece (though there’s a touch of conspiracy theory in his analysis of Channel 4), but in the very last column it all falls to pieces. [See yesterday’s post for my views on conspiracy theories and the need to read the detail – in this case right to the end – to avoid Taleb’s randomness illusion.] Even so, I urge you to read George’s dissection of “Swindle”: you may be surprised. I recollect that I had more or less bought into the idea (which Monbiot debunks) that Thatcher’s espousal of GW science was partly due to her search for weapons to use against the UK’s coal-mining industry.
Remember, though, that, as well as the particular pathology – in this case the way “Swindle” was given a platform – we also need to look at the underlying causes.
This is where a major problem lies in George’s piece:
“[Channel 4] says [its scheduling of “Swindle” and other programmes] ‘is against the background of the IPCC [Intergovernmental Panel on Climate Change] stating that there is a 90% certainty that the causes of global warming are man-made, it follows that there is a 10% uncertainty. Yet this 10% uncertainty receives a disproportionately small amount of airtime.’ I [George continues] find this argument extraordinary. A 90% level of confidence does not mean that 10% of the evidence suggests that an effect is not occurring — in fact, there is no reliable evidence showing that man-made global warming is not taking place. It is expressed in this way because there is no absolute certainty in science. The ‘very high confidence’ the IPCC expresses in the global warming thesis is the strongest statement any reputable scientist would make about his area of study. It is legitimate and right to stress that there can be no absolute certainty about global warming.” [my emphasis]
90% is not in fact a very high probability when we are discussing scientific findings. In my opinion, it would be more than justified to say that we’re “virtually certain” that “man-made global warming is […] taking place”, and by virtually certain I mean at least 99%. A 99.9% claim would be perfectly reasonable. So why does the IPCC not say this? Saying 90% gives the green light to people like Martin Durkin (the maker of “Swindle”).
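To see why a confidence level is not a tally of contrary evidence – and how quickly certainty compounds when independent evidence accumulates – here’s a toy Bayesian sketch (all numbers hypothetical, chosen purely to illustrate the arithmetic):

```python
# Toy Bayesian update -- hypothetical numbers, purely illustrative.
# H1 = "man-made warming is occurring", H0 = its negation.
# Assume each independent line of evidence is merely twice as likely
# under H1 as under H0 (a deliberately modest likelihood ratio).

def posterior(n_lines, likelihood_ratio=2.0, prior=0.5):
    """Posterior probability of H1 after n_lines pieces of evidence."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_lines
    return odds / (1 + odds)

for n in [1, 3, 7, 10, 20]:
    print(f"{n:2d} lines of evidence -> P(H1) = {posterior(n):.6f}")
```

Three such lines of evidence already give about 89%, yet not one piece of the evidence pointed the other way; by twenty lines we are past 99.999%. Stating “90%” says nothing about contrary evidence – it just reports where you stopped counting.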
I’ve just done a bit of weight-training and consulted the IPCC’s latest massive report (the Fourth Assessment Report, or “AR4”). If we look at Table 1 on pages 120-1 of the Scientific Basis (there are three parts to the overall report), we see that, although the IPCC is happy to use the words “virtually certain”, it only does so when a result “can be estimated probabilistically”. For example, a particular set of data may have a definable probability of indicating a trend.
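For reference, here is that likelihood scale as I read it – a sketch for orientation; the report itself is the authoritative source:

```python
# AR4 likelihood scale: calibrated phrase -> probability of occurrence,
# as I read the table (approximate; consult the report for the
# authoritative wording).
AR4_LIKELIHOOD = {
    "virtually certain":      "> 99%",
    "extremely likely":       "> 95%",
    "very likely":            "> 90%",
    "likely":                 "> 66%",
    "more likely than not":   "> 50%",
    "about as likely as not": "33% to 66%",
    "unlikely":               "< 33%",
    "very unlikely":          "< 10%",
    "extremely unlikely":     "< 5%",
    "exceptionally unlikely": "< 1%",
}

for phrase, prob in AR4_LIKELIHOOD.items():
    print(f"{phrase:24s} {prob}")
```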
[Note that our ability to calculate such statistics requires us to make assumptions about randomness – typically a bell-shaped curve, or Gaussian distribution. This implies that we have a theory about the causes of variation in the data in the first place! For example, if we say we’re 99% certain that the glaciers are melting, this finding must have been calculated against a null hypothesis that changes in glacier volume are subject to random fluctuations. This may not be true. There could be reasons we are entirely unaware of for all the world’s glaciers to either melt or grow at the same time (on top of reasons for correlation between glaciers in the same region, which have presumably already been taken into account). Such “unknown unknown” correlation would invalidate the null hypothesis and hence the 99% “virtual certainty”. If we’re 99% sure of what the data tell us, then surely we must be at least 99% sure of our theoretical understanding. I’m sure Taleb would agree with me! It’s entirely illogical to have more faith in data-driven findings than in any aspect of the underlying theory explaining them! But this is not my main point today.]
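The point about hidden correlation can be made concrete with a quick Monte Carlo sketch (hypothetical numbers throughout – this is about the statistics, not any real glacier dataset): if the “random” fluctuations share a common component, a standard error computed as if the measurements were independent badly understates the true uncertainty.

```python
# Sketch: hidden correlation inflates apparent certainty.
# Hypothetical numbers throughout -- nothing here is real glacier data.
import numpy as np

rng = np.random.default_rng(42)
n_glaciers, n_trials = 50, 10_000

# Each glacier's measured trend = independent noise + a shared
# ("unknown unknown") component common to all glaciers.
indep = rng.normal(0.0, 1.0, size=(n_trials, n_glaciers))
shared = rng.normal(0.0, 1.0, size=(n_trials, 1))  # the correlated part
trends = indep + shared

means = trends.mean(axis=1)

# Standard error assuming the glaciers are independent:
naive_se = 1.0 / np.sqrt(n_glaciers)
# Actual spread of the mean once the shared component is present:
true_se = means.std()

print(f"naive SE  (independence assumed): {naive_se:.3f}")  # ~0.14
print(f"actual SE (shared noise present): {true_se:.3f}")   # ~1.0
# The actual SE is roughly seven times the naive one: a "99% certain"
# interval built on the independence assumption is far too narrow.
```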
No, what baffles me is why the IPCC restricts itself to a maximum of “very high”, that is, 90%, confidence when it comes to “scientific understanding”.
Politics may have played a part in the IPCC process. Some governments may have lobbied for 90% rather than 99% as the maximum possible confidence. But let’s put that to one side. I want to argue that a critical factor is widespread misunderstanding of the scientific process.
Practising scientists often cite the philosopher Karl Popper. They understand that theories can be “falsified”. Some may even have heard of Thomas Kuhn and appreciate that such “falsification” takes place in “scientific revolutions”.
But what happens in such revolutions? In fact, scientific theories are superseded rather than “falsified”. Let’s consider one or two examples very briefly. When Einstein “overturned” Newton’s theory of gravity, he didn’t demonstrate that Newton’s equations were wrong. Rather, he showed the limitations of Newton’s theory. Crucial experiments (where the difference was large enough to be measurable) showed that Einstein’s theory made more accurate predictions than Newton’s. In effect, Einstein incorporated Newton’s findings into his own theory of gravity. Albert never said: “Silly old Isaac’s made a mistake there.”
A case closer to the topic in question is the oft-cited theory of the 1970s that we were about to enter a new ice age. This theory hasn’t gone away. The Earth would be cooling, if it weren’t for global warming (though there is debate as to when the next ice age would begin). The current theory of global warming incorporates the ice age cycle, as well as all other prior theories of variation in the Earth’s climate, such as the effect of volcanic eruptions. Quantitative statements about man-made global warming take numerous other causes of climate variation into account.
Now, it’s possible to imagine reasons why the Earth might not warm as much as projected. For example, the solar system could enter some as-yet-undetected dust cloud. But any quantitative estimate of the effect of such a dust cloud would have to include the effects of man-made GW. And if the planet cooled dramatically as we entered the dust cloud, we’d still have to worry about its temperature rising beyond today’s level, because of our greenhouse gas emissions, when we came out again. In just the same way, if we solve the problem of global warming and get the climate back to something resembling its pre-industrial state, we will – over the longer timescale of millennia rather than decades – need to take account of the Earth’s ice age cycle which was apparently of such concern in the 1970s.
There are examples in science of theories that are (or could be) flat wrong. But these are theories for which there is no evidence, or for which the evidence has been misinterpreted due to problems inherent in the data-gathering process. This is most likely when observations are difficult, such as at the frontiers of physics. For example, the infamous string theory could simply be wrong, because it makes no new testable predictions.
Any replacement for a theory with lots of firm data, such as global warming, would have to provide explanations for all that data. Clearly this is easiest if the new theory explains the old theory as a special case, rather than by invalidating it entirely. In the history of science theories are almost always shown to be incomplete rather than “wrong”. In my opinion, Imre Lakatos understands this process most clearly, even though this aspect of his ideas is rarely stressed.
The probability of the theory of global warming actually being wrong is therefore vanishingly small. Our level of certainty is, in fact, far more than 99%.
So one of the underlying causes of programmes like “Swindle” is that even the scientific establishment is unclear about the nature of its own theory. Even if there are unknown unknowns and the planet does not end up warming over the 21st century and beyond, this would not in itself invalidate the theory of global warming.