The Big Question

This post was originally published on Judith Curry’s blog. Reblogging it here for convenience.

In normal science, you get more certain the more the data turns out to fit the predictions of the theory. The IPCC and friends have come up with a method of calculation that turns this on its head: the weaker the match between predictions and data, the more certain you become. In this essay, I try to unravel the statistical gymnastics that help make this seem reasonable.


The Big Question

The Big Question in the climate change debate, as traditionally and conventionally posed, is: “is global warming caused by humans?”

To those of us who know a little about climate science, it’s clearly an over-simplification, since climate scientists typically consider both anthropogenic and natural factors as part of the equation. The IPCC version of the Big Question is “is more than half of global warming caused by humans?” In the IPCC universe, the quest for the answer to the Big Question is known as attribution.

Even though it’s not really black and white, the Big Question is used as if it were, and it determines who you are. Are you a “warmist” or a “skeptic”? To be properly pigeonholed, you must answer the Big Question. And to answer it, you must understand it. That, however, may not be as easy as it seems.

Relative and absolute AGW

According to the Big Question, the key variable is the relative size of the contributions from anthropogenic and natural causes. But for practical purposes, and thus for policy, what matters is not relative but absolute change. Global average temperature, in absolute terms, is assumed to have a real long-term effect on the real world, potentially affecting humans and ecosystems in the form of drought, floods, storms, sea level rise, and so on. (Or alternatively, global temperature may be considered an indicator of whatever climate processes do have an impact.)

Furthermore, it’s not just the amount of change, but also the rate of change that is important; the rate of change is relevant to our ability to adapt and to the perceived urgency to act. Two degrees of warming will have the same effect whether it’s caused by people, nature or something else.

Still, it matters how large the anthropogenic component is by itself, since a key policy issue is avoiding dangerous anthropogenic climate change. In theory at least, it is possible that it might become so large or happen so fast that it would overwhelm natural climate change and cause events that could not happen if nature were left to its own ways.

This in turn means that the absolute amount of the anthropogenic warming is the most important variable. Less of it, in absolute terms, will most likely lead to lower projections for future temperatures.

It is possible for the anthropogenic contribution to be lower in absolute terms while the relative contribution is higher. This may be the case if the total observed rate of warming is lower. For a simplistic example, 100% of 1 degree is twice as much in relative terms as 50% of 4 degrees, even though it is only half as much in absolute terms (1 degree versus 2 degrees).
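
To make that arithmetic concrete, here is a minimal sketch of the comparison; all of the figures are made up purely for illustration:

```python
# Illustrative comparison of relative versus absolute anthropogenic warming.
# All figures are made up for the sake of the example.

# Case A: 4 degrees of total warming, half of it anthropogenic.
total_a, share_a = 4.0, 0.50
anthro_a = total_a * share_a   # 2.0 degrees

# Case B: 1 degree of total warming, all of it anthropogenic.
total_b, share_b = 1.0, 1.00
anthro_b = total_b * share_b   # 1.0 degree

print(f"Case A: {share_a:.0%} relative, {anthro_a} degrees absolute")
print(f"Case B: {share_b:.0%} relative, {anthro_b} degrees absolute")
# Case B has the higher relative contribution (100% vs 50%)
# but the lower absolute contribution (1 degree vs 2 degrees).
```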

The time frame considered in each of the three most recent IPCC reports has been from around 1950 to the “present” for that report. This means that it has been extended twice. The difference is small relative to the total period. But the change caused by considering the most recent 10–15 years or so is significant because this period is the specific subject of much discussion. And since there has been little or no warming during this time, the average rate of warming over the entire time period has decreased.

The increase in certainty claimed by the IPCC applies to a fixed range of relative contribution (more than 50%). But since the average rate of total warming is successively lower, the 50% represents successively less absolute warming, and so the increased confidence concerns successively weaker hypotheses in absolute terms.

However, this is not quite the whole story. Gavin Schmidt at RealClimate points out another difference, which pulls in the opposite direction. He claims that the IPCC AR5 (fifth assessment report) statement is stronger than the one in AR4, since the AR5 refers to the total anthropogenic contribution while the AR4 statement refers to the effect of greenhouse gases alone, saying: “As I discussed last time, the GHG trend is almost certainly larger than the net anthropogenic trend because of the high likelihood that anthropogenic aerosols have been a net cooling over that time.” This may well be the case, but it is itself uncertain, since it rests on the estimated cooling effect of anthropogenic aerosols.

What is certain is that 50% of less is less. “More than half” of less warming per decade is less warming per decade.

So we have two changes, one in the direction of a weaker hypothesis, and another more uncertain one in the direction of a stronger hypothesis. Whatever the balance between them, the fact is that the IPCC is presenting confidence estimates on a hypothesis that appears to be the same but is actually changing from one report to the next.

The debate between the two “sides” seems not to recognize the fact that the answer to the Big Question may depend, at least theoretically, on the time frame under consideration.

To summarize, it’s important to understand the difference between absolute and relative contributions; without that distinction, misunderstandings are bound to follow. Most of the world seems to act as if the relative contribution of AGW is the one thing that matters. In principle, this is the same fallacy as believing that I can drink as much beer as I want without getting drunk, since the relative alcohol content of beer is fairly low.

But that’s not all. There is also the issue of net contributions.

The net warming model

You might think that the anthropogenic portion of global warming must be somewhere between 0 and 100%. I did until fairly recently. So, it seems, have most others, including Judith Curry: “There is general agreement that the percentages of warming each attributed to natural and anthropogenic causes is less than 100% and greater than 0%.”

But that is not necessarily the case in the IPCC universe. A contribution may be more than 100% or less than 0. This seems clear as early as the IPCC Third Assessment Report (TAR) from 2001: “Anthropogenic greenhouse gases are likely to have made a significant and substantial contribution to the warming observed over the second half of the 20th century, possibly larger than the total observed warming.” In other words, AGW might represent more than 100% of the warming that has taken place.

How can this be? It’s counterintuitive but not very advanced. Let us imagine for the sake of illustration that in the absence of AGW, the world would have cooled by an amount equal to the warming that has in fact occurred. In this case, the anthropogenic contribution would be 200%, and the natural contribution would be -100%. In Gavin Schmidt’s discussion of the AR4 and AR5 attribution statements, there is a graph that illustrates how the relative anthropogenic contribution can be higher than 100% or even lower than 0 (although this is shown as having extremely low probability).
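
To make the bookkeeping explicit, here is a minimal sketch of the net-contribution arithmetic; the function name and the specific numbers are mine, chosen only to reproduce the 200%/–100% example:

```python
# Minimal sketch of the net-contribution bookkeeping: components are signed
# temperature changes, so cooling counts as negative warming. Numbers are
# purely illustrative.

def relative_contributions(anthro, natural):
    """Return the fractional contribution of each signed component."""
    total = anthro + natural
    return anthro / total, natural / total

# Imagine 1 unit of observed warming, while nature alone would have
# cooled the planet by 1 unit (so the anthropogenic change is +2).
anthro_share, natural_share = relative_contributions(anthro=2.0, natural=-1.0)
print(f"anthropogenic: {anthro_share:.0%}, natural: {natural_share:.0%}")
# -> anthropogenic: 200%, natural: -100%
# The two shares always sum to 100% of the observed (net) change.
```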

This is not generally understood. Or perhaps it’s vaguely understood, but underappreciated. But it has huge consequences for the way data is interpreted.

No one seems to have made an effort to explain the concept of net contributions. Gavin Schmidt seems to take it for granted and does not point it out explicitly. And the IPCC’s treatment of it is confusing. Strictly speaking, the AR5 attribution statement contradicts the idea of net contributions by saying:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.

Since it is not possible to cause events that never actually happen, there is no way greenhouse gases could have caused more warming than actually occurred. This means that in this instance “more than half” must be read as referring to a percentage between 50 and 100.

This is analogous to attempted murder. If I point a gun at you and fire it, but something stops the bullet in its path, I did not cause you to die. It’s a trivial and conclusive inference: you are still alive, therefore I did not kill you, nor did anyone else. Similarly, if a given amount of warming was prevented from happening by a cooling influence, the warming did not happen and was not caused by CO2 or anything else.

I will choose to assume that those who wrote the IPCC attribution statement intended it to refer to the net anthropogenic contribution and did not realize that a crucial aspect of its meaning had been “lost in translation”.

How is it crucial? The first thing we might notice is that what the IPCC is 95% certain about is not what we thought. It is a range from perhaps 50% to about 200% instead of a range from 50% to 100%. The ranges are on different variables, so there is no direct way to compare them. I have no idea how to interpret the difference in an intuitively meaningful way; it may not even be possible. But there is no doubt that they are different. Very few of those who are exposed to the 95% figure will have a full understanding of what it means.

After you get over the initial surprise about the more than 100%, the net contribution model seems like a perfectly valid mathematical algorithm. However, that does not mean you have understood its implications and interpretation. It is a theoretical abstraction that is counterintuitive in everyday terms. In this model, a million degrees of warming followed by a million degrees of cooling is just a net zero warming. But we don’t normally think of cooling as negative warming. Cooling is the opposite phenomenon of warming, not the same thing with the opposite sign. We think of zero as nothing, and “less than nothing” is just an exaggerated way of saying nothing.

It’s a basic mathematical property of the model that, although an individual contribution may be more than 100% or negative, the sum of all the contributions must always be 100%, which represents the total warming over the period. If the anthropogenic contribution is 200%, the natural contribution must necessarily be –100%.
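
Written out as an identity, with ΔT_a and ΔT_n standing for the signed anthropogenic and natural changes in this simple two-component accounting:

```latex
\frac{\Delta T_a}{\Delta T_a + \Delta T_n}
  + \frac{\Delta T_n}{\Delta T_a + \Delta T_n} = 1,
\qquad \text{e.g.} \qquad
\frac{+2}{+1} + \frac{-1}{+1} = 200\% + (-100\%) = 100\%.
```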

Another characteristic of the model is illustrated by the bell curves in the RealClimate post, which shows two distributions centered on 80% and 100%. Both generate a 95% probability that the actual value is more than 50%. The curve centered on 100% is significantly broader than the one centered on 80%, representing greater uncertainty around the best estimate. If instead we imagine that the uncertainty is equal in the two cases, the one with the higher best estimate is bound to generate a higher calculated likelihood for the attribution statement. In other words, a higher best estimate implies greater confidence and less uncertainty that the contribution is more than 50%, all else being equal. So far this may seem reasonable, but it has surprising consequences when we explore how it affects the interpretation of the so-called pause or hiatus in global warming.
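
As a rough illustration of that last point, the following sketch treats the relative contribution as normally distributed; the 20-percentage-point spread and the normality assumption are mine, not taken from the RealClimate post:

```python
# Sketch: with equal uncertainty, a higher best estimate for the relative
# anthropogenic contribution yields a higher calculated probability that the
# contribution exceeds 50%. The normal distribution and the 20-point spread
# are assumptions for illustration only (requires scipy).
from scipy.stats import norm

sigma = 20.0  # assumed standard deviation, in percentage points

for best_estimate in (80.0, 100.0):
    p_more_than_half = norm.sf(50.0, loc=best_estimate, scale=sigma)
    print(f"best estimate {best_estimate:.0f}%: "
          f"P(contribution > 50%) = {p_more_than_half:.1%}")
# best estimate 80%:  roughly 93% probability
# best estimate 100%: roughly 99% probability
```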

What does the pause mean?

Skeptics claim that the pause indicates that the natural influence on climate is stronger than previously believed. But in the net warming model where cooling is negative warming, natural cooling implies a lower proportion of natural warming and a higher proportion of anthropogenic warming.

Let’s do some simple calculations with approximate numbers using the net warming model, just to try to understand how this works; the arithmetic is also written out as a short script after the three scenarios. Since the result will be relative percentages, we can use arbitrary units for the absolute numbers. We’ll pretend to know that during the second half of the 20th century, we had 3 arbitrary units of anthropogenic warming and 1 unit of natural warming. This adds up to 4 units of total warming, making the relative anthropogenic contribution 75%.

Scenario 1: Continued warming. Let’s consider what would have happened if warming had continued at the same rate. Since the time from 2000 until now is about one fourth of the first period, this means 1 unit of total warming. As far as I can tell, this warming, if it had occurred, would likely have been attributed almost exclusively to anthropogenic forcing. (Unless, perhaps, there had been a clear, detectable increase in solar forcing.) So the recent period represents 1 unit of AGW and zero natural influence. Thus we get 4 units of AGW versus 1 unit from natural causes. That’s 80% AGW. A slight increase, which might seem intuitively correct since we’re exploring counterfactually the possibility that warming had continued somewhat as expected.

Scenario 2: The pause. Now consider a scenario more like what actually happened. If temperatures during the second period remained constant, the expected 1 unit of AGW would seem to have been neutralized by a similar amount of natural cooling. That adds up to 4 AGW units as before, but now the 1 unit of natural warming from the first period has been canceled out by the 1 unit of natural cooling from the second, for a total of 0. So now that there has been no rise in temperature for the second period, the calculated anthropogenic contribution for the entire period is 100%. This is more than for the temperature rise scenario. This may seem odd, but that’s what the numbers tell us.

Scenario 3: Cooling. We could do another scenario, imagining that there had been 1 unit of total cooling instead in the period since 2000. Presumably that would imply 2 units of natural cooling in the recent period, leaving the natural contribution at –1 unit over the entire time frame. We still have 4 units of AGW, which works out to a relative net anthropogenic contribution of 133%.
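
Here is the same back-of-the-envelope arithmetic written out as a short script, using the same made-up units; this is only a reconstruction of the reasoning above, not anyone’s actual attribution method:

```python
# Reconstruction of the three scenarios above (arbitrary units, illustrative only).
# First period: 3 units of anthropogenic warming, 1 unit of natural warming.
# Second period: 1 unit of anthropogenic warming is assumed to continue in all
# cases; the natural component is whatever is needed to match the assumed total.

def net_anthro_share(anthro_units, natural_units):
    """Relative net anthropogenic contribution over the whole period."""
    return anthro_units / (anthro_units + natural_units)

base_anthro, base_natural = 3.0, 1.0   # first period
recent_anthro = 1.0                     # assumed AGW in the second period

scenarios = {
    "continued warming (+1 unit total)": +1.0,
    "the pause (0 units total)": 0.0,
    "cooling (-1 unit total)": -1.0,
}

for name, recent_total in scenarios.items():
    recent_natural = recent_total - recent_anthro   # 0, -1 and -2 respectively
    share = net_anthro_share(base_anthro + recent_anthro,
                             base_natural + recent_natural)
    print(f"{name}: anthropogenic contribution = {share:.0%}")
# -> 80%, 100% and 133%, matching the three scenarios above.
```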

The most remarkable thing about this is not the peculiar fact that cooling implies greater AGW dominance. It’s the fact that all these scenarios imply a higher best estimate for the AGW contribution. Now remember that a higher best estimate automatically leads to a higher degree of calculated confidence for the attribution statement. Therefore, it is hard to imagine what kind of change would have led to a conclusion of less confidence. Thus the increased confidence looks like an artifact of the methodology rather than a meaningful statement about our knowledge of the real world.

If the pause were to continue into the future, the calculated answer would become arbitrarily certain eventually. We could imagine it continuing for a century, for instance. (I haven’t tried to calculate how many nines we would need to express the certainty then.) This highlights the point that the skeptics are raising: sooner or later, there is no way to avoid asking whether some premise behind the model is flawed. The calculated confidence simply has to be questioned at some point even as it keeps growing.
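
To get a feel for how fast the calculated certainty would grow, here is a sketch that extends the hypothetical pause within the same toy numbers, reusing the illustrative normal-distribution assumption from before; none of this is the IPCC’s actual procedure:

```python
# Sketch: extend the hypothetical pause within the same toy numbers and
# recompute the confidence, reusing the illustrative normal-distribution
# assumption (20-percentage-point spread, requires scipy). All numbers made up.
from scipy.stats import norm

base_anthro, base_natural = 3.0, 1.0   # first period, arbitrary units
sigma = 20.0                            # assumed spread, percentage points

for pause_units in (1, 2, 4):
    anthro = base_anthro + pause_units     # AGW assumed to continue through the pause
    natural = base_natural - pause_units   # cooling during the pause cancels that AGW
    best_estimate = 100.0 * anthro / (anthro + natural)
    p = norm.sf(50.0, loc=best_estimate, scale=sigma)
    print(f"pause of {pause_units} unit(s): best estimate {best_estimate:.0f}%, "
          f"P(>50%) = {p:.4%}")
# The best estimate climbs (100%, 125%, 175%) and the calculated certainty
# climbs with it, even though no further warming has been observed.
```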

I can see two explanations for this apparent absurdity. The first explanation is that the increase in confidence is less impressive than it seems. While the best estimate keeps increasing as we hypothetically lengthen the pause into the future, it still represents the same absolute amount of warming. On the other hand, half of it in relative terms (or any fixed percentage) represents a successively smaller absolute amount of warming. The hypothesis keeps getting more certain (allegedly) as it becomes weaker in absolute terms.

The second and probably more important explanation is circular reasoning or confirmation bias. The increase in certainty depends on the premise that AGW has (implicitly) continued somewhat as expected in spite of no observed warming. We are extrapolating from our current assumptions and then strengthening those same assumptions by treating the extrapolation as part of a larger data set. (The “data” here is the input to the algorithm, which is not just the temperature record itself, but also the assumptions about the relative AGW contribution.)

Instead of the skeptic view that the pause and the discrepancy between models and observations falsify or at least weaken AGW theory, the net warming model paradoxically generates what seems like the opposite result. It allows us to believe that “the truth is out there”, that there is a right answer, and that we are converging on it. This is based on the calculated higher confidence level and also on the general notion that the longer time frame, by providing a larger data set, increases our knowledge and understanding. We “should” be more confident because we have more knowledge. As Schmidt says in his discussion:

It is worth asking what the higher confidence/lower error bars are associated with. First, the longer time period (an extra 5 years) makes the trends clearer relative to the noise, multiple methodologies have been used which get the same result, and fingerprints have been better constrained by the greater use of spatial information.

On the face of it, it seems odd that the trends would become clearer when the trend itself changes, as the temperature curve is behaving in a way that was not expected. The opposite, that the trends would become less clear, seems intuitively more plausible.

More generally, new knowledge and learning should increase our confidence only if the new data confirms rather than contradicts what we believed earlier. This might seem obvious, but apparently it is not obvious enough.

We could take Schmidt’s statement to mean that the additional 5 years can be considered a sample of the same population of data as the earlier period, implying that we can draw the same conclusions with greater confidence. This would imply that natural variability and natural forcings are  behaving the same way as before. If we knew that these only manifested as short-term periodic or random changes, that would be reasonable. But it is clearly not the case. According to the IPCC AR4, “Internal variability is present on all time scales”.

Since there are longer-term cycles or fluctuations, we cannot know which sections of these we are sampling unless we can characterize the cycles or fluctuations in a precise way. And when we don’t know which sections we are sampling, we have no way of guessing what their shape or trend is or how they will continue in the future. This subject has been discussed in more detail elsewhere, for instance in the comments to The logic (?) of the IPCC’s attribution statement at Climate Etc.

The Big Question is the wrong question

As far as I can tell, the net warming model is a misleading statistical abstraction like many others before it. It leads us to believe that the Big Question can be answered by a mechanical, unambiguous process with quantifiable uncertainties.

It leaves the impression that our questions have been answered when they haven’t. It does not answer the question we want to ask and believe we’re asking: the intuitive question about how much of climate change we are causing. The answer we’re looking for is a percentage between 0 and 100. If the answer we’re getting is something else, even potentially, it must be the answer to a different question.

The model is even further from answering the question we should be asking and need to ask: how much actual warming are we causing?

If we are told that the best estimate for the anthropogenic contribution is 100%, we may be misled into believing that internal variability plays no role whatsoever. And when the Big Question is posed in a similar way in successive IPCC reports, we get the false impression that the numbers are comparable, even though they apply to different time frames.

Focusing on the anthropogenic effect in absolute terms instead would seem to be more enlightening and useful for policy purposes. But since the estimates regarding the absolute quantities seem to be smaller and less certain (rather than the opposite) in the AR5 than in the AR4, they would be difficult to market as an increasingly strong case in favor of current climate policies. The relative net warming model may be the only defense against the assertion that IPCC AR5 weakens the case for AGW.
