The venerable James Hansen has drafted a paper (pdf) taking a broad step-back look at global warming (GW) science. This is important, because I don’t believe all the bad news is yet in the market. Unfortunately the paper is written in the Scienglish dialect, so I will try to translate.
Estimates of the temperature rise due to emissions of greenhouse gases (GHGs) usually take account only of “fast feedbacks”. It turns out the expected temperature rise should be doubled if we also take into account long-term “slow feedbacks”, such as changes in the planet’s albedo (reflectivity) due to the melting of ice-sheets. Therefore, to keep the temperature below dangerous levels, we need to keep atmospheric CO2 below 350ppm. We can do this by not venting the emissions from burning coal to the atmosphere – carbon capture and sequestration (CCS) would be OK – and by ensuring agriculture and forestry practices capture and retain carbon.
Point 1. The global community, e.g. the UN’s Framework Convention on Climate Change (UNFCCC) in 1992, has agreed that greenhouse gas (GHG) levels in the atmosphere must be stabilised at a level preventing “dangerous anthropogenic [caused by humans] interference with the climate system”. But what does this mean?
Point 2. The EU considers a rise of 2C or more over pre-industrial levels to be “dangerous”; the Intergovernmental Panel on Climate Change (IPCC) goes for a 2-3C rise. Hansen goes for 1.7C (or 1C after 2000). All much the same, given the uncertainties.
Point 3. The big question is what level of CO2 in the atmosphere will prevent more than a 2C rise? In this paper, Hansen et al argue that we have to take long-term feedbacks into account. The magnitude of these can be determined from looking at what has happened in the past.
Point 4. If CO2 is doubled i.e. to about 560ppm from pre-industrial levels, then temperatures would increase by 3C (a best guess). This doubling – due to fast feedbacks – represents an increase in energy (a “forcing”) of about 4W/m2 averaged over the planet. Climate sensitivity is therefore about 3/4C per W/m2 forcing. This can be validated by comparing temperature and atmosphere records over the last few ice ages (this information is known from Antarctic ice core and other records).
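A quick back-of-the-envelope in Python, just to check the arithmetic quoted above (the 3C-per-doubling and 4W/m2-per-doubling figures are those in the paper):

```python
# Fast-feedback climate sensitivity, using the figures quoted above:
# ~3 C of warming per doubling of CO2, and ~4 W/m2 of forcing per doubling.
warming_per_doubling_C = 3.0
forcing_per_doubling_W_m2 = 4.0

sensitivity = warming_per_doubling_C / forcing_per_doubling_W_m2
print(f"fast-feedback sensitivity: {sensitivity:.2f} C per W/m2")
# fast-feedback sensitivity: 0.75 C per W/m2
```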
Point 5. But, argues Hansen, over the ice age cycle the changes in the albedo (reflectivity) of the planet were secondary, long-term feedbacks: they resulted, over a longer time period, from the warming/cooling caused initially by changes in the Earth’s orbit and amplified by changes in GHG levels. Changes in GHGs are (according to the paper) a fast feedback; the resulting warming only slowly affects the distribution of ice and hence the albedo (reflectivity) of the planet. Well, the paper might be saying what I just wrote, but it may be relying only on the simpler point that when gauging the effect of increased GHG levels over a long period of time, we should ignore other forcings – these should be treated as secondary. Anyway, if we do this and ignore everything except GHG levels, the forcing that actually produced the ice age cycle temperature fluctuations of 5-6C averaged over the globe was only about half the total – i.e. the climate sensitivity to GHG forcing alone is nearer 1.5C per W/m2, double the fast-feedback figure.
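The doubling falls straight out of the arithmetic: same temperature swing, half the forcing. A sketch, using the post’s own numbers:

```python
# If only GHG forcing (about half the total) is credited with driving the
# 5-6 C ice-age temperature swings, the implied sensitivity doubles.
# Both input figures are the ones quoted in the post, not new data.
fast_sensitivity_C_per_W_m2 = 0.75   # from Point 4
ghg_fraction_of_total_forcing = 0.5  # "only about half" of the total

long_term_sensitivity = fast_sensitivity_C_per_W_m2 / ghg_fraction_of_total_forcing
print(f"long-term sensitivity: {long_term_sensitivity:.1f} C per W/m2")
# long-term sensitivity: 1.5 C per W/m2
```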
Point 6. So, argues Hansen, as well as the 0.6C warming we’re all being told is already in the system (due basically to seas taking centuries to warm up fully), there’s also another whopping 1.4C after that [assuming atmospheric GHG levels remain where they are now (385ppm CO2, ~420ppm CO2 eq) indefinitely]. “This further 1.4C warming in the pipeline is due to the slow surface albedo feedback”.
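To see roughly where that extra warming comes from, here is a sketch of the pipeline arithmetic. The logarithmic forcing formula F = 5.35 ln(C/C0) is the standard IPCC-style approximation, not something taken from Hansen’s paper, so treat the exact figures as illustrative – but the gap between the fast- and slow-feedback equilibria comes out at the same order as the 1.4C quoted:

```python
import math

# Equilibrium warming at today's CO2 level, with and without slow feedbacks.
# F = 5.35 * ln(C/C0) is the standard logarithmic forcing approximation
# (an assumption of this sketch, not a formula from the paper).
C0 = 280.0   # pre-industrial CO2, ppm
C = 385.0    # current CO2, ppm (as quoted in the post)

forcing = 5.35 * math.log(C / C0)   # ~1.7 W/m2
fast_eq = 0.75 * forcing            # fast-feedback equilibrium warming
slow_eq = 1.5 * forcing             # equilibrium including slow feedbacks

print(f"forcing: {forcing:.1f} W/m2")
print(f"fast-feedback equilibrium: {fast_eq:.1f} C")
print(f"with slow feedbacks: {slow_eq:.1f} C")
```

The slow-feedback equilibrium is double the fast one, so the extra warming still to come roughly doubles as well.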
Point 7. But is this conclusion from the ice age period valid as the Earth warms from where we are now? To decide this we have to look at the whole Cenozoic, i.e. the last 65.5 million years (my), i.e. since bye-bye dino time. Over the period from 50 million years ago (mya) to the ice ages, the global temperature fell 14C, and CO2 in the atmosphere fell from 1000-2000ppm to less than 500ppm over the last 35my, during which we’ve had icecaps – Hansen’s best estimate is that the Antarctic iced up when we got down to around 450ppm. So yes, is the answer. The Earth’s temperature will rise by around 1.5C per W/m2 of forcing from the current temperature, presumably due to loss of the remaining ice-sheets and darkening of northern continental areas.
Point 8. But we can temporarily overshoot the forcing that would trigger the long-term feedbacks, because they are, um, slow.
Point 9. What should the target be? Hansen is mainly concerned about CO2. Other GHGs (principally methane, CH4, and nitrous oxide, N2O) can be controlled. Answer: 350ppm max. The current level – about 385ppm – is already too high.
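How much does the 35ppm overshoot matter? A rough sketch, again using the standard logarithmic forcing approximation F = 5.35 ln(C/C0) (an assumption of mine, not a formula from the paper), together with the paper’s long-term sensitivity:

```python
import math

# Extra forcing carried by today's ~385 ppm relative to the 350 ppm target,
# using the standard logarithmic approximation (not from the paper itself).
excess_forcing = 5.35 * math.log(385.0 / 350.0)

# Eventual warming that excess implies, at the paper's long-term
# sensitivity of ~1.5 C per W/m2.
eventual_extra_warming = 1.5 * excess_forcing

print(f"excess forcing over 350 ppm: {excess_forcing:.2f} W/m2")
print(f"eventual extra warming: {eventual_extra_warming:.1f} C")
```

Small-sounding numbers, but remember Hansen’s whole dangerous-warming budget is under 2C.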
Point 10. Because “a large fraction of fossil fuel CO2 emissions stays in the air a long time”, we must leave some fossil fuels in the ground, by phasing out coal (unless the carbon emissions are captured and sequestered, i.e. we employ CCS technology) by 2030.
Point 11. Just leaving coal in the ground (but burning all the gas and oil) is not quite enough. But reforestation and carbon sequestration in soil (by creating “biochar” through pyrolysis of organic material, rather than, e.g., “slash and burn” practices) can make up the 50ppm difference by 2150. CCS applied to biofuel burning would draw down CO2 even more rapidly.
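To get a feel for the scale of that 50ppm, convert it to a mass of carbon. The conversion factor of roughly 2.13 GtC per ppm of CO2 is a standard figure, not a number from the post:

```python
# How much carbon must forestry and soils sequester to draw down ~50 ppm?
# 1 ppm CO2 ~= 2.13 GtC is the standard conversion (not from the post).
ppm_to_draw_down = 50.0
gtc_per_ppm = 2.13

carbon_gt = ppm_to_draw_down * gtc_per_ppm
print(f"~{carbon_gt:.1f} GtC to sequester via forestry and soils")
# ~106.5 GtC to sequester via forestry and soils
```

That is on the order of a decade’s worth of current fossil-fuel emissions, which gives a sense of why Hansen allows until 2150.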
Point 12. We need a CO2 price high enough to force CCS to be used, and to reward agricultural practices that sequester and preserve carbon.
The paper includes quite a bit of “Supporting Online Material”, including:
- an analysis of ice age forcings (GHGs and ice-sheets) which implies a sensitivity of 3/4 +/- 1/4C per W/m2.
- the variability of the forcing caused by orbital changes – the Milankovitch forcings which triggered the ice ages and deglaciations – is relatively small, < +/- 3W/m2. (This assumes “present-day seasonal and geographical distribution of albedo”).
- an analysis of the reserves of oil, gas and coal.
- a graph showing that the “CO2 airborne fraction”, i.e. the proportion of annual CO2 emissions remaining in the atmosphere, has been more or less constant at about 56% since 1957 (when Keeling first measured atmospheric CO2 accurately).
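The airborne-fraction arithmetic is easy to reproduce. The illustrative figures below (~8 GtC/yr emitted, ~2.1 ppm/yr atmospheric rise, 2.13 GtC per ppm) are round numbers I’ve assumed for the sketch, not data from the paper:

```python
# Airborne fraction: share of annual CO2 emissions staying in the atmosphere.
# All three input figures are illustrative round numbers, not from the paper.
annual_emissions_gtc = 8.0    # fossil-fuel emissions, GtC per year
atmospheric_rise_ppm = 2.1    # annual rise in atmospheric CO2, ppm
gtc_per_ppm = 2.13            # standard ppm-to-GtC conversion

airborne_fraction = atmospheric_rise_ppm * gtc_per_ppm / annual_emissions_gtc
print(f"airborne fraction: {airborne_fraction:.0%}")
# airborne fraction: 56%
```

In other words, only a bit over half of what we emit stays up; the oceans and biosphere have so far absorbed the rest.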