30 December 2013

Jobs in the US Economy 1948-2011

The figure above shows data on employment in the US economy from 1948 to 2011 across 10 sectors. The data comes from the Bureau of Economic Analysis. There is a discontinuity in the data in 1998, when category definitions were changed, and we have not tried to standardize across that discontinuity. In addition, the graph shows "full-time equivalents" rather than the frequently used full- and part-time employment series.

Some numbers:
  • In 1948 the US had 48.0 million FTE jobs.
  • In 2011 the US had 122.6 million FTE jobs.
  • From 1948 to 2011 the US added 74.6 million FTE jobs, an increase of about 1.5% per year. (It was 1.3% from 1970 to 2011.)
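The growth rate in the last bullet is just the compound annual rate implied by the two endpoint totals; a minimal check of the arithmetic, using only the figures quoted above:

```python
# Compound annual growth rate implied by two endpoint job totals.

def cagr(start: float, end: float, years: int) -> float:
    """Annualized growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

jobs_1948 = 48.0   # million FTE jobs
jobs_2011 = 122.6  # million FTE jobs

rate = cagr(jobs_1948, jobs_2011, 2011 - 1948)
print(f"{rate:.1%} per year")  # 1.5% per year
```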
The figure below shows how the overall rate of job growth has changed over time. It shows the trailing 10-year annual growth rates in FTE jobs.

The data clearly show the effects of the Vietnam War in the 1960s (i.e., from 1964 to 1968 government FTE jobs increased annually: 2.9%, 3.2%, 7.9%, 5.4%, 3.2%). The data also show a spectacular slowdown in job growth from 2001 to 2011. In fact, the US had 1.1 million fewer FTE jobs in 2011 than it did in 2001. The data show that over the past decade there is more going on with job stagnation than just the effects of the financial crisis post-2007.

The graph below shows the same data in percentage terms.
Some numbers:

  • In 2011 government accounted for a smaller percentage of jobs (16.5%) than it did in 1951 (17.6%).
  • Manufacturing was 31.1% of all jobs in 1950; it was 9.4% of all jobs in 2011.
  • Agriculture was 4.3% of all jobs in 1950; it was 0.9% in 2011.
  • In 1950 finance and services were 16.7% of all jobs; in 2011 they were 49.7%.

Behind these numbers are some interesting dynamics. While manufacturing has shrunk dramatically as a proportion of all jobs, since 1950 the US has seen a decrease of only 3.7 million manufacturing jobs. Over that same period the US has added 47.1 million service jobs. Government has remained remarkably stable as a proportion of jobs, yet has added 13.8 million jobs since 1950 -- though most of those were added before 1970.

Here is an interesting thought experiment:

Consider that from 1970 to 2001 jobs in the overall economy grew by 1.8% per year. In government they grew at half that rate. From 2001 to 2011 government jobs grew at 0.4% per year -- less than half the rate of the previous three decades.

  • If government jobs had grown 2001-2011 at the same rate as government job growth 1970-2001, then the US would presently have 1 million more jobs.  The current unemployment rate would be 6.4% rather than 7.0%.
  • If government jobs had grown 2001-2011 at the same rate as overall growth of jobs in the US economy 1970-2001, then the US would presently have 3.03 million more jobs. The current unemployment rate would be 5.1%.
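The counterfactuals above can be reproduced by compounding a 2001 government employment baseline forward at the alternative rates. The ~19.4 million baseline below is my back-of-envelope figure (16.5% of 122.6 million 2011 FTE jobs, discounted at 0.4% per year back to 2001), not a number stated in the post:

```python
# Compound a 2001 government FTE baseline forward at alternative
# annual growth rates; the difference is the "missing jobs" figure.

def compound(jobs_start: float, annual_rate: float, years: int) -> float:
    return jobs_start * (1 + annual_rate) ** years

gov_2001 = 19.4  # million government FTE jobs in 2001 (assumed baseline)

actual = compound(gov_2001, 0.004, 10)          # 0.4%/yr, the observed rate
counterfactual = compound(gov_2001, 0.009, 10)  # half the 1.8% economy-wide rate

print(f"extra jobs: {counterfactual - actual:.1f} million")  # extra jobs: 1.0 million
```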

What could the US government do with 3 million more employees?  Oh, I'm sure we could think of something.

24 December 2013

Income Inequality in Global Perspective

In the Financial Times today, John Gapper analyzes changes in global income in recent decades. He concludes:
[T]he rise of China and India – two poor but populous countries – has made global inequality (measured by the disparity in individual incomes, regardless of where people live) less pronounced. The world’s Gini index of inequality fell between 2002 and 2008 – perhaps for the first time since the Industrial Revolution – and the growth of Indonesia and Brazil is pushing in the same direction.

“China is like a sumo wrestler who is fighting against global inequality,” says Branko Milanovic, lead economist at the World Bank. “He is standing up against all the rest of the forces, but he is a big guy. Now, India has become a second sumo wrestler.”

Those changes are seen in Chart 1 [above], which shows that two groups in the world did well between 1988 and 2008, achieving the highest real increases in their income.

The first was the rich – the top 10 per cent of earners and, within that, the 1 per cent. The other gainers were in the mid-tier – workers in emerging economies who were moving out of poverty.

The two groups that did worst were the very poor – those in the bottom 5 per cent, in sub-Saharan Africa and elsewhere, and the western middle classes, both in the US and western Europe and former Eastern Bloc countries. Their income rises did not match the luckier groups and, at the 75th percentile – including the US middle class – stagnated and even fell.
The figure above comes from a fascinating and provocative paper by Branko Milanovic (2012) of the World Bank ("Global Income Inequality by the Numbers: in History and Now" here in PDF). Milanovic concludes:
There are rich countries that have accumulated lots of wealth, and transmit that wealth, along with many other advantages, to the next generations of their citizens. This is why, for example, the poorest Americans are relatively well-off by world standards. They are lucky to have been born in the country that is rich (or has become rich; the case was different with the poorest Americans in the 17th century). And there are also people from poor countries who do not have wealth, and advantages and opportunities it confers. But—and this is in stark difference to the within-country case—this is considered unobjectionable, or rather it is not questioned whether one may keep on benefiting from something that the previous generations have created, and she has simply inherited by virtue of birth. In one case, we frown upon the transmission of family-acquired wealth to offsprings if two different individuals belong to the same nation. In the other case, we take it as normal that there is a transmission of collectively acquired wealth over generations within the same nation, and if two individuals belong to two different nations, we do not even think, much less question, such acquired differences in wealth, income and global social position.
He continues:
If citizenship explains 50 percent or more of variability in global incomes, then there are three ways in which global inequality can be reduced. Global inequality may be reduced by high growth rates of poor countries. This requires an acceleration of income growth of poor countries, and of course continued high rates of growth of India, China, Indonesia, etc. The second way is to introduce global redistributive schemes although it is very difficult to see how that could happen. Currently, development assistance is a little over 100 billion a year. This is just five times more than the bonus Goldman Sachs paid itself during one crisis year. So we are not really talking about very much money that the rich countries are willing to spend to help poor countries. But the willingness to help poor countries is now, with the ongoing economic crisis in the West, probably reaching its nadir. The third way in which global inequality and poverty can be reduced is through migration. Migration is likely to become one of the key problems—or solutions, depending on one’s viewpoint— of the 21st century. To give just one stark example: if you classify countries, by their GDP per capita level, into four “worlds”, going from the rich world of advanced nations, with GDPs per capita of over $20,000 per year, to the poorest, fourth, world with incomes under $1,000 per year, there are 7 points in the world where rich and poor countries are geographically closest to each other, whether it is because they share a border, or because the sea distance between them is minimal. You would not be surprised to find out that all these 7 points have mines, boat patrols, walls and fences to prevent free movement of people. The rich world is fencing itself in, or fencing others out. But the pressures of migration are remaining strong, despite the current crisis, simply because the differences in income levels are so huge.

I conclude with something that resembles a slogan: either poor countries will become richer, or poor people will move to rich countries. Actually, these two developments can be seen as equivalent. Development is about people: either poor people have ways to become richer where they are now, or they can become rich by moving somewhere else. Looked from above, there is no real difference between the two options. From the point of view of real politics, there is a whole world of difference though.
Some perspective on where the world may be heading can be found in the following figure, which shows an estimate of global income distribution from 1820 to 2000 (Luiten van Zanden et al. 2011, "The Changing Shape of Global Inequality 1820-2000: Exploring a new dataset," here in PDF).
That figure shows both accumulating wealth and greater global equality (and some readers may recognize it as a version of a similar figure which appears in The Climate Fix). Luiten van Zanden and colleagues conclude:
Our most striking results point to important changes in the structure of global inequality. It was a clear unimodal distribution in the 19th century, but it became increasingly bi-modal during the middle decades of the 20th century, when a clear separation between ‘rich’ and ‘poor’ peaks in the global income distribution emerged. This is a striking result, because at the same time, as we saw, the share of the very poor fell rapidly during this period, both in absolute terms and as a share of the world population. Between 1980 and 2000, the shape of the global distribution changed ‘suddenly’ from a bi-modal to a unimodal distribution, mainly due to the rapid growth in countries such as China, India and Indonesia. Our speculation that these changes in the global income distribution were linked to processes of globalization and de-globalization in the world economy, clearly require further explanation. The globalized world of the (late) nineteenth century produced a unimodal distribution. Processes of de-globalization in the middle decades of the twentieth century had two effects on global inequality: nation states acquired the freedom to build a welfare state that sharply reduced income inequality within countries (in the richer part of the world), but at the same time it seems to have led to the emergence of a bi-modal distribution on a global scale. The dramatic process of globalization of the final decades of the 20th century reversed both changes: it led to a strong increase in within country inequality (bringing it back to its level from before the ‘egalitarian revolution’ of the twentieth century), and it resulted in the sudden appearance of a unimodal income distribution on a global scale (and a small decline in between country inequality).
From this perspective, an important question is how then to balance within-country inequality with global inequality, if indeed, as the FT argues:
The forces producing the dispersion of income and wealth in western countries are hard to reverse. They are also the forces that have helped the emerging middle class of China, India or Brazil.
Implicated are not just policies on growth and (re)distribution, but as Milanovic notes, immigration and globalization.

Expect more discussion of these themes on this blog in 2014.

19 December 2013

Graphs of the Day: US GDP 1947-2011

Total US GDP in constant 2011 dollars.
Proportion of GDP by sector.
  • Government in 1960: 13.2%; in 2011: 13.2%
  • Agriculture & manufacturing in 1950: 34%; in 2011: 13%
  • Finance & services in 1950: 26%; in 2011: 52%

Data source: US BEA

17 December 2013

Tenure and Due Process at the University of Colorado

UPDATE Dec 18: It appears that CU officials have come to their senses and are wisely backing down, at least according to this new article in The Daily Camera. Kudos to Sarah Kuta for her reporting on this.

Original post follows . . .

The Boulder Daily Camera reports:
University of Colorado officials acknowledged Monday that sociology professor Patti Adler's lecture on prostitution led them to suspend her from teaching her popular "Deviance in U.S. Society" course next spring -- but they denied firing her or forcing her into retirement.
The issue here involves a skit that Professor Patricia Adler put on in her class back in November, in which students dressed up as different types of prostitutes and engaged in a role play before the class. Adler is a full professor here at CU in the Department of Sociology, and while she is a colleague of mine, I don't think we've ever met.

The "suspension" is apparently a matter of sexual harassment. CU Provost Russ Moore wrote to the campus community yesterday explaining:
A number of you have raised concerns about academic freedom and how it may connect to this situation. Academic freedom protects faculty who teach controversial and uncomfortable/unpopular subjects. However, academic freedom does not allow faculty members to violate the University's sexual harassment policy by creating a hostile environment for their teaching assistants, or for their students attending the class.

In this case, University administrators heard from a number of concerned students about Professor Adler's "prostitution" skit, the way it was presented, and the environment it created for both students in the class and for teaching assistants. Student assistants made it clear to administrators that they felt there would be negative consequences for anyone who refused to participate in the skit. None of them wished to be publicly identified.
As described by Provost Moore, the environment created by Professor Adler would indeed be covered by the University's policy on sexual harassment. The relevant part of that policy states:
Sexual harassment consists of interaction between individuals of the same or opposite sex that is characterized by unwelcome sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when . . . such conduct has the purpose or effect of unreasonably interfering with an individual's work or academic performance or creating an intimidating, hostile, or offensive working or educational environment.
However, at this time it does not appear that Professor Adler has actually been charged with any violation of this policy, much less been investigated, found guilty or sanctioned. Provost Moore explains:
Professor Adler has not been fired or forced to retire. As to comments she has made that she might be fired in the future, I should note that any employee at the University -- including faculty members -- found responsible for violating the University's sexual harassment policy, is subject to discipline up to and including termination.
Here is where it gets a bit unclear and definitely troubling:
Adler, however, told the Daily Camera on Monday that university administrators gave her an ultimatum: take a buyout and retire, or stay at the university but not teach her signature class next semester. . .

Adler said during a meeting with CU administrators earlier this month she was offered a buyout consisting of two years' salary paid over five years. The alternative was to stay at the university, but not teach her deviance course next semester. . .

The second option came with a caveat, Adler said. If the administration received even one complaint about her, Adler said she was told she would be fired immediately, without retirement benefits.
This is weird for several reasons.

First, department chairs usually determine what classes their faculty teach based on all sorts of criteria. I've even had the experience of a chair telling me that I could no longer teach a "signature class." I didn't like it, and I suppose I could have appealed at various levels -- CU even has a faculty ombudsman. Sociology could have easily ginned up an air-tight justification for taking Adler off this course -- teaching needs, faculty balance, program emphasis, whatever. That they didn't speaks either to some serious ham-handedness or other things going on.

Second, the alleged "buyout" coupled with the threat of being fired without retirement benefits (We get retirement benefits!?) is simply bizarre. Professors simply cannot be fired because of a complaint. There is a very involved set of procedures which govern "Faculty Dismissal for Cause." The idea that the administration would threaten a faculty member in this way sounds like the Keystone Cops, or even worse, an invitation to be sued.

Also strange is the fact that there is apparently no complainant and no actual accusations related to sexual harassment:
Adler said, and two investigators from the [University of Colorado Office of Discrimination and Harassment] attended the Nov. 5 lecture on prostitution.

During the lecture, many of Adler's teaching assistants portrayed prostitutes ranging from sex slaves to escorts, and described for the class their lifestyles.

On Dec. 5, Adler said she was invited to a meeting that included the two investigators, College of Arts and Sciences Dean Steven Leigh, Associate Dean Ann Carlos and a member of the university's legal team.

"They said this skit was a risk to the university," Adler said. "(The two investigators) scared the administrators so much that the administrators said I have to be taken out of the deviance class and that they offered me a buyout. I could get this two-for-five deal, but I have to take it right now.

"And it just felt like an ignominious push out the door."

Adler said the Office of Discrimination and Harassment had received no complaints and there was no complainant in the investigation. Adler said the investigators told her they waited a few weeks, but no one came forward saying they were offended by the skit.
Instead, there is more than a hint of political correctness in the air. The Daily Camera reports:
Administrators allegedly told Adler that in the era of sex scandals at schools like Penn State University, they couldn't let her keep teaching.
As a University of Colorado faculty member, a tenured full professor like Adler, I find this situation to be extremely concerning. I am also concerned because next semester I am teaching a course in which issues of gender, sex, discrimination, race and other potentially sensitive topics appear throughout the syllabus. Will I be at risk of losing my job if university officials don't like how I teach these issues? What if a student is "uncomfortable" because of the material or exercises in the class?

Adler says of this situation:
"They are witch hunters. And to be accused, to be investigated, is to be guilty. You're assumed to be guilty with no due process. It's a culture of fear, a culture of political correctness and power of (the Office of Discrimination and Harassment)."
I hope that the Boulder Faculty Assembly takes on this case in defense of Adler's academic freedom and right to due process. Based on the information available, it looks like campus leadership has made some serious mistakes. They need to be rectified. University officials might find that with more episodes like the one currently involving Professor Adler, they won't need to offer faculty buyouts to get them to leave.

16 December 2013

Decarbonization Updates

While I was working on The Climate Fix I published several peer reviewed articles on climate policies of the United Kingdom, Japan and Australia. In recent months I have updated these analyses and summarize the updates here. (Note: for information and links to data sources, just click through to the various analyses referenced below).

United Kingdom

In my 2009 paper (here, open access) on the emissions reduction targets mandated by the UK Climate Change Act I wrote:
Given the magnitude of the challenge and the pace of action, it would not be too strong a conclusion to suggest that the Climate Change Act has failed even before it has gotten started. The Climate Change Act does have a provision for the relevant minister to amend the targets and timetable, but only for certain conditions. Failure to meet the targets is not among those conditions. It seems likely that the Climate Change Act will have to be revisited by Parliament or simply ignored by policy makers. Achievement of its targets does not appear to be a realistic option.
In a recent update to this analysis I wrote:
If the UK is to hit its 2022 emissions target, then assuming a 2 percent annual GDP growth implies a rate of decarbonization of the economy of 4.4 percent per year over the next 9 years (for 1 percent annual GDP growth it is 3.3 percent and for 3 percent GDP growth it is 5.4 percent). Since the Climate Change Act was passed in 2008 the UK economy has actually decarbonized at a rate of 1.1 percent per year.
The magnitude of the challenge can be seen in the graph at the top of this post, which shows how much carbon-free energy the UK would need if it is to meet the 2022 targets of the Climate Change Act. Even though the proportion of carbon-free energy in 2012 is the highest since 1965, that proportion would have to more than double in less than a decade while retiring an equivalent amount of coal (i.e., almost all of it).
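The decarbonization rates in the quoted update follow from a Kaya-style decomposition: emissions = GDP × (CO2/GDP), so a target emissions path plus an assumed GDP growth rate pins down the required annual fall in CO2/GDP. The ~20% emissions cut over 9 years used below is my inference from the quoted numbers, not a figure stated in the post; with it, the calculation reproduces the quoted rates to within rounding:

```python
# Annual decarbonization (fall in CO2/GDP) required to reach a target
# emissions level, given an assumed GDP growth rate.

def required_decarb_rate(emissions_ratio: float, gdp_growth: float, years: int) -> float:
    """emissions_ratio = target emissions / current emissions."""
    annual_emissions_factor = emissions_ratio ** (1 / years)
    return 1 - annual_emissions_factor / (1 + gdp_growth)

for g in (0.01, 0.02, 0.03):
    r = required_decarb_rate(0.80, g, 9)  # assumed ~20% cut over 9 years
    print(f"GDP growth {g:.0%} -> required decarbonization {r:.1%}/yr")
```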

Hitting the targets seems impossible; however, there are of course alternative outcomes:
Of course it is possible that targets could be met via creative strategies such as the use of offsets or the counting of emissions reductions in gases other than carbon dioxide. However, such strategies, irrespective of their inherent merits, would only postpone the decarbonization needed in the power sector if deep emissions reductions proposed to 2050 are to be met. Of course, the main consequence of adopting a decarbonization target for 2030 might be to simply move the goal posts further into the future to reset the emissions challenge, as it appears that the shorter-term targets are likely to be missed.
The politics of the UK climate targets will be interesting to watch as it becomes increasingly clear that the targets cannot be met, at least in terms of their intended purpose of achieving a faster rate of decarbonization. Signs of life in the UK economy suggest that meeting the targets may be even more difficult than suggested here (e.g., if growth exceeds 2% per year), but at the same time would likely make their abandonment or alteration much more politically acceptable.

Japan

In my 2009 paper on Japan's emissions reduction targets (here, open access), which at that time had just been made less unrealistic, I wrote:
Japan faced a range of criticism when it announced its 2020 target to reduce its domestic emissions by 15% from 2005 levels by 2020. The analysis in this letter shows that such criticism was unfounded for several reasons.

First, the rate of decarbonization implied by the 2020 target is about 1.4–1.8% per year less than that implied by the UK Climate Change Act (Pielke 2009) and that Act is certain to fail (e.g., it requires that the UK decarbonize its economy to French levels within 6 years, requiring an effort equivalent to the deployment of about 30 new nuclear power plants in that time). Because no one knows how fast a major economy can decarbonize there seems little point in arguing about proposed rates of decarbonization well outside historical experience. Policy implementation will be the ultimate arbiter of such proposals. There is essentially no qualitative difference between the Japanese and United Kingdom policies, as both are outside the range of experience.
At the time, as part of meeting its proposed targets the Japanese government proposed building new nuclear power plants at the rate of one per year. That was before Fukushima. Even then I wrote:
 [T]he proposal to deploy nine new nuclear power plants within a decade appears to stretch the bounds of credulity, even though Japan does have the third most nuclear plants in the world (after the United States and France) . . .
Recently, I provided an update of this analysis post-Fukushima, on the occasion of Japan announcing a new target of reducing emissions to 3.8% below 2005 levels. The implications of that target for Japan's energy supply can be seen in the graph above. Here are a few points I made about that graph:
  • In 2010, prior to the Fukushima nuclear disaster, Japan got 18.5% of its total energy consumption from carbon-free sources. In 2012, it was 6.4%.
  • Post-Fukushima the nuclear shutdown has led to energy consumption being replaced by fossil fuels (95%: coal, gas, petroleum), with wind and solar contributing 1.5%. 2012 consumption is the same as it was in 2009.
  • To meet its new emissions reduction target (in terms of carbon dioxide) Japan will need to increase its proportion of carbon-free energy from 6.4% in 2012 to 9.1% in 2020, assuming no increase in energy consumption.
  • Japan can hit its 2020 target by restarting 9 of its 50 shut nuclear power plants (again, assuming no new increase in demand), replacing an equivalent amount of coal.
  • Setting aside technical, political, social and other considerations, Japan would need to increase wind and solar by ~720% from 2012 levels by 2020 (replacing equivalent coal) to meet the emissions target.
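A rough check on the last two bullets, with total energy consumption held flat: the carbon-free share must rise by 2.7 percentage points, and if wind and solar alone cover that gap, the ~720% figure implies a current wind-and-solar share of roughly 0.4% of total energy. That current share is my back-solved assumption, not a number from the post:

```python
# Share arithmetic behind the Japan bullets: with flat consumption,
# percentage-point gaps translate directly into required growth.

carbon_free_2012 = 0.064
carbon_free_2020 = 0.091
gap = carbon_free_2020 - carbon_free_2012  # 2.7 points of total energy

wind_solar_2012 = 0.00375  # assumed current share of total energy (back-solved)
multiplier = (wind_solar_2012 + gap) / wind_solar_2012
print(f"wind+solar must grow by ~{multiplier - 1:.0%}")  # ~720%
```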

The future role of nuclear power in Japan remains uncertain. However, as indicated above, it would not take much for Japan to hit its new emissions targets which appear to be based on an assumption of minimal nuclear power over the next decade. For reasons well beyond climate policy, I'd take the over on nuclear generation -- Japan just doesn't have better alternatives.

Australia

In my 2011 paper on Australia's climate policies (here and here in PDF) I wrote:
Australia has a very carbon intensive economy, thus its ability to dramatically accelerate the decarbonization of its economy offers the promise of many valuable lessons for other countries around the world. However, a focus on targets and timetables for emissions reduction that will be impossible to meet in practical policy implementation runs the risk of engendering public cynicism and even opposition.
Since that paper Australia has implemented a carbon tax, had a palace coup, seen its two most recent prime ministers (Labor's Rudd and Gillard) leave politics, and had an election in which Tony Abbott and the Liberal-National coalition campaigned on a promise to rescind the carbon tax.

In a recent update to my analysis, illustrated by the graph above, I wrote:
To achieve a reduction in carbon dioxide emissions of 5% from 2000 levels would require a rate of decarbonization of the Australian economy (measured as a reduction in the amount of carbon dioxide emissions per A$1000 of GDP) of greater than 5% per year from 2013 to 2020. This is consistent with my earlier analysis that looked at data through 2006. For comparison, Australia averaged a 2.9% annual rate of decarbonization from 2007 to 2012. 
Despite changing political leadership, the Australian government maintains a commitment to a 5% reduction in emissions from 2000 levels. That commitment remains as fanciful as ever.


Here are a few summary points looking at these three cases together:
  • The Kaya Identity, on which these analyses are based, is a clear and straightforward way to evaluate emissions reductions proposals and performance.
  • Outside the specialist literature, it is rarely used. Governments and their advisors of various political leanings often do not play things straight -- this goes for the Abbott government and the UK's Climate Change Committee. Japan's government stands out here as playing things straight.
  • The three analyses I did several years ago have proven extremely accurate in their anticipation of climate policy trends in diverse political settings. 
  • Of the three cases, only the United Kingdom has yet to deal explicitly with what will be an inevitable failure of target setting. Australia is halfway there, and Japan used the occasion of Fukushima to justify a climate policy reset.
  • The nature of the energy economy is that the trends and projections discussed here are unlikely to change much on short time scales (i.e., less than a decade).
  • There will always be surprises of course -- global financial crises, tsunamis.
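For readers unfamiliar with it, the Kaya Identity mentioned in the first bullet expresses emissions as a product of four factors whose intermediate units cancel. The sketch below uses illustrative placeholder numbers, not data from any of the analyses above:

```python
# Kaya Identity: CO2 = P x (GDP/P) x (Energy/GDP) x (CO2/Energy).

def kaya_emissions(population: float, gdp_per_capita: float,
                   energy_intensity: float, carbon_intensity: float) -> float:
    """Total CO2 emissions as the product of the four Kaya factors."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

# Illustrative inputs: 60 million people, $40,000 GDP per capita,
# 5 MJ of energy per dollar of GDP, 5e-5 tonnes CO2 per MJ.
print(f"{kaya_emissions(60e6, 40_000, 5.0, 5e-5):.3g}")  # 6e+08 tonnes CO2
```

Holding emissions fixed on the left-hand side makes the policy arithmetic transparent: faster GDP growth must be offset by a faster decline in energy intensity, carbon intensity, or both.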
I welcome comments. 

11 December 2013

House Environment Subcommittee Testimony

My testimony delivered this morning before the Environment Subcommittee of the House Committee on Science, Space and Technology is now available (here in PDF). I welcome your comments.

The hearing itself was largely uneventful. One of the other witnesses, John Christy, a professor at the University of Alabama in Huntsville, summarized several areas where his views depart from the IPCC consensus. His testimony did not overlap with mine.

The third witness was David Titley, formerly a Rear Admiral in the Navy and currently a "professor of practice" at Penn State, and he did make several comments related to my testimony. Here are my unordered thoughts:
  • Titley repeatedly compared the issue of climate change and extreme events to terrorism and specifically 9/11. I'm not sure this is a useful (or wise) comparison. He asked what the trend data on terrorism would have said in 2000 as an analogy to the lack of discernible trends in hurricanes, floods, tornadoes and drought. I think he uttered the phrase "Roger is right" about the trends.
  • Later, though, he invoked the raw Munich Re loss data as suggestive of unexplained trends, apparently unaware of the peer-reviewed research on that dataset. No, there are no residual trends after normalization, or at least that is what peer-reviewed research funded by Munich Re has concluded.
  • Titley was asked directly by Representative Mark Takano (D-CA) if he would like to take issue with the claims that I made in my testimony. Titley passed on the opportunity -- which Mr. Takano offered up twice -- and instead talked about global temperature trends and the probability of getting heads if you flip a coin 36 times. Nolo contendere.
  • Finally, Titley did work into his testimony the incantation, "absence of evidence is not evidence of absence" - for what purpose I am not sure. I doubt he realizes that the phrase originates in debates over the existence of God (e.g., here in PDF).  Richard Dawkins has taken issue with this line of argument in that context. Titley did not provide evidence of a teapot orbiting the sun.
The hearing ended on a bit of a religious argument between Titley and Spencer. Representative Dana Rohrabacher (R-CA) invited me to join in, but I passed.

As always, it is an honor to be asked to give testimony before the US Congress, and a special treat to return to the Science Committee, which was so influential in my own career track and my strong appreciation of the work of elected officials and their staff.

10 December 2013

A Blast from the Past

Today I am at a workshop on climate adaptation. The organizers passed around a copy of the long essay in The Atlantic Monthly that Dan Sarewitz and I wrote back in 2000. I hadn't looked at it in a while. The opening, as someone just remarked, could be re-written today simply changing out Haiyan for Mitch.

Here is that opening:
In the last week of October, 1998, Hurricane Mitch stalled over Central America, dumping between three and six feet of rain within forty-eight hours, killing more than 10,000 people in landslides and floods, triggering a cholera epidemic, and virtually wiping out the economies of Honduras and Nicaragua. Several days later some 1,500 delegates, accompanied by thousands of advocates and media representatives, met in Buenos Aires at the fourth Conference of the Parties to the United Nations Framework Convention on Climate Change. Many at the conference pointed to Hurricane Mitch as a harbinger of the catastrophes that await us if we do not act immediately to reduce emissions of carbon dioxide and other so-called greenhouse gases. The delegates passed a resolution of "solidarity with Central America" in which they expressed concern "that global warming may be contributing to the worsening of weather" and urged "governments, ... and society in general, to continue their efforts to find permanent solutions to the factors which cause or may cause climate events." Children wandering bereft in the streets of Tegucigalpa became unwitting symbols of global warming.

But if Hurricane Mitch was a public-relations gift to environmentalists, it was also a stark demonstration of the failure of our current approach to protecting the environment. Disasters like Mitch are a present and historical reality, and they will become more common and more deadly regardless of global warming. Underlying the havoc in Central America were poverty, poor land-use practices, a degraded local environment, and inadequate emergency preparedness -- conditions that will not be alleviated by reducing greenhouse-gas emissions.

At the heart of this dispiriting state of affairs is a vitriolic debate between those who advocate action to reduce global warming and those who oppose it. The controversy is informed by strong scientific evidence that the earth's surface has warmed over the past century. But the controversy, and the science, focus on the wrong issues, and distract attention from what needs to be done. The enormous scientific, political, and financial resources now aimed at the problem of global warming create the perfect conditions for international and domestic political gridlock, but they can have little effect on the root causes of global environmental degradation, or on the human suffering that so often accompanies it.
Read the whole essay here.

06 December 2013

Global Tropical Cyclone Landfalls 2013

Last year Jessica Weinkle, Ryan Maue and I published a paper in the Journal of Climate on trends in global landfalling hurricanes (the paper and data can be found here). At the global level, our paper concludes that the data is good from 1970. Our analysis went through 2010.
Weinkle, J, R Maue and R Pielke (2012), Historical Global Tropical Cyclone Landfalls. J. Clim. 25:4729-4735
With 2013 almost in the books I asked Ryan if he could provide a preliminary tabulation of the 2013 data (note that the data could be revised from these initial estimates, and 2013 is still not quite over).

At the top of this post is the dataset from 1970 first presented in our paper, updated using the same methods through 2013 (remember that there are a few weeks left in the year). In short, 2013 is an average year, with 15 total landfalls (the average is 15.4), of which 5 are characterized as major (the average is 4.7).

Here are some updated statistics summarized from the data:
  • Over 1970 to 2013 the globe averaged about 15 TC landfalls per year (Category 1-5)
  • Of those 15, about 5 are intense (Category 3, 4 or 5) 
  • 1971 had the most global landfalls with 30, far exceeding the second place, 25 in 1996
  • 1978 had the fewest with 7
  • 2011 tied for second place for the fewest global landfalls with 10 (and 3 were intense, tying 1973, 1981 and 2002)
  • Five years share the most intense TC landfalls with 9, most recently 2008.
  • 1981 had the fewest intense TC landfalls with zero
  • The US is currently in the midst of the longest streak ever recorded without an intense hurricane landfall 
  • The past 4 years have seen 15 major landfalling hurricanes, very much on the lower end of activity but not unprecedented -- 1984-1987 had just 11. The most is 35 (2005-2008). 
  • The past 4 years have seen 51 total landfalling hurricanes, also on the low end -- the least is 41 (1978-1981) and the most is 80 (four periods, most recently 2004-2007).
  • There have been frequent four-year periods with more than 25 landfalling major hurricanes, more than 60% above what has been observed over the past 4 years. 
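The four-year tallies in these bullets are just trailing sums over per-year landfall counts. A minimal sketch of that bookkeeping (the `counts` list below is hypothetical stand-in data, not the actual series from our paper):

```python
# Trailing multi-year sums of annual landfall counts, as used in the bullets above.
# NOTE: the counts list is illustrative stand-in data, NOT the Weinkle et al. series.
def trailing_sums(counts, window=4):
    """Return (end_index, sum) pairs, one per trailing window of years."""
    return [(i, sum(counts[i - window + 1 : i + 1]))
            for i in range(window - 1, len(counts))]

counts = [14, 30, 16, 12, 18, 7, 20, 15, 11, 13]  # hypothetical per-year totals
windows = trailing_sums(counts)
print(windows)
print(min(s for _, s in windows))  # least active four-year stretch
```

The same function with `window=5` reproduces the five-year tallies used in the US landfall post below.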
There is even evidence in our paper (see our Figure 2) that the period before 1970 saw more intense hurricane landfalls than the period since. Older data from the North Atlantic and Western North Pacific (which together represent 64% of all global intense landfalling hurricanes 1970-2010 and 69% of all hurricanes) indicate that landfalling intense hurricanes in these two basins occurred at a 40% higher rate from 1950-1969 than 1970-2010. There were 9 intense landfalls in 1964 and 1965 in just these two basins, which equals the global record for all basins post-1970.

Here is a montage of 2013 activity:

For those interested in the details, here from Ryan are the preliminary details for 2013 to date:

2013 North Atlantic:  1 minor hurricane
  •   Ingrid 65-knots
2013 North East Pacific: 2 minor hurricanes
  •   Barbara 65-knots
  •   Manuel 65-knots
2013 Northern Indian: 1 major hurricane
  •   Phailin 120-knots 
2013 North Western Pacific: 6 minor hurricanes, 3 major hurricanes = total 9 hurricanes (typhoons)
  •   Utor 120-knots (major)
  •   Trami 75-knots
  •   Usagi 80-knots
  •   Wutip 80-knots
  •   Nari 105-knots (major)
  •   Krosa 90-knots
  •   Haiyan 160-knots (major) 
  •   Fitow 65-knots
  •   Soulik 80-knots
2013 Southern Hemisphere: 2 minor hurricanes
  •   Haruna 90-knots
  •   Rusty 95-knots 

1. We are not finished with 2013.  This is calendar year only.
2. Tracks outside of NHC responsibility are not best-tracks or post-storm reanalysis but real-time ATCF.  NHC has mostly finalized their East Pac & NATL tracks, however.
3. All intensity values are 1-minute maximum sustained wind which is used by NHC and JTWC, the source of our historical TC dataset (and this one).
4. Actual instantaneous landfall intensity is unknowable.

Lots more great data and graphs, including the one below on total global tropical cyclone activity (not just those that make landfall) at his website here.

02 December 2013

Causality and Policy Outcomes: The Case of Presidents and Economic Growth

The WSJ reports today (and also the Washington Post) on a new paper by Alan Blinder and Mark Watson (here in PDF, hereafter BW13) which tackles what would seem to be a straightforward question: Why is it that since World War II the US economy has grown significantly faster under Democratic presidents than Republican presidents? This post looks at this question from the broader standpoint of policy research methods. I conclude that BW13 have asked the wrong question, one that lends itself to many answers or none at all; perhaps the exercise tells us more about the limits of policy research methods than about presidents and growth.

I use a variant of the BW13 analysis as an introduction to my graduate seminar on quantitative methods of policy research, focusing on budget deficits rather than growth rates. One can correlate either party with budgetary restraint, simply by choosing to focus on the White House versus Congress. The BW13 paper is interesting not just because it asks a provocative question, but also because that question neatly wraps partisan preferences around claims that are extremely difficult to evaluate unambiguously using empirical observations.

Teasing out causality is central to both policy making and policy research. As Steinberg (2007, PDF) writes:
Central to the aims of public policies, and the political constituencies supporting them, is the hope of having a causal impact on some aspect of the world. It is hoped that welfare-to-work programs will lead to a decline in chronic unemployment; that the international whaling regime will cause threatened species to rebound; and that health education campaigns will reduce HIV transmission. As Pressman and Wildavsky (1973, p. xxi) observed, “Policies imply theories. Whether stated explicitly or not, policies point to a chain of causation between initial conditions and future consequences. If X, then Y.” Accordingly, while causal theories play a role in many areas of social inquiry, they are vital to the practice of policy analysis, where they are used to diagnose problems, project future impacts of new regulations, and evaluate the effectiveness of—and assign responsibility for—past interventions (Chen, 1990; Lin, 1998; Young, 1999). Causal assessment plays an equally important role in the policy process tradition, as researchers identify the causal factors shaping policy agendas, decision-making styles, state–society relations, and the dynamics of stability and change (Baumgartner & Jones, 1993; Rochon & Mazmanian, 1993; Sabatier, 1999).
In short, the question of attribution of cause and effect is a key one in many policy settings. Here I discuss the paper in the context of broader methodological and epistemological issues associated with evaluating cause and effect in policy settings.

1. Risks of Data-Before-Theory

In starting with a correlation rather than a theory of causality, BW13 start off down a potentially treacherous methodological path, but one that they ultimately negotiate fairly well. Here is how BW13 explain the observations which lead to their central question:
[E]conomists have paid virtually no scholarly attention to predictive power running in the opposite direction: Do election outcomes help predict subsequent macroeconomic performance? The answer, which while hardly a secret is not nearly as widely known as it should be, is a resounding yes. Specifically, the U.S. economy performs much better when a Democrat is president than when a Republican is.
This paper begins in Section 1 by documenting this stunning fact. The fact is not “stylized.” The superiority of economic performance under Democrats rather than Republicans is nearly ubiquitous; it holds almost regardless of how you define success. By many measures, the performance gap is startlingly large--so large, in fact, that it strains credulity, given how little influence over the economy most economists (or the Constitution, for that matter) assign to the President of the United States. . .

During the 64 years that make up these 16 [presidential] terms, real GDP growth averaged 3.33% at an annual rate. But the average growth rates under Democratic and Republican presidents were starkly different: 4.35% and 2.54% respectively. This 1.80 percentage point gap (henceforth, the “D-R gap”) is astoundingly large relative to the sample mean. It implies that over a typical four-year presidency the U.S. economy grew by 18.6% when the president was a Democrat, but only by 10.6% when he was a Republican.
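As a quick arithmetic check on that last step: a constant annual growth rate g compounds to (1+g)^4 - 1 over a four-year term, which reproduces the two figures quoted above.

```python
# Cumulative growth over a presidential term at a constant annual rate.
def term_growth(annual_rate_pct, years=4):
    return ((1 + annual_rate_pct / 100) ** years - 1) * 100

print(round(term_growth(4.35), 1))  # average under Democratic presidents -> 18.6
print(round(term_growth(2.54), 1))  # average under Republican presidents -> 10.6
```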
The potential problem here is that BW13 have generated a hypothesis -- the political party of the US president differentially influences subsequent US economic growth -- based on an examination of an existing correlation. Given that spurious relationships show up all the time (and are well understood, if not always appreciated -- see, e.g., Yule 1926, Hendry 1980, DeLong and Lang 1992, Ioannidis 2012), researchers have to exercise extreme caution in generating hypotheses from observations rather than from theories of causality.

Of course, in doing policy-related research as a practical matter it is the unpredicted or surprising observation that typically generates research questions deemed important. Why did the train crash? Why is the unemployment rate high? Why did the typhoon disaster occur?

Right away you see that typical questions important to policy and decision making deviate from the textbook model of hypothesis generation in that they start with a correlation or an outcome and work backwards to a theory of causality. This makes attention to methods that much more important to avoid being fooled by randomness, or other pathologies of thinking, such as using data selectively to favor certain theories (e.g., here in PDF).

2. What is "Significant"?

BW13 find a very strong statistical relationship between the political party of the president and the rate of GDP growth, p = 0.01. By contrast, they find much weaker evidence of a relationship between the height of the president and the rate of GDP growth, p = 0.39. What the p-value tells us is the probability of observing a difference at least as large as the one seen, under the assumption that there is no true difference between the groups (and that we know the distributions of outcomes from which each comes), a point I'll return to below.
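One way to make that interpretation concrete is a permutation test: shuffle the party labels many times and count how often a label-shuffled "gap" is at least as large as the observed one. The growth figures below are made-up term-level numbers chosen to mimic the D-R gap, not the BW13 data.

```python
import random
import statistics

# Hypothetical per-term growth rates (NOT the BW13 data), grouped by party.
dem = [4.9, 3.8, 5.1, 4.0, 3.6, 4.7, 4.2, 4.5]
rep = [2.1, 3.0, 2.6, 1.9, 3.2, 2.4, 2.8, 2.3]

observed_gap = statistics.mean(dem) - statistics.mean(rep)

# Shuffle the pooled data and re-split, simulating "party label is irrelevant".
random.seed(0)
pooled = dem + rep
trials, hits = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    fake_dem, fake_rep = pooled[:len(dem)], pooled[len(dem):]
    if statistics.mean(fake_dem) - statistics.mean(fake_rep) >= observed_gap:
        hits += 1

p_value = hits / trials  # share of shuffles producing a gap at least this large
print(observed_gap, p_value)
```

For these well-separated illustrative groups the p-value comes out tiny; the point is only what the number means, not what the true value is for the real data.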

While the literature is chock full of discussions of the use and abuse of p-values in the interpretation of statistics, the idea that a strong p-value is either necessary or sufficient to denote a causal relationship persists. (Anyone doubting this claim need merely browse the archived discussions on this blog related to trends in extreme weather.)

One proposed method to deal with the challenges of data-before-theory is to implement more stringent levels of statistical significance. For instance, DeLong and Lang 1992 warn of the problem of data-mining by researchers (and on the relationship of p and t statistics, see this recent essay).
Most of us suspect that most empirical researchers engage consciously or unconsciously in data mining. Researchers share a small number of common data sets; they are therefore aware of regularities in the data even if they do not actively search for the "best" specification. There seems to be no practical way of establishing correct standard errors when researchers have prior knowledge of the data . . .

One possible reaction is to adjust standard errors by some multiplicative factor that "compensates" for this abuse of classical procedures. Along these lines, we can use our data to ask the question, By what factor would we have to divide reported t-statistics so that one-ninth of unrejected nulls would exhibit a marginal significance level of .9 or more? The answer is about 5.5. The "t-statistic of two" rule of thumb would then suggest that only unadjusted t-statistics of 11 or more should be taken seriously, in which case hypothesis testing, especially in macroeconomics, would become largely uninformative. Empirical work would play only a very minor role in determining the theories that economists believe. Some claim that at present empirical work does play a very minor role in determining the theories that economists believe (see McCloskey 1985). 
Even if researchers are able to avoid data mining, any observations from a complex system -- like the US economic and political systems -- will be explainable by a large number of plausible theories and relationships. The set of explanations can be said to be overdetermined in the sense that there will be more hypotheses supported by evidence than can plausibly be correct. For instance, BW13 test 27 variables (Table A.3) against GDP growth rates over different time periods. They could plausibly come up with many more variations to test, given that there is no generally accepted theory of economic growth.

When there are multiple statistical tests available, one method for dealing with this situation is called the Bonferroni correction, which "compensates" for the multiple tests by dividing the significance threshold by the number of tests performed, making each individual test more stringent. (See Appendix A in this CCSP report in PDF on climate extremes for a nice example of its application to time series of climate extremes.)
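The mechanics are simple enough to sketch: at a family-wise level of 0.05 with the 27 variables BW13 test, the corrected per-test threshold drops below 0.002, so even BW13's headline p = 0.01 would not survive.

```python
# Bonferroni correction: divide the family-wise significance level
# by the number of tests to get the per-test threshold.
def bonferroni_threshold(alpha, n_tests):
    return alpha / n_tests

threshold = bonferroni_threshold(0.05, 27)
print(round(threshold, 5))   # ~0.00185
print(0.01 <= threshold)     # False: the headline p-value fails the corrected bar
```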

The methodological approaches recommended by DeLong and Lang 1992 and under the Bonferroni correction both suggest making tests of statistical significance much more rigorous, even to the point of making the tests practically irrelevant. From this perspective, a p result of 0.011 may be qualitatively no different than 0.39, as both may be orders of magnitude away from a more appropriately calculated threshold.

For a small-N study where relationships are many, co-determined, non-stationary and contingent, what BW13 might actually tell us is that conventional methods of econometrics are very limited in their ability to say anything much about causality. This is especially the case when exploring a process (economic growth) that is in general poorly understood in terms of cause and effect.

3. Limits of Statistics in Small-N Studies

The discussion so far has taken us to an uncomfortable place: it may be that conventional economic research methods and statistics offer limited help in the challenge of untangling causality in policy settings. The very idea of statistical methods in such a context is worth unpacking. The idea that our observations of society -- in this case elected presidents and economic performance -- can be said to be samples which come from a distribution (much less, distributions which we might characterize accurately) requires a giant metaphysical leap into alternative universes of counterfactuals. Such a leap leads to questions fundamentally unresolvable using empirical methods. It is no wonder that many policy arguments reduce to competing views on esoteric theories.

In a 2007 paper (here in PDF) in The Policy Studies Journal Steinberg provides a nice overview of the importance of "small-N" studies:
Another great attraction of small-N approaches, both in theoretical and applied settings, is their ability to trace causal mechanisms. The design of intelligent policy interventions requires analyses that move beyond mere patterns of correlation to include reasonably precise characterizations of the mechanisms through which posited causal variables exert their effects. Similarly, credible theories of political behavior and policy processes must not only demonstrate correlations but must establish a logic of association (George & Bennett, 2005, pp. 135–47). Yet it is widely recognized that statistical analysis, for all of its analytic power, is of limited value in tracing causal processes . . . 
Getting back to BW13, which is certainly a small-N study, they take a smart approach by using the data to generate alternative hypotheses about the possible chain of causality between the election of a president and subsequent economic growth. However, as discussed frequently on this blog, their analysis is hampered by the fact that no one actually knows where economic growth comes from. BW13 don't either.

BW13 explore several variables that impact economic growth and come to the following conclusions:
Much of the D-R growth gap in the United States comes from business fixed investment and spending on consumer durables. And it comes mostly in the first year of a presidential term. Yet the superior growth record under Democrats is not forecastable by standard techniques, which  means it cannot be attributed to superior initial conditions. Nor does it stem from different trend  rates of growth at different times, nor to any (measureable) boost to confidence from electing a  Democratic president.

Democrats would no doubt like to attribute the large D-R growth gap to better macroeconomic policies, but the data do not support such a claim. Fiscal policy reactions seem close to “even” across the two parties, and monetary policy is, if anything, more pro-growth when a Republican is president—even though Federal Reserve chairmen appointed by Democrats outperform Federal Reserve chairmen appointed by Republicans. It seems we must look instead to several variables that are mostly “good luck.” Specifically, Democratic presidents have experienced, on average, better oil shocks than Republicans, a better legacy of (utilization-adjusted) productivity shocks, and more optimistic consumer expectations (as measured by the Michigan ICE).
Their bottom line? The residual difference in economic growth observed between Republican and Democratic presidents is a "mystery." What BW13 may have actually rediscovered is that we don't know where economic growth actually comes from. Solving that riddle will require going beyond simple statistics.

25 November 2013

The Most Important Question Facing the American Economy

The WSJ today interviews Larry Summers who makes a case for why policy makers should be focusing more attention on growth policy rather than deficits:
We've had 10 bipartisan budget processes. We've had zero bipartisan growth processes. We've had budget summits up the yin-yang. We've had no growth summits. We have gotten the idea that addressing the deficit is the defining challenge facing the country. . .

. . . if you take the longest-run deficit and take the official forecasts, if we increase the growth rate by two-tenths of 1%, you solve the entire identified fiscal-gap problem.

If we get the growth rate up, the debt problem will stay in control. If we continue to be a country that doesn't increase the fraction of adults that are working, that doesn't catch up with its GDP potential, that grows at 2% or less, we can have all the entitlement summits in the world, and we're gradually going to accumulate debt and have a serious debt problem.

We should be focusing on growth. Growth creates a virtuous circle, which creates more growth.

In a growing economy, employers work harder to train the next generation of workers. In a growing economy, there are more ladders for kids to get on, which puts them in a better position to lead 10 years down the road.
It is an argument that Summers has been making for a while:
[T]he national debt is an asset that we owe ourselves. To the extent that it is not an asset that we owe ourselves, it represents borrowing that we, as a country, have an opportunity to invest in growing our economy and borrowing, actually, at a remarkably low interest rate. And so if we can invest that money well, we can actually generate large returns that can provide the wherewithal to pay the debt back.

Are we devoting too much to uncontrollable mandatory and entitlement programs relative to the discretionary programs - like the national parks, like the FBI, like basic cancer research - that have to be appropriated every year? Yes, we are. So I don't want to be heard as saying that we have no budget issues; far from it. The point I want to stress is that the most important question facing the American economy is, can we accelerate the growth rate?
One reason that politicians shy away from addressing this question explicitly is that they know, in principle, what sorts of actions lead to reducing deficits and debt -- governments should spend less. Such actions have a short-term, demonstrable effect and are also politically popular in an era where government is decidedly not popular.

In contrast, the actions that policy makers might take to increase long-term growth rates by 0.2% do not enjoy a clear consensus among economists and also have a far more tenuous causal connection to outcomes, especially on short-term, political time scales. There is also the problem that one commonly recommended set of actions to boost long-term growth -- more investment by governments -- would actually require more government spending, and would likely increase deficits in the short term, presenting a political problem.

Writing in the FT, Martin Wolf explains:
[A]nother possibility, discussed by Mr Summers and supported by many economists (including myself), is to use today’s glut of savings to finance a surge in public investment. That might be partly linked to a shift to lower-carbon growth. Another possibility is to facilitate capital flows to emerging and developing countries, where the best investment opportunities must lie. It makes no sense for so much of the world’s savings to seek investment opportunities where they do not apparently exist and shy away from places where, one hopes, they do.

The underlying argument that more has happened to high-income economies than just a financial crisis is persuasive. It is also hard to believe that a surge in business investment in these countries would manage to absorb the excess desired savings of the world. Why, after all, should one expect any such thing to happen in countries with ageing populations, high wages and sluggish economies? But these countries do then confront a challenge far bigger than the damage done by the crisis alone, big though that is. They may face a far longer-term future of weak demand and enfeebled supply.

The best response, then, is measures aimed at raising productive private and public investment. Yes, mistakes will be made. But it will be better to risk mistakes than accept the costs of an impoverished future.
The political question focused narrowly on debt gets reduced to the content-free issue of more or less spending. It is important to emphasize, as Wolf does, that any significant program of government investment will have its share of mistakes (Obamacare rollout, Solyndra etc.). However, such mistakes are not a case for not making investments; they are a case for making investments smarter.

Of course, if governments don't invest there will be no such mistakes. There will also be much lower long-term growth -- a cut-off-your-nose-to-spite-your-face argument if there ever was one. Given the long-term history of government investments in stimulating growth, those who argue against such investments must also present an alternative explanation for how economic growth will increase going forward. A debt policy is not a growth policy.

As Summers argues, the most important question to ask and answer is: What can be done to accelerate the growth rate?  The answer is politically inconvenient, because a big part of it involves smart government investments and policies, which in today's political environment is not a topic that gets much serious attention. It should.

22 November 2013

Graphs of the Day: Major US Hurricane Drought Continues

Data here.

The US run of good luck with respect to hurricane landfalls -- yes, good luck -- continues. The graph below shows total US hurricane landfalls 1900 through 2013.
The five-year period ending 2013 has seen 2 hurricane landfalls. That is a record low since 1900. Two other five-year periods have seen 3 landfalls (years ending in 1984 and 1994). Prior to 1970 the fewest landfalls over a five-year period was 6. From 1940 to 1957, every 5-year period had more than 10 hurricane landfalls (1904-1920 was almost as active).

The red line in the graph above shows a decrease of more than 25% in the number of US landfalls since 1900 (which, given variability, may just be an artifact rather than a secular change). There is no evidence to support claims of more frequent or more intense US hurricanes. The data actually suggest much the opposite.

If you are interested in a global perspective, Ryan Maue keeps excellent data. Here is his latest graph on global ACE (accumulated cyclone energy, an overall measure of storm intensity).
To date 2013 is at 73% of the global average and the North Atlantic is at 30%. We'll post up our updated data for global landfalls through 2013 before the end of the calendar year.

20 November 2013

A Fork in the Road

At Guardian Political Science I have a piece up on the stark difference between the conclusions of the IPCC on extreme events and the views being expressed by many climate campaigners, some very prominent.  For those who follow these issues, there will be nothing new or interesting in the piece. The data shows what it shows.

A feeble response to my piece by the Guardian's in-house climate blogger invokes the Google-based research of Chris Mooney, confuses trends and projections yet still grants the central point of my argument: "that the data aren't good enough to confidently link rising hurricane intensity to human greenhouse gas emissions so far doesn't mean there isn't a link ... It's not so much the climate change we've seen so far that we're worried about." Exactly (emphasis added).

More significantly, the issue of attribution of the costs of present extreme events to historical greenhouse gas emissions threatens to collapse the entire climate negotiations. As I write in my piece linked above, the demands from poor countries are a direct result of rich-world leaders asserting their nations' responsibilities for those damages. Connie Hedegaard, EU climate commissioner, opened the climate conference last week by drawing a none-too-veiled association between Typhoon Haiyan and emissions. Now she says: "We cannot have a system where we have automatic compensation when severe events happen around the world."

But why not? If the rich world is indeed causing climate disasters via its emissions, then of course it should be paying compensation to poor nations victimized by those actions. At present, the state of climate science doesn't support such claims as a matter of causality (don't take my word for it, read the IPCC). Climate campaigners will have to decide which way to go, as they can't have it both ways.

18 November 2013

Japan's New Emissions Targets

A government panel on measures to tackle climate change approved on Friday a new goal to reduce the nation’s greenhouse gas emissions by 3.8 percent by 2020 from the 2005 level. . .

Prime Minister Shinzo Abe said he is sure that Japan can substantially contribute to global efforts to tackle climate change. The government will steadily implement necessary measures to achieve the new emissions reduction target, he said.

The new goal means a setback from the target of reducing emissions by 25 percent from 1990 by 2020, which was set in 2009 by the administration led by the Democratic Party of Japan. The DPJ is now an opposition party.
In a 2009 paper I explained why Japan's proposed emissions targets were unrealistic and would almost certainly not be met (paper here, open access, note a figure correction here).

The graph at the top of this page shows the implications of Japan's new emissions reduction target (3.8% below 2005 levels) in terms of implied carbon-free energy. The graph considers only carbon dioxide and is based on an assumption of no change in demand from 2012 to 2020.

The graph shows a few things worth pointing out (data from BP 2013):
  • In 2010, prior to the Fukushima nuclear disaster, Japan got 18.5% of its total energy consumption from carbon-free sources. In 2012, it was 6.4%.
  •  Post-Fukushima, the nuclear shutdown has led to that energy consumption being replaced by fossil fuels (95%: coal, gas, petroleum), with wind and solar contributing 1.5%. 2012 consumption is the same as it was in 2009.
  • To meet its new emissions reduction target (in terms of carbon dioxide) Japan will need to increase its proportion of carbon-free energy from 6.4% in 2012 to 9.1% in 2020, assuming no increase in energy consumption.
  • Japan can hit its 2020 target by restarting 9 of its 50 shut nuclear power plants (again, assuming no new increase in demand), replacing an equivalent amount of coal.
  • Setting aside technical, political, social and other considerations, Japan would need to increase wind and solar by ~720% from 2012 levels by 2020 (replacing equivalent coal) to meet the emissions target.
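The share arithmetic behind these bullets can be sketched as follows; this only checks the relative scaling of the carbon-free share under flat demand, not the full carbon accounting that produces the 9.1% figure.

```python
# With total energy consumption held constant, raising the carbon-free share
# from 6.4% to 9.1% requires carbon-free supply itself to grow proportionally.
current_share = 6.4  # % of Japan's total energy from carbon-free sources, 2012
target_share = 9.1   # % implied by the 2020 emissions target (flat demand)

relative_increase = (target_share / current_share - 1) * 100
print(round(relative_increase, 1))  # ~42.2% more carbon-free energy needed
```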
The new emissions target is perhaps the clearest indication yet that Japan's government, despite being pro-nuclear, does not see a return to the levels of nuclear power that existed prior to Fukushima. The Japanese government has said as much:
Japan’s environment minister, Nobuteru Ishihara, said that the new target “does not consider the possible effect of nuclear power plants reducing emissions” and that Japan “would set a more definite target” once it settled on what sources of energy it would use in the future.
The Japanese government appears to have read The Climate Fix;-)
"We're down to zero nuclear; anyone doing the math will find that target impossible now," AFP reported Ishihara saying in Tokyo after announcing the new target. The original goal was "unrealistic in the first place," he said. "The current government seeks economic growth while doing our best to meet emissions targets."
Japan is back to "mamizu" climate policy.

No THB2 ... At Least for Now

Not long ago I mentioned on this blog that I had set to work on a second edition of The Honest Broker, my 2007 book with Cambridge University Press. After a lot of work and thought I've decided at this point not to press forward with it, for two reasons.

One is that after going through the first five chapters I made a few revisions, but mostly very small and very nuanced. I actually really like these chapters as they are. The second is that getting into further details on each of the four ideal types (well, five) means ... getting into the details.  I have a lot of interesting material on science advisory processes, but it is wonky and technical. These issues are important, and I will be writing on them, but first in the academic literature. One of the things that people seem to like about THB is that it is short and accessible. So I am taking the approach that if it is not broken, don't fix it.

Will there be a THB2? Probably down the road, but more likely as a sequel than as a second edition. So readers are stuck with The Skeptical Environmentalist and the Iraq war as the major case studies. For today's students that is ancient history. Fortunately, new cases arise where science meets politics almost daily.

Thanks to all the readers and colleagues who wrote in with comments and advice!

12 November 2013

Are Typhoon Disasters Getting More Common?

This morning I have been engaged in a Twitter debate with Jeff Sachs, of the Columbia Earth Institute, motivated by his tweet as follows.
The reference is to a paper by Elsner et al. (2008) in Nature which shows an increase in the strongest tropical cyclones in some basins over the sub-climate time period of 1981-2005. Unfortunately for Sachs, that paper does not show trends significant at the >90% level for the strongest cyclones in the western North Pacific basin (the world's most active, and where Haiyan occurred). The lesson here is that if you are going to pick cherries, make sure that the fruit is not a lemon.

Fortunately, there is a more relevant study (Weinkle et al. 2012, here in PDF) which looks specifically at landfalls in the western North Pacific basin. Landfalls are of course what cause disasters. The data from that paper for the major landfalling tropical cyclones (i.e., Category 3+) is shown at the top of this post. The trend line is added by Excel, and shows a decline. However, the western North Pacific basin has been shown to exhibit very large variability, so I wouldn't put much weight on any claims of trends up or down (but don't believe me, check IPCC). That said, recent research has examined the decline in activity in that basin.
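The trend line itself is nothing exotic; it is an ordinary least-squares fit, which anyone can reproduce. Here is a minimal sketch using illustrative placeholder counts, not the actual Weinkle et al. (2012) data:

```python
import numpy as np

# Illustrative placeholder values only -- NOT the Weinkle et al. (2012) data.
# Hypothetical decadal counts of major (Cat 3+) landfalling typhoons.
years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010])
counts = np.array([8, 7, 7, 6, 5, 5, 4])

# Ordinary least-squares linear trend (the same fit Excel's trend line uses).
slope, intercept = np.polyfit(years, counts, 1)
print(f"trend: {slope:.3f} landfalls per year")
# → trend: -0.064 landfalls per year
```

Of course, as noted above, a negative slope through such variable data says little by itself; the statistical significance of the trend matters as much as its sign.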

Given this data, substantial research on it, and a strong IPCC consensus, does anyone really want to debate whether typhoon disasters have become more common? If so, my comments are open to you.

11 November 2013

Deeply Conflicted About Weather Extremes

Last week I gave a talk on how the IPCC has treated the issue of extreme weather events over the years. I concluded that despite some very positive signs that the community is reclaiming scientific integrity on this issue (see IPCC SREX and AR5), some part of the community -- especially visible leaders -- remain deeply conflicted. Following my talk a few colleagues asked me for evidence of that deep conflict. Here is a good one.

Have a look at the picture above (courtesy @JPvanYpersele on Twitter) of IPCC Chairman Rajendra Pachauri presenting the findings of AR5 and SREX to COP19 delegates earlier today in Poland at this year's big UNFCCC climate confab.

For those curious, here is what the IPCC AR5 actually reported (here in PDF) on tropical cyclones (of the sort emerging from the smokestack in the image from Al Gore's movie shown above in Pachauri's slide):
"Current datasets indicate no significant observed trends in global tropical cyclone frequency over the past century … No robust trends in annual numbers of tropical storms, hurricanes and major hurricanes counts have been identified over the past 100 years in the North Atlantic basin... In summary, this assessment does not revise the SREX conclusion of low confidence that any reported long-term (centennial) increases in tropical cyclone activity are robust, after accounting for past changes in observing capabilities"
Does the image chosen as the top line representation reflect the science reported by the IPCC? You be the judge. It is never too late for climate scientists to start demanding greater scientific integrity from the public faces of their community. Silence speaks loudly too.

07 November 2013

Responses to Bazilian and Pielke on Global Energy Access

Issues in Science and Technology has just published in its Fall 2013 issue four letters in response to our article, Making Energy Access Meaningful (here in PDF). Three of the responses are constructive, one is somewhat less so. This post offers a summary of our original paper, the letters and a few thoughts in response.

In a nutshell here is what our paper argued:
Our distinctly uncomfortable starting place is that the poorest three-quarters of the global population still use only about 10% of global energy—a clear indicator of deep and persistent global inequity. Because a modern energy supply is foundational for economic development, the international development and diplomatic community has rightly placed the provision of modern energy services at the center of international attention focused on a combined agenda of poverty eradication and sustainable development. This priority has been expressed primarily in the launching of the UN Sustainable Energy for All initiative (SE4All). Still, areas of tension and conflict in such an agenda demand further attention, particularly in relation to climate change, as we discuss later in this essay.

Compounding the difficulty of decisionmaking in such a complex space is that the concept of “energy access” is often defined in terms that are unacceptably modest. Discussions about energy and poverty commonly assume that the roughly two to three billion people who presently lack modern energy services will only demand or consume them in small amounts over the next several decades. This assumption leads to projections of future energy consumption that are not only potentially far too low but therefore imply, even if unintentionally, that those billions will remain deeply impoverished. Such limited ambition risks becoming self-fulfilling, because the way we view the scale of the challenge will strongly influence the types of policies, technologies, levels of investment, and investment vehicles that analysts and policymakers consider to be appropriate.
A first letter in response comes from S. Vijay Iyer, Director of the Sustainable Energy Department at The World Bank. Iyer writes to further explain the five-tier definition of energy access used by the World Bank and its partners. He also raises the importance of political considerations:
I share the sense of justice that animates the authors’ contention that an electricity access goal of modest ambition would reflect a “poverty management” over a “development” approach. But their contention is misplaced. The level of ambition for SE4ALL goals in each country needs to be defined by governments, in consultation with civil society, entrepreneurs, developmental agencies, and the public. The GTF’s multi-tier framework for access seeks to facilitate this process so that each country can define its own electricity access targets.

The chasm between high ambition and unfeasible goals must be filled by pragmatically designed intermediate steps.
I think that it is safe to conclude that we'd agree with all of this, especially the need to recognize the practical dimensions of political realities. That said, we'd also argue that the "high ambition" part is non-negotiable as a valued outcome, regardless of the practical challenges of the day.
A second letter comes from Reid Detchon, Vice President, Energy and Climate for the United Nations Foundation. Detchon writes to place the high (estimated) costs of securing true, global energy access into a bit more context:
Morgan Bazilian and Roger Pielke Jr. get the essential point right: that meaningful access to modern energy services must go beyond lighting to other productive uses, such as water pumping for irrigation. Like many observers, however, they seem daunted by the scale of investment required, which they estimate at $1 trillion to achieve a minimal level of energy access and 17 times more to reach a level comparable to that in South Africa or Bulgaria.

Those numbers are very large compared to the amount that might plausibly be available through conventional development assistance, but that is the wrong lens to use. Electricity is a commodity that even very poor people are willing to pay for. Indeed, they are already paying more than $37 billion a year for dirty, expensive fuels (kerosene for lighting and biomass for cooking), according to an International Finance Corp. report. With the right financing, solar energy in rural areas is cheaper than these sources or diesel fuel for generators. The availability of energy is a spur to economic development that can quickly become self-sustaining.
To that -- thanks and amen.
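Detchon's framing invites some back-of-the-envelope arithmetic. A quick sketch using only the figures quoted above (the $37 billion annual spend is the IFC estimate; the access cost totals are the rough estimates from our paper):

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
current_annual_spend = 37e9   # IFC estimate: spending on kerosene/biomass, USD per year
minimal_access_cost = 1e12    # rough estimate for a minimal level of energy access, USD
ambitious_access_cost = 17 * minimal_access_cost  # South Africa/Bulgaria-comparable level

years_at_current_spend = minimal_access_cost / current_annual_spend
print(f"Minimal access ~= {years_at_current_spend:.0f} years of current fuel spending")
# → Minimal access ~= 27 years of current fuel spending
```

Seen that way, even the "minimal" target amounts to roughly a generation's worth of what the energy-poor already pay for dirty fuels, which is exactly why Detchon's point about financing, rather than development assistance alone, matters.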
A third letter comes from Dan Kammen, a professor at the University of California-Berkeley, who writes to highlight the importance of micro-grids:
The central message of the paper by Morgan Bazilian and Roger Pielke Jr., at least to me, is to highlight the explosion of demand and the diverse modes of energy consumption that are possible and should be anticipated as energy access (and in particular electricity access) is expanded. This is an important observation that ties together many stories: (1) the value of energy access for both basic services and economically productive activities; (2) the need to analyze and to plan around unmet demand, so that a small taste of energy services does not lead to unrealized expectations and then dissatisfaction; (3) the complexity of building and planning an energy system capable of providing services reliably once the “access switch” has been turned on. . .

What is unresolved, and where readers of Bazilian and Pielke’s paper need to keep a watchful and innovative eye, is on the tools that energy planners use to build adaptive energy networks. Although “simply” plugging into the grid may be the ideal (or in the past has been the aspirational goal), the record of the past decades is that this has not been technically, economically, or politically easy. National grids have not expanded effectively in many nations, in terms of coverage or service reliability or cost, so new models are needed.
Kammen's comments are welcome. One of the points of our paper was to highlight the importance of holding further debate and discussion about what it might take to actually secure true, global energy access. The role of grid technology in meeting that challenge is crucially important.
The fourth letter comes from Alan Miller, Principal Climate Change Specialist at the International Finance Corporation. This letter betrays more than it says, though what it says is revealing enough. Miller writes:
Morgan Bazilian and Roger Pielke Jr. provide a valid long-term perspective on the energy needs of the developing world, but the critical question is what practical difference does it make to think in the more ambitious, longer term way that they propose? We don’t do that for any of the other United Nations Millennium Development Goals such as those for health, education, and water. Why, then, should we do so for energy? What investment choices or policies would we approach differently?
I'd respectfully disagree with Miller about the presence of long-term thinking in many areas of policy, most notably of course his own area of specialty, climate change. Miller also brings out the old trope about how understanding the magnitude of a problem may work against effective action, so therefore we should perhaps remain ignorant:
[T]heir effort to emphasize the enormity of the long-term challenge could make the task seem so daunting that it will discourage the development agencies from taking the smaller steps that must be taken now to put the developing nations on the path that will ultimately lead to an adequate energy supply. Thinking about the trillions of dollars that will be needed over many decades rather than the billions of dollars that can make a difference now could be paralyzing. . . is there any point in worrying about what it will cost to provide everyone with an electric stove?
As a policy scholar, I teach my students that understanding the magnitude of the policy challenge being faced is an important first step toward designing both short- and long-term policies that can be effective. Ignorance, especially willful, is never a good idea. More generally, Miller risks reinforcing the all-too-real stereotype of the climate activist who cares deeply about counting carbon molecules, but not about counting up what energy access for real people may imply for the future. Climate policies won't succeed unless achieving true, global energy access is central, but that is another discussion.

We thank all four letter writers for taking the time to engage with our piece!

05 November 2013

JRC Guidelines for Scientific Integrity

A few weeks ago I had the pleasure of attending an excellent workshop on connecting science and policy at the Joint Research Centre (JRC) of the European Commission. There a colleague shared with me the JRC's guidelines for integrity in scientific support activities -- "Robust Science for Policy Making: A guideline towards integrity and veracity in scientific support and advice."

The document offers an excellent statement of the values and principles which underlie the JRC's mission to serve as an "in-house science service" to the European Commission. The document is not otherwise available online, and with permission, I share it here in PDF.

If you or your organization is in the business of providing scientific advice to policy makers, how does your organization's position on scientific integrity stack up?