Combined UCS Blogs

California’s Cap-and-Trade Program and Low Carbon Fuel Standard Go Together Like Peanut Butter and Jelly

UCS Blog - The Equation (text only) -

This year is shaping up to be another action-packed year on climate change in the California Legislature. Last year, legislators passed a sweeping commitment to cut California’s global warming emissions 40 percent between 2020 and 2030, and this year policy makers are considering how California should achieve these big goals. At the center of that conversation is a debate about whether to extend the state’s cap-and-trade program beyond 2020. (For a quick primer on cap-and-trade, check out our Carbon Pricing 101 webpage.)

Lawmakers should extend and refine California’s cap-and-trade program

California’s cap-and-trade program is an important tool for addressing climate change. This is because it sets a price on global warming emissions and that price helps incorporate the costs of climate change and the value of low carbon technologies into the decisions businesses and consumers make.

In addition, the program’s revenues have proven to be a critical source of funds for investments in clean vehicle, fuel, and energy technologies, particularly in communities that are most impacted by fossil fuel pollution.

In short, California’s cap-and-trade program, while not perfect, is helping to address climate change. As the state sets a course to make big cuts in pollution over the next decade, the program’s price signal and investments in clean technologies will become even more important.

However, given that the cap-and-trade program is now in its fifth year and needs updating for the post-2020 period, it also makes sense to consider refinements to the program. The Air Resources Board has proposed some changes, but we support lawmakers taking a closer look at further improvements.

In particular, UCS supports AB 378 (C. Garcia, Holden, E. Garcia), which seeks to better align the cap-and-trade program with air quality goals. This legislation aims to promote strategies that deliver equitable reductions in criteria emissions, toxic contaminants, and global warming pollution that also benefit low income communities and communities of color. Additionally, we advise the legislature to consider:

  • Raising the cap-and-trade program’s price floor (or “auction reserve price”)—This will ensure the price signal from the program is adequately driving investments in clean technologies.
  • Requiring auctioning of allowances except for proven leakage risks—This will ensure that the value of allowances is being used for the benefit of the public.
  • Taking a cautious approach to offsets—It is difficult to verify that some offset projects represent additional and permanent emission reductions. Heavy reliance on offsets would also outsource the co-benefits that come with emission reductions from covered sectors.
  • Pursuing opportunities to link with other jurisdictions—In addition to Quebec and Ontario, Canada, several U.S. states, including neighboring Oregon, may seek to link future economy-wide cap-and-trade programs with California’s large and proven market. The opportunity for linkage is one important way for California’s leadership to spread to other jurisdictions.
The oil industry supports cap-and-trade too?

The politics of extending the cap-and-trade program are starting to get interesting.

For example, the program has picked up new and unlikely supporters. First on that list is the Western States Petroleum Association (WSPA), the oil industry’s trade group. (A few Republicans have also started to voice support for the concept.) Just last year WSPA opposed setting limits on climate pollution in California and previously it fought vehemently against including gasoline and diesel fuel in the cap-and-trade program. Nonetheless, WSPA now supports extending cap-and-trade to 2030.

What’s going on here? Well, the oil industry’s idea of how to best design California’s cap-and-trade program looks quite different from UCS’s vision for the program. WSPA wants to see a limit on the price of allowances, more free allowances to refineries and other industrial sources, and greater use of offsets to maximize flexibility, among other changes. If they succeed, we will see fewer emission reductions from the oil industry, the largest source of emissions in California.

But the biggest prize on WSPA’s wish list in cap-and-trade negotiations is to roll back California’s Low Carbon Fuel Standard (LCFS), a program that focuses directly on the transportation fuel industry. Earlier this month, WSPA launched a website devoted to ending the LCFS. And just last week, at an informational hearing about cap-and-trade, the oil industry’s lobbyist spent half of his testimony talking about the need to eliminate “redundant” programs such as the LCFS.

LCFS guarantees a market for cleaner fuels

In order to understand the importance of the LCFS—and why the oil industry has consistently sought to undermine the program—one must understand the basics of the program.

The LCFS requires petroleum refiners and fuel importers to reduce global warming pollution associated with the fuels they sell. The program regulates the “carbon intensity” of fuels, which is a measurement of global warming emissions per unit of fuel. Moreover, the program looks at emissions over the fuel’s entire life cycle, which means the emissions that come from both producing and using the fuel.

The LCFS requires a gradual reduction in carbon intensity, reaching a 10 percent reduction in 2020, relative to 2010. (ARB plans to extend the program to 2030.) Refineries and fuel importers can meet the requirement by selling fuels that, on average, meet the carbon intensity standard, or by selling fuels whose carbon intensity exceeds the standard while also purchasing credits generated by sellers of lower-carbon fuels, such as biodiesel or electricity.
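To make the credit mechanism concrete, here is a minimal sketch of how LCFS-style credit and deficit accounting works. The carbon intensity values and fuel volumes are illustrative assumptions, not actual program data, and the sketch ignores program details such as the energy economy ratios applied to electricity.

```python
# Minimal sketch of LCFS-style credit/deficit accounting.
# All carbon intensity (CI) values and volumes below are illustrative
# assumptions, not actual ARB program data.

STANDARD_CI = 90.0  # hypothetical standard, in grams CO2e per megajoule (gCO2e/MJ)

# (fuel name, carbon intensity in gCO2e/MJ, energy sold in MJ)
fuels_sold = [
    ("gasoline blend", 98.0, 5.0e9),   # above the standard -> generates deficits
    ("biodiesel",      45.0, 0.8e9),   # below the standard -> generates credits
    ("electricity",    30.0, 0.2e9),
]

def credits_metric_tons(ci, energy_mj, standard=STANDARD_CI):
    """Credits (positive) or deficits (negative) in metric tons of CO2e."""
    return (standard - ci) * energy_mj / 1e6  # grams -> metric tons

balance = 0.0
for name, ci, mj in fuels_sold:
    tons = credits_metric_tons(ci, mj)
    balance += tons
    print(f"{name:15s} {tons:12,.0f} t CO2e")
print(f"{'net balance':15s} {balance:12,.0f} t CO2e")
# A negative net balance means the regulated party must buy credits from
# sellers of lower-carbon fuels in order to comply.
```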

Since gasoline and diesel are above the standard, the LCFS creates a dependable market for cleaner fuels, which drives steady investment into non-petroleum fuel sources. The program’s performance speaks for itself. Between 2011 and 2016, use of alternative fuels grew by 50 percent in California, while the average carbon intensity of these fuels declined by 30 percent. All told, the program reported 25 million tons of reduced carbon emissions.

California’s LCFS has helped grow the state’s clean fuels market by 50 percent.

Like peanut butter and jelly

The oil industry argues the cap-and-trade program and LCFS just don’t mix—like oil and water, you might say. However, I see the two policies more like peanut butter and jelly—they are good on their own but so much better together.

The two programs fulfill different niches in California’s climate-fighting repertoire. The LCFS is fostering research, development, and deployment of new and better clean fuel options. That’s why more than 150 clean fuel producers, vehicle manufacturers, and fleet operators recently voiced their support for the program.

Meanwhile, the cap-and-trade program is helping to integrate the costs of climate change into business decisions throughout the economy while also supporting investments in deployment of clean technologies through the program’s revenues.

The two programs also complement one another because compliance with the LCFS eases compliance with cap-and-trade. For example, recent research showed that extending the LCFS to 2030 would cut cap-and-trade allowance prices, reducing compliance costs for all sources covered by the cap-and-trade program.

While the oil industry would love to rely only on cap-and-trade to cut carbon pollution from cars and trucks, the reality is that a carbon price alone is not enough to decarbonize our transportation system. The cap-and-trade program and LCFS are two key components of the state’s multifaceted approach to reduce the carbon content of fuels, improve the fuel efficiency of vehicles, and reduce vehicle use. It’s critical that both policies are designed wisely and extended to 2030, even if that means overcoming the oil industry’s opposition.

Three Steps Shell Can Take for the Climate—and to Earn Public Trust

UCS Blog - The Equation (text only) -

“Trust has been eroded to the point where it is an issue for our long-term future.”

—Ben van Beurden, Royal Dutch Shell CEO, at CERAWeek in March 2017

Royal Dutch Shell holds its Annual General Meeting (AGM) tomorrow in the Netherlands, and like other major fossil fuel producers the company is under pressure from its investors to do more to address climate risks.

UCS took an in-depth look at Shell’s climate-related positions and actions for The Climate Accountability Scorecard last year. We found a few bright spots, and we made several recommendations for improvement. Here are three steps company decision makers could take at tomorrow’s AGM to signal that Shell wants to earn the trust of investors, the public, and policy makers.

1) Stop supporting disinformation

A 1991 video recently unearthed by The Guardian shows that Shell clearly recognized the risks of climate change decades ago. The film, titled “Climate of Concern,” warned of climate change “at a rate faster than at any time since the end of the ice age—change too fast perhaps for life to adapt, without severe dislocation.”

Yet despite this knowledge, Shell funded—and continues to fund—trade associations and industry groups that spread climate disinformation and seek to block climate action.

For a decade, Shell was part of the Global Climate Coalition, which presented itself as an umbrella trade association coordinating business participation in the international debate on global climate change. As we now know, its real purpose was to oppose mandatory reductions in carbon emissions.

Shell was also a member of the American Legislative Exchange Council (ALEC), a US-based lobbying group that peddles disinformation about climate science and tries to roll back clean energy polices. In announcing its decision to leave ALEC in 2015, the company said that ALEC’s stance on climate change “is clearly inconsistent with our own.” In an interview aired last week, CEO van Beurden reiterated that “we could not reconcile ourselves” with ALEC’s position.

Indeed, in The Climate Accountability Scorecard UCS scored Shell “advanced” for its own public statements on climate science and the consequent need for swift and deep reductions in emissions from the burning of fossil fuels.

However, Shell has not applied the same standard to other trade associations and industry groups that it did to ALEC. Shell still plays leadership roles in the American Petroleum Institute (API), the National Association of Manufacturers (NAM), and the Western States Petroleum Association (WSPA)—all of which take positions on climate science and/or climate action that are inconsistent with Shell’s stated position. Shell has not taken any steps to distance itself from climate disinformation spread by these groups.

2) Set company-wide emissions reduction targets consistent with 2°C

Shell scored “fair” in The Climate Accountability Scorecard in the area of planning for a world free from carbon pollution. The company was ahead of most of its peers in expressing support for the Paris Climate Agreement and its goal of keeping warming well below a 2°C increase above pre-industrial levels.

Since the Scorecard release, Shell has made a couple of positive moves in this area:

  • In March, the company announced plans to sell most of its production assets in Canada’s oil sands in a deal worth $7.25 billion. Oil sands are among the most carbon-intensive fuel sources to extract and refine, and thus clearly disadvantaged in the transition to a low-carbon energy future. (Shell will maintain an interest in the Athabasca oil sands project, directly and via its proposed acquisition of Marathon Oil Canada Corp.)
  • Also in March, Shell announced that climate-related metrics will be factored into executive pay: 10% of bonuses will be based on how well the company manages heat-trapping emissions in its operations.

Some shareholders, however, don’t believe these steps go far enough. The Dutch organization Follow This has filed a shareholder resolution calling on Shell to “set and publish targets for reducing greenhouse gas (GHG) emissions that are aligned with the goal of the Paris Climate Agreement to limit global warming to well below 2°C.”

Shell’s directors unanimously oppose the resolution, arguing it would have a detrimental impact on the company. While affirming Shell’s support for the Paris Climate Agreement, they maintain that “in the near term the greatest contribution Shell can make is to continue to grow the role of natural gas.” Yet as my UCS colleagues have demonstrated, there are tremendous risks to our growing over-reliance on natural gas.

Meanwhile, the UK responsible investment charity ShareAction is urging shareholders to reject Shell’s proposed remuneration policy in a binding vote, and engage with the company over the need to make a clearer commitment to the low-carbon transition. Among other arguments, ShareAction notes that “Shell fails to include indicators that meaningfully focus executive attention on transitioning the firm’s business model for <2°C resilience. The 10% weighted GHG metric focuses on operational emissions, rather than long-term strategic changes required in the context of the transition.”

3) Stand up for full disclosure of climate-related risks

Shell lagged behind other major fossil fuel companies in disclosing climate-related risks to investors, scoring “poor” overall in this category. For example, the company generally acknowledges physical risks—such as weather—to its operations, but does not include discussion of climate change as a contributor to those risks.

Recognizing the potential systemic risks posed by climate change to the global economy, the Task Force on Climate-Related Financial Disclosures (TCFD) is recommending consistent, comparable, and timely disclosures of climate-related risks and opportunities in public financial filings. UCS participated in the TCFD’s public consultation process, through which a broad range of respondents were generally supportive of its recommendations.

Unfortunately, several of Shell’s competitors (BP, Chevron, ConocoPhillips, and Total SA) funded a report attacking the TCFD’s recommendations, which was rolled out last week at an event hosted by the US Chamber of Commerce.

Shell has an opportunity to demonstrate leadership on transparency and disclosure by publicly supporting the TCFD’s recommendations—including transparent discussion of the business implications of a 2° Celsius scenario.

I’ll be keenly awaiting the results of Shell’s AGM tomorrow to see whether the company rises to these challenges. And I look forward to discussing developments at recent and upcoming annual shareholders’ meetings of fossil fuel producers at an event in Houston on Wednesday night, organized by UCS in collaboration with Rice University faculty concerned about climate change.

Climate Change and Climate Risk: Critical Challenges for Fossil Fuel Companies and Their Investors will feature a distinguished panel of scientists, public health experts, investment experts, and community leaders exploring the fossil fuel industry’s role in transitioning to a carbon-constrained future. The event will be live-streamed and available for viewing at this link.

Congress vs. Trump: Are the President’s Anti-Science Budget Priorities Headed for Another Defeat?

UCS Blog - The Equation (text only) -

The president’s “America First” budget blueprint, a.k.a. the “skinny budget,” made a lot of noise when it was introduced two months ago and brought focus to the administration’s upcoming FY2018 budget priorities. The administration followed up shortly after by requesting reductions in the 2017 budget for the remaining five months of this fiscal year.

But then it came time for Congress to act, and they said, “Thank you for the very amusing budget, Mr. President, but we are going to do our own thing… and incidentally, thank you for uniting Republicans and Democrats in opposition to your draconian cuts.”

After all, it’s members of Congress that have to figure out how to keep the federal government operating. So with a government shutdown looming, Congress effectively ignored the administration’s requests, and on May 4 passed a bill to fund the government for the rest of the fiscal year through September 30, 2017. The bill was a repudiation of the president’s budget priorities, as it increased funding to many agencies, offices, and programs that the administration specifically targeted for cuts or elimination.

The president is expected to release his full fiscal year 2018 budget this week (fleshing out the details of his “skinny budget”), and there aren’t expected to be any surprises. It will likely track the skinny budget pretty closely, which means it’s going nowhere in Congress.

To get a clearer sense of the prospects for the president’s FY2018 budget, let’s look at some of the budget choices Congress made for the FY2017 Omnibus Spending Bill that are at odds with what President Trump proposed for 2018:

Department of Energy (DOE)

President Trump’s FY18 budget request proposes to eliminate ARPA-E, DOE’s innovative clean energy technology R&D program; and the Loan Programs Office, which provides credit support to help deploy innovative clean energy technologies. Additionally, it targeted critical programs in the Office of Energy Efficiency and Renewable Energy (EERE), like the Weatherization Assistance Program (which funds energy efficiency improvements for low-income households) and the State Energy Program (which provides funding and technical assistance to states for increasing energy efficiency or renewable energy). The president’s FY17 request specifically targeted EERE for a 25% cut ($516 million).

Instead of eliminating ARPA-E, Congress gave it a 5% increase in funding in FY17 (from $291 million to $306 million) and also provided an extension of current funding for the Loan Programs Office. The Weatherization Assistance Program was given a 6% increase while the State Energy Program received sustained funding at the 2016 level. EERE ultimately received a very slight increase instead of a devastating cut.

National Oceanic and Atmospheric Administration (NOAA)

The president’s FY18 budget request proposed to cut over $250 million “in grants and programs supporting coastal and marine management, research, and education,” which essentially constituted 23% of the combined budget for the Office of Oceanic and Atmospheric Research (OAR) and the National Ocean Service (NOS).

The administration was more specific in their FY17 budget request, calling for cuts to coastal zone management grants, regional coastal resilience grants, and climate research grants. The administration also proposed reducing satellite capacity at the National Environmental Satellite, Data, and Information Service (NESDIS), which provides the data needed to produce National Weather Service forecasts.

Instead of cuts, in the FY17 Omnibus bill Congress provided a slight increase in funding for Coastal Science and Assessment, as well as for Ocean and Coastal Management Services, at NOS. OAR received a 6.6% increase in funding (from $482 million to $514.1 million), with the climate research budget untouched. And Congress increased funding for Environmental Satellite Observing Systems at NESDIS by 25% (from $130.1 million to $163.4 million).

Environmental Protection Agency (EPA)

The president’s FY18 budget request proposed cutting the EPA’s budget by 31% and eliminating 3,200 staff and over 50 programs, including those supporting international and domestic climate change research and partnership programs. His budget also reduces funds allocated to Superfund, Brownfields, compliance monitoring, and enforcement, which further endangers economically vulnerable communities and communities of color. While the administration would have the states take on more of the EPA’s responsibility, the president’s budget eliminates geographic programs and reduces funding for state categorical grants by a whopping 45 percent.

The EPA was spared any drastic cuts and staff layoffs in FY17. Its clean air and climate programs were funded at the previous year’s levels, as was the Compliance Monitoring Program (which helps ensure our environmental laws are followed), enforcement, and Superfund. State and Tribal Assistance Grants and Geographic Programs, which support Brownfields Projects, local air management, water protection, and lead and hazardous waste programs, actually received a slight increase in funding.

FEMA, NASA and more…

Congress rebuffed the president’s request to eliminate FEMA’s Pre-Disaster Mitigation Grant Program, which helps bring down the cost of disasters and protects communities by supporting preparedness efforts. Also escaping cuts was NASA’s Earth Science Program, which develops, launches, and maintains a network of satellites that collect data on Earth’s surface and atmosphere—a critical tool for improving predictive capacity for everything from agricultural commodities and water management to infrastructure.

There are examples like these all throughout the FY17 Omnibus spending bill that Congress passed two weeks ago. Some say the president was rebuffed because Congress was in no mood to shut down the government over spending, but it’s also true that there were many congressional Republicans who opposed large parts of the president’s budget.

Appropriators are not interested in gutting the institutions they fund, and House Speaker Ryan is not interested in shutting down the government, which would call into question his party’s ability to govern. You can bet many Republicans breathed a private sigh of relief when leadership reached a deal on what effectively was another “continuing resolution” (CR).

It wasn’t all good

One significant flaw in the budget deal is the insertion of an anti-science policy rider that instructs the Departments of Agriculture and Energy to work with the EPA to establish policies that “reflect the carbon neutrality of forest bioenergy.”

Unfortunately, burning forest biomass to make electricity is not inherently carbon-neutral because “removing the carbon dioxide released from burning wood through new tree growth requires many decades to a century. All the while the added carbon dioxide is in the atmosphere trapping heat.”

Congress should not be legislating science, and this is a cautionary tale for the FY18 budget fight. Special interest amendments, or “riders,” have the ability to make a reasonable budget an unsavory bill. The biomass rider got in because it had bipartisan support, but going forward, both parties will need to reach a clear understanding on what constitutes a “clean budget” if they want to eventually reach an agreement. Constituents will also need to hold their members of Congress accountable if they don’t want government funding bills to become delivery devices for bad, long-lived policy.

The 2018 budget fight: government shutdown, continuing resolution, or “the nuclear option”?

So what does this mean for the 2018 budget? Where are we headed?

If Congress can’t pass another bill to fund the government for the 2018 fiscal year before October 1, the government will effectively shut down (and we all know what that looks like).

While the president has said “our country needs a good shutdown,” most Americans would strongly disagree… as would most members of Congress. But the president is angling to give himself some breathing room because he knows it is impossible for his budget priorities to pass the Senate’s 60-vote threshold for a filibuster.

A bill that continues funding the government at last year’s spending levels is a loss for the president, and there aren’t enough Democrats that would support a budget deal with the kinds of cuts to discretionary spending that he is proposing.

But the president is negotiating, and this tactic is straight out of “The Art of the Deal.” He’s betting that if he proposes extremely deep cuts, Congress will move slightly more in his direction on spending levels… and that government shutdowns don’t last forever.

The most likely outcome is a continuing resolution or “CR,” which would keep the federal government functioning at current spending levels for a limited period of time. Some Republican appropriators have already given up on the prospect of moving their subcommittee’s spending bills through the chambers and are instructing their staff to start developing a list of add-ons to the current spending package.

It takes Democratic votes to pass a spending bill out of the Senate, and they will not support budget cuts. Shutting down the government is bad for both the president and the majority party in Congress, so most Republicans don’t want to go in that direction. Continuing funding at existing spending levels would prevent the president from advancing his domestic agenda and would be a big loss, but it’s also the most likely outcome… that is, unless the Senate changes the rules.

Senate Majority Leader Mitch McConnell (R-KY) could potentially employ “the nuclear option” and get rid of the 60-vote requirement (the Senate filibuster), taking away the need for Democratic votes to pass a budget. McConnell has already done this once this year to get the Gorsuch Supreme Court nomination through the Senate. It’s possible that when faced with a choice between a CR the president won’t sign, a bill the Democrats won’t pass, and a government shutdown, McConnell could set aside his institutionalist tendencies and do away with the filibuster on federal spending.

Going nuclear is an unlikely outcome, but it’s definitely a possibility. Do most Americans see the Senate as the greatest deliberative body in the world? Do they even know what the filibuster is? I suspect not, and that means that the only political downside to changing the rules for the budget would be reciprocity by the Democrats at a future time when they have control of Congress. Is that enough to keep Senator McConnell from doing it?

What you can do to protect critical programs and spending

Watchdog the appropriations process this year and weigh in throughout the summer with your members of Congress on the spending priorities you care about. Tell them not to vote for a budget that cuts those priorities, and if there are no appropriators in your congressional delegation, tell them to weigh in with the appropriations subcommittees and advocate for your priorities.

If we get a CR, that’s a good thing because federal spending would be set at current levels: no cuts. But CRs don’t last forever; eventually Congress will pass another budget. Advocating with appropriators increases the likelihood of higher funding levels in those subcommittee appropriations bills for the things you care about. If you don’t work the appropriations process, if you don’t engage with your members of Congress, you get what you get (it may be cuts), and all you can do is pray for a never-ending CR.

We may be looking at a scenario where a federal budget is voted on by a simple majority, in which case the funding levels coming out of the appropriations subcommittees really matter. If you care about federal spending priorities, depending on the Senate filibuster as protection may not turn out to be a prudent strategy. Consider that there are also Republicans that care about some of these spending priorities, like research and innovation.

If constituents are actively engaged in communicating spending priorities with their members of Congress, even without the 60-vote hurdle, meaningful cuts to programs and agencies that support things like scientific research, clean energy innovation, public health, and community preparedness for climate change won’t come to fruition.

So call your members of Congress! Show up to those town halls! And drop by your local congressional office!

North Korea’s May 21 Missile Launch

UCS Blog - All Things Nuclear (text only) -

A week after the test launch of an intermediate-range Hwasong-12 missile, North Korea has tested a medium-range missile. From press reports, this appears to be a Pukguksong-2 missile, which is the land-based version of the submarine-launched missile it is developing. This appears to be the second successful test of this version of the missile.

South Korean sources reported this test had a range of 500 km (300 miles) and reached an altitude of 560 km (350 miles). If accurate, this trajectory is essentially the same as the previous test of the Pukguksong-2 in February (Fig. 1). Flown on a standard trajectory, this missile carrying the same payload would have a range of about 1,250 km (780 miles). If this test was conducted with a very light payload, as North Korea is believed to have done in past tests, the actual range with a warhead could be significantly shorter.

Fig. 1: The red curve is reportedly the trajectory followed on this test. The black curve (MET = minimum-energy trajectory) is the same missile on a maximum range trajectory.
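As a rough illustration of how analysts translate a lofted test into an equivalent maximum range, here is a back-of-envelope sketch. It assumes a simple vacuum ballistic trajectory launched from the surface of a non-rotating Earth, ignores drag and the powered boost phase, and uses the reported apogee and ground range as inputs; it is an order-of-magnitude estimate, not the detailed modeling an actual assessment would rely on.

```python
import math

g = 9.81                 # m/s^2, surface gravity
R_E = 6.371e6            # m, Earth radius
reported_range = 500e3   # m, reported ground range of the lofted test
reported_apogee = 560e3  # m, reported apogee

# Step 1: infer an effective burnout speed from the lofted flight, treating it
# as a flat-earth, vacuum trajectory launched at angle theta from the ground:
#   range  = v^2 * sin(2*theta) / g
#   apogee = v^2 * sin(theta)^2 / (2*g)
theta = math.atan(4 * reported_apogee / reported_range)  # launch angle
v2 = 2 * g * reported_apogee / math.sin(theta) ** 2      # v^2 at burnout

# Step 2: maximum range for that speed on a minimum-energy trajectory
# over a spherical, non-rotating Earth (vacuum):
#   v^2 / (g*R_E) = 2*sin(phi/2) / (1 + sin(phi/2)),  ground range = R_E * phi
x = v2 / (g * R_E)
phi = 2 * math.asin(x / (2 - x))
print(f"inferred burnout speed ~{math.sqrt(v2) / 1000:.1f} km/s")
print(f"estimated maximum range ~{R_E * phi / 1000:.0f} km")
```

With the reported numbers, this sketch gives roughly 1,300 km, in the same ballpark as the ~1,250 km figure above; real assessments also account for drag, the boost phase, Earth’s rotation, and payload assumptions.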

The Pukguksong-2 uses solid fuel rather than liquid fuel like most of North Korea’s missiles. For military purposes, solid-fueled missiles have the advantage that they have the fuel loaded in them and can be launched quickly after moving them to the launch site. With large liquid-fueled missiles you instead need to move them without fuel and then fuel them once they are in place at the launch site. That process can take an hour or so, and the truck carrying the missile must be accompanied by a number of trucks containing the fuel. So it is easier to spot a liquid missile before launch and there is more time available to attack it.

However, it is easier to build liquid missiles, so that is typically where countries begin. North Korea obtained liquid fuel technology from the Soviet Union in the 1980s, and built its program up from there. It is still in the early stages of developing solid missiles.

Building large solid missiles is difficult. If you look at examples of other countries building long-range solid missiles, e.g., France and China, it took them several decades to get from the point of building a medium-range solid missile, like North Korea has, to building a solid ICBM. So this is not something that will happen soon, but with time North Korea will be able to do it.

Infrastructure Spending Is Coming. Climate Change Tells Us to Spend Wisely

UCS Blog - The Equation (text only) -

The news of new federal infrastructure proposals landed in a timely fashion with this year’s Infrastructure Week, including a bill introduced by the House Democrats (LIFT America Act, HR 2479) and another expected shortly from Trump’s administration. For years now, the American Society of Civil Engineers has graded the U.S.’s infrastructure at near failing (D+). With the hashtag #TimetoBuild, Infrastructure Week participants are urging policymakers to “invest in projects, technologies, and policies necessary to make America competitive, prosperous, and safe.”

We must build for the future

Conversations in Washington, D.C. and across the country over the coming weeks and months are sure to focus on what projects to build. But first we need to ask: what future are we building for? Will it be a future based on assumptions and needs similar to those we experience today, or a future radically shaped by climate change? (Changing demographics and technologies will undoubtedly shape this future as well.)

It’s imperative that this changing climate future is incorporated into how we design and plan infrastructure projects, especially as we consider investing billions of taxpayer dollars into much needed enhancements to our transportation, energy, and water systems.

Climate change will shape our future

A vehicle remained stranded in the floodwater of Highway 37 on Jan. 24, 2017. Photo: Marin Independent Journal.

Engineers and planners know that, ideally, long-lived infrastructure must be built to serve needs over decades and withstand the ravages of time—including the effects of harsh weather and extended use—and with a margin of safety to account for unanticipated risks.

Much of our current infrastructure was built assuming that past trends for climate and weather were good predictors of the future. One example, near where I currently live, is the approach to the new Bay Bridge in Oakland, California, which was designed and built without consideration of sea level rise and will be permanently under water with 3 feet of sea level rise, a likely scenario by the end of this century. Currently, more than 270,000 vehicles travel each day on this bridge between San Francisco and the East Bay.

Another near my hometown in New Jersey is LaGuardia Airport in Queens, NY, which accommodated 30 million passengers in 2016. One study shows that if seas rise another 3 feet, it could be permanently inundated; the PATH and Hoboken Terminal are at risk as well.

Instead, we must look forward to what climate models and forecasts tell us will be the “new normal”: higher temperatures, more frequent and intense extreme weather events like droughts and flooding, larger wildfires, and accelerated sea level rise. This version of the future will further stress our already strained roads, bridges, water and energy systems, as well as the natural or green infrastructure systems that can play a key role in limiting these climate impacts (e.g. flood protection). As a result, their ability to reliably and safely provide the critical services that our economy, public safety, and welfare depend on is threatened.

The reality is we are not yet systematically planning, designing and building our infrastructure with climate projections in mind.

Recent events as a preview

We can look at recent events for a preview of some of the infrastructure challenges we may face with more frequency and severity in the future because of a changing climate. (These events themselves are not necessarily the direct result of climate change but studies do show that climate change is making certain extreme events more likely, like the 2016 Louisiana floods). For example:

  • In September 2015, the Butte and Valley Fires destroyed more than one thousand structures and damaged hundreds of power lines and poles, leaving thousands of Californians without power.
  • Earlier this year, more than 188,000 residents downstream of Oroville Dam were ordered to evacuate as water releases in response to heavy rains and runoff damaged both the concrete spillway and a never-before-used earthen emergency spillway, threatening the dam.
  • Winter storms also resulted in extreme precipitation that devastated California’s roads, highways, and bridges with flooding, landslides, and erosion, resulting in roughly $860 million in repairs.

View of the Valley Fire, which destroyed nearly 77,000 acres in Northern California from Sept. 12, 2015 to Oct. 15, 2015. Photo: U.S. Coast Guard.

Similar events have been occurring all over the country, including recent highway closures from flooding along the Mississippi River. Other failures are documented in a Union of Concerned Scientists’ blog series “Planning Failures: The Costly Risks of Ignoring Climate Change,” and a report on the climate risks to our electricity systems.

Will the infrastructure we start building today still function and meet our needs in a future affected by climate change? Maybe. But unlikely, if we don’t plan differently.

Will our taxpayer investments be sound and will business continuity and public safety be assured if we don’t integrate climate risk into our infrastructure decisions? No.

If we make significant federal infrastructure investments over the next few years without designing in protections against more extreme climate forces, we risk spending much more of our limited public resources on repair, maintenance, and rebuilding down the line–a massively expensive proposition.

Building for our climate future

UCS has recently joined and started to amplify a small but growing conversation about what exactly climate-resilient infrastructure entails. Participants include several of the Steering Committee Members and Sponsors of Infrastructure Week, including the Brookings Institution, the American Society of Civil Engineers, AECOM, WSP, and HNTB. The LIFT America Act also includes some funding dedicated to preparing infrastructure for the impacts of climate change.

For example, last year, UCS sponsored a bill, AB 2800 (Quirk), that Governor Brown signed into law, to establish the Climate-Safe Infrastructure Working Group. It brings together climate scientists, state professional engineers, architects and others to engage in a nuts-and-bolts conversation about how to better integrate climate impacts into infrastructure design, examining topics like key barriers, important information needs, and the best design approach for a range of future climate scenarios.

UCS also successfully advocated for the California State Water Resources Control Board to adopt a resolution to embed climate science into all of its existing work: permits, plans, policies, and decisions.

A few principles for climate resilient infrastructure

At UCS, we have also been thinking about key principles to ensure that infrastructure can withstand climate shocks and stresses, minimize disruptions to the system (and the communities that depend on it), protect safety, and rebound quickly. Our report, “Towards Climate Resilience: A Framework and Principles for Science-Based Adaptation”, outlines fifteen key principles for science-based adaptation.

We sought input from a panel of experts, including engineers, investors, emergency managers, climate scientists, transportation planners, water and energy utilities, and environmental justice organizations, at a recent UCS convening in Oakland, California focused on how we can start to advance policies and programs that will result in infrastructure that can withstand climate impacts.

The following principles draw largely from these sources. They are aspirational and not exhaustive, and will continue to evolve. To be climate-resilient, new and upgraded infrastructure should be built with these criteria in mind:

  • Scientifically sound: Infrastructure decisions should be consistent with the best-available climate science and what we know about impacts on human and natural systems (e.g., flexible and adaptive approaches, robust decisions, systems thinking, and planning for the appropriate magnitude and timing of change).
  • Socially just: New or upgraded infrastructure projects must empower communities to thrive, and ensure vulnerable groups can manage the climate risks they’ll face and share equitably in the benefits and costs of action. The historic under-investment in infrastructure in low-income and communities of color must be addressed.
  • Fiscally sensible: Planning should consider the costs of not adapting to climate change (e.g., failure to deliver services or costs of emergency repairs and maintenance) as well as the fiscal and other benefits of action (e.g., one dollar spent preparing infrastructure can save four dollars in recovery; investments in enhancing and protecting natural infrastructure that accommodates sea level rise, absorbs stormwater runoff, and creates parks and recreation areas).
  • Ambitiously commonsense: Infrastructure projects should avoid maladaptation, or actions that unintentionally increase vulnerabilities and reduce capacity to adapt, and should provide multiple benefits. They should also protect what people cherish and reflect a long-term vision consistent with society’s values.
  • Aligned with climate goals: Since aggressive emissions reductions are essential to slowing the rate at which climate risks become more severe and common, and since we need to prepare for projected climate risks, infrastructure projects should align with and complement long-term climate goals, both mitigation and adaptation.
Americans want action for a safer, more climate resilient future

A 2015 study found that the majority of Americans are worried about global warming, with more than 40% believing it will harm them personally. As we engage in discussions around how to revitalize our economy, create jobs, and protect public safety by investing in infrastructure, climate change is telling us to plan and spend wisely.

From the current federal proposals to the recently enacted California transportation package, SB 1 ($52 billion) and hundreds of millions more in state and federal emergency funds for water and flood-protection, there is a lot at stake: taxpayer dollars, public safety and welfare, and economic prosperity. We would be smart to heed this familiar old adage when it comes to accounting for climate risks in these infrastructure projects: a failure to plan is a plan to fail.

No Rest for the Sea-weary: Science in the Service of Continually Improving Ocean Management

UCS Blog - The Equation (text only) -

Marine reserves, or no-fishing zones, are increasing throughout the world. Their goals are variable and numerous, often a mix of conserving our ocean’s biodiversity and supporting the ability to fish for seafood outside reserves for generations to come. California is one location that has seen the recent implementation of marine reserves, where the California Marine Life Protection Act led to the establishment of one of the world’s largest networks of marine reserves.

A number of scientific efforts have informed the design of marine reserves throughout the world and in California. Mathematical models were central to these research efforts as they let scientists and managers do simulated “experiments” of how different reserve locations, sizes, and distances from each other affect how well reserves might achieve their goals.

While a PhD student in the early 2000s, I began my scientific career as one of many contributing to these efforts. In the process, a key lesson I learned was the value of pursuing partnerships with government agencies such as NOAA Fisheries to ensure that the science I was doing was relevant to managers’ questions, an approach that has become central to my research ever since.

Map of the California Marine Protected Areas; courtesy of California Department of Fish and Wildlife

A transition from design to testing

Now, with many marine reserves in place, both managers and scientists are turning to the question of whether they are working. On average (but not always), marine reserves harbor larger fish and larger population sizes for fished species, as well as greater total biomass and diversity, compared both to before reserves were in place and to areas outside reserves. However, answering a more nuanced question—for a given reserve system, is it working as expected?—can help managers engage in “adaptive management”: using the comparison of expectations to data to identify any shortfalls and adjust management or scientific understanding where needed to better achieve the original goals.

Mathematical models are crucial to calculating expectations and therefore to answering this question. The original models used to answer marine reserve design questions focused on responses that might occur after multiple decades. Now models must focus on predicting what types of changes might be detectable over the 5-15 year time frame of reserve evaluation. Helping to develop such modeling tools as part of a larger collaboration, with colleagues Alan Hastings and Louis Botsford at UC Davis and Will White at the University of North Carolina, is the focus of my latest research on marine reserves in an ongoing project that started shortly after I arrived as a professor at UC Davis.
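For readers curious what such a model looks like in miniature, here is a highly simplified sketch of the kind of simulation involved: a single fished population whose harvest rate drops to zero inside a new reserve, tracked over the 5-15 year evaluation window. The growth and harvest parameters are invented for illustration; the actual models used in this work are far more detailed (age structure, larval dispersal, variable recruitment, fishing history).

```python
# Toy before/after-reserve simulation: logistic growth with and without harvest.
# Parameters are illustrative assumptions, not values from the actual models.

r = 0.3        # intrinsic population growth rate (per year)
K = 1000.0     # carrying capacity (biomass units)
harvest = 0.2  # pre-reserve annual harvest rate
years = 15     # evaluation window after reserve establishment

def step(biomass, harvest_rate):
    """One year of logistic growth followed by proportional harvest."""
    grown = biomass + r * biomass * (1 - biomass / K)
    return grown * (1 - harvest_rate)

# Start both areas at the same depleted biomass, then protect one of them.
b_reserve = b_fished = 400.0
for year in range(1, years + 1):
    b_reserve = step(b_reserve, 0.0)      # inside the reserve: no fishing
    b_fished = step(b_fished, harvest)    # outside: fishing continues
    print(f"year {year:2d}: reserve {b_reserve:6.0f}  fished {b_fished:6.0f}")

# The reserve-to-fished biomass ratio after `years` is the kind of short-term
# signal that monitoring programs try to detect against natural variability.
print(f"response ratio after {years} years: {b_reserve / b_fished:.2f}")
```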

To date we have developed new models to investigate how short-term expectations in marine reserves depend on fish characteristics and fishing history. Now we have a new partnership with California’s Department of Fish and Wildlife, the responsible management agency for California’s marine reserves, to collaboratively apply these tools to our statewide reserve system. This application will help rigorously test how effective California’s marine reserves are, and therefore help with continually improving management to support both the nutrition and recreation that Californians derive from the sea. In addition, it will let California serve as a leading example of model-based adaptive management that could be applied to marine reserves throughout the world.

The role of federal funding

The cabezon is just one type of fish protected from fishing in California’s marine reserves. Photo credit: Wikimedia Commons.

Our project on models applied to adaptive management started with funding in 2010–2014 from NOAA SeaGrant, a funding source uniquely suited to support research that can help improve ocean and fisheries management. With this support, we could be forward-looking about developing the modeling tools that the State of California now needs. NOAA SeaGrant would be eliminated under the current administration’s budget proposal.

My other experience with NOAA SeaGrant is through a graduate student fellowship program that has funded PhD students in my (and my colleagues’) lab group to do a variety of marine reserve and fisheries research projects. This fellowship funds joint mentorship by NOAA Fisheries and academic scientists towards student research projects relevant to managing our nation’s fisheries. Along with allowing these students to bring cutting-edge mathematical approaches that they learn at UC Davis to collaborations with their NOAA Fisheries mentors, this funding gives students the invaluable experience I had as a PhD student in learning how to develop partnerships with government agencies that spur research relevant to management needs. Both developing such partnerships and training students in these approaches are crucial elements to making sure that new scientific advancements are put to use. This small amount of money goes a long way towards creating future leaders who will continue to help improve the management of our ocean resources.

 

Marissa Baskett is currently an Associate Professor in the Department of Environmental Science and Policy at the University of California, Davis.  Her research and teaching focus on conservation biology and the use of mathematical models in ecology.  She received a B.S. in Biological Sciences at Stanford University and both an M.A. and Ph.D. in Ecology and Evolutionary Biology at Princeton University, and she is an Ecological Society of America Early Career Fellow.  

The views expressed in this post solely represent the opinions of Marissa Baskett and do not necessarily represent the views of UC Davis or any of her funders or partners.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

New Study on Smart Charging Connects EVs & The Grid

UCS Blog - The Equation (text only) -

We know that electric vehicles (EVs) tend to be more environmentally friendly than gasoline cars. We also know that a future dominated by EVs poses a problem—what happens if everyone charges their cars at the same time (e.g., when they get home from work)?

Fortunately, there’s an answer: smart charging. That’s the topic of a report I co-authored, released today.

As a flexible load, EVs could help utilities balance supply and demand, enabling the grid to accommodate a larger fraction of variable renewable energy such as wind and solar. As well, the charging systems can help utilities and grid operators identify and fix a range of problems. The vehicles can be something new, not simply an electricity demand that “just happens,” but an integral component of grid modernization.

Where the timing and power of the EV charging automatically adjust to meet drivers’ needs and grid needs, adding EVs can reduce total energy system costs and pollution.

This idea has been around since the mid-1990s, with pilots going back at least to 2001. It has been the focus of many recent papers, including notable work from the Smart Electric Power Alliance, the Rocky Mountain Institute, the International Council on Clean Transportation, the Natural Resources Defense Council, the National Renewable Energy Laboratory, Synapse Energy Economics, and many more.

Over the past two years, I’ve read hundreds of papers, talked to dozens of experts, and convened a pair of conferences on electric vehicles and the grid. I am pleased to release a report of my findings at www.ucsusa.org/smartcharging.

Conclusions, but not the end

This is a wide-ranging and fast-moving field of research with new developments constantly. As well, many well-regarded experts have divergent views on certain topics. Still, a few common themes emerged.

  • Smart charging is viable today. However, not all of the use cases have high market value in all regions. Demand response, for example, is valuable in regions with rapid load growth, but is less valuable in regions where electricity demand has plateaued.
  • The needs of transportation users take priority. Automakers, utilities, charging providers, and regulators all stress the overriding importance of respecting the needs of transportation users. No stakeholder wants to inconvenience drivers by having their vehicles uncharged when needed.
  • Time-of-use pricing is a near-term option for integrating electric vehicles with the grid. Using price signals to align charging with grid needs on an hourly basis—a straightforward implementation of smart charging—can offer significant benefits to renewable energy utilization.
  • Utilities need a plan to use the data. The sophisticated electronics built into an EV or a charger can measure power quality and demand on the electric grid. But without the capabilities to gather and analyze this data, utilities cannot use it to improve their operations.

The report also outlines a number of near-term recommendations, such as encouraging workplace charging, rethinking demand charges, and asking the right questions in pilot projects.

Defining “smart”

One important recommendation is that “smart” charging algorithms should consider pollution impacts. This emerged from the analytical modeling that UCS conducted in this research.

Basic applications of “smart charging” lower electric system costs by reducing peak demand and shifting charging to off-peak periods, reducing the need for new power plants and lowering consumer costs. But in some regions that have lagged in the transition to cleaner electricity supplies, “baseload” power can be dirtier than peak power. Our model of managed charging shifted power demand by the hour, without regard to lowering emissions or the full range of services that smart charging performs today (like demand response or frequency regulation), let alone adding energy back with two-way vehicle-to-grid operation.

The model illustrated that encouraging off-peak charging without attention to emissions might, at a national scale, slightly increase pollution compared to unmanaged charging. Both charging strategies would reduce pollution compared to relying on internal-combustion vehicles, and the managed case would have lower system costs.

This is not a prediction, but one possible outcome under certain circumstances—a possibility also noted by NREL and by other research teams. It is a consequence of off-peak power that is cheap but dirty, and of a model that does not yet properly represent the full capabilities of smart charging. Charging when renewables are greatest, or employing policies that assign a cost to pollution, would change this outcome.
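To illustrate the point numerically, here is a minimal sketch of hourly “managed charging”: the same overnight charging need allocated either to the cheapest hours or to the lowest-emitting hours. The hourly prices and marginal emissions rates are made-up illustrative values, not outputs of the UCS model, and the sketch ignores everything else smart charging can do (demand response, frequency regulation, vehicle-to-grid).

```python
# Toy comparison of cost-minimizing vs. emissions-minimizing overnight charging.
# Prices ($/kWh) and marginal emissions (kg CO2/kWh) are illustrative assumptions.

hours = list(range(18, 30))  # 6 pm through 5 am (hours 24+ are after midnight)
price = {h: p for h, p in zip(hours, [0.22, 0.20, 0.18, 0.12, 0.08, 0.07,
                                      0.07, 0.07, 0.08, 0.09, 0.10, 0.12])}
emissions = {h: e for h, e in zip(hours, [0.35, 0.38, 0.42, 0.55, 0.60, 0.62,
                                          0.62, 0.60, 0.55, 0.45, 0.40, 0.35])}

need_kwh = 30.0  # energy the vehicle must have by morning
rate_kw = 6.0    # charger power, so each hour can deliver at most 6 kWh

def schedule(sort_key):
    """Greedy schedule: fill the 'best' hours first until the need is met."""
    remaining, plan = need_kwh, {}
    for h in sorted(hours, key=sort_key):
        if remaining <= 0:
            break
        kwh = min(rate_kw, remaining)
        plan[h] = kwh
        remaining -= kwh
    cost = sum(kwh * price[h] for h, kwh in plan.items())
    co2 = sum(kwh * emissions[h] for h, kwh in plan.items())
    return cost, co2

cheap_cost, cheap_co2 = schedule(lambda h: price[h])
clean_cost, clean_co2 = schedule(lambda h: emissions[h])
print(f"cheapest-hours plan: ${cheap_cost:5.2f}, {cheap_co2:5.1f} kg CO2")
print(f"cleanest-hours plan: ${clean_cost:5.2f}, {clean_co2:5.1f} kg CO2")
# With off-peak power that is cheap but relatively dirty (as in these made-up
# numbers), minimizing cost alone can raise emissions relative to a schedule
# that also considers the grid's marginal emissions.
```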

Fortunately, even before we have such policies, we have existing systems that can selectively charge when the greenest power is “on the margin.” This technology and other systems are discussed in the report.

The broader context

Smart charging of electric vehicles has a key role to play in the grid modernization initiatives happening around the country. EVs can be a flexible load that communicates with the grid, incorporates energy storage, benefits from time-varying rates, and participates in ancillary services markets, representing many of the innovations that can improve the economic and environmental performance of our electricity system.

Photo: Steve Fecht/General Motors

There’s an Elephant in the Room, and It Smells Like Natural Gas

UCS Blog - The Equation (text only) -

A curious thing happened in the aftermath of President Trump attempting to sign away the past eight years of work on climate and clean energy: the public face of progress didn’t flinch. From north to south and east to west, utilities and businesses and states and cities swore their decarbonization compasses were unswerving; yes, they said, we’re still closing coal plants, and yes, yes!, we’re still building ever more wind and solar—it just makes sense.

But here’s why all the subsequent commentary reiterating the inevitability of coal’s decline and cheering the unsinkable strength of renewables’ rise was right in facts, but incomplete in message:

Coal is closing. Renewables are rising. But right now, we need to be talking about natural gas.

We’re fine without a map…

President Trump accompanied his signature on the Executive Order on Energy Independence with a vow that the order would put the coal industry “back to work.” But  shortly thereafter, even those in the business reported they weren’t banking on a turn-around. Coal plants just keep shutting down:

This map shows coal units that have retired just between 2007 and 2016—many more have been announced for closure in the near future.

At the same time, renewable resources have been absolutely blowing the wheels off expectations and projections, with costs plummeting and deployment surging. The renewable energy transformation is just that—a power sector transformation—and it certainly appears there’s no going back:

Wind and solar capacity has been growing rapidly since the early 2000s.

Now when you put these two trajectories together, you end up with an electric power sector that has, in recent years, steadily reduced its carbon dioxide emissions:

Three positive charts, and three tremendous reasons to cheer (which we do a lot, and won’t soon stop—clean energy momentum is real and it’s rolling). The problem is, these charts only capture part of the energy sector story.

What’s missing? Natural gas. Or, what is now the largest—and still growing—source of carbon emissions in the electric power sector.

…Until we finally realize we’re lost

There are two phases to climate change emissions reductions conversations. In Phase 1, we acknowledge that a problem exists, we recognize we’re a big reason for that problem, and we take action to initiate change. With the exception of just a few of the most powerful people in our government (oh, them), we seem to have Phase 1 pretty well in hand. Cue the stories about the triumphant resilience of our climate resolve.

The trouble is Phase 2.

In Phase 2, we move to specifics. Namely, specifics about what the waypoints are, and by when we need to reach them. This is the conversation that produces glum replies—and it’s the source of those weighty, distraught affairs scattered among the buoyant takes on the recent executive order—because the truth is:

  • We know what the waypoints are,
  • We know by when we need to reach them, and
  • We know that currently, we’re not on track.

Without a map, we’re left feeling good about the (real and true) broad-brush successes of our trajectory—emissions reductions from the retirement of coal plants; technology and economic improvements accelerating the deployment of renewables—but we have no means by which to measure the adequacy of our decarbonization timeline.

As a result, we put ourselves at grave risk of failing to catch the insufficiency of any path we’re on. And right now? That risk has the potential to become reality as our nation, propelled by the anti-regulatory, pro-fossil policies of the Trump administration, lurches toward a wholesale capitulation to natural gas.

Natural gas and climate change

Last year, carbon dioxide emissions from coal-fired power plants fell 8.6 percent. But take a look at the right-hand panel in the graph below. See what’s not going down? Emissions from natural gas. In fact, carbon dioxide emissions from natural gas overtook coal emissions last year, even omitting the additional climate impacts from methane released during natural gas production and distribution.

Bridge fuel? Not so much.

There’s no sign of the trend stopping, either. Natural gas plants have been popping up all across the country, and new plants keep getting proposed—natural gas generators now comprise more than 40 percent of all electric generating capacity in the US.

Natural gas plants are located all across the country, and new projects keep getting proposed.

And all those natural gas plants mean even more gas pipelines. According to project tracking by S&P Global Market Intelligence, an additional 70 million Dth/d of gas pipeline capacity has been proposed to come online by the early 2020s (subscription). That is a lot of gas, and would require the commitment of a lot of investment dollars.

When plants are built, pipelines are laid, and dollars are committed, it becomes incredibly hard to convince regulators to force utilities to let it all go.

Still, that’s what the markets—and the climate—will demand. As a result, ratepayers may be on the hook for generators’ bad bets.

The thing is, we know today the external costs of these investments, and the tremendous risks of our growing overreliance on natural gas. So why do these assets keep getting built?

Because many of our regulators, utilities, and investors are working without a map.

Now there are a growing number of states stepping up where the federal government has faltered, and beginning to make thoughtful energy decisions based on specific visions of long-term decarbonization goals, like in California, the RGGI states, and as recently as this week, Virginia. Further, an increasing number of insightful and rigorous theoretical maps are being developed, like the US Mid-Century Strategy for Deep Decarbonization, amongst many others (UCS included).

But for the vast majority of the country, the main maps upon which decarbonization pathways were beginning to be based—the Clean Power Plan and the Paris Climate Agreement—are both at immediate risk of losing their status as guiding lights here in the US, sitting as they are beneath the looming specter of the Trump administration’s war on facts.

Plotting a course to a better tomorrow

So where to from here? Ultimately, there is far too much at stake for us to simply hope we’re heading down the right path. Instead, we need to be charting our course to the future based on all of the relevant information, not just some of it.

To start, we recommend policies that include:

  • Moving forward with implementation of the Clean Power Plan, a strong and scientifically rigorous federal carbon standard for power plants.
  • Developing, supporting, and strengthening state and federal clean energy policies, including renewable electricity standards, energy efficiency standards, carbon pricing programs, and investment in the research, development, and deployment of clean energy technologies.
  • Defending and maintaining regulations for fugitive methane emissions, and mitigating the potential public health and safety risks associated with natural gas production and distribution.
  • Improving grid operation and resource planning such that the full value and contributions of renewable resources, energy efficiency, and demand management are recognized, facilitated, and supported.

We need to show that where we’re currently heading isn’t where we want to be.

We need to talk about natural gas.


April 2017 Was the Second Hottest April on Record: We Need NOAA More Than Ever

UCS Blog - The Equation (text only) -

Today, NOAA held its monthly climate call, where it releases the previous month’s global average temperature, and discusses future weather and climate outlooks for the US. According to the data released today, April 2017 was the second warmest April on record after only April 2016, with a temperature 0.90°C (1.62°F) above the 20th century April average. Data for the contiguous US was released earlier, and found April 2017 to be the 11th warmest on record, and 2017 to be the second warmest year to date (January to April data).

That means that, yes, we are still seeing warming that is basically unprecedented.

Photo: NOAA

Today’s data release was just one of the myriad ways NOAA’s data and research touches our lives in important ways. I can’t help but wonder if, before someone leaves their house in the morning, and checks the weather forecast—will it rain? Will it be hot or cold?— do they wonder how those numbers come about? Do they realize the sheer amount of science that goes into saying what will happen in every small town across the country (and the world)?

Do people think about science at all when they go about their lives? And do they wonder how that science comes to be?

Probably not. But here is why they should.

Science is essential for climate and weather predictions

NOAA (short for the “National Oceanic and Atmospheric Administration”) is one of the lead agencies that helps provide that science. But NOAA’s mission and budget are increasingly under attack under the Trump administration. President Trump’s pick for the new NOAA administrator will soon be announced, and it’s critical that s/he take a strong stance to defend the mission and the budget of the agency.

The National Weather Service, administered by NOAA, is one of the most essential federal institutions for regular citizens’ everyday lives. It is there (and at the Climate Prediction Center) that the data collected by instruments managed by Federal agencies all over the globe, on air, land, and sea, turns into something as important as weather forecasts and seasonal climate outlooks. Data from satellites is routinely used by local stations for tornado warnings, and hurricane tracking is also provided courtesy of those satellites and other instruments, like tide gauges that show the water rising to a flooding threshold, which in turn triggers warnings from the NWS for the affected areas.

It takes very specific and detailed scientific and engineering training to build those instruments in the first place—tide gauges, satellites, thermometers, you name it. And then, science is needed to interpret and make sense of the raw data. And because most people would agree that better forecasts make for improved planning of one’s life—from daily activities to crop planting to storm preparedness—yes, you guessed it, we need better science.

Unfortunately, what we are seeing in this administration is not very promising when it comes to leveraging and supporting science. On many fronts—NASA, NOAA, EPA, DOI, DOE, to name a few—science is being dismissed or ignored, to the detriment of the environment and people like you and me. Proposed budgets include cuts to many scientific programs within agencies. One can’t help but wonder what the consequences (especially unforeseen ones) would be.

NOAA needs more, not less funding

Current funding is already strained to produce enough research to prepare for the increased seasonal variability that we are observing, and that is expected to increase with climate change. We are seeing more devastating floods and worsening wildfire seasons, and many of our coastal cities are seeing significantly more flooding at high tides and during storms, due to sea level rise.

The weather that makes up our climate is behaving so erratically that we need more, not fewer, resources to help us predict and prepare appropriately. Fortunately, Congress has held the line so far on keeping budgets for FY17 close to prior year levels rather than accepting the drastic reductions proposed by the administration. We are working hard to help ensure that this trend continues when Congress appropriates the FY18 budget. NOAA needs more funding to continue its climate monitoring program and to improve seasonal forecasts and operational programs, which in turn are essential for planning budgets at state and local levels, and for preparedness measures that can save resources, lives, and property.

Wouldn’t it be great if we could tell how much snow is REALLY coming so the right amount of road treatments can be allocated? Or how much rain is going to fall in a very short period of time and how much that river is going up after that rain? I think we can all agree on that.

The Weather Research and Forecasting Innovation Act of 2017, which was signed into law in April 2017, is a breath of fresh air into NOAA’s forecasting lungs—but it is not enough. It focuses on research into sub-seasonal to seasonal prediction, and better forecasts of tornadoes, hurricanes, and other severe storms, as well as long-range prediction of weather patterns, from two weeks to two years ahead. One important aspect of the Act is its focus on communication of forecasts to inform decision-making by public safety officials.

The Act had bipartisan support and was applauded by the University Corporation for Atmospheric Research (UCAR), a well-respected research institution. It was also championed by Barry Myers, the CEO of Accuweather and a frontrunner for the position of NOAA administrator. It is definitely a good step, and a long time coming, but we need more. We need continued support for these types of initiatives, and for the broader mission of NOAA.

We need a vision, and the resources to make it happen. We need an administrator who will turn that vision into reality.

NOAA is a lot more than weather forecasts

NOAA plays a large role in the US economy. It supports more than one-third of the US GDP, affecting shipping, commerce, farming, transportation, and energy supply. The data coming from NOAA also helps us maintain public safety and public health, and enable national security.

In addition to the NWS, other programs within NOAA are essential to track climate change and weather, such as the National Environmental Satellite, Data, and Information Service (NESDIS), which supports weather forecasts and climate research by generating over 20 terabytes of data daily from satellites, buoys, radars, models, and many other sources. Other important programs include the Office of Oceanic and Atmospheric Research (OAR) and the Coastal Zone Management Program at the Office for Coastal Management (OCM), part of the National Ocean Service (NOS).

Those programs provide state-of-the-art data that directly or indirectly affect all the aforementioned segments of Americans’ daily lives.

The US needs talent and resources to continue its top-notch work

In a recent blog, Dr. Marshall Shepherd laid out  the five things that the weather and climate communities need from a NOAA administrator: to offer strong support for research; to support the NWS; to fight back against the attack on climate science; to protect the satellite and Sea Grant programs; and to value external science expertise. I couldn’t agree more!

NOAA can be the cutting-edge science agency for a “weather ready nation” helping communities become more resilient as they prepare for climate change risks. All it needs is a great administrator, who will stand up for science and fight for the needed budget for the agency’s ever growing needs. Will the nominee be up for the job? And will Congress and the Trump administration continue to provide the budget the agency needs to do its job well?

Warhead Reentry: What Could North Korea Learn from its Recent Missile Test?

UCS Blog - All Things Nuclear (text only) -

As North Korea continues its missile development, a key question is what it may have learned from its recent missile test that is relevant to building a reentry vehicle (RV) for a long-range missile.

The RV is a crucial part of a ballistic missile. A long-range missile accelerates its warhead to very high speed—16,000 mph—and sends it arcing through space high above the atmosphere. To reach the ground it must reenter the atmosphere. Atmospheric drag slows the RV and most of the kinetic energy it loses goes into heating the air around the RV, which then leads to intense heating of the surface of the RV. The RV absorbs some of the heat, which is conducted inside to where the warhead is sitting.

So the RV needs to be built to (1) withstand the intense heating at its outer surface, and (2) insulate the warhead from the absorbed heat that is conducted through the interior of the RV.

The first of these depends on the maximum heating rate at the surface and the length of time that significant heating takes place. Number (2) depends on the total amount of heat absorbed by the RV and the amount of time the heat has to travel from the surface of the RV to the warhead, which is roughly the time between when intense heating begins and when the warhead detonates.

I calculated these quantities for the two cases of interest here: the highly lofted trajectory that the recent North Korean missile followed and a 10,000 km-range missile on a normal, minimum-energy trajectory (MET). The table shows the results.

The maximum heating rate (q) is only about 10% higher for the 10,000 km range missile than the lofted missile. However, the total heat absorbed (Q) is nearly twice as large for the long-range missile and the duration of heating (τ) is more than two and a half times as long.

This shows that North Korea could get significant data from the recent test—assuming the RV was carrying appropriate sensors and sent that information back during flight, and/or that North Korea was able to recover the RV from the sea. But it also shows that this test does not give all the data you would like to have to understand how effective the heatshield might be before putting a nuclear warhead inside the RV and launching it on a long-range missile.

Some details

The rate of heat transfer per area (q) is roughly proportional to ρV³, where ρ is the atmospheric density and V is the velocity of the RV. Since longer range missiles reenter at higher speeds, the heating rate increases rapidly with missile range. The total heat absorbed (Q) is the integral of q over time during reentry.

This calculation assumes the ballistic coefficient (β) of the RV is 48 kN/m² (1,000 lb/ft²). The heating values in the table roughly scale with β. A large value of β means less atmospheric drag, so the RV travels through the atmosphere at higher speed. That increases the accuracy of the missile but also increases the heating. The United States worked for many years to develop RVs with special coatings that allowed them to have high β and therefore high accuracy, but could also withstand the heating under these conditions.
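To make that scaling concrete, here is a minimal numerical sketch (not the author’s actual model): it integrates a non-lifting point mass through an exponential atmosphere at a fixed flight-path angle and reports the peak heating rate, total absorbed heat, and duration of significant heating. The heating constant, the entry speeds and angles, and the 5-percent “significant heating” threshold are all illustrative assumptions, so the heating outputs are in relative units and are not meant to reproduce the values in the table.

    import math

    # Illustrative constants (assumptions, except beta, which is the 48 kN/m^2 value above)
    G = 9.81                        # m/s^2, gravity (held constant)
    RHO0, H_SCALE = 1.225, 7000.0   # sea-level density (kg/m^3); atmospheric scale height (m)
    BETA = 48e3                     # ballistic coefficient, N/m^2
    C_Q = 1.0                       # arbitrary constant in q = C_Q * rho * V^3

    def reentry_heating(v0, gamma_deg, h0=120e3, dt=0.05):
        """Return (q_max, Q_total, tau) for entry speed v0 (m/s) and path angle gamma_deg."""
        gamma = math.radians(gamma_deg)     # flight-path angle below the horizontal
        h, v, t = h0, v0, 0.0
        history = []                        # (time, heating-rate) samples
        while h > 0.0 and v > 500.0:
            rho = RHO0 * math.exp(-h / H_SCALE)
            q = C_Q * rho * v ** 3          # heating rate per unit area ~ rho * V^3
            history.append((t, q))
            # drag deceleration = rho * V^2 * g / (2 * beta); gravity adds speed along the path
            v += (G * math.sin(gamma) - rho * v ** 2 * G / (2.0 * BETA)) * dt
            h -= v * math.sin(gamma) * dt
            t += dt
        q_max = max(q for _, q in history)
        q_total = sum(q for _, q in history) * dt               # Q = integral of q over time
        tau = sum(dt for _, q in history if q > 0.05 * q_max)   # time of "significant" heating
        return q_max, q_total, tau

    # Placeholder comparison of a steep (lofted-style) entry and a faster, shallower
    # (long-range-style) entry; these speeds and angles are not the test's actual values.
    for label, v0, gamma in (("steep, slower entry", 6000.0, 70.0),
                             ("shallow, faster entry", 7200.0, 22.0)):
        q_max, q_tot, tau = reentry_heating(v0, gamma)
        print(f"{label:>22}: q_max={q_max:.3g}  Q={q_tot:.3g}  tau={tau:.0f} s (relative units)")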

The results in the table can be understood by looking at how RVs on these two trajectories slow down as they reenter. Figs. 1 and 2 plot the speed of the RV versus time; the x and y axes of the two figures have the same scale. The maximum deceleration (slope of the curve) is roughly the same in the two cases, leading to roughly the same value of q. But the 10,000 km range missile loses more total energy—leading to a larger value of Q—and does so over a longer time than the lofted trajectory.

Ad Hoc Fire Protection at Nuclear Plants Not Good Enough

UCS Blog - All Things Nuclear (text only) -

A fire at a nuclear reactor is serious business. There are many ways to trigger a nuclear accident leading to damage of the reactor core, which can result in the release of radiation. But according to a senior manager at the US Nuclear Regulatory Commission (NRC), for a typical nuclear reactor, roughly half the risk that the reactor core will be damaged is due to the risk of fire. In other words, the odds that a fire will cause an accident leading to core damage roughly equal the odds from all other causes combined. And that risk estimate assumes the fire protection regulations are being met.

However, a dozen reactors are not in compliance with NRC fire regulations:

  • Prairie Island Units 1 and 2 in Minnesota
  • HB Robinson in South Carolina
  • Catawba Units 1 and 2 in South Carolina
  • McGuire Units 1 and 2 in North Carolina
  • Beaver Valley Units 1 and 2 in Pennsylvania
  • Davis-Besse in Ohio
  • Hatch Units 1 and 2 in Georgia

Instead, they are using “compensatory measures,” which are not defined or regulated by the NRC. Although these were originally intended as interim measures to be used while a reactor came into compliance with the regulations, some reactors have relied on them for decades rather than comply with the fire regulations.

The Union of Concerned Scientists and Beyond Nuclear petitioned the NRC on May 1, 2017, to amend its regulations to include requirements for compensatory measures used when fire protection regulations are violated.

Fire Risks

The dangers of fire at nuclear reactors were made obvious in March 1975 when a fire at the Browns Ferry nuclear plant disabled all the emergency core cooling systems on Unit 1 and most of those systems on Unit 2. Only heroic worker responses prevented damage to one or both reactor cores.

The NRC issued regulations in 1980 requiring electrical cables for a primary safety system to be separated from the cables for its backup, making it less likely that a single fire could disable multiple emergency systems.

Fig. 1 Fire burning insulation off cables installed in metal trays passing through a wall. (Source: Tennessee Valley Authority)

After discovering in the late 1990s that most operating reactors did not meet the 1980 regulations, the NRC issued alternative regulations in 2004. These regulations would permit electrical cables to be in close proximity as long as analysis showed the fire could be put out before it damaged both sets of cables. Owners had the option of complying with either the 1980 or 2004 regulations. But the dozen reactors listed above are still not in compliance with either set of regulations.

The NRC issued the 1980 and 2004 fire protection regulations following formal rulemaking processes that allowed plant owners to contest proposed measures they felt were too onerous and the public to contest measures considered too lax. These final rules defined the appropriate level of protection against fire hazards.

Rules Needed for “Compensatory Measures”

UCS and Beyond Nuclear petitioned the NRC to initiate a rulemaking process that will define the compensatory measures that can be substituted for compliance with the fire protection regulations.

The rule we seek will reduce confusion about proper compensatory measures. The most common compensatory measure is “fire watches”—human fire detectors who monitor for fires and report any sightings to the control room operators who then call out the onsite fire brigades.

For example, the owner of the Waterford nuclear plant in Louisiana deployed “continuous fire watches.” The NRC later found that they had secretly and creatively redefined “continuous fire watch” to be someone wandering by every 15 to 20 minutes. The NRC was not pleased by this move, but could not sanction the owner because there are no requirements for fire protection compensatory measures. Our petition seeks to fill that void.

The rule we seek will also restore public participation in nuclear safety decisions. The public had opportunities to legally challenge elements of the 1980 and 2004 fire protection regulations it felt to be insufficient. But because fire protection compensatory measures are governed only by an informal, cozy relationship between the NRC and plant owners, the public has been locked out of the process. Our petition seeks to rectify that situation.

The NRC is currently reviewing our submittal to determine whether it satisfies the criteria to be accepted as a petition for rulemaking. When it does, the NRC will publish the proposed rule in the Federal Register for public comment. Stay tuned—we’ll post another commentary when the NRC opens the public comment period so you can register your vote (hopefully in favor of formal requirements for fire protection compensatory measures).

BP Hosts Annual General Meeting Amid Questions on Climate Change

UCS Blog - The Equation (text only) -

Tomorrow, BP holds its Annual General Meeting (AGM) in London. BP shareholders are gathering at a time of mounting pressure on major fossil fuel companies to begin to plan for a world free from carbon pollution—as evidenced by last week’s vote by a majority of Occidental Petroleum shareholders in favor of a resolution urging the company to assess how the company’s business will be affected by climate change.

BP was one of eight companies that UCS assessed in the inaugural edition of The Climate Accountability Scorecard, released last October. BP responded to our findings and recommendations, but left important questions unanswered. Here are four questions that we hope BP’s decision makers will address at the AGM.

1) What is BP doing to stop the spread of climate disinformation—including by WSPA?

BP 2016 Score: Fair

In its own public communications, BP consistently acknowledges the scientific evidence of climate change and affirms the consequent need for swift and deep reductions in emissions from the burning of fossil fuels. BP left the climate-denying American Legislative Exchange Council (ALEC) in 2015 (without explicitly citing climate change as its reason for leaving).

Still, the company maintains leadership roles in trade associations and industry groups that spread disinformation on climate science and/or seek to block climate action.

For example, the Western States Petroleum Association (WSPA) made headlines in 2015 for spreading blatantly false statements about California’s proposed limits on carbon emissions from cars and trucks. The association employed deceptive ads on more than one occasion to block the “half the oil” provisions of a major clean-energy bill enacted by California lawmakers.

In response to a question at last year’s AGM about the misleading tactics of WSPA in California, CEO Bob Dudley said, “of course we did not support that particular campaign.” Yet according to the most recent data available, BP remains a member of WSPA and is represented on its board of directors.

Shareholders should be asking how BP communicated its disapproval of WSPA’s tactics in California to the association, and how WSPA responded. And how is BP using its leverage on the board of WSPA to end the association’s involvement in spreading climate disinformation and blocking climate action?

BP is also represented on the boards of the American Petroleum Institute (API) and the National Association of Manufacturers (NAM), both of which are named defendants in a lawsuit brought by youth seeking science-based action by the U.S. government to stabilize the climate system.

UCS’s 2015 report, “The Climate Deception Dossiers,” exposed deceptive tactics by the Western States Petroleum Association (WSPA).

2) Why did BP fund an attack on disclosure of climate-related risks and opportunities?

BP 2016 Score: Fair

BP—along with Chevron, ConocoPhillips, and Total SA—funded a new report criticizing the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD). The TCFD was set up by the Financial Stability Board (FSB), an international body that monitors and makes recommendations about the global financial system, in recognition of the potential systemic risks posed by climate change to the global economy and financial system. Through an open, collaborative process, the TCFD is recommending consistent, comparable, and timely disclosures of climate-related risks and opportunities in public financial filings.

A broad range of respondents in the TCFD’s public consultation supported its recommendations, and on Monday the We Mean Business coalition issued a statement expressing support for the TCFD recommendations and calling for G20 governments to endorse them. Members of We Mean Business include BSR (Business for Social Responsibility) and the World Business Council for Sustainable Development—both of which, in turn, count BP among their members.

Meanwhile, the US Chamber of Commerce will reportedly roll out the oil and gas company-sponsored report at an event this week. (We found no evidence that BP is a member of the US Chamber.)

In its own financial reporting, BP provides a detailed analysis of existing and proposed laws and regulations relating to climate change and their possible effects on the company, including potential financial impacts, and generally acknowledges physical risks to the company, including “adverse weather conditions,” but does not include discussion of climate change as a contributor to those risks.

So where does BP stand on climate-related disclosures? The company’s shareholders and the business community at large deserve to know, and tomorrow’s AGM is a good opportunity for CEO Bob Dudley to explain why BP’s funding isn’t aligned with its stated positions.

3) How is BP planning for a world free from carbon pollution?

BP 2016 Score: Poor

Both directly and through its membership in the Oil and Gas Climate Initiative, BP has expressed support for the Paris Climate Agreement and its goal of keeping warming well below a 2°C increase above pre-industrial levels.

Last month, the company signed a letter to President Trump supporting continued U.S. participation in the Paris Climate Agreement.

BP has adopted some modest measures to reduce greenhouse gas emissions from its internal operations. The company has set a cost assumption of $40 per tonne of CO2-equivalent for larger projects in industrialized countries, but it is not clear whether BP applies the price to all components of the supply chain.

The company has undertaken efforts to reduce emissions as part of the “Zero Routine Flaring by 2030” pledge, reports annually on low-carbon research and development, and offers a limited breakdown of greenhouse gas emissions from direct operations and purchased electricity, steam, and heat for a year.

Yet BP has no company-wide plan for reducing heat-trapping emissions in line with the temperature goals set by the Paris Climate Agreement. BP’s April 2017 Sustainability Report does little to address BP’s long-term planning for a low-carbon future. CEO Bob Dudley continues to insist that “we see oil and gas continuing to meet at least half of all demand for the next several decades.”

BP’s Energy Outlook webpage confirms that the company’s “Most Likely” demand forecasts, plans for capital expenditure, and strategic priorities are premised on a greater-than-3°C global warming scenario. BP also fails to provide a corporate remuneration policy that incentivizes contributions toward a clean energy transition (read ShareAction’s thorough and thoughtful analysis of BP’s remuneration policy here).

We look forward to hearing how BP responds to shareholder questions about the misalignment of its business plans and executive incentives with its stated commitment to keeping global temperature increase well below 2°C.

4) When will BP advocate for fair and effective climate policies?

BP 2016 Score: Good

BP consistently calls for a government carbon policy framework, including a global price on carbon, and touts its membership in the Carbon Pricing Leadership Coalition.

The question here is simple: when will BP identify specific climate-related legislation or regulation that it supports, and advocate publicly and consistently for those policies?

We will be awaiting answers from BP’s leadership at tomorrow’s AGM.

Three Reasons Congress Should Support a Budget Increase for Organic Agriculture Research

UCS Blog - The Equation (text only) -

Recent headlines about the US Department of Agriculture’s leadership and scientific integrity have been unsettling, as have indications that the Trump administration intends to slash budgets for agriculture and climate research and science more generally. But today there’s a rare piece of good news: a bipartisan trio in Congress has introduced legislation that would benefit just about everyone—farmers and eaters, scientists and food system stakeholders, rural and urban Americans. Not only that, but the new bill promises to achieve these outcomes while maintaining a shoestring budget.

Organic dairy producers need sound science to be able to make informed decisions about forage production for their herds. At this on-farm demonstration at the Chuck Johnson farm in Philadelphia, Tennessee, Dr. Gina Pighetti and her research team from the University of Tennessee and the University of Kentucky grow organic crimson clover (right) and wheat to develop best management practices that will help farmers make production decisions. Source: University of Tennessee.

Representatives Chellie Pingree (D-ME), Dan Newhouse (R-WA), and Jimmy Panetta (D-CA) are sponsoring the Organic Agriculture Research Act of 2017, which calls for an increase in mandatory funding for a small but crucial USDA research program, the Organic Agriculture Research and Extension Initiative (OREI). Congress allocated this program a mere $20 million annually in both the 2008 and 2014 Farm Bills, but that small investment stretched across the country with grants awarded in more than half of all states. The new bill proposes to increase that investment to $50 million annually in future years.

While a $30 million increase to a $20 million program may seem like a lot, it is worth noting that these numbers are small relative to other programs. For example, the USDA recently announced that its flagship research program, the Agriculture and Food Research Initiative (AFRI), will receive $425 million this year (another piece of good news, by the way). And many R&D programs at other agencies have much higher price tags (e.g., the NIH will receive $34 billion this year). But the return on investment in agricultural research is very high, so this increase could do a lot of good.

Students at UC Davis, under the leadership of Charles Brummer, Professor of Plant Sciences, examine their “jalapeño popper” crop, a cross between a bell pepper and a jalapeño pepper. This public plant breeding pipeline supports organic farming systems by designing new vegetable and bean cultivars with the particular needs of the organic farming community in mind. Source: UC Davis.

While there are many reasons we are excited about a possible budget boost for the Organic Agriculture Research and Extension Initiative (OREI), I’ll highlight just three:

1)  We need more agroecologically-inspired research. More than 450 scientists from all 50 states have signed our expert statement calling for more public support for agroecological research, which is urgently needed to address current and future farming challenges that affect human health, the environment, and urban and rural communities. This call is built upon agroecology’s successful track record of finding ways to work with nature rather than against it, producing nutritious food while also boosting soil health, protecting our drinking water, and more. Unfortunately, the diminishing overall support for public agricultural research is particularly problematic for agroecology, because this research tends to reduce farmers’ reliance on purchased inputs, which means that gaps in funding are unlikely to be filled by the private sector. So, programs that direct public funding more toward agroecological research and practice are particularly needed, and OREI is one of these.

2)  When it comes to agroecology, this program is a rock star. The OREI funds some of the most effective federal agricultural research, especially around ecologically-driven practices that can protect our natural resources and maintain farmer profits. One highlight of the program is that it stresses multi-disciplinary research; according to the USDA, “priority concerns include biological, physical, and social sciences, including economics,” an approach that can help ensure that research leads to farming practices that are both practical and scalable. Importantly, this program also targets projects that will “assist farmers and ranchers with whole farm planning by delivering practical information,” making sure that research will directly and immediately benefit those who need it most. But it’s not just the program description that leads us to believe this is a strong investment. In fact, our own research on competitive USDA grants found that OREI is among the most important programs for advancing agroecology. And this in-depth analysis of USDA’s organic research programs by the Organic Farming Research Foundation further highlighted the vital importance of OREI.

3) Research from programs like OREI can benefit all farmers, while focusing on practices required for a critical and growing sector of US agriculture. The OREI program is designed to support organic farms first and foremost, funding research conducted on certified organic land or land in transition to organic certification. However, the research from OREI can benefit a much wider group of farmers as well, as such results are relevant to farmers of many scales and farming styles, organic or not. Of course, directing funds to support organic farmers makes lots of sense, since this sector of agriculture is rapidly growing and maintaining high premiums that benefit farmers. But it’s important to recognize that the benefits of the research extend far beyond the organic farming community.

For all of the reasons listed above, this bill marks an important step in the right direction. It is essential that the next farm bill increases support for science-based programs that will ensure the long-term viability of farms while regenerating natural resources and protecting our environment. Expanding the OREI is a smart way forward.

 

One of Many Risks of the Regulatory Accountability Act: Flawed Risk Assessment Guidelines

UCS Blog - The Equation (text only) -

Tomorrow, the Senate will begin marking up Senator Rob Portman’s version of the Regulatory Accountability Act (RAA), which my colleague Yogin wrote a primer about last week. This bill is an attempt to impose excessive burdens on every federal agency to freeze the regulatory process or otherwise tie up important science-based rules in years of judicial review.

One of the most egregious pieces of this bill, and an affront to the expertise at federal agencies, is the provision ordering the White House Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) to establish guidelines for “risk assessments that are relevant to rulemaking,” including criteria for how best to select studies and models, evaluate and weigh evidence, and conduct peer reviews. This requirement on its own is reason enough to reject this bill, let alone the long list of other glaring issues that together would fundamentally alter the rulemaking process.

The RAA is a backdoor attempt at giving OIRA another chance to try and prescribe standardized guidelines for risk assessment that would apply to all agencies, even though each agency conducts different types of risk assessments based on statutory requirements.

OIRA should not dole out science advice

The way in which agencies conduct their risk assessments should be left to the agencies and scientific advisory committees, whether it is to determine the risks of a pesticide to human health, the risks of a plant pest to native plant species, the risks of a chemical to factory workers, or the risks of an endangered species determination to an ecosystem. Agencies conduct risk assessments that are specific to the matter at hand; therefore an OIRA guidance prescribing a one-size-fits-all risk assessment methodology will not be helpful for agencies and could even tie up scientifically rigorous risk assessments in court if the guidelines are judicially reviewable.

OIRA already tried writing guidance a decade ago, and it was a total flop. In January 2006, OMB released its proposed Risk Assessment Bulletin which would have covered any scientific or technical document assessing human health or environmental risks. It’s worth noting that OIRA’s main responsibilities are to ensure that agency rules are not overlapping in any way before they are issued and to evaluate agency-conducted cost-benefit analyses of proposed rules. Therefore OIRA’s staff is typically made up of economists and lawyers, not individuals with scientific expertise appropriate for determining how agency scientists should conduct risk assessments.

OMB received comments from agencies and the public and asked the National Academy of Sciences’ National Research Council (NRC) to conduct an independent review of the document. That NRC study gave the OMB a failing grade, calling the guidance a “fundamentally flawed” document which, if implemented, would have a high potential for negative impacts on the practice of risk assessment in the federal government. Among the reasons for their conclusions was that the bulletin oversimplified the degree of uncertainty that agencies must factor into all of their evaluations of risk. As a result, the document that OIRA issued a year later, under Portman’s OMB, was heavily watered down. In September 2007, OIRA and the White House Office of Science and Technology Policy (OSTP) released a Memorandum on Updated Principles for Risk Analysis to “reinforce generally-accepted principles for risk analysis upon which a wide consensus now exists.”

Luckily, in this case, the OMB called upon the National Academies for an objective review of the policy, which resulted in final guidelines that were far less extreme. As the RAA is written, it does not require that same check on OIRA’s work, which means that we could end up with highly flawed guidelines with little recourse. And the Trump administration’s nominee for OIRA director is Neomi Rao, a law professor whose work at the George Mason University Law School’s Center for the Study of the Administrative State emphasizes the importance of the role of the executive branch, while describing agency policymaking authority as “excessive.” I think it’s fair to say that under her leadership, OIRA will not necessarily scale back its encroachment into what should be expert-driven policy matters.

Big business is behind the RAA push

An analysis of OpenSecrets lobbying data revealed that trade associations, PACs and individuals linked to companies that have lobbied in support of the RAA also contributed $3.3 million to Senator Rob Portman’s 2016 campaign. One of the most vocal supporters of the bill is the U.S. Chamber of Commerce, whose support for the bill rests on the assumption that we now have a “federal regulatory bureaucracy that is opaque, unaccountable, and at times overreaching in its exercise of authority.” Yet this characterization actually sounds a lot to me like OIRA itself, which tends to be fairly anti-regulatory and non-transparent, and has a history of holding up science-based rules for years without justification (like the silica rule). Senator Portman’s RAA would give OIRA even more power over agency rulemaking by tasking the agency with writing guidelines on how agencies should conduct risk assessments and conveniently not requiring corporations to be held to the same standards.

When OIRA tried to write guidelines for risk assessments in 2006, the Chamber of Commerce advocated for OIRA’s risk assessment guidelines to be judicially reviewable so they could be “adequately enforced,” claiming that agencies use “unreliable information to perform the assessments,” which can mean that business and industry are forced to spend millions of dollars to remedy those issues. It is no wonder, then, that the Chamber would be so supportive of the RAA, which would mandate OIRA guideline development for risk assessments, possibly subject to judicial review. OIRA issuing guidelines is one thing, but making those guidelines subject to judicial review ramps up the egregiousness of this bill. All sorts of troubling scenarios could be imagined.

Take, for example, the EPA’s human health risk assessment for the pesticide chlorpyrifos, one of the studies that will inform the agency’s registration review of the chemical, which has been linked to developmental issues in children. The EPA sought advice from the FIFRA Scientific Advisory Panel on using a particular model, the physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) model, to better determine a chemical’s effects on a person based on their age or genetics and to predict how different members of a population would be affected by exposure. The agency found that there is sufficient evidence that neurodevelopmental effects may occur at exposure levels that are well below previously measured exposure levels.

If OIRA were to produce risk assessment guidelines that were judicially reviewable, the maker of chlorpyrifos, Dow Chemical Company, could sue the agency on the grounds that it did not use an appropriate model, consider the best available studies, or that its peer review was insufficient. This would quickly become a way for industry to inject uncertainty into the agency’s process and tie up regulatory decisions about its products in court for years, delaying important public health protections. A failure to ban a pesticide like chlorpyrifos based on inane risk assessment criteria would allow more incidences of completely preventable acute and chronic exposure, like the poisoning of 50 farmworkers in California from chlorpyrifos in early May.

“Risk assessment is not a monolithic method”

A one-size fits all approach to government risk assessments is a bad idea, plain and simple. As the NRC wrote in its 2007 report:

Risk assessment is not a monolithic process or a single method. Different technical issues arise in assessing the probability of exposure to a given dose of a chemical, of a malfunction of a nuclear power plant or air-traffic control system, or of the collapse of an ecosystem or dam.

Prescriptive guidance from OIRA would serve to squash the diversity and flexibility that different agencies are able to use depending on the issue and the development of new models and technologies that best capture risks. David Michaels, head of OSHA during the Obama Administration, wrote in his book Doubt Is Their Product that regulatory reform, and in this case the RAA, offers industry a “means of challenging the supporting science ‘upstream.’” Its passage would allow industry to exert more influence in the process by potentially opening up agency science to judicial review. Ultimately, the RAA is a form of regulatory obstruction that would make it more difficult for agencies to issue evidence-based rules by blocking the use of science in some of the earliest stages of the regulatory process.

The bill will be marked up in the Senate Homeland Security and Governmental Affairs Committee tomorrow, and then will likely move on to the floor for a full Senate vote in the coming months. Help us fight to stop this destructive piece of legislation by tweeting at your senators and telling them to vote no on the RAA today.

Reduce Risk, Increase Benefits: More Energy Progress for Massachusetts?

UCS Blog - The Equation (text only) -

A new analysis shows how strengthening a key Massachusetts energy policy can create jobs, cut pollution, and manage risks. Here are 5 questions (and answers) about what’s at stake and what the study tells us.

The study, prepared for the Northeast Clean Energy Council (NECEC) in partnership with Mass Energy Consumers Alliance, was carried out by two leading Massachusetts-based energy consulting firms, Synapse Energy Economics and Sustainable Energy Advantage (SEA). (UCS was part of an advisory working group providing input on assumptions and analytical approaches.)

An Analysis of the Massachusetts Renewable Portfolio Standard looks at what kind of benefits could come from strengthening that key policy. And the results look pretty attractive.

Why do we want more renewables?

First, back to basics: Why do we want renewables?  Turns out there are a lot of problems that renewables are a great answer to, from financial risks associated with potentially volatile fuel pricing (think natural gas), to pollution and associated negative public health impacts, to not enough jobs.

That was where we were coming from when we did a study last year about how we could cut our risk of overreliance on natural gas, make progress on climate change, and bring about other benefits. That study showed that a combination of policies to drive renewables could do all that, and at really reasonable costs.

How do we get renewable energy?

So if renewables are a good thing, how do we make them happen?

One of the most important policies for driving renewable energy in the US over the last two decades has been the renewable portfolio standard (RPS; also known as the renewable electricity standard). Under RPSs, utilities have targets for how much renewable energy they need to get for their customers by certain dates, and then let the market figure out the actual technology mix (wind, solar, etc.).

And Massachusetts has a particular leadership role for this particular policy. The Bay State was the first to put in place a state-wide RPS, and now 29 states have RPSs. They work, and they can do even more: a recent analysis by two premier national energy labs found good benefits from stronger RPSs: less pollution, potential savings, more jobs.

States can complement RPSs with policies aimed at particular technologies or approaches. Massachusetts has done that in a big new energy law that incorporated some of the policies we modeled in our study.

More clean energy leadership to come? (Credit: Tony Hisgett)

The 2016 Act to Promote Energy Diversity requires the state’s utilities to enter into cost-effective long-term contracts for renewable energy totaling 15-20% of the state’s electricity demand. It also requires utilities to go after offshore wind, to kick-start a major new source of clean energy, for another 10-15%.

Is too much of a good thing a bad thing?

In our 2016 study, before the legislation happened, we modeled versions of those policies coupled with a strengthening of the RPS. Why is that increase important? Because much of that renewable energy (not including large hydro, which is allowed to compete for those contracts) would count for meeting the Massachusetts RPS.

Alas, while the RPS increase was supported by the state senate, it didn’t make it into the final bill. Without that piece, we’re on track to end up with more renewable energy credits than the policy calls for (each megawatt-hour of renewable energy is worth one REC, and that’s what utilities use to show that they’ve complied with the RPS).

So are too many RECs a bad thing? No—except that if supply outpaces demand, REC prices fall (sometimes precipitously). And we need REC prices to be high enough to not only keep existing renewable energy projects online, but also drive new renewables. RPSs work, and part of keeping them working is keeping them just out in front of the market.
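To spell out the REC supply-demand arithmetic behind that concern, here is a minimal sketch; the sales, RPS, and generation figures are placeholders, not numbers from the Synapse/SEA analysis.

    # Minimal sketch of the REC supply-demand arithmetic described above.
    # All numbers are illustrative placeholders, not values from the Synapse/SEA study.

    def rec_balance(retail_sales_mwh, rps_fraction, eligible_renewable_mwh):
        """One MWh of eligible renewable generation earns one REC; utilities must
        retire RECs equal to the RPS fraction of their retail sales."""
        demand = rps_fraction * retail_sales_mwh   # RECs utilities must retire
        supply = eligible_renewable_mwh            # RECs created by eligible generation
        return demand, supply, supply - demand     # positive surplus -> downward price pressure

    # Hypothetical year: 50 TWh of retail sales, a 14% RPS, 8 TWh of eligible renewables
    demand, supply, surplus = rec_balance(50_000_000, 0.14, 8_000_000)
    print(f"REC demand: {demand:,.0f}  supply: {supply:,.0f}  surplus: {surplus:,.0f}")

A persistent surplus (supply exceeding demand) is what pushes REC prices down, which is why keeping the RPS percentage rising fast enough to absorb the new supply created under the Energy Diversity Act contracts matters so much.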

So, why this study?

That’s what makes this new study so important. To use the RPS to best effect for Massachusetts, we need to understand what level of RPS will be enough to keep the market for renewables strong across the board, to complement the long-term contracts for land-based renewables and offshore wind under the Energy Diversity Act.

The study looked at a base case and compared it with a range of possible approaches to keeping REC prices driving renewables by increasing RPS demand in Massachusetts (and in Connecticut, as the next biggest electricity market in New England). Specifically, Synapse/SEA modeled the Massachusetts RPS increasing 2% or 3% per year (instead of the current 1%), combined in some cases with a continuation of the Connecticut RPS’s 1.5%-per-year growth past its current 2020 end date.

They also looked at what would happen under those scenarios if natural gas prices were to increase, and what it might mean to move more quickly to electric vehicles.

Can we drive more renewables, and what do we get from them?

So what does all this mean?

Renewable energy demand – What would it mean for the REC supply-demand picture—specifically, would there be enough demand because of the RPS to drive the additional renewables we know we need?

Here’s what the analysis found:

Synapse/Sustainable Energy Advantage

As the graph shows, the RPS base case wouldn’t be expected to drive additional renewables beyond that required under the offshore wind and other long-term contracting provisions of the Energy Diversity Act. The higher RPS targets, on the other hand, could do the trick in terms of keeping REC prices able to drive more renewables.

Global-warming pollution reduction – How would that extra growth in renewable energy match our needs, in terms of the requirements under the state’s landmark Global Warming Solutions Act (GWSA), for example?

Good news there, too:

Synapse/Sustainable Energy Advantage

As the graph shows, if the increase in the RPS is paired with more vehicle electrification, it gets us most of the way to where we’ll probably need to be in 2030 based on the GWSA.

Electricity price and bill impacts – What about the finances? While getting the RPS in balance means that REC prices will go up, those increases are partially balanced by decreases in wholesale electricity prices because of the added renewables. For the average Massachusetts homeowner/billpayer, the study projects that the overall effect would be an electricity bill increase of $0.15 to $2.17 per month.

More renewables can also mean less natural gas, and a corresponding drop in risks from natural gas overreliance, which would be particularly important if gas prices were to rise:

Between 2018 and 2030, increasing the diversity of New England’s electricity mix by adding more renewables and reducing reliance on natural gas could save New England up to $2.1 billion in wholesale energy costs, in the face of a higher natural gas price.

Job creation – What would these policies do for employment? One (other) great thing about renewable energy is that it means jobs. In this case, even when taking into account reduced jobs in the fossil fuel sector, it could mean something like 37,000 extra jobs (job-years) between 2018 and 2030—on top of jobs created by the requirements under the 2016 Energy Diversity Act.

Seeking harmony

The overall conclusion of the study is that balance is better:

Two of Massachusetts’ key renewable energy policies—the RPS and long-term contracting authorizations—require harmonization in order for the Commonwealth to meet its long-term clean energy and climate goals.

The numbers suggest that getting that “harmonization” right would bring a load of benefits to Massachusetts and the region, and provide extra oomph for a state on the move toward a truly clean energy future.

See here for the study press release.

ConocoPhillips Shareholders to Consider Climate-Related Lobbying and Executive Perks

UCS Blog - The Equation (text only) -

Today ConocoPhillips holds a virtual annual shareholders’ meeting, where the company will face two significant climate-related resolutions. These resolutions intersect with some of the key findings and recommendations of UCS’s 2016 report The Climate Accountability Scorecard. ConocoPhillips responded to the report shortly after its release, and UCS has been engaging with company officials over the company’s climate-related positions and actions since then. We’ll be following the shareholders’ meeting with keen interest.

1) Lobbying disclosure

This proposal, filed by Walden Asset Management, calls for ConocoPhillips to report on its direct and indirect lobbying expenditures and grassroots lobbying communications at the federal, state, and local levels. It received the support of one-quarter of ConocoPhillips shareholders last year.

The resolution highlights ConocoPhillips’s representation on the Board of the US Chamber of Commerce (US Chamber) and the lack of transparency about the company’s payments to the US Chamber—including the portion of those payments used for lobbying. Last year alone, the US Chamber spent $104 million on lobbying.

While the US Chamber claims to represent the interests of the business community, few companies publicly agree with the group’s controversial positions on climate change. Last month, a range of civil society organizations urged Disney, Gap, and Pepsi to withdraw from US Chamber because of the inconsistency between their positions on climate change and the US Chamber’s lobbying on the issue.

Today the US Chamber is also reportedly hosting an event to highlight a new oil and gas-industry sponsored report attacking the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD), on which UCS submitted comments.

Chaired by former New York City Mayor and businessman Michael Bloomberg, the TCFD has conducted an open, collaborative process through which it is recommending consistent, comparable, and timely disclosures of climate-related risks and opportunities in public financial filings.

Implementation of these common-sense, mainstream recommendations by companies across all sectors of the economy—including transparent discussion of the business implications of a 2° Celsius scenario—would begin to fill gaps in existing disclosures and provide necessary data to investors and other stakeholders.

Indeed, some companies are already following such guidelines, and the broad range of respondents to the TCFD’s public consultation process were generally supportive of the recommendations. However, the IHS Markit report (funded by ConocoPhillips along with BP, Chevron, and Total SA) claims that adoption of the TCFD recommendations could obscure material information, create a false sense of certainty about the financial implications of climate-related risks, and distort markets.

This effort by the fossil fuel industry and the US Chamber to resist transparency is alarming, particularly in light of the oil and gas companies’ limited disclosure of physical and other climate-related risks to investors and in light of evidence that climate change poses financial risks to the fossil fuel industry. (And this pushback against corporate transparency is particularly alarming under a Trump administration that has close ties to the fossil fuel industry and has shown no inclination to hold these companies accountable).

ConocoPhillips’s affiliation with the US Chamber contributed to its “Poor” score in the area of Renouncing disinformation on climate science and policy in UCS’s Climate Accountability Scorecard.

ConocoPhillips is also represented on the Boards of the American Petroleum Institute (API) and the National Association of Manufacturers (NAM), two other trade associations that UCS has found to spread disinformation on climate science and/or block climate action. Both API and NAM are named defendants in Juliana vs. United States, a lawsuit through which 21 young people supported by Our Children’s Trust are seeking science-based action by the U.S. government to stabilize the climate system.

UCS recommends that ConocoPhillips use its role as chair of API and its leverage as a leader within NAM and the US Chamber to demand an end to the groups’ disinformation on climate science and policy, and speak publicly about these efforts.

The company has shown some discretion in managing its public policy advocacy: ConocoPhillips confirmed in 2013 that it was no longer a member of the American Legislative Exchange Council (ALEC), it provides good disclosure of its political spending, and it has extensive policies and oversight related to political activities in general.

2) Executive compensation link to 2 degrees transition


Another resolution, filed by the Unitarian Universalist Association (on whose Socially Responsible Investing Committee I serve), calls for a report to shareholders on the alignment of ConocoPhillips’s executive compensation incentives with a low-carbon future. Proponents are seeking information, for example, on the ways the company’s incentive compensation programs for senior executives link the amount of incentive pay to the volume of fossil fuel production or exploration and/or encourage the development of a low-carbon transition strategy.

In March, ConocoPhillips CEO Ryan Lance expressed support for the U.S. staying in the Paris Climate Agreement. However, in UCS’s Climate Accountability Scorecard, ConocoPhillips ranked “poor” in the area of Planning for a world free from carbon pollution.

On the positive side, the company provides details about efforts to improve energy efficiency, reduce natural-gas flaring, and reduce the intensity of emissions from oil sands. It uses carbon scenarios, including a low-carbon scenario, to evaluate its current portfolio and investment options. And ConocoPhillips has set limited, short-term emissions reduction targets—but not in the service of the Paris Climate Agreement goal of keeping warming well below a 2°C increase above pre-industrial levels.

Building on discussions at today’s annual shareholders’ meeting, UCS looks forward to further dialogue with ConocoPhillips over its response to our Scorecard findings and recommendations, toward improvements in the company’s climate-related positions and actions.


Five Ways to Move Beyond the March: A Guide for Scientists Seeking Strong, Inclusive Science

UCS Blog - The Equation (text only) -

The March for Science took place April 22 in locations all over the world — an exciting time for scientist-advocates and a proud moment for the global scientific community.

As we reflect on the March, we must also reflect on the fact that the organization of the March for Science 2017 was a microcosm of the structural, systemic challenges that scientists continue to face in addressing equity, access, and inclusion in the sciences.

Others have written eloquently about the steep learning curve that the March for Science Washington, DC organizers faced in ensuring an inclusive and equitable March. The organizers’ initial missteps unleashed a backlash on social media, lambasting their failure to design a movement for all scientists and exhorting them to consider more deeply the ways in which science interacts with the varying experiences of language, race, economic status, ableness, gender, religion, ethnic identity, and national origin.

The March has taken steps to address these initial missteps, choosing to engage directly with the issue and to consult with historically excluded scientists to better understand and examine the ways in which science interacts with the ongoing political reality of bias in society. It must be noted, however, that improvements like their new Diversity and Inclusion Principles, though an excellent initial step, still mask the unheralded efforts of multiple scientists of color to correct the narrative.

At the core of the controversy, and perhaps underlying its intellectual origins, is the popular fiction among scientists that Science can (or should) be apolitical.

Science is never apolitical.

It is, inherently, a system of gaining knowledge that has been devised by, refined by, practiced by, misused by, and even (at times) weaponized by human beings — and as human beings, we are inherently political.

Therefore science is not a completely neutral machine, functioning of its own volition and design, but rather a system that we tinker with and adjust; that we tune to one frequency or another; and by dint of which we may or may not determine (or belatedly rationalize) the course of human action.

And so when we understand that science is not apolitical, we are freed to examine the biases, exclusions, and blind spots it may create — and then correct for them. In doing so, we can improve ourselves, broaden the inclusivity of our work (and potentially improve its usefulness and/or applicability), and advance the quest of scientific inquiry: to find the unwavering truths of the universe.

The March for Science organizers have come a long way in recognizing the importance of diversity, equity, and inclusion in science, but what comes next? How can scientists living in this political moment engage in individual and collective action? (Hint: it’s not just about calling your representatives.) What can we do?

  1. Study the history and culture of science. As scientists, we are natural explorers and inherently curious. We ought to direct some of that curiosity toward ourselves: toward better understanding where we come from, who we are, and why we think the way we do. Historians of science and those engaged in the social study of science have demonstrated how science is a human enterprise, influenced by the culture and politics of specific times and places. These studies have shown how blind spots—in language, in culture, in worldview, in political orientation—can change our results, skew our data, or put a foot on the scales of measurement. At times, these biases have caused great harm; at others they have been fairly benign. Together, these analyses point out that science is more robust for recognizing sociocultural impacts on its practice.
  2. Understand our own political reality, and seek to understand the realities of others. Take some time — even ten minutes a week — to ask yourself if your actions reflect your beliefs. What beliefs do you hold dear, both as a scientist and as a person? How do they influence the way you think about, study, and conduct science? What do you assume to be true about the world? How does that impact the way in which you frame your scientific questions? How does it influence the methods, study sites, or populations you choose? How does the political reality which you inhabit—and its associated privileges and problems—direct your attention, shape your questions, or draw you to one discipline or the other? What presumptions do you make about people, about systems, or about the planet itself? What do you do, think, or feel when your assumptions are challenged? How willing are you to be wrong?
  3. Open the discourse. Inclusive science won’t happen by accident—it will happen because we work to eliminate the sources of bias in our systems and structures that list the ship toward one side or the other. And the only way we can learn about these sources of bias is to (1) acknowledge their existence, then (2) begin to look for them. Talk to other scientists—at conferences, on Twitter, on Facebook, on reddit, on Snapchat, through email chains, through list-servs—any way you can. Listen for the differences in your perspectives and approaches. Ponder on the political reality from which they might originate. Ask questions, and genuinely want to hear (and accept) the answers. Then go back and reconsider the questions regarding your political reality and how you could now approach your science based on what you have learned of others. As a clear example, western science has consistently overlooked the already-learned lessons of indigenous science and disregarded the voiced experiences of indigenous researchers. Greater recognition of—and collaboration with—indigenous scientists has the potential to greatly speed and improve advances in our work. Opening the discourse is a first step toward ameliorating this deficit in our learning.
  4. Collaborate, collaborate, collaborate. Reach out to scientists who do not look like you, do not speak your dialect, do not come from your country, do not share your values or religion, do not frame questions in the same way, and do not hold the same theories precious. Share equally in the experience of scientific discovery. Choose a journal that will assign multiple-first-authorships. Publish open-access if you can, and share directly if you can’t.
  5. Choose to include. Take responsibility at all stages—in the planning for science, the choosing of methods, the hiring of staff, the implementation—for creating strong, inclusive scientific teams and systems. Be aware of how your own political reality affects your scientific design, planning, or implementation. Check your unrecognized presumptions or biases. Challenge yourself to ask your question through a different lens or through different eyes. Choose to participate in the improvement and refinement of our shared scientific machine.

Ignoring politics doesn’t insulate us from it—if scientists want to be champions for knowledge, then we have to defend our practice from the human tendencies that threaten to unravel it—exclusion, tribalism, competition, and bias. Science can’t be apolitical, but it can be a better path to knowledge—so let’s make it happen.

 

Alexandra E. Sutton Lawrence is an Associate in Research at the Duke Initiative for Science & Society, where she focuses on analyzing innovation & policy in the energy sector. She’s also a doctoral candidate in the Nicholas School of the Environment, and a member of the Society for Conservation Biology’s Equity, Inclusion and Diversity Committee. She’s also a former member of the global governing board for the International Network of Next Generation Ecologists (INNGE).


Dr. Rae Wynn-Grant is a conservation biologist with a focus on large carnivore ecology in human-modified landscapes, with a concurrent interest in communicating science to diverse audiences. Dr. Wynn-Grant is the deputy chair of the Equity, Inclusion, and Diversity committee for the Society for Conservation Biology.


Cynthia Malone is a conservation scientist and social justice organizer, whose intersectional, trans-disciplinary research ranges from primate ecology to human-wildlife conflict across the tropics, including Indonesia and Cameroon. She is a cofounder and current co-chair of the Society for Conservation Biology’s Equity, Inclusion, and Diversity Committee.


Dr. Eleanor Sterling has interdisciplinary training in biology and anthropology and more than 30 years of field research and community outreach experience with direct application to biodiversity conservation in Africa, Asia, Latin America, and Oceania. She is active in the Society for Conservation Biology (SCB), having served for 12 years on the SCB Board of Governors, and currently co-chairs the SCB’s Equity, Inclusion, and Diversity Committee, which she co-founded. She also co-founded the Women in the Natural Sciences chapter of the Association for Women in Science in New York City.

 

Martha Groom is a Professor in the School of Interdisciplinary Arts and Sciences at the University of Washington Bothell and the College of the Environment at the University of Washington.  Her work focuses on the intersections of biodiversity conservation and sustainable development, and on effective teaching practice. A member of the SCB Equity, Inclusion and Diversity Committee, she is also a leader of the Doris Duke Conservation Scholars Program at the University of Washington, a summer intensive program for undergraduates aimed at building truly inclusive conservation practice.

 

Dr. Mary Blair is a conservation biologist and primatologist leading integrative research to inform conservation efforts, including spatial priority-setting and wildlife trade management. She is the President of the New York Women in Natural Sciences, a chapter of the Association for Women in Science, and a member of the Society for Conservation Biology’s Equity, Inclusion, and Diversity Committee.


Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Chevron, ExxonMobil Face Growing Investor Concerns About Climate Risk

UCS Blog - The Equation (text only) -

In preparation for their annual meetings on May 31, both Chevron and ExxonMobil opposed every climate-related resolution put forth by their shareholders. In a previous post, I wrote that Chevron continues to downplay climate risks while attempting to convince shareholders that the company’s political activities—which include support for groups that spread climate disinformation—are in shareholders’ long-term interests.

Now the proponents of a shareholder resolution calling for Chevron to publish an annual assessment of long-term impacts of climate change, including 2°C scenarios, have withdrawn it from consideration at the annual meeting.

In a carefully calibrated statement, investors Wespath and Hermes noted that the report “Managing Climate Change Risks: A Perspective for Investors” lacks a substantive discussion of Chevron’s strategies, but accepted the report as a first step and decided to give the company more time to explain how climate change is factored into its strategic planning.

Similar resolutions are gaining momentum with shareholders of utility and fossil fuel companies this spring, receiving more than 40% support at AES Corporation, Dominion Resources Inc., Duke Energy Corporation, and Marathon Petroleum Corporation. Last Friday, a majority of Occidental Petroleum Corporation shareholders voted in favor of a resolution (also filed by Wespath) calling on the company, with Board oversight, to “produce an assessment of long-term portfolio impacts of plausible scenarios that address climate change.”

ExxonMobil shareholders will vote on a comparable proposal in two weeks. In 2016, a resolution urging the company to report on how its business will be affected by worldwide climate policies received 38% support, the highest vote ever from company shareholders in favor of a climate change proposal.

The 2°C scenario analysis proposal, co-filed by the Church Commissioners for England and New York State Comptroller Thomas P. DiNapoli as Trustee of the New York State Common Retirement Fund, is on the agenda again this year, and a coalition of institutional investors with more than $10 trillion in combined assets under management is pushing for its adoption. (Look for a forthcoming blog on ExxonMobil’s 2017 annual shareholders’ meeting).

Chevron has bought some time from shareholders, but the company would be wise to improve its disclosures in response to growing investor concerns about the potential business, strategic, and financial implications of climate change. Instead, the company (along with BP, ConocoPhillips, and Total SA) funded a new report criticizing the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD—see below for additional details).

The US Chamber of Commerce will roll out the oil and gas company-sponsored report at an event this week. While the US Chamber claims to represent the interests of the business community, few companies publicly agree with the group’s controversial positions on climate change.

Meanwhile, carbon asset risk is still on the agenda for Chevron’s shareholders this month: the proposal on transition to a low-carbon economy filed by As You Sow will go forward to a vote. As UCS closely monitors Chevron’s and ExxonMobil’s communications and engagement with concerned shareholders over their climate-related positions and actions, our experts and supporters will be stepping up the pressure on both companies in the lead-up to their annual meetings at the end of May.

North Korea’s Missile in New Test Would Have 4,500 km Range

UCS Blog - All Things Nuclear (text only) -

North Korea launched a missile in a test early in the morning of May 14, North Korean time. If the information that has been reported about the test is correct, the missile has a considerably longer range than North Korea’s current missiles.

Reports from Japan say that the missile fell into the Sea of Japan after traveling about 700 km (430 miles) and flying for about 30 minutes.

A missile with a range of 1,000 km (620 miles), such as the extended-range Scud, or Scud-ER, would have a flight time of only about 12 minutes if flown on a slightly lofted trajectory that traveled 700 km.

A 30-minute flight time would instead require a missile that was highly lofted, reaching an apogee of about 2,000 km (1,240 miles) while splashing down at a range of 700 km. If that same missile were flown on a standard trajectory, it would have a maximum range of about 4,500 km (2,800 miles).

New press reports are in fact giving a 2,000 km apogee for the test.
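
For readers who want a rough sense of how the reported apogee translates into a range estimate, the short Python sketch below applies textbook vacuum ballistics over a non-rotating, spherical Earth. This is only an illustrative back-of-the-envelope calculation, not the model used to produce the estimates or Fig. 1: it ignores the boost phase, atmospheric drag, Earth's rotation, and the 700 km of horizontal travel on the lofted shot, so it somewhat underestimates the missile's capability, but it lands in the same ballpark as the roughly 4,500 km figure above.

    import math

    MU = 3.986e14   # Earth's gravitational parameter (m^3/s^2)
    R_E = 6.371e6   # Earth's mean radius (m)

    def burnout_speed_from_apogee(apogee_m):
        # Energy conservation for a simplified, purely vertical coast from the
        # surface up to apogee: v^2/2 - MU/R_E = -MU/(R_E + apogee)
        return math.sqrt(2.0 * MU * (1.0 / R_E - 1.0 / (R_E + apogee_m)))

    def max_range_km(v_burnout):
        # Standard ballistic-range relation for a surface-to-surface shot at the
        # optimal angle: sin(phi/2) = lam / (2 - lam), where lam = v^2 * R / MU
        # and phi is the ground-range angle.
        lam = v_burnout ** 2 * R_E / MU
        phi = 2.0 * math.asin(lam / (2.0 - lam))
        return phi * R_E / 1000.0

    v = burnout_speed_from_apogee(2000e3)   # reported apogee of about 2,000 km
    print("burnout speed ~%.0f m/s, max range ~%.0f km" % (v, max_range_km(v)))
    # Prints roughly 5,500 m/s and about 4,100 km; crediting the lofted shot's
    # horizontal motion pushes the estimate up toward the ~4,500 km quoted above.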

Fig. 1  The black curve is the lofted trajectory flown on the test. The red curve is the same missile flown on a normal, minimum-energy trajectory (MET).

This is considerably longer than the estimated range of the Musudan missile, which demonstrated a range of about 3,000 km in a test last year. Guam is 3,400 km from North Korea. Reaching the US West Coast would require a missile with a range of more than 8,000 km; Hawaii is roughly 7,000 km from North Korea.

This missile may have been the new mobile missile seen in North Korea’s April 15 parade (Fig. 2).

Fig. 2 (Source: KCNA)

Shake, Rattle, and Rainout: Federal Support for Disaster Research

UCS Blog - The Equation (text only) -

Hurricanes, wildfires, and earthquakes are simply natural events—until humans get in their way. The resulting disasters are particularly devastating in urban areas, due to high concentrations of people and property. Losses from disasters have risen steadily over the past five decades, thanks to growing populations and urban development in high-hazard areas, particularly along the coasts. There is also significant evidence that climate change is making weather-related events more frequent and more severe. As a result, it is more critical than ever that natural hazards research be incorporated into emergency planning decisions.

NOAA map of billion-dollar weather and climate disasters in 2016.

Improving emergency planning for the public’s benefit

A handful of far-sighted urban planning and management researchers, with particular support from the National Science Foundation, began studying these events during the 1970s. I participated in two of these research studies. Both afforded me clear opportunities to make a difference in people’s lives, a major reason I chose my field.

In 2000, a group of researchers from the University of New Orleans and Tulane University looked into the effects of natural hazards on two communities: Torrance, CA (earthquakes) and Chalmette, LA (hurricanes). This research focused on the oil refineries in both communities. We looked at emergency-management protocols, potential toxic effects due to refinery damage, and population impacts.

Hurricane Katrina oil spill in Chalmette: oil tanks and streets covered with an oil slick. Photo: US EPA (http://www.epa.gov/katrina/images/oilspill_650.jpg)

Although California has a far better-developed emergency management system at all levels of government, Chalmette was less vulnerable than Torrance because of the advance warning available for hurricanes. We also found that, though even well-informed homeowners tend to be less prepared than expected, renters are more vulnerable to disaster effects due to inadequate knowledge, dependence on landlords to secure their buildings, and generally lower socioeconomic status. Our findings had major implications for community-awareness campaigns, suggesting that more than disaster “fairs,” public flyers, and media attention is needed. We concluded with a series of recommendations for emergency managers and planners to improve their communities’ prospects.

This conjoint-hazard research also stimulated in-depth studies of the various aspects of what is now called “natech” (natural-hazard-triggered technological accidents). For example, a pair of researchers subsequently found that natural hazards were the principal cause of more than 16,000 releases of hazardous materials between 1990 and 2008—releases that could have been prevented with better hazard-mitigation planning and preparation. The implications for regulation of businesses that use hazardous substances are obvious. So are the ramifications for public outreach and disaster response.

The second NSF-funded study, conducted at Florida Atlantic University, began in the aftermath of Hurricane Katrina. Before starting, we scoured the literature for earlier research on housing recovery, only to discover that most of it dealt with either developing countries or one or two earthquake events in California.

We focused on housing recovery along the eight-state “hurricane coast” from North Carolina south and west to Texas. A case study of New Orleans quickly revealed the extent to which local circumstances, population characteristics, and state and federal policies and capacity impaired people’s ability to restore their homes and rebuild their lives. We assembled data on the socioeconomic, housing, and property-insurance characteristics of the first- and second-tier coastal counties, as well as information about state and local disaster-recovery policies and planning.

The research team then developed a vulnerability index that provides a numerical snapshot for each county, as well as a series of indicators that contributed to the overall rating. These indicators can be used to evaluate specific areas in need of improvement, such as building regulations, flood-protection measures, and reconstruction policies—for example, restrictions on temporary housing—as well as the extent to which each area contributes to overall vulnerability.
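
The post does not describe how the index itself was put together, but composite indexes of this kind are often built by normalizing each indicator across counties and then taking a weighted average. The Python sketch below is purely illustrative of that general approach; the indicator names, weights, and values are hypothetical and are not the study's actual data or methodology.

    from typing import Dict, List

    def normalize(values: List[float]) -> List[float]:
        # Min-max scale one indicator across all counties to the 0-1 range.
        lo, hi = min(values), max(values)
        if hi == lo:
            return [0.0] * len(values)
        return [(v - lo) / (hi - lo) for v in values]

    def vulnerability_index(counties: Dict[str, Dict[str, float]],
                            weights: Dict[str, float]) -> Dict[str, float]:
        # Weighted average of normalized indicators; higher means more
        # vulnerable under this hypothetical scheme.
        names = list(counties)
        scores = {name: 0.0 for name in names}
        total_weight = sum(weights.values())
        for indicator, weight in weights.items():
            scaled = normalize([counties[name][indicator] for name in names])
            for name, value in zip(names, scaled):
                scores[name] += weight * value / total_weight
        return scores

    # Hypothetical indicator values for two counties (not data from the study).
    counties = {
        "County A": {"weak_building_codes": 0.8, "flood_exposure": 0.6, "renter_share": 0.5},
        "County B": {"weak_building_codes": 0.3, "flood_exposure": 0.9, "renter_share": 0.7},
    }
    weights = {"weak_building_codes": 0.4, "flood_exposure": 0.4, "renter_share": 0.2}
    print(vulnerability_index(counties, weights))   # County A ~0.4, County B ~0.6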

Science informs public policies

Although imperfect, indexes do provide policy-makers and stakeholders with valuable insights. Moreover, our analysis of post-disaster housing policies revealed the inadequacies in federal provision of temporary housing, the most critical need once community safety has been restored. The controversies surrounding FEMA’s travel-trailers—high cost, toxic materials, and haphazard placement—made national news. Now there is increasing recognition that small, pre-fabricated houses are a better approach, presuming that local jurisdictions allow them to be built regardless of pre-disaster construction regulations. More planners are engaged in looking at these regulations with disaster recovery in mind.

I’m proud of the research I’ve contributed to, but I’m even more gratified with the impacts of that research. Many of our recommendations have been directed at government actors, and it is through those actors that real differences are made in people’s day-to-day lives—and in their resiliency in the face of disaster. In an era of accelerating environmental change, helping communities endure will be ever more dependent on cutting-edge research of this kind. I’m grateful to have had the opportunity to participate in the endeavor.

 

Joyce Levine, PhD, AICP, received her PhD from the University of New Orleans. As an urban planner with thirty years of experience, she became interested in pre- and post-disaster planning while preparing her dissertation under hazard-mitigation guru Raymond J. Burby. She participated in two NSF-funded projects that focused on hazard-prone states — California and Louisiana in the first, and the southern “hurricane coast” in the second. She is the author of an extensive study of the housing problems in New Orleans reported by government and the media during the first six months after Katrina. Although she has retired from academia, she continues to follow disaster research in the U.S.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Graphic: NOAA
