Combined UCS Blogs

Maine Hits Clean Energy Grand Slam


As a baseball fan, I’m looking forward to watching the best players in the world compete for bragging rights in the 90th Major League Baseball All-Star game tonight. As a Maine resident for the past 11 years, I’m even more thrilled to see Maine regain its all-star status as a clean energy leader.

Thanks to the leadership of Governor Janet Mills and strong bi-partisan support in the legislature, Maine hit a clean energy grand slam this year, passing several major climate and clean energy bills. In addition to creating new jobs and reducing the state’s reliance on imported oil and natural gas, these laws will put Maine on a pathway to achieve its statewide target of reducing global warming emissions 80 percent below 1990 levels by 2050.

UCS was part of a broad coalition of groups representing businesses, municipalities, and clean energy advocates that supported these bills.

On June 26, Governor Mills signs 3 major climate and clean energy bills into law at the state’s largest solar farm in Pittsfield.

Increasing renewable energy to 80% by 2030

LD 1494 doubles Maine’s renewable portfolio standard (RPS) from 40% by 2017 to 80% by 2030 and sets a goal of 100% renewables by 2050. This puts Maine at the top of the batting order, with the highest RPS in the country by 2030. Maine’s RPS surpasses renewable standards of 50% or more by 2030 recently adopted by other leading states (CA, NY, NJ, NM, NV and VT), as shown in the map below.

Sponsored by Sen. Eloise Vitelli (D-Arrowsic), LD 1494 was enacted with strong bi-partisan support, passing the Senate by a unanimous vote of 34-0 and the House 93-48. In addition to testifying in support of the bill, I was a peer reviewer of an analysis of the bill conducted by Synapse Energy Economics and Sustainable Energy Advantage. The study found that the policy is affordable and would deliver the following benefits between 2020 and 2030:

  • Install 700 MW of new renewable energy capacity in Maine.
  • Create 1,900 new jobs in Maine, or about 170 per year.
  • Reduce electric sector global warming emissions attributable to Maine by 55 percent.
  • Avoid $500,000 per year in health-related damages from burning fossil fuels.
  • Result in a modest 1.1% increase in monthly residential and small commercial electricity bills.

Maine’s 80% RPS makes the state well-positioned to benefit under a national RPS. The same day Gov. Mills signed LD 1494 into law, Senator Tom Udall (D-NM) introduced a national RPS that would more than double the supply of renewable energy to 50% of US electricity generation by 2035. A UCS analysis showed that a 50% national RPS would boost the US economy, benefit consumers, and put the nation on a pathway to decarbonize the power sector by 2050. Senator King co-sponsored the bill because of the potential economic and environmental benefits to Maine of selling renewable energy credits to utilities in other states to help them meet their targets. We hope Senator Collins follows suit, as she has voted in favor of a national RPS at least four times when it has come up for a vote in the US Senate over the past two decades (see votes in 2002, 2005, and 2015).

Joining the solar revolution

While the RPS is expected to drive significant investments in utility-scale solar projects, LD 1711 is a complementary policy that will allow all of Maine’s residents, businesses, and municipalities to become more energy independent by investing in distributed solar projects. Sponsored by Senator Dow (R-Lincoln), it uses competitive markets to deploy at least 400 MW of distributed solar projects of 5 MW or less, with prices that decline over time as more solar is deployed.

By removing arbitrary limits on community solar projects, the law provides greater access to clean, affordable power for renters and homeowners alike, and it includes provisions that will increase solar investments in low- and moderate-income households. It will also enable businesses, schools and municipalities to invest in larger solar projects, and it provides incentives to install projects on landfills and brownfields.

Despite being a northern state, Maine has a much better solar resource than you might expect. According to data from the National Renewable Energy Lab (NREL), a solar PV system installed in Portland will generate slightly more electricity than a system installed in Houston and only 5 percent less than a system installed in Miami.
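For readers who want to check this kind of comparison themselves, NREL publishes a free PVWatts API that models a system’s output anywhere in the country. The sketch below is illustrative only: the coordinates, system size, tilt, and loss assumptions are mine rather than the parameters behind the figures quoted above, and DEMO_KEY should be swapped for your own API key.

    # Compare modeled annual solar output across cities using NREL's
    # public PVWatts API (v8). System parameters are illustrative
    # assumptions, not the inputs behind the figures quoted above.
    import requests

    API_KEY = "DEMO_KEY"  # replace with a free key from developer.nrel.gov
    CITIES = {
        "Portland, ME": (43.66, -70.26),
        "Houston, TX": (29.76, -95.37),
        "Miami, FL": (25.76, -80.19),
    }

    for name, (lat, lon) in CITIES.items():
        resp = requests.get(
            "https://developer.nrel.gov/api/pvwatts/v8.json",
            params={
                "api_key": API_KEY,
                "lat": lat, "lon": lon,
                "system_capacity": 5,  # kW DC, a typical rooftop system
                "module_type": 0,      # standard modules
                "array_type": 1,       # fixed, roof-mounted
                "tilt": 30, "azimuth": 180, "losses": 14,
            },
            timeout=30,
        )
        ac_annual = resp.json()["outputs"]["ac_annual"]
        print(f"{name}: {ac_annual:,.0f} kWh per year")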

This bill has a long history going back at least five years. The original proposal was developed by the Maine Public Advocate’s Office following the Maine Value of Solar (VOS) Study in 2014. In addition to participating in the VOS study, I represented UCS in a diverse stakeholder process at the Maine PUC to revise the proposal, which was eventually introduced as legislation. Previous versions of the bill passed the Legislature with bi-partisan support, only to be vetoed by former Governor Paul LePage, who was also a vocal opponent of the RPS.

Governor Mills also signed LD 91 on April 2 to eliminate so-called “Gross Metering,” reversing a recent PUC decision that penalized homeowners and businesses for going solar. When combined with LD 91, LD 1711 will finally unleash solar investment in Maine.

Reducing global warming emissions 80% by 2050

LD 1679 establishes the Maine Climate Council, which is charged with developing action plans to reduce Maine’s global warming emissions 45% below 1990 levels by 2030 and at least 80% by 2050. The bill also promotes clean energy jobs and climate resiliency for local communities as Maine transitions to a low-carbon economy. Sponsored by Senator David Woodsome, a Republican from York, this bill shows that climate and clean energy policy is not a partisan issue in Maine.

Electrifying transportation and buildings

Electrifying vehicles, buildings and industry with clean energy has been identified as a key strategy for replacing fossil fuel use in other sectors and achieving deep cuts in emissions. Maine adopted several policies this legislative session that will help accomplish this.

Maine’s clean energy future looks bright

We applaud Governor Mills and the Maine legislature for passing strong, bi-partisan clean energy legislation that recognizes the urgency of the climate crisis and takes meaningful steps to address it. Maine can finally rejoin the big leagues and regain its all-star status as a clean energy leader, with policies that put the state on a pathway to achieve significant cuts in global warming emissions.


How Do Power Grids Beat the Summer Heat?



In the searing heart of summer, when blazing days stack end on end and the air hangs heavy and still, the power grid gets put to the test as people turn to air conditioners to find reprieve.

Millions upon millions of air conditioners, cranking away on rooftops, in windows, behind buildings; block by block, business by business, home by home: together, these many machines can add up to a major increase in electricity demand.

In Texas, grid operators estimate that such sweltering summer days can result in a doubling of peak electricity use compared with spring.

When summer rolls around, people start using a lot more electricity to try to stay comfortable. In Texas, grid operators track this surge in summer use and estimate that nearly half of it can be attributed to the weather. Credit: ERCOT.

At the same time, many power plants and power grid components can themselves struggle in the face of sky-high heat, which means even more strain is placed on the grid right when it’s needed most.

The upshot is that while most of us are lying low to try to beat the heat, the power grid is in an all-out sprint to ensure that it keeps up. And that means grid operators pull out all the stops, from long-range planning to in-the-moment operations, targeting both supply and demand.

Some of these approaches, like keeping polluting power plants around to run just a few times a year, are costly and inefficient. But as cleaner resources come online and technologies on the grid evolve, new and exciting solutions are emerging that are not only cleaner but cheaper, too.

These advances couldn’t come at a more critical time. Climate change increasingly points toward more dangerous high-heat conditions that threaten health and well-being, especially in the absence of cooling. That makes it all the more important to increase access to cooling itself, and to ensure the resilience and reliability of the grid that enables it.

Planning

The foundation of reliable grid operations is planning: estimating how much electricity people will need and whether there are enough resources around for that need to be met—including during heat waves, and including during heat waves when unexpected incidents occur.

One check on this is the annual summer reliability assessment conducted by the nation’s top reliability cop, the North American Electric Reliability Corporation (NERC), which evaluates just such questions for every region of the grid.

In its annual summer reliability assessment, NERC examines each region of the grid and assesses sufficiency of resources to meet potential summer needs. For the vast majority of regions in 2019, resources far outpace anticipated needs. Credit: NERC (2019).

This consideration of “resource adequacy” and “reserve margins” ends up shaping grid decisions large and small, which makes it critically important to get the underlying assumptions just right. Otherwise, for example, uneconomic power plants might be unnecessarily kept around, wasting consumer money and hindering the transition to clean electricity. Or operators might not recognize that the timing and magnitude of peak demand can change rapidly as installations of rooftop solar surge across the country, with abundant solar power easing afternoon grid stress and in turn pushing the peak later and lower in the day.

As ISO-New England illustrates, as more and more rooftop solar gets added to the system, the summer load profile changes, with peak demand not only dropping lower but also shifting later. Credit: ISO-NE.
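The mechanics are easy to see with a toy calculation. The hourly numbers below are made up for illustration (they are not ISO-NE data); the point is simply that net load, demand minus solar, peaks both lower and later than gross demand.

    # Toy example: rooftop solar lowers and delays the peak.
    # Hourly values are illustrative, in GW, for hours 0-23.
    demand = [18, 17, 16, 16, 17, 19, 21, 23, 25, 26, 27, 28,
              29, 30, 30, 30, 29, 28, 27, 26, 24, 22, 20, 19]
    solar  = [0, 0, 0, 0, 0, 0.5, 1.5, 3, 4.5, 5.5, 6, 6.5,
              6.5, 6, 5, 3.5, 2, 1, 0.3, 0, 0, 0, 0, 0]

    # Net load is what the rest of the grid actually has to serve.
    net = [d - s for d, s in zip(demand, solar)]

    gross_peak = max(range(24), key=lambda h: demand[h])
    net_peak = max(range(24), key=lambda h: net[h])
    print(f"Gross peak: {demand[gross_peak]} GW at hour {gross_peak}")
    print(f"Net peak:   {net[net_peak]:.1f} GW at hour {net_peak}")
    # Gross peak: 30 GW at hour 13; net peak: 27.0 GW at hour 16.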

Operations

But long-range planning is really just the start. Next up is making sure that all those power generating, power saving, and power transmitting resources can actually be used.

Power plants and supporting grid infrastructure routinely undergo maintenance, meaning sometimes they have to go offline. If fixes are quick, then outages during low-demand seasons like spring mean there’s still plenty of slack on the system to mitigate effects. For longer lasting outages, though, operators plan ahead to ensure that not too many outages are planned at once, and that enough resources remain available to make it through those hottest summer days.

Yet even when operators do the best planning they can, equipment still breaks and accidents still happen. And then, too, there’s the fact that summer heat can itself wreak havoc on the grid.

For example, virtually all coal and nuclear power plants require cool water to run; in the summer, as hot weather steadily drives up the temperatures of area waterways, that water can eventually get too hot to be of cooling use or can be limited by drought, meaning those massive generators have no choice but to curtail operations or even entirely shut off—right when we need their power the most.

High temperatures can also decrease the efficiency of transmission lines and increase the likelihood of a disruption on the system, which, if not rapidly addressed, can quickly cascade into far larger outage events, like the 2003 Northeast blackout, which thrust over 50 million people into the dark. And now in California, in the face of climate change and the growing threat of wildfires, utilities are starting to pre-emptively shut off transmission lines in high-risk areas during high-risk days to minimize the chances of sparking a new blaze.

All of which means operators need to keep a watchful eye on the grid, managing resources so they are prepared for contingencies and relying on weather forecasters to help them anticipate exactly when such conditions might arise, so they can proactively plan for how to overcome them.

Markets and management

And when a heat wave does finally arrive? After all the planning, and the operations, and the forecasts—how does the grid actually manage that overwhelming dystopian symphony of compressors cycling on and off, on and off, day after day after day?

With good offense and good defense.

First, there’s the usual starting lineup of least-cost, most efficient, and often cleanest resources, ready and reporting for action (minus those lost to outages, planned or otherwise). These are the ones that are typically relied on all throughout the year.

Then, as hot days continue and the demand for power grows, the grid is increasingly forced to call on its back bench: the more expensive and less efficient “peaker plants,” some of which run only a handful of times a year.

Peaker plants often take the form of combustion turbines, and can be sited right in the heart of communities—which means on some of the worst air quality days of the year, these polluters are roaring to life, exacerbating exposures to already unhealthy air.

Unsurprisingly, inefficient plants running just a few times a year end up being quite expensive, too. In regions with energy markets, this is when prices on the grid start to spike.

Prolonged hot weather can send real-time electricity prices surging, as seen in this chart of a 2013 heatwave in New York. Credit: EIA.

This way of running the power system is ripe for disruption, and indeed recently, new technologies have started to edge in. In particular, peaker plants are beginning to be replaced by combined solar-plus-storage projects. These projects couple solar power plants with battery energy storage, resulting in clean, reliable, and rapidly dispatchable resources, useful not just in those peak moments, but in fact the whole year round.

But during heat waves, it’s not just power plants coming in to save the day—it’s everyday people, too. That’s because a huge part of responding to peak demand is actually lowering power demand itself during those very highest hours.

Some of these actions are systematic: utilities can permanently preclude the need for that last peaker plant by incentivizing people and businesses to use less electricity during those highest hours of the day, not just during heat waves, but every day. They can guide that response through time-varying electricity rates, which are high during high-demand hours and lower during the rest, to encourage shifting of flexible electricity use, like running a dishwasher or drying laundry, away from periods of grid stress.
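A quick sketch of the arithmetic shows why the price signal works; the rates and the 2 kWh flexible load (roughly a dishwasher cycle) are hypothetical numbers, not any utility’s actual tariff.

    # Flat vs. time-of-use (TOU) pricing for a 2 kWh flexible load.
    # All rates are hypothetical.
    FLAT_RATE = 0.15     # $/kWh at all hours
    TOU_PEAK = 0.30      # $/kWh during high-demand hours (say, 2-7 pm)
    TOU_OFF_PEAK = 0.10  # $/kWh the rest of the day

    flexible_kwh = 2.0   # a dishwasher or laundry cycle

    print(f"Flat rate, any time:        ${flexible_kwh * FLAT_RATE:.2f}")
    print(f"TOU, run at 5 pm (peak):    ${flexible_kwh * TOU_PEAK:.2f}")
    print(f"TOU, run at 9 pm (shifted): ${flexible_kwh * TOU_OFF_PEAK:.2f}")
    # Under the flat rate there is no reason to shift; under TOU,
    # moving the same cycle off-peak saves $0.40 and eases grid stress.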

Other demand-side interventions are more specific to major peak events. For example, in exchange for handsome compensation, electricity customers can agree to be called on a few times a year to ease their electricity consumption, much of which can be done automatically, like raising thermostats several degrees, or stopping industrial operations, or flipping off every other bank of lights in a big box store, all to avoid bringing the costliest final power plants online.

Yet even after all of that, sometimes, it’s still just not enough. Despite all the planning, all the power plants, all the demand response—still, more power is needed than the grid is able to give. That means first, looking to neighboring regions to see if they might have some electricity to spare and sell. Especially when weather varies across regions, sharing of resources can be an effective and efficient solution.

But sometimes, especially when wide swaths of the country are enduring a heat wave at once, it’s still just not enough. And that means turning to the extreme last resort of “load shedding,” the forgiving term assigned to cutting power to some consumers to keep the lights on for the rest. Because if not, and the grid gets overloaded, it can quickly become lights off not just for some but for all.

Looking ahead

To get through a heat wave, grid operators employ a highly dynamic approach informed by careful planning, toggling switches and turning dials to modulate supply and demand. And it turns out, this dynamic method of operations is in fact where the grid of the future is headed, as more variable renewable resources like wind and solar come online and technologies support far more flexibility and coordination in when and how electricity is consumed.

Indeed, grid management of heat waves can teach us a lot about how to get the most out of the resources we want, and how to limit our use of the ones we don’t.

It also elevates the critical importance of paying attention to these challenges, and proactively planning for an increasingly flexible, resilient, and reliable grid as we face the growing strain and stress of climate impacts in the years to come. Because during a heat wave, reliable access to electricity isn’t only a matter of comfort—it’s a significant matter of health and safety, too. Which makes it all the more important to ensure that grid operators aren’t just prepared for heat waves this summer, but are also looking out for worsening conditions to come.

Postscript: If you want to track heat waves rippling across the grid this summer, take a look at the data feeds available at the national level and from the regional grid operators.


6 Maps That Show How Bad Energy Poverty Is and Reveal 2 Ways to Make It Better


Fresh off the presses is the latest tool from the US Department of Energy (DOE) that generates color-coded maps (known as choropleths) of the deep energy burden many Americans face. There is a wealth of data and hundreds of different ways to display it (you can check it out for yourself here). I’ve chosen 3 sets of maps (6 maps in total) that show the extent of the energy burden but also illuminate a couple of ways we can address the problem of energy poverty.
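To give a flavor of what such a tool does under the hood, here is a minimal choropleth sketch in Python. The file name and column names (“state” holding two-letter codes, “energy_burden_pct”) are hypothetical stand-ins for an export of the DOE data, not the tool’s actual schema.

    # Minimal state-level choropleth of energy burden.
    # The CSV and its columns are hypothetical placeholders.
    import pandas as pd
    import plotly.express as px

    df = pd.read_csv("energy_burden_by_state.csv")

    fig = px.choropleth(
        df,
        locations="state",          # two-letter state abbreviations
        locationmode="USA-states",
        color="energy_burden_pct",  # % of income spent on energy
        scope="usa",
        color_continuous_scale="Reds",
        labels={"energy_burden_pct": "Energy burden (%)"},
    )
    fig.show()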

The energy burden is incredibly regressive

Economists like to talk about ‘progressive’ and ‘regressive’ policies. These aren’t political descriptors; rather, they describe the distributive effects of various policies. Progressive policies place a higher burden on you as you move up the income ladder. Regressive policies place a higher burden on lower-income folks. Generally, decisionmakers try to avoid creating regressive policies. However, we haven’t succeeded on that front when it comes to energy.

The energy burden is far worse for those already living in poverty. Data from the DOE Office of Energy Efficiency and Renewable Energy.

The map on the left isn’t all that exciting: it shows the percent of household income spent on household energy bills (electric, heating, stuff like that) for households that have a combined income four times higher than the federal poverty line (FPL), which is $25,750 for a family of four. It shows a mostly uniform distribution across the US, with households in that income bracket typically spending only 1-2% of their income on household energy. Even if you add in everyone above the federal poverty line, the map doesn’t get all that much more exciting: in all 50 states, and Puerto Rico, households above the poverty level spend about 2-3% of their income on energy.

Things start to look much different once you look at households below the poverty level (the map on the right): the burden balloons to anywhere from 10% to 26%. Ten percent of household income is an important threshold because it is often used as the delineation for energy poverty.
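The arithmetic behind these maps is simple enough to sketch; the sample household below is hypothetical, and the 10 percent cutoff is the energy-poverty threshold cited above.

    # Energy burden = share of household income spent on energy.
    def energy_burden(annual_energy_bill: float, annual_income: float) -> float:
        """Burden as a percent of household income."""
        return 100 * annual_energy_bill / annual_income

    ENERGY_POVERTY_THRESHOLD = 10.0  # percent of income

    bill, income = 2_200, 20_000  # hypothetical low-income household
    burden = energy_burden(bill, income)
    status = "energy poverty" if burden >= ENERGY_POVERTY_THRESHOLD else "below threshold"
    print(f"Burden: {burden:.1f}% ({status})")  # Burden: 11.0% (energy poverty)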

This new data suggests that those families in the US that are below the poverty line are far more likely to be suffering from energy poverty.

There are a few clusters of higher-burdened areas. Some of the worst burden is in states like Connecticut, Massachusetts, Maryland, New York, Pennsylvania, and the District of Columbia. Outside of the northeast, two clusters stand out. The first is in the Midwest, in states like Michigan, Illinois, Missouri, and Kansas. It is worth noting that Kansas and western Missouri are both served by Evergy, while eastern Missouri and parts of Illinois are served by Dynegy.

Michigan has an incredibly high burden, with 22% of low-income households’ income going to energy bills.

Another cluster that stands out is in the southeast: Alabama, Georgia, and Mississippi (all served by subsidiaries of Southern Company), along with South Carolina. Some commissioners in those states claim that the high bills are a function of high AC load, which is odd because Florida, Arkansas, Louisiana, Arizona, New Mexico, and Nevada also have high AC loads, and they all have lower energy bills and lower energy burdens.

Multi-unit buildings are better

Creating affordable housing is an important part of a set of anti-poverty policies, but this new data suggests that it may also be helpful in the fight against energy poverty.

Multi-family households below the poverty line tend to be more efficient and have lower energy bills, so the burden is lower for families living in those homes. Data from the DOE Office of Energy Efficiency and Renewable Energy.

In every state, the annual energy bill in multi-unit households is lower, and for low-income households the savings are considerable: at least $300 and as much as $1,850 a year. That is real money for a family below the federal poverty line. Single-family households spend as much as 13% more of their income on energy expenditures.

The split incentive

The split incentive is another wonky piece of economic jargon. It boils down to this:

Owners tend to have the capital and the long-term interest to make investments that will lower energy bills, but in rental housing they often don’t pay the bill; renters do pay the energy bills, but they generally don’t have much of an incentive to make investments in homes they don’t own. The incentive is split.
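A toy example, with hypothetical costs and savings, makes the asymmetry concrete:

    # The split incentive in numbers (all figures hypothetical).
    RETROFIT_COST = 1_200      # e.g., an insulation upgrade, paid by the investor
    ANNUAL_BILL_SAVINGS = 300  # enjoyed by whoever pays the energy bill

    # Owner-occupant: pays the cost AND pockets the savings.
    print(f"Owner-occupant payback: {RETROFIT_COST / ANNUAL_BILL_SAVINGS:.0f} years")

    # Landlord whose tenant pays the bill: pays the cost, captures none
    # of the savings, so the retrofit never pays back and rarely happens.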

Looking at the maps of renters vs owners shows this phenomenon in effect for those below the federal poverty line.

Owners below the poverty line tend to have an incentive to invest in efficiency (appliances or home retrofits) and so have lower energy bills. Data from the DOE Office of Energy Efficiency and Renewable Energy.

Owners tend to have lower energy bills, probably because they are more likely to live in their home long enough for investments in more efficient appliances or home insulation to pay off. Renters sadly face higher energy bills, which could mean that those unable to afford buying a house are also more likely to be unable to afford their energy bill.

The maps illustrate the magnitude of the problem. Luckily, there are ways to solve the split incentive.

Action worth taking

Policies that promote multi-family housing not only help deal with energy poverty but can be a centerpiece of addressing the housing crisis. States (like Oregon) and cities (like Minneapolis) are already taking on this issue.

The split incentive is also an issue worth tackling. A 2014 study found that fixing the split incentive problem would save consumers $4-11 billion per year.

$4-11 BILLION per YEAR.

That is a lot of money, and considering that much of those savings would be enjoyed by low-income households, it would go a long way toward easing the energy burden.

Making households more energy efficient will go a long way toward reducing energy costs. That increased efficiency will reduce energy bills, reduce energy burden, and reduce pollution. That last point, reduced pollution, is critically important because low-income households are also disproportionately burdened by healthcare costs.

It turns out, housing policy is energy policy and energy policy is climate policy.


Can Trees, Oceans and Giant Carbon Sucking Machines Save Us from Climate Catastrophe?


The world needs leadership on climate change, as witnessed by the limited progress at last week’s climate conference, where delegates from nearly 200 countries failed to overcome Saudi Arabia’s block on formal discussion of the latest climate science produced by the Intergovernmental Panel on Climate Change (IPCC). As we confront ever more obvious impacts of a warming world, we must immediately tackle the political and technical challenges of reducing the pollution causing climate change. And, just as actively, we must seek ways to remove, store, and manage carbon to bring our climate back into balance.

Florida is perhaps the US state facing the most obvious evidence of a warming world. Sea level rise has already cost Florida taxpayers billions of dollars to keep the ocean at bay. And they are fighting a losing battle if we don’t act fast to move our economy off fossil fuels.

Democratic presidential hopefuls in the Miami debate spent more time than ever before addressing climate change, and many candidates indicate it will be a high priority. But without truly transformational change, Florida and the rest of this country are destined for a world of more dangerous storms, longer and hotter heat waves, wildfires, and floods threatening the health, homes, and economies of thousands of communities.

While moving to a clean energy economy and adaptation must be our first-line solutions to climate change, scientists are signaling that carbon dioxide removal will also need to be a part of our strategy.  Carbon dioxide removal (CDR), also known as negative emissions technologies (NETs), is a term used to describe a range of options to actively remove carbon dioxide (CO2) from the atmosphere.

The host nation for the next world climate conference, Chile, is planning to feature natural ways for trees and oceans to store carbon.  Meanwhile, Exxon and other oil companies, along with Bill Gates, are making news with investments in new technologies to draw CO2 out of the air.  It may well be that both types of approaches are needed to grapple with the climate crisis that is upon us.

What is carbon dioxide removal and why consider it?

A crucial insight from the recent IPCC 1.5°C report is that meeting the long-term goals of the Paris Agreement on climate change will require not just getting to zero emissions, but getting to “net negative” global CO2 emissions by mid-century. Unfortunately, we are rapidly running out of time to avoid severe climate risks through deep cuts in emissions alone, and some sectors of our economy may find it difficult to stop using fossil fuels.

The report highlighted, in a more pointed way than had been done before, the need to invest in measures that help communities adapt to the impacts that are already unfolding or are unavoidable. At the same time, there are limits to adaptation, especially as we get closer to high-risk or irreversible climate impacts, like sea level rise.

In this daunting context, understanding the risks and potential of CDR will be essential for climate activists, scientists, the media, and lawmakers.

CO2 removal occurs naturally on land (e.g. forests, soils and wetlands) and in the ocean (e.g. seagrasses, microalgae), and we can enhance those natural processes to increase the amount of carbon stored. More attention has been given lately to engineered, technological approaches, including capturing carbon directly from the air (Direct Air Capture or DAC) and storing it in secure geologic formations, converting it into fuel, cement, minerals, and plastics, or using it as a feedstock for chemicals – all of which are at various stages of research and development.

Last year the National Academy of Sciences released its analysis of the major CDR options, titled Negative Emissions Technologies and Reliable Sequestration: A Research Agenda. Specifically, it looked at a number of natural and engineered approaches, including:

  • Coastal blue carbon (Chapter 2)—Land use and management practices that increase the carbon stored in living plants or sediments in mangroves, tidal marshlands, seagrass beds, and other tidal or salt-water wetlands.
  • Terrestrial carbon removal and sequestration (Chapter 3)—Land use and management practices such as afforestation/reforestation, changes in forest management, or changes in agricultural practices that enhance soil carbon storage (“agricultural soils”).
  • Bioenergy with carbon capture and sequestration (Chapter 4)—Energy production using plant biomass to produce electricity, liquid fuels, and/or heat combined with capture and sequestration of any CO2 produced when using the bioenergy and any remaining biomass carbon that is not in the liquid fuels.
  • Direct air capture (Chapter 5)—Chemical processes that capture CO2 from ambient air and concentrate it, so that it can be injected into a storage reservoir.

A broad portfolio of negative emissions technologies is needed if we are to reach the goal of limiting the rise in global average temperature to well under 2°C. Coastal blue carbon, afforestation/reforestation and soil carbon are among the best options we have available today. Bioenergy with CCS, direct air capture and accelerated weathering are technologically based approaches that need varying levels of research to determine whether they can be safely and affordably deployed. Source: National Academies of Sciences

Important considerations to help evaluate CDR options

CDR options must be evaluated on an individual basis as well as how they interact with each other (e.g. competition for the same land). Some are an enhancement of natural processes and others are more heavily reliant on technological solutions. While some of these options are well-understood and already being implemented, others are still at early stages of research and development. In addition to climate benefits, some of these options also have the potential to provide other valuable co-benefits—such as ecosystem benefits, flood protection, and more productive soils and forests—and might make sense to deploy for those reasons.

Many CDR options may, though, pose significant risks, costs and uncertainties. These include trade-offs in terms of use of scarce resources like land and water and the risks of a sudden release of CO2 from a failed repository. There are many challenges and issues raised by the different CDR approaches, including important questions of sustainability and equity (Dooley and Kartha 2018, Creutzig et al. 2013), that can be addressed broadly by asking three questions:

  • How much land does the approach require, and what kind of land is it? How permanent is land storage, in different ecosystems and at different depths, likely to be?
  • How much (external, non-photosynthetic) energy does the approach require?
  • How much matter (e.g. biomass, rock, or CO2) does the approach require to be transported, pyrolyzed, crushed, buried or otherwise processed, and what kind of processing, transportation and infrastructure are required?

Scenario of the role of negative emissions technologies in reaching net zero emissions. NOTE: For any concentration and type of greenhouse gas (e.g. methane, perfluorocarbons, and nitrous oxide) CO2e signifies the concentration of CO2 which would have the same amount of radiative forcing. Source: UNEP, 2017
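For readers who want the caption’s CO2e definition made concrete: a common back-of-envelope version uses the simplified expression for CO2 radiative forcing from Myhre et al. (1998), where C is the CO2 concentration and C0 is the preindustrial reference (roughly 278 ppm). Both the 5.35 coefficient and that reference value come from that literature, not from the UNEP figure itself.

    \Delta F_{\mathrm{CO_2}} \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}

The CO2-equivalent concentration for a total forcing from all gases is then the CO2 level that would produce that same forcing:

    C_{\mathrm{CO_2e}} = C_0 \exp\!\left(\frac{\Delta F_{\mathrm{total}}}{5.35}\right)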

It is very important to consider the equity and environmental justice impacts of CDR approaches, with particular attention to competition for land and to pollution. Many communities are already overburdened with pollution from facilities like power plants, whether coal, natural gas or bioenergy.

While adding carbon capture technology to a bioenergy facility would reduce CO2 emissions, other pollution control technologies will be necessary to reduce other harmful air pollutants. And there are concerns about the demand these plants present for water and land and how their needs for those natural resources would affect the community that also relies on them.

In addition, the network of pipelines required to transport carbon to safe storage sites could encroach on indigenous peoples’ lands. Similarly, even natural solutions, like reforestation, can pose threats to the rights of indigenous peoples by commoditizing their homes and property.

A less technical and more common risk cited by many is the potential “moral hazard.” Could a focus on CDR give politicians and polluters yet another excuse for delaying action to rapidly reduce fossil fuel emissions and fund adaptation, in the hopes that we could engineer our way out of the climate crisis?

Some argue that CDR could endanger our transition to carbon-free energy options, possibly with drastic consequences, by diminishing both the investments for and the political pressure to eliminate high-carbon energy sources.  Or it could make the transition dependent on negative emissions from CDR approaches that may not pan out or that may have unforeseen risks for ecosystems or environmental justice.  Or it might lock the world in to overshoot scenarios (temporary increases of global average temperature over 1.5 or even 2 degrees) with potentially irreversible climate consequences.

What could a path forward for CDR entail?

The National Academy of Sciences report on carbon removal recommended that the US “launch a substantial research initiative to advance negative emissions technologies (NETs) as soon as practicable”:

A substantial investment would (1) improve existing NETs (i.e., coastal blue carbon, afforestation/reforestation, changes in forest management, uptake and storage by agricultural soils, and biomass energy with carbon capture and sequestration) to increase the capacity and to reduce their negative impacts and costs; (2) make rapid progress on direct air capture and carbon mineralization technologies, which are underexplored but would have essentially unlimited capacity if the high costs and many unknowns could be overcome; and (3) advance NET enabling research on biofuels and carbon sequestration that should be undertaken anyway as part of an emissions mitigation research portfolio.

If the US is to embark on such an initiative, it needs to be paired with significant stakeholder engagement to develop a framework for governance that will minimize the moral hazard threat and ensure equity concerns are addressed in any research and development project.

Policymakers, scientists, private companies and civil society will need a thorough understanding of the costs, benefits, uncertainties, and potential harms associated with various CDR options. Engagement with a diverse set of stakeholders who would be affected by their use should occur ahead of making any major decisions or large investments. And more public education about CDR options is an essential step to help foster an informed stakeholder process.

Robust and inclusive systems of governance that are mindful of relevant societal, environmental, ethical, and political considerations can lead to wiser decisions about all technologies and practices that have potentially far-reaching consequences.

 


Happy Birthday America: The Census is Intact, for Now



With news that the Trump administration has abandoned its attempt to place a citizenship question on the 2020 Decennial Census, the people of the United States received the best birthday gift they could hope for, averting a xenophobic and racist effort to disenfranchise millions of people of color by corrupting the nation’s largest civic event. Today we can all celebrate knowing that the oath that US Marshals first took in 1790, to complete “a just and perfect enumeration” of all persons, remains intact, thanks to the efforts of thousands of scientists, legal experts, and advocates. However, undercounts resulting from budget negligence and disinformation campaigns remain a serious threat to the integrity of the Census. Come 2020, we have to be more vigilant than ever to ensure that every voice is counted in order to stop the further erosion of our democratic infrastructure.

Why Trump abandoned his attempt to weaponize the Census

Of all his actions that undermine US democracy, corruption of the nation’s largest non-military exercise for discriminatory purposes would possibly have been the most destructive. The Census, in addition to determining the allocation of about $800 billion in federal program funding for medical services, schools, housing, and infrastructure, also provides economic data that shapes the function of the entire economy, and of course is used to allocate seats in the House of Representatives.

Recently revealed court records show that even before President Trump took office, officials involved in his transition team were figuring out how to place a citizenship question on the Census in order to minimize the voting power of Hispanics and be “advantageous to Republicans and Non-Hispanic Whites” according to a memo from GOP demographer Thomas Hofeller, who had simulated redistricting plans using citizen voting age populations, rather than total population.

President Trump and Electoral “Integrity” Commission leader Kris Kobach, who, as Kansas Secretary of State, was eventually held in contempt of court for failing to notify voters of their eligibility

As soon as it was clear that Trump and loyalists like Kris Kobach, co-chair of the infamous (and quickly disbanded) “Electoral Integrity” commission, were serious about getting their citizenship question, supposedly for the purposes of better enforcing the Voting Rights Act, the scientific community took action: several former Census directors voiced their concern about the “huge, unpredictable consequences” that placing an untested question could produce, and leading scientific organizations and users of Census data, joined by the Union of Concerned Scientists, sent letters to Commerce Secretary Wilbur Ross explaining the importance of maintaining the scientific integrity of the nation’s most valuable data resource.

In March 2018, when Secretary Ross announced his intention to add the question, the civil rights community fought back: lawsuits were filed in several states and Congress held hearings, where testimony revealed not only that adding the question would likely result in an undercount of 5-12%, concentrated in hard-to-count and immigrant communities, but also that the administration had no legitimate justification to add the question, given evidence that it would be more difficult to enforce the Voting Rights Act with such flawed data. Then last week, shortly after the damning evidence of the Trump administration’s discriminatory intentions was revealed, the Supreme Court blocked the question from being added, based on its conclusion that the justification was “contrived.” President Trump initially responded by threatening to delay the Census until he got a more favorable decision, but administration officials declared that the Census forms were being printed without the question.

Serious threats to the integrity of the Census, and democracy, remain

Despite that initial surrender, President Trump has since contradicted his own administration officials, tweeting that “News Reports about the Department of Commerce dropping its quest to put the Citizenship Question on the Census is incorrect, or, to state it differently, FAKE!”

The president has also abandoned the public justification for the question, resorting to his typical xenophobic vitriol:

“I think it’s very important to find out if somebody’s a citizen as opposed to an illegal. I think that there’s a big difference to me between being a citizen of the United States and being an illegal.”

It is beside the point that a citizenship question would not even determine whether a non-citizen is here illegally. We are likely to see much more of this language coming from the White House, with a clear intent of frightening immigrants and lowering their response rates, thereby amplifying the representation of Non-Hispanic whites in the census count and distorting the allocation of seats to the House of Representatives. The Census Bureau may even be planning to provide citizenship data to states from other administrative sources, in an effort to encourage the creation of citizen-only redistricting maps, mirroring the discriminatory intent of the Census citizenship question. The Supreme Court has been agnostic on the question of whether such maps would be constitutional.

A republic, if we can keep it

Finally, there will likely be a more coordinated effort coming from enemies of democracy within and abroad. Try searching #boycottcensus on Twitter and you will already find a large number of right-wing actors (and bots) urging conservatives, especially in blue states, to not take part in the Census.

This is the battle before us. Having preserved the integrity of the Census questionnaire (for now), it is time to ensure the integrity of its implementation. It is our largest civic event, derived straight from the Constitution: the snapshot of America that literally constitutes our numeric population, once a decade. Our Census is the technology that gives life to our constitutional protections, as the measurement of population traits is what allows political power to be allocated to us. To the degree that it is corrupted, so is our democracy.

Celebrate today, because we have earned it. And when the Census rolls out in 2020, we must work together to educate, organize and engage all our neighbors to take part, liberals and conservatives, citizens and non-citizens, to ensure that everyone is aware of their rights, especially those who have been systematically targeted by this administration in such an un-American fashion.

Is the USDA Relocation Just Good Old-Fashioned Rent Seeking?



One of the things I cherish about economists is their ability to call BS when they see it. In research settings, economists tend to have a reputation for asking hard-hitting questions during seminars. They are known for having the most unpopular opinion and for being unabashedly proud of it. I’ve personally seen non-economists bristle at the thought of giving a talk to an economics-oriented audience. As someone who straddles the worlds of public health and economics, I get it, trust me. I’ve been there.

Without doubt, this attitude is partially a function of the male-dominated nature of the profession, which has serious drawbacks and has been the center of much negative attention lately. That aside, there is still great value in having the ability to be constructively and compassionately critical, to voice an evidence- or theory-based opinion even when it is unpopular, and to be mighty proud of it. The world needs a great deal of this right now.

Personally, I’ve taken the economists’ contrarian culture to heart over the last several years, which is part of my rationale for coming to Washington to work for evidence-based food and agriculture policy.

In the very short time I’ve been in DC one issue I’ve worked closely on is the relocation of USDA’s Economic Research Service (ERS) and its National Institute of Food and Agriculture (NIFA). And throughout this work so far I’ve been proud—and unsurprised—that many economists have loudly and publicly opposed the controversial plan to relocate these two agencies.

For example, Dr. Brian Stacy, a former ERS researcher now working at the World Bank, explained why he thinks the relocation is so harmful to the agency and to agricultural research. Former ERS Administrator Dr. Susan Offutt said in an op-ed that USDA is throwing away a world-class research institute. Dr. Dawn Thilmany, president-elect of the Agricultural and Applied Economics Association (AAEA) and associate department head at Colorado State University’s College of Agricultural Sciences, recently penned a column in Colorado’s Daily Sentinel about the harms the relocation will have on the state’s farmers and food system. Former AAEA president Dr. Scott Swinton wrote a hard-hitting op-ed claiming that, done properly, a relocation of ERS and NIFA could have made sense for farmers and food consumers, and I happen to agree with this view. AAEA itself, which represents thousands of economists, has come out strongly against the relocation. AAEA recently dealt a serious blow to the Administration’s so-called “cost-benefit” analysis that was intended to justify the move as a “cost saving” measure.

These economists know the relocation will dilute, diminish, and at worst, cause long-term damage to government research that serves the interests of our nation’s farmers and food supply. In addition, as any good student of economics would tell you, these economists know this relocation may come with a heavy dose of rent-seeking.

Rent seeking 101

For my non-economist readers, rent-seeking is when individuals or groups ask the government to change policies to benefit themselves without a concomitant benefit to the rest of us. “Rent” in this phrase doesn’t refer to the money spent to pay for a house or apartment. Rather, rent in this context is a sort of “unearned” income captured by the producer of some good or service. Unearned? Yes, unearned because the income wasn’t generated from producing or selling a good or service. The income was generated directly from changing policies or currying favor or privilege (typically) from government officials or those in power.

Crucially, part of the theory of rent-seeking explains that those seeking policy change to benefit themselves spend resources to capture these “rents” for themselves (e.g., lobbying activities), and these expenditures will not create new wealth for society at large. The story can be further complicated when the benefits to those engaging in lobbying activities are concentrated and the costs of their actions are borne by many people. And in many instances, the people who are harmed by the change may have less incentive or ability to launch their own attempt to block the changes being asked for by the rent-seekers.

With respect to the relocation of ERS and NIFA, economists and experts (including several former high-level USDA officials) have repeatedly explained how the move will be a huge loss to our nation’s farmers and consumers, since the agencies are losing talented, seasoned food and agriculture experts who will now be far away from where policy is made. Some claim that this relocation even jeopardizes the scientific integrity of USDA. Moreover, Secretary Perdue has never articulated specific, credible reasons for how this relocation will benefit farmers and the public at large. But on the other hand, the Kansas City region stands to gain new jobs and federal funding from the relocation. Thus, the rent-seekers gain, and the rest of us lose.

Keep in mind that with this rent-seeking hypothesis I am posing, I’m being optimistic. I’m taking Secretary Perdue at his word that he will rehire all the vacancies that have been created since the relocation proposal was announced. However, if I’m being pessimistic (which I’ve already been), this relocation is an attempt to dismantle important USDA research. Current estimates indicate that 80% of the current ERS roster will quit before the relocation is complete, AND the Administration’s FY 2020 budget proposal includes a 30% budget cut and a 52% cut to staff years at ERS. Who knows if this administration will actually rehire the hundreds of employees who have already quit and who plan to do so in the coming months.

But, let’s hold on to what little shred of optimism we have for the moment. Is it possible then that the relocation of ERS and NIFA was motivated by rent-seeking by some group or individual? Is there any evidence to suggest that rent-seeking might be what’s really going on here? That’s a great question!

Who’s who among the rent seekers

In addition to the Missouri and Kansas Congressional delegations—who have been vocally in favor of this no matter the harms it will cause (because they will gain many jobs and federal funding, of course)—one lobbyist has also been particularly outspoken and actively in favor of the relocation. His name is Randy Russell, and he runs the Russell Group, one of DC’s most powerful food and agriculture lobby shops. An Agripulse “Signal to Noise” podcast from February 2019 has Russell extolling the virtues of the relocation. Our own research into Congressional lobbying records (here is one for example) indicates that the Russell Group was lobbying on behalf of BioSTL, a St. Louis, Missouri bioscience firm that was one of the applicants on the USDA short list of potential relocation sites. Clearly these groups are seeking rents—either for the states of Missouri or Kansas, or for some other groups such as real estate firms or other businesses who see an economic opportunity in the relocation.

Of course, we found records of many other interest groups lobbying on the relocation, so Russell Group is not the only one who is seeking rents for its clients. For example, Broad Square Partners, a Kansas City non-residential building operator, hired lobby firm Kit Bond Strategies to lobby on the relocation of ERS and NIFA. The University of Kansas also did its own lobbying, presumably in the hopes that it could help sway Secretary Perdue to move the agencies closer to its campus.

Meanwhile, Secretary Perdue told the agriculture subcommittee of the Senate Appropriations Committee in April 2019 that, to make his final decision on the relocation site, he was going back to the finalist (or short-list) sites to ask them for “their last and best offers.” In other words, Secretary Perdue was asking the rent-seekers for more.

But regardless of who exactly was lobbying and no matter which place Secretary Perdue decided to move these agencies—Kansas City, Purdue University, St. Louis, North Carolina’s Research Triangle were all finalists—this is, bottom line, a big win for one small locality (and perhaps a certain set of food system industries or interest groups) and a giant loss for farmers and our nation’s food supply. In other words, the relocation can be viewed as classic rent-seeking. That is, if this whole thing isn’t just an attempt to dismantle both agencies.

As for my economist colleagues who have yet to voice their contrarian views in their typical outspoken fashion: they ought to be furious that this rent-seeking behavior has already caused substantial damage to one of our nation’s most treasured economic research institutions.

The Other Existential Threat: Nuclear Weapons & the 2020 Presidential Campaign



The 2020 presidential campaign kicked off in earnest with last week’s Miami debates, and many of the “high profile” topics were covered: climate change, immigration, gun control. One topic was a little more unexpected: nuclear weapons. On the first night, three of the ten candidates on stage said nuclear weapons or the threat of nuclear war is the biggest geopolitical threat facing the United States.

This should not be surprising: recent polling shows that in key primary states, including New Hampshire and Iowa, over 80% of respondents want to know what candidates think about nuclear weapons. We also know from recent national polling that more than 80% of people support arms control treaties with Russia.

Unfortunately, current US policies put the public in danger of nuclear use. Today, the United States retains the right to use nuclear weapons first in a crisis and maintains hundreds of land-based missiles on hair-trigger alert. New, more usable nuclear weapons are being developed as part of a trillion-dollar plan to rebuild and maintain the entire nuclear arsenal (a proposal, mind you, that dates to the Obama administration). For its part, the Trump administration has pulled out of crucial nuclear agreements that have kept us safe, including the Iran nuclear deal and the Intermediate-Range Nuclear Forces (INF) Treaty, and seems poised to walk away from the New START Treaty as well.

These kinds of policies should be a major topic of discussion among candidates in the 2020 election, and candidates are already being asked about their positions on the campaign trail. Their responses and comments show a range of thought and understanding on the topic. You can see videos of the conversations with the presidential candidates about nuclear weapons on our YouTube channel. We’ll keep adding videos to this channel as members of the public and activists around the country continue to have these conversations with the candidates in the months ahead.

A question about nuclear weapons is asked at a Beto O’Rourke campaign event. Source: Sam Tardiff

Indeed, voters have a critical role to play by raising the profile of these discussions and helping to elevate this important conversation and debate—both within our communities and online.

Nuclear weapons and climate change are the two existential threats facing humanity. They are serious. They are growing. They are urgent.  And our country and leaders must act—before it’s too late.

So that’s where “we the people” come in. Let’s educate others. Let’s raise our voices. Let’s insist that those who wish to lead our country do just that—lead us on a path that reduces the risks these horrible weapons pose.

The Union of Concerned Scientists aims to increase public discussion about the use of nuclear weapons; we are posting these videos to highlight such discussion by candidates for president. As a 501(c)(3) nonpartisan organization, UCS does not support or oppose any candidate for election.


Congress Investigates Rollback of the Clean Car Standards – an Epic Oversight Hearing


The House Energy and Commerce Committee held its first oversight hearing on the soon-to-be-rolled-back fuel economy and greenhouse gas standards on Thursday, June 20. The hearing highlighted how the rollback will be bad for consumers, the environment, health, and energy security – you can read more about the hearing setup in my colleague Dave’s curtain-raiser blog, and Rep. Schakowsky does a nice job of setting up what’s really going on in her opening statement.

But the night before the hearing, Committee leaders called attention to the real beneficiaries of the rollback and officially launched an investigation into Big Oil’s covert campaign supporting the rollback, which was originally exposed in a blockbuster New York Times report late last year. The committee demanded answers on the coordination between the administration and Marathon Petroleum, the American Fuel and Petrochemical Manufacturers, the American Legislative Exchange Council, Energy4US, and Americans for Prosperity. Those answers are due on July 3 – we will see if these entities comply with the Committee’s request.

The hearing

While we await those answers, the hearing provides some fascinating background about the machinations behind this ridiculous rollback. Some quick numbers:

  • 10 – There were ten (10!) witnesses at this hearing. Two witnesses came from the Trump administration: Bill Wehrum, the (now outgoing) Assistant Administrator of the EPA, and Heidi King, Deputy Administrator of the National Highway Traffic Safety Administration (NHTSA) in the Department of Transportation. Mary Nichols, the Chair of the California Air Resources Board (CARB), also testified on the second panel, along with witnesses from the United Auto Workers, Consumer Reports Advocacy, the Motor & Equipment Manufacturers Association, the Colorado Department of Transportation, the Heritage Foundation, the Alliance of Auto Manufacturers, the Louisiana Attorney General, and others.
  • 5 – Between opening statements, documents for the record, testimony, questioning, and drama, the hearing lasted more than five hours.
  • 17 – About two weeks before the hearing, 17 of the world’s largest automakers, including Ford, General Motors and Toyota (many of whom are represented by the Alliance, which was a witness), sent letters to the Trump administration (also a witness) and California (also a witness), telling the administration that its plan to roll back clean car standards would reduce profits and create “untenable” instability in the auto manufacturing sector, and prompting calls for the parties to resume negotiations, which were summarily stopped by the Trump administration earlier this year.
  • 2 – As is typical, the hearing was split into 2 panels – the first panel was the administration officials and the second panel was everyone else. There was a little drama on this front, though…
The Wheeler letter

Mary Nichols, the CARB chair, should have been sitting at the table with the administration witnesses, as California is (or should be) an equal partner in setting the standards. But the EPA was committed to undermining her at every turn. Bill Wehrum refused to sit with her on the same panel. While sparks were flying over this detail in the hearing room (with Democrats arguing that Mary Nichols should have been on the first panel and Republicans arguing that she shouldn’t be sitting with administration witnesses), EPA Administrator Wheeler – who was not in the room – found a way to chime in.

David Shepardson tweeted out a letter from EPA Administrator Wheeler addressed (only!) to the minority (Republican) committee members. In the letter, Administrator Wheeler basically called Mary Nichols a liar, said that her testimony was not truthful (!!), and accused her of “irresponsible testimony about conspiracy theories that ‘the oil industry drove this action’.” Again, this came the morning after the committee sent letters asking oil industry members to disclose their involvement, based on evidence that they had been communicating with the administration about this rule. Having the letter drop as the hearing was starting made for a strange kickoff, and, judging from all of the public statements we have seen, its content was flatly inaccurate. The letter’s (untrue) content was nonetheless referenced multiple times during the hearing, both by Wehrum and by some Republican committee members.

What did the administration say?

The real fireworks of the hearing occurred while Wehrum and King were testifying. They both made rather predictable (and untrue) statements about how the proposed rollback is actually good for people, said that everyone at both agencies is working together (even though there is ample evidence that EPA technical staff have been frozen out of the analysis), and blamed California for stalled negotiations – the same stuff we have been hearing for over a year now.

But the back and forth with the Reps was still fun. Here’s a sampling of the lines of questioning the witnesses had to respond to (the number in parentheses is the district each Rep represents in their state) –

  • Schakowsky (D-IL-9) noted that the analysis that NHTSA relies on to say that mandating more efficient cars actually has dramatic negative safety consequences is untested and unproven – she asked Wehrum if EPA really signed off on it.
    • Bill Wehrum’s answer was that EPA had talked about it and believed the rule would save lives (refuted by the fact that EPA had to put its critique of NHTSA’s model in the official record during interagency review – an unusual move). He went on to simultaneously mansplain and brush off Rep. Schakowsky by saying that the safety analysis was “very complex”.
    • Schakowsky’s answer to this was perfect – she noted that garbage in equals garbage out when modeling, demonstrating her understanding of complex issues.
  • Matsui (D-CA-6) talked about the importance of state authority and asked Bill Wehrum specifically about the administration’s intent to revoke California’s waiver to regulate tailpipe pollution – an authority the state has had since the enactment of the Clean Air Act, and an action that no administration has ever proposed.
    • Bill Wehrum refused to acknowledge that no waiver granted to California to regulate emissions has ever been revoked – in reality, over 100 waivers have been granted to date and none has been revoked.
  • Dingell (D-MI-12) mostly wanted to get the Trump administration and CARB back to the negotiating table and asked if EPA would restart negotiations if CA was willing to.
    • Wehrum said he would do what the President wanted him to do.
  • Blunt-Rochester (D-DE) asked why NHTSA wasn’t working on rules that would actually increase the safety of vehicles – like side restraint standards and side impact testing for car seats.
    • King said that rulemakings are complicated and they issue them when they’re ready (never mind that NHTSA is many years overdue for several safety rules, as former Deputy Administrator David Friedman noted on the second panel).
  • Chairman Pallone (D-NJ-6) and DeGette (D-CO-1) probed Bill Wehrum’s potential conflicts of interest, as he is under investigation for his work with his former clients in the oil industry – they asked specifically about the rollback – he didn’t recall any meetings and didn’t know if any of his staff had meetings with these groups.*
    • Bill Wehrum refused to say he would definitely get the list of meetings to Rep. DeGette, instead saying that he would take the request back to EPA’s Office of Congressional Affairs.

*As an epilogue to this section – it’s worth noting that on Wednesday, June 26, Bill Wehrum announced that he was stepping down from EPA – apparently because the ethics probes by both the EPA Inspector General and the House Energy and Commerce Committee were having detrimental impacts on his former employer, Hunton Andrews Kurth, a law firm where he represented power-sector, energy, and gas clients who were mostly fighting against regulations.

The main theme

One of the things we heard over and over again was that most people don’t want the rollback of the standards as the administration has proposed. Representatives, both Democratic and Republican, the United Auto Workers, the Alliance of Auto Manufacturers, the head of the Colorado Department of Transportation – everyone wants the Trump administration to cease and desist with its relentless rollback of the popular and effective fuel economy and global warming pollution standards and go back to the negotiating table with CARB to find a solution that strengthens the standards.

Following the hearing, bipartisan letters were sent to the agencies and California, urging them to restart negotiations. The letters were signed by Reps. Dingell and Tonko on the Democratic side and Reps. Upton and Shimkus on the Republican side.

While a negotiated outcome could be better than what we’re facing, particularly if it eliminates the administration’s attack on the Clean Air Act and state authority, the devil is in the details. As my colleague Dave has pointed out, the proposals that automakers put into the record are still a lot weaker than the existing standards—if that’s an indication of what a negotiated settlement looks like, that’s not much of a victory for the American public.

Moving forward, it will be critical that Congress continue to press the administration on its bad modeling and even worse proposals – strong oversight is needed to get the administration to uphold its congressionally mandated responsibilities to protect public health and welfare and improve energy efficiency. The Energy and Commerce hearing is a good public display of the committee’s interest in this issue, and the letters it sent to the oil companies and oil-funded front groups show that it isn’t letting go of this issue any time soon. We will continue to share our analysis and expertise on this issue and look forward to learning more about the committee’s work over time. Ideally, this level of interest stops the Trump administration from finalizing the rule as originally proposed, and Congress will continue to play an important role in holding the administration’s feet to the fire.

Photo: nsub1/Flickr

A New Way to Assess Impacts of Climate Change on World Heritage Sites

UCS Blog - The Equation (text only) -

Skara Brae, Orkney. Adam Markham

The stone-age village of Skara Brae, one of the world’s most important archaeological sites, is at high risk from climate change according to the results of a new impact assessment launched this week at the annual World Heritage Committee meeting.

Dr. Alistair Rennie from Scottish Natural Heritage and the Dynamic Coast project explains the processes of accelerated coastal erosion at Skara Brae, Orkney, to CVI workshop participants. Photo: Adam Markham

 

More than 5,000 years old and one of the best-preserved Neolithic sites in Europe, Skara Brae is part of a World Heritage property that also includes the Stones of Stenness, the Ring of Brodgar and the Maeshowe chambered tomb – known for its alignment with the sun’s rays at the winter solstice and for its Viking graffiti. These spectacular places are on the Orkney Islands – an archipelago just a few miles off the north coast of mainland Scotland, famed for its extraordinary density of archaeological sites.

More than 3,000 archaeological sites have been identified so far on Orkney, and a survey carried out by the SCAPE Trust found at least a third of them to be already damaged by coastal erosion or at risk of it. Whole classes of sites – for example, Iron Age brochs (defended stone roundhouses) and boat nousts (haul-outs) – are endangered.

The new assessment focused solely on the World Heritage property and found it to be “extremely vulnerable” to sea level rise, precipitation change and increased frequency and intensity of storms.

A climate impacts workshop in Scotland

UCS worked in partnership with James Cook University, Historic Environment Scotland (HES), the University of the Highlands and Islands, Orkney Islands Council and ICOMOS (the International Council on Monuments and Sites) to test a new rapid assessment methodology – the Climate Vulnerability Index (CVI) – for the first time on a cultural World Heritage site.

Thirty stakeholders, including archaeologists, climate scientists, heritage managers, businesses and local community members gathered in Orkney for 3 days in April 2019. The workshop applied the CVI methodology and concluded that Skara Brae and the group of sites with which it makes up the Heart of Neolithic Orkney World Heritage property are in the highest category of climate risk.

The finding was announced at the 43rd World Heritage Committee meeting in Baku, Azerbaijan, on July 2nd where more than 150 nations gathered to discuss the protection of some of the planet’s most iconic and important natural and cultural sites.

Coastal erosion at the end of the sea wall protecting the Neolithic village of Skara Brae. Photo: Adam Markham.

The development of the Climate Vulnerability Index (CVI)

UCS first identified the need for a systematic review of climate risk to all World Heritage properties in a 2016 report published with UNESCO and UNEP. Then in 2017, at a meeting that UCS participated in on the German Baltic island of Vilm where experts gathered to discuss priorities for the revision of the World Heritage Committee’s decade-old climate policy, the idea for a vulnerability index for sites at risk from climate change was introduced.

Two researchers at Australia’s James Cook University, oceanographer Scott Heron and Jon Day – a former director of the Great Barrier Reef Marine Park – had taken up the challenge to design a rapid assessment methodology that could be used for all types of World Heritage sites.

Following more development, the Climate Vulnerability Index (CVI) was first tested at the natural World Heritage site, Shark Bay in Western Australia in 2018.

Soon after, UCS joined the CVI development team, and ICOMOS (one of the three official Advisory Bodies to the World Heritage Committee) included it as a project of its new Climate Change and Heritage Working Group.

The foundation of the CVI is to look at how key climate change impacts affect the Outstanding Universal Value (OUV) of World Heritage properties (a property’s OUV describes the characteristics for which it was inscribed on the World Heritage List). If OUV is significantly degraded or lost, a property can be put on the World Heritage in Danger list, or even de-listed completely.

In addition to assessing climate risk to the OUV, a very important aspect of the CVI is that it also looks at the economic, social and cultural vulnerability of the community associated with the World Heritage site.

The CVI’s potential for World Heritage management

The detailed and comprehensive report from the Orkney CVI workshop, which was unveiled at the meeting in Azerbaijan, will serve as a model for other CVI reports in the future. The CVI process will continue to be honed and strengthened in a pilot phase that will continue at least through 2020, with site workshops already being planned for the cultural landscape of the Vega Archipelago in Norway, and the natural tri-national Wadden Sea property (Netherlands/Germany/Denmark).

HES will be integrating the CVI findings into the revision of the management plan for the Heart of Neolithic Orkney (a process that began in 2019), and the agency has proposed that CVI workshops also be undertaken for two additional Scottish World Heritage sites in 2020 – the Antonine Wall, and the island of St. Kilda.

The pilot CVI workshops in Shark Bay and Orkney have demonstrated that, for the first time, we have a climate risk assessment methodology customized for World Heritage that can be effectively applied across very different types of sites. The CVI is scientifically robust, transparent, repeatable and flexible enough to work everywhere from an underwater archaeology site to a tropical forest park – critical attributes if it is to be adopted within the World Heritage community.

Schematic outline of the CVI process, leading to assessments of OUV and Community vulnerabilities

It has the potential to be a hugely valuable tool for World Heritage managers and the governments that are parties to the Convention, to help them accurately understand and plan for the climate risk they are facing at each property. If applied to all World Heritage sites the CVI could help prioritize action on climate resilience and spur greater urgency amongst the States Parties in meeting their commitments under the Paris Agreement.

 

Adam Markham

Xi’s China Stands with World. Trump’s America Stands Alone.

UCS Blog - All Things Nuclear (text only) -

US and Chinese delegations talk trade in Osaka, Japan.

The presidents of the United States and China met at the G-20 leadership summit in Osaka, Japan to try to put an end to a trade war that’s disrupting the global economy. They walked away with a ceasefire agreement that left everyone uncertain about the future.

Almost all of the other members of the G-20 have serious problems with the way President Xi’s China does business. Yet not a single one of them stood with President Trump. The meeting closed with what they politely called a 19+1 declaration. It would be more accurate to call it a declaration of the 20-1.

China, the United States and the World

At the end of the last world war, political, economic, social, cultural, educational and religious leaders throughout the world committed to a collective effort to avoid another one. They agreed the best way to do that was to act, to the greatest degree possible, in support of common interests, not only national ones. Over the decades they established institutions, laws, and common practices to work through the very difficult problems that can arise when powerful national interests are at odds with the common good.

Before the People’s Republic of China (PRC) reclaimed China’s seat in the United Nations in 1971—over the objections of the United States, which did everything it could to isolate Communist China from the rest of the world—Xi’s predecessors preached the Marxist-Leninist gospel of global revolution.  They saw the United Nations, the Nuclear Non-Proliferation Treaty and the General Agreement on Tariffs and Trade, which eventually became the World Trade Organization (WTO), as instruments of US imperialism.  Xi’s China is still unapologetically communist. But today China is not only a member of the world order it once reviled, it is one of its biggest beneficiaries and staunchest defenders.

The United States, on the other hand, is walking away from the world order. It is withdrawing from arms control treaties, disregarding trade rules by leveling tariffs and, most importantly, telling the world it will dump as much carbon into the atmosphere as it pleases. President Trump and the officials he’s hired to represent the United States have proudly proclaimed they’re putting US national interests above the common good. They seem to have decided that most if not all of the international commitments the United States made in the past are bad deals that disadvantage the United States to the benefit of others, especially China.

Who will the rest of the world follow? If it takes after Trump’s America, the consensus on avoiding world war by building international institutions and promoting global norms to protect the common interest will collapse. But the world is also unlikely to follow a communist China with pressing human rights problems and a strident approach to territorial disputes. Most of the rest of the world is more likely to continue to press forward as best it can without the United States. This latest meeting of the G-20 was a pretty strong signal that the post-war consensus can hold, and that China intends to help defend it.

Hopefully, President Xi will eventually recognize China needs to compromise more of its national interests to make good on that intention. His commitment to combating climate change is an encouraging sign.  Playing a more prominent role in international nuclear arms control and disarmament would be an excellent next step.

Going Forward

The United States is home to 4.3% of a global population rapidly approaching 7.6 billion. It is true that the US share of the global economy has been shrinking for quite a while. But that’s not a sign of American decline. To the contrary, it is a reflection of the economic success in the rest of the world that US post-war internationalism intended to create.

The reason so many average Americans seem eager to walk away from the world today is not because US internationalism decreased economic disparities between nations—and in the process made the whole world a lot wealthier—but because it increased economic disparities within nations. The benefits of globalization were not shared equally among social and economic classes across countries. Discontent in China is what led to the rise of Xi Jinping. He won the Communist Party’s top spot with a promise to save the Party by rooting out corruption and rebalancing the economy. Discontent in the United States led to the rise of Donald Trump. He won his office, in part, with a promise to save America from foreigners and the supposedly bad deals his predecessors made with them.

The G-20 declaration presented a comprehensive defense of the post-WWII internationalist consensus and an unambiguous refutation of the new US nationalism. It vowed to keep international markets open and to strengthen the institutions that govern them, especially the WTO. The G-20 would have included a warning against protectionism, but it sought to avoid further widening its rift with the biggest offender, which isn’t China but the United States.

It also took note of “the important work of the Intergovernmental Panel on Climate Change” and declared the G-20 is “irreversibly” committed to the Paris Agreement. Despite vociferous and time-consuming US objections, all of its members except the United States “reaffirmed their commitment to its full implementation.”


The Supreme Court’s Partisan Gerrymandering Decision is Justice Scalia’s Last Laugh

UCS Blog - The Equation (text only) -

The Supreme Court’s 5-4 decision in Rucho v. Common Cause – that partisan gerrymandering claims present questions beyond the reach of the federal courts – may mark the first time in the nation’s history that a majority of Justices have surrendered the most fundamental of our constitutional rights, the right to participate equally in the political process, because, in the dissenting words of Justice Elena Kagan, “it has searched high and low and cannot find a workable legal standard to apply.”

Many scholars of election law and redistricting saw that it might play out this way. The majority decision, written by Chief Justice John Roberts, lays bare two crucial errors that we dedicated several chapters to in our book on Congressional redistricting and the courts, Gerrymandering in America. First, the majority failed to recognize the nature of vote dilution as it relates to partisan gerrymandering. Second, and as a result of the first error, the majority held fast to the false intuition that available standards and metrics, including responsiveness, asymmetry and proportional representation, are political or arbitrary rather than grounded in Constitutional protections. We cannot discern whether these serious errors by the Chief Justice were made because of an inability to comprehend the social science or a more deliberate path of willful ignorance.

The Rucho majority’s framing reflects the influence of the late Justice Antonin Scalia, who died in February 2016 but clearly shaped the majority’s views on this matter. As a result, we have a decision straight out of The Federalist Society: a veneer of judicial restraint that masks an extraordinary and unprecedented exercise of political power by the least accountable branch of the federal government. That being the case, we agree with the majority that citizens can (and must now) mobilize to enact reforms that our research shows can help compensate for the Court’s negligence.

The long reach from Scalia’s grave

In the last major partisan gerrymandering case, Vieth v. Jubelirer (2004), Justice Scalia (writing for the plurality) claimed that, unlike the “one-person, one-vote” standard in reapportionment cases, a majority rule standard (a majority of persons must elect a majority of legislators) could not be derived from any “constitutionally discoverable” right, because majority rule claims pertain to groups, not persons. The standard “rests upon the principle that groups (or at least political-action groups) have a right to proportional representation. But the Constitution contains no such principle.”

“Democracy Going” by JmacPherson (CC BY 2.0)

Chief Justice Roberts picked up right where Scalia left off: “Partisan gerrymandering claims invariably sound in a desire for proportional representation.” Roberts then declares that such claims reflect “a ‘norm that does not exist’ in our electoral system” and that “The Founders certainly did not think that proportional representation was required.” In Gerrymandering in America, we demonstrate that Justice Scalia was, and Chief Justice Roberts now is, incorrect to argue that a majority rule standard can only be derived from a claim to group rights and proportional representation. We show that the equal treatment of individual voters logically implies the majority rule standard.

Partisan vote dilution and equal protection

Chief Justice Roberts fails to see the equivalence between the one-person, one-vote rule and majority rule. On the one hand he acknowledges that vote dilution “refers to the idea that each vote must carry equal weight” and that the rule is “relatively easy to administer as a matter of math” for apportionment cases by requiring approximately equally populated districts. On the other hand, he claims that “It hardly follows from the principle that each person must have an equal say in the election of representatives that a person is entitled to have his political party achieve representation in some way commensurate to its share of statewide support.”

But it does. There is no way for equally weighted votes to produce the outcomes, including minority rule, that we frequently get from partisan gerrymanders. And while the intent of malapportionment or racial gerrymandering may differ from partisan gerrymandering, they all work through the same effect: vote dilution. Vote dilution, by whatever method or reason, can reach a point where it violates majority rule. And if the majority rule principle is derived from individual rights of equal protection, which it is, it is protected under the 14th Amendment. Not as a matter of political motivation, or precedent, but as a matter of math. This is a principle that the Founders, especially James Madison, understood. It is why seats in the House of Representatives are supposed to be allocated proportionally using the Census. But that’s another story.
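To make the vote-dilution arithmetic concrete, here is a minimal sketch in Python – using hypothetical numbers, not the actual map from either case – of how “packing” and “cracking” let a party that wins the statewide vote end up with a small minority of seats, even though every district has equal population and every ballot carries equal weight:

    # 13 equal-population districts of 100,000 voters each.
    # Party A's voters are "packed" into 3 districts and
    # "cracked" (spread thinly) across the other 10.
    party_a_votes = [80_000] * 3 + [44_000] * 10

    total_a = sum(party_a_votes)                      # 680,000 votes
    total_votes = 100_000 * len(party_a_votes)        # 1,300,000 votes
    seats_a = sum(v > 50_000 for v in party_a_votes)  # 3 seats won

    print(f"Party A vote share: {total_a / total_votes:.1%}")        # 52.3%
    print(f"Party A seat share: {seats_a} of {len(party_a_votes)}")  # 3 of 13

A 52 percent statewide majority wins fewer than a quarter of the seats: dilution severe enough to produce minority rule, with no district deviating from one-person, one-vote.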

The road to democratic restoration

Defenders of political equality and majority rule, the first principles of democracy, now bear the burden of securing them without federal judicial protection. In the states, grassroots organizing to legislate independent redistricting commissions, as well as multi-district, and yes, more proportional districting plans, can reduce partisan bias. And as the Rucho majority noted, several Congressional bills (the For the People Act, the Fair Representation Act) have provisions that could radically improve electoral representation. The time has come to take action.

The research team that wrote Gerrymandering in America includes Dr. Latner and the following scholars and experts:

Alex Keena – Virginia Commonwealth University
Alex Keena is assistant professor in the Department of Political Science at Virginia Commonwealth University in Richmond, Virginia, where his research focuses on political representation, Congress and elections.

Anthony J. McGann – University of Strathclyde
Anthony McGann is a Professor at the School of Government and Public Policy at the University of Strathclyde. He is also affiliated with the Institute for Mathematical Behavioral Sciences and the Center for the Study of Democracy at the University of California, Irvine.

Charles Anthony Smith – University of California, Irvine
Charles Anthony Smith is Professor at the University of California-Irvine, where his research is grounded in the American judiciary.

This article originally appeared at the LSE’s USAPP – US politics and policy blog.

As Methane Levels in the Atmosphere Soar, Trump Administration Moves to Gut Regulations

UCS Blog - The Equation (text only) -

Until recently, carbon dioxide has earned top billing among global warming gases. Emitted when fossil fuels burn, it remains the most prevalent heat-trapping emission driving climate change. Its concentration in the atmosphere has now reached levels unseen for three million years, helping to usher in an unprecedented decline in plant and animal species, according to a recent major United Nations report. Recent science is adding another gas to the marquee: methane. Just as we are learning how desperately we need to curb this gas, the Trump administration wants to kick the oil and gas industry’s methane standards to the curb.

First, the science.

Methane is the main gas emitted in the extraction of natural gas, which has accelerated dramatically in the United States with the development of hydraulic fracturing to get at previously unreachable reserves layered under shale. Like carbon dioxide, methane is also now present at levels in the atmosphere unprecedented in human history, with an atmospheric concentration that has more than doubled since preindustrial times.

In fact, now that gas has supplanted coal as the single biggest fuel source for American electricity, it has also surpassed coal in carbon emissions since 2015. The federal Energy Information Administration says that if there are no changes to policy, regulations and technology, the carbon dioxide from natural gas alone will keep America’s carbon dioxide emissions at the 1990 baseline level, preventing many states from meeting their goals of reducing emissions further.

But the picture is greatly changed when we additionally consider the consequences of methane, which was not factored into the climate models guiding the Paris climate goals of holding planetary temperature rise to 2 degrees Celsius above preindustrial levels. Methane’s full implications have only recently become a scientific priority, at least partly because of the largely positive image natural gas long enjoyed in the mainstream as a “bridge” fuel away from coal. What we do know is that, while it does not last as long in the atmosphere as carbon dioxide, methane is 86 times more efficient at trapping heat on a 20-year time scale and 34 times more efficient over the course of a century.

Methane: Underestimated no longer

Today in the United States, the number of states where natural gas is the top source of electricity has more than doubled, from seven in 2001 to 16 in 2017, according to the New York Times. In a similar time frame, US methane emissions were found to have shot up more than 30 percent in the last decade according to a 2016 study by researchers in Harvard University’s School of Engineering and Applied Sciences. The team found that those emissions may account for between 30 and 60 percent of the global growth in atmospheric methane observed in the last 10 years.

Not only that, a major study in the journal Science last year found that the United States vastly underestimates the amount of methane emitted and leaked by the gas and oil industry. A team led by the Environmental Defense Fund (EDF), which included researchers from many universities and NOAA’s Earth System Research Laboratory, found that the industry is responsible for at least 60 percent more methane emissions than had been previously estimated by the Environmental Protection Agency.

According to this report, some 13 million metric tons of methane are spewed into the atmosphere each year – roughly $2 billion worth of wasted gas, enough to fuel 10 million homes. Methane leaks amounted to some 2.3 percent of overall natural gas production. That might not sound like much, but the Environmental Defense Fund previously estimated that a leakage rate of around 3 percent would negate any carbon emissions advantage natural gas might have over coal in mitigating climate change. An even more recent study by EDF and researchers at Cornell University found methane emissions at ammonia fertilizer plants to be 100 times higher than reported by industry and far above EPA estimates.
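To put those figures in perspective, here is a rough back-of-the-envelope conversion – a sketch using only the numbers cited above, with methane’s 86x (20-year) and 34x (100-year) heat-trapping multipliers – of the estimated leaks into carbon dioxide equivalents:

    # CO2-equivalence of roughly 13 million metric tons of leaked
    # methane per year, using the multipliers cited earlier:
    # 86x CO2 over 20 years, 34x over 100 years.
    leaked_ch4 = 13            # million metric tons CH4 per year
    gwp_20, gwp_100 = 86, 34   # methane's heat-trapping multipliers

    print(f"20-year basis:  {leaked_ch4 * gwp_20:,} million metric tons CO2e")   # 1,118
    print(f"100-year basis: {leaked_ch4 * gwp_100:,} million metric tons CO2e")  # 442

On the 20-year time scale most relevant to near-term warming, the leaks alone are equivalent to more than a billion metric tons of carbon dioxide per year – a reminder of why even small leakage percentages matter.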

To be sure, not all independent studies come to dire conclusions about methane. Another team of researchers at NOAA’s Earth System Research Laboratory in Boulder, Colorado this year found no large increase in US emissions. The team said prior studies, including some cited above, confused ratios of methane with ethane, the second-largest component of natural gas. The American Petroleum Institute seized upon that study, with a blogger hailing it as “good news for America.” The blogger said the study proved industry is successful at capturing methane and reaffirmed natural gas’s “major role in reducing carbon dioxide emissions.”

Even if that study is correct, the comfort it offers is small. Despite some uncertainty in measuring, a study in the Journal of Geophysical Research found that, if methane levels continue rising at current rates, climate change targets set in Paris will likely be unattainable. Making matters worse, scientists need more information to fully determine exactly where the increases in methane levels are coming from. Besides leakage in the fossil fuel supply chain, possibilities include emissions from ruminant livestock such as cattle, sheep and goats, a possible feedback loop in which increased temperatures spark a release of methane from tropical wetlands, and the changes in the atmosphere that diminish its ability to destroy methane over time.

Trump administration plows ahead with rollbacks

As scientists race to pin down answers, there is no question that the bulk of the research points to the need for dramatic action to curb methane emissions now, if only to be safe rather than sorry. But the Trump administration, showing a characteristic disdain for science, is racing to repeal the Obama administration’s plan to reduce methane emissions 40 to 45 percent below 2012 levels by 2025. Key among the Obama-era initiatives were rules to control emissions from new and modified wells, and leaks and flaring on public and Native lands. That was just a start, as most oil and gas drilling is on private land, but it was an important move in the urgently needed direction.

For its part, the Trump EPA, led by former coal lobbyist Andrew Wheeler, fully admits that relaxing the oil and gas regulations will increase methane emissions and “may also degrade air quality and adversely affect health.” In Wheeler’s twisted view, that is fine because the relaxation of regulations will save industry $484 million in compliance costs. Wheeler conveniently ignores the fact that the Obama administration estimated that tighter regulations for new and modified gas production sites would have yielded climate benefits of $690 million by 2025, compared to compliance costs of $530 million.

As an exclamation point, Trump recently signed an executive order making it harder for states to block new gas pipelines and the administration is proposing prison sentences of up to 20 years for pipeline protestors who attempt to block construction. At about the same time, Under Secretary of Energy Mark Menezes said the expansion of a liquid natural gas export facility in Texas was an act of  “spreading freedom gas throughout the world.”

When spreading “freedom” abroad coincides with locking up protestors at home, we are at a new draconian moment in energy policy. As dismissive of science as Trump has been on carbon dioxide, his madness on methane threatens to create a prison of greenhouse gas no one can escape.

Photo: public domain

The US Supreme Court Turns its Back on Democracy

UCS Blog - The Equation (text only) -

Photo: Wikimedia

I will always remember where I was yesterday—the day the United States Supreme Court turned its back on democracy. I was in Birmingham, Alabama, ground zero in the fight for racial justice and democracy, on a tour through the South to speak to churchgoers about climate change. And it was there I learned that the court had issued a decision that will go down in history as one of its most dreadful and infamous, on par with the Plessy v. Ferguson ruling that sanctioned the Jim Crow laws that African Americans and others later opposed so bravely in Birmingham.

In its ruling, the Supreme Court held that federal courts can do nothing about extreme partisan gerrymandering, in which incumbents draw voting districts to ensure that the party in power stays in power indefinitely. With this decision, the court sanctioned an insidious practice in which politicians pick their voters, rather than the other way around.

The two cases before the court clearly demonstrate the egregious nature of extreme gerrymandering. In the North Carolina case, the state legislature and governor enacted a redistricting plan designed to ensure that ten of North Carolina’s thirteen congressional seats went to Republicans, even though Democratic candidates typically receive about half of the votes statewide. The architect of the redistricting plan was crystal clear about the intent:

We are “draw[ing] the maps to give a partisan advantage to 10 Republicans and 3 Democrats because [I] d[o] not believe it[’s] possible to draw a map with 11 Republicans and 2 Democrats.” “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

In Maryland, the mapmaker working on behalf of the Democratic-led legislature and governor was given two instructions: make sure that seven of Maryland’s eight congressional districts would be held by Democrats, and that all incumbents were protected.

Both of these gerrymandering efforts, aided by advanced computing technology and granular voting data, worked exactly as planned.

The lower courts struck them both down, and the Supreme Court took the cases ostensibly to decide when partisan gerrymandering goes so far that it violates the equal protection clause of the 14th amendment, which “guarantees the opportunity for equal participation by all voters in the election” of legislators.

Despite the long line of precedents striking down racial gerrymandering and requiring one-person, one-vote when drawing districts; despite the obvious way in which extreme gerrymandering denies voters the right to elect representatives of their choice; and despite the fact that the political system cannot be expected to remedy itself, the court essentially threw up its hands and claimed that it was beyond judicial capability to draw the line between permissible and impermissible gerrymandering.

Not only is this rationale utterly unbefitting of the goals and grandeur of the US Constitution and the historic role the federal courts have played in enforcing it, it is simply incorrect, as my colleague Michael Latner points out.

As the dissenting opinion forcefully puts it, “for the first time ever, this Court refuses to remedy a constitutional violation because it thinks the task beyond judicial capabilities. And not just any constitutional violation. The partisan gerrymanders in these cases deprived citizens of the most fundamental of their constitutional rights: the rights to participate equally in the political process, to join with others to advance political beliefs, and to choose their political representatives.”

What we must do

If there is any silver lining to this blatant judicial abdication, it is this: voters now have no choice but to demand a political remedy to this problem. They can do this at the voting booth by supporting candidates committed to drawing fair district lines. They can support ballot initiatives that take the power of districting out of the hands of elected representatives (in states that allow it). And they can support democracy reforms at the federal level, such as HR 1, which requires states to establish independent commissions to draw district lines.

Unless and until this happens, we can continue to expect a government that is unresponsive to the will of the voters. This means gridlock and excessive partisanship on all of the issues of our time, including issues that UCS focuses on—climate change, nuclear weapons, food sustainability—as well as so many others such as immigration, gun control, and health care.

In Birmingham, those who fought for civil rights had the US Constitution and the federal courts on their side. Tragically, in today’s battle for democracy, we do not. It means we simply must fight harder.

Photo: Wikimedia

It’s Time to Stop Ignoring the Climate Change Threat to World Heritage

UCS Blog - The Equation (text only) -

Photo: Dan Broun

The World Heritage list comprises more than 1,000 of our planet’s most important natural and cultural heritage sites, but from the ancient city of Venice to the forests and rivers of Yellowstone National Park, these extraordinary places are increasingly vulnerable to climate change. The 187 governments which have ratified the World Heritage Convention have promised to take action to address climate threats to these sites, but as with the same countries’ Paris Agreement pledges, progress to date has been far too slow.

As an unprecedented early summer heatwave grips parts of Europe, and the worst wildfires in 20 years rage out of control in Spain, the annual meeting of the World Heritage Committee is beginning in Baku, in oil-rich Azerbaijan, on the shores of the Caspian Sea. The meeting comes in the wake of the recent wake-up call issued by the Intergovernmental Panel on Climate Change (IPCC) with its Special Report on 1.5°C.

The IPCC described how much worse a 2°C world is likely to be than one in which global temperature rise is limited to 1.5°C. For example, a further decline of 70-90 percent in coral reefs is expected even at 1.5°C, but with warming of 2°C, a shocking 99 percent of coral is expected to be lost.

There are 29 World Heritage reefs, including Australia’s Great Barrier Reef, the Belize Barrier Reef, and, in the US, the Papahānaumokuākea Marine National Monument in the Hawaiian archipelago. According to a 2017 UNESCO analysis, coral in 21 of the 29 properties (72 percent) experienced severe or repeated heat stress during the previous three years.

Papahānaumokuākea Marine National Monument, a US World Heritage site, provides sanctuary for Galapagos sharks, coral reefs and an extraordinary array of marine biodiversity and Pacific cultural heritage. Photo: Courtney Couch/HIMB

Worsening wildfires and a global glacier meltdown

On land, the heat and fires gripping Europe this week are another sign of a shift in climate conditions and a “new normal” that will soon bring larger and more intense wildfires to many fire-prone parts of the globe.

For example, devastating wildfires in Australia’s Tasmanian Wilderness World Heritage Area came on the back of dramatic heat and drought in 2016, severely damaging unique fire-sensitive alpine and rainforest ecosystems. Fires hit again in 2019, endangering areas of slow-growing forests including King Billy pines, some of which are 1,000 years old.

As in Tasmania, one of the biggest threats to the Cape Floral region of South Africa with its extraordinarily rich plant endemism is the increased frequency and intensity of fires. In the US, western fire seasons have gotten at least 7 weeks longer since the 1970s, and we are seeing more large fires.

World Heritage glaciers too are under threat. According to a new study from IUCN, glaciers will completely disappear from many World Heritage sites within 80 years if current rates of greenhouse gas emissions continue unabated. Of the 19,000 glaciers surveyed in 46 World Heritage properties (about 9% of the approximately 200,000 glaciers worldwide), almost two-thirds could be lost.

To be inscribed on the World Heritage List, a protected area must demonstrate Outstanding Universal Value (OUV) under at least one of ten criteria. World Heritage sites listed wholly for the value of their glaciers include the Swiss Alps Jungfrau-Aletsch and the transnational site that includes Glacier Bay and Wrangell/St. Elias national parks in the US and Canada’s Kluane National Park.

Ancient sites of the Mediterranean at risk

Yet more bad news comes from a 2018 study published in Nature Communications, which looked at the risk from sea level rise and coastal erosion for 49 cultural World Heritage properties situated on low-lying coasts of the Mediterranean. The analysis showed that 96% of the sites would be at risk by the end of the century, and most of them are already vulnerable.

Among the sites already at the highest risk today are the Early Christian Monuments of Ravenna (Italy), the Kasbah of Algiers (Algeria), Tyre (Lebanon) and Délos (Greece). On the island of Délos – the mythical birthplace of the Greek god Apollo and a center of Greco-Roman culture – sea level rise is pushing salt water up through the porous limestone substrate and damaging stonework and marble in this remarkable archaeological World Heritage property.

Flooding and coastal erosion aren’t the only climate threats to the Mediterranean. As elsewhere, wildfires driven by heat and drought are increasing in the region, putting at risk World Heritage sites including the Old Town of Corfu and the monasteries of Mount Athos (Greece) and Doñana National Park in Spain.

Researcher Stéphanie Maillot shows the effects of salt water intrusion on 2,200 year old artefacts at the Délos World Heritage site in Greece. Photo: Andrew Potts.

Time for the World Heritage Committee to take action

Despite the clear and present danger that climate change represents for World Heritage sites across the globe, the World Heritage Committee has not responded to the scale or urgency of the problem. For example, if a site comes under local or regional threat from, say, mining, a hydroelectric project or uncontrolled urban development, it can be added to the List of World Heritage in Danger, with the sanction of being taken off the list if the problems are not urgently addressed.

However, no similar mechanism exists for climate change. Nor is there even any formal requirement under the World Heritage Committee’s current Operating Guidelines to assess climate risk or propose resilience or adaptation measures to address these risks when nominating a new site to the World Heritage list.

Union of Concerned Scientists (UCS) will be at the Baku meetings this week, working with partners including the International Council on Monuments and Sites (ICOMOS), World Heritage Watch, Historic Environment Scotland and Australia’s James Cook University to propose new strategies and mechanisms by which the World Heritage Committee could effectively address climate change. The proposals include the adoption of a Climate Vulnerability Index (CVI) for World Heritage properties.

 

Dan Broun

LGBQT+ Representation in Science 50 Years After the Stonewall Riots

UCS Blog - The Equation (text only) -

RuPaul Andre Charles in drag

I have kept a photo of RuPaul Charles, America’s most famous drag queen, in my home since I came out as a gay man 14 years ago. Black, gay, and poor—the odds were stacked against RuPaul ever becoming a success. Yet, he’s now a multi-million-dollar superstar drag queen celebrity running a successful business. There’s possibility everywhere. 

Or is there? 

During pride month for each of the past three years, I have written a blog about LGBQT+ (Lesbian, Gay, Bisexual, Queer, Transgender) representation in the fields of science, technology, engineering, and mathematics (STEM). The question of LGBQT+ representation in STEM was born from a conversation with another gay graduate student during my Ph.D. tenure.

We searched for data on LGBQT+ representation in STEM careers, but nothing came up. We couldn’t find out how many other people like us were in STEM. I remember this was disheartening because it gave us no sense of community in our own work. 

Today, it’s still difficult to find data on LGBQT+ representation in STEM fields. Why? It’s certainly difficult to collect – sexual identity is fluid and some don’t want to divulge that information, even if a survey ensures anonymity. But more importantly, studies on the issue are scarce.

The studies that do exist do not paint a pretty picture for the LGBQT+ community in or seeking a career in STEM fields. For example, even though LGBQT+ students are more likely to get involved in undergraduate research (often a good predictor of undergrads progressing into STEM careers), they do not remain in these STEM fields. Depending on the study you read, LGBQT+ individuals are 17-21% less represented in STEM fields than expected.  

Representation is just the first issue. LGBQT+ individuals that are out at work often report higher rates of negative workplace experiences compared to LGBQT+ individuals in other fields of work and their straight counterparts. This is due, in part, to LGBQT+ STEM professionals being harassed or witnessing harassment. Additionally, heteronormative standards continue to exist in most places of work resulting in unconscious biases that can create less-than-ideal work environments for LGBQT+ people (e.g., lack of acknowledgement of preferred gender pronouns).

How do we solve these problems? Well, there is a lot that we have to do, and it’s going to take long systemic change. However, one of the first steps that needs to be taken is for our government to recognize the plight of LGBQT+ individuals in STEM fields. This will require some data collection on LGBQT+ representation in STEM fields—something that I advocate the National Science Foundation (NSF) measure every year that I’ve written a pride month blog. 

I advocate that the NSF undertake this research because it already conducts research regarding representation of individuals from minority populations in the STEM workforce. The NSF has compiled detailed statistics about women, underrepresented minorities, and the prevalence of various disabilities among US researchers and STEM students, but no measurements exist for those who identify as LGBQT+. The agency might consider doing this in exit surveys of PhD students, for example. Policies to ensure a diverse scientific workforce, something the NSF regards as important, cannot be developed without data. It’s time that they started collecting it, and there are some signs that maybe they will begin such efforts. 

The Federal Committee on Statistical Methodology (FCSM) began a research group to address measuring sexual orientation and gender identity – its first background report was published in 2015. The group identified federal agencies that have previously measured sexual orientation and/or gender identity, and NSF is conspicuously missing from that list. The FCSM research group also identified research needs, but LGBQT+ representation and retention in STEM fields is not among them.

Science informs policies that keep the public safe and healthy, often those who are most disadvantaged. But it is difficult to develop policies to fix issues that plague disadvantaged communities when data doesn’t exist. The LGBQT+ community needs NSF and other federal agencies to step up and start collecting data on our representation and retention in STEM fields. There is enough research outside of the government to show that there is a problem and that more research is needed. 

Yes, there is possibility—anyone can grow up and become RuPaul, but there has to be room for them to do it. Right now, we have no idea whether such room exists for the LGBQT+ community in STEM, although the little research that has been conducted suggests the room available is limited and difficult to thrive within.

It’s been 50 years since Stonewall. I think it’s time that we make room for LGBQT+ folks in STEM. Don’t you?

Trump Opens Door to Renegotiating Controversial Okinawa Base Deal

UCS Blog - All Things Nuclear (text only) -

The Okinawa dugong will be evicted from its island home if the deal on a new military base struck by President Obama proceeds as planned. President Trump suggested he wants to renegotiate it.

Bloomberg News reported that President Trump “regards Japan’s repeated efforts to move a large military base in Okinawa as sort of a land grab and has raised the idea of seeking financial compensation.” The New York real estate mogul said the land the United States military is vacating “could be worth about $10 billion.” He feels it belongs to the United States. It doesn’t.

But that’s exactly how the US military feels about its bases in Okinawa. These sentiments are rooted in the brutal battle to take the island at the end of World War II that cost 12,520 American lives. The US military wanted to keep it indefinitely. Japanese public protests led the government in Tokyo to negotiate the return of Okinawa to Japan in 1972.

The base Trump was talking about is in the heart of a densely populated urban area, and continuing to operate it is dangerous for US military personnel and the people living nearby. But Obama’s Department of Defense would only agree to close the facility if it got a new one in return. The Japanese national government agreed to build a new base at Henoko, in the north of Okinawa, and pay the construction costs. The land under the old facility would be given back to the people who owned it before the US military appropriated it in 1945 to build the base.

Outsiders might think the people of Okinawa would be happy. They aren’t. If you lived on the island you might see things their way. The US military occupies about 18% of the land on the islands that make up Okinawa prefecture. The land mass of Okinawa is only 0.6% of Japan’s total, but it accounts for 74% of all the land occupied by US military bases in Japan. Just over half of all US military personnel in Japan are stationed in Okinawa, which bears a disproportionately heavy share of the cost of the US military presence in Japan.

Trump knows real estate. He’s right when he says the land occupied by the military base would be far more valuable in private hands, where it could be used to develop Okinawa’s economy, which is the poorest, by far, of any region of Japan. Military base-related revenue is a paltry 4.9% of the island’s gross income and provides only 1.4% of the island’s jobs.

Okinawans want the dangerous old base in the middle of the city closed and the land returned. They’d be happy to do the same with all the military bases on the island. The inconveniences of living on a tiny island with an enormous military footprint are too numerous to mention but there is one that deserves particular attention. Generations of Okinawan children have grown up hearing and learning impaired from the constant and literally deafening roar of the military aircraft that take off and land at those bases. The horrible sound of it all also depresses tourism—the mainstay of Okinawa’s economy—on what would otherwise be a tropical paradise.

Okinawans are not only unhappy with Obama’s deal but they’re incredibly angry about the way they were treated. Nobody asked them what they thought should be done.  They don’t think it is fair that Okinawans should be forced to accept the construction of a new base on an island already packed with them.  And Obama could not have picked a worse place to build it. The construction of the new base will destroy one of the most beautiful and biodiverse areas of the island, which contains a precious coral reef that is home to a number of beloved and endangered species.

Obama’s team tried to sell the deal to the local population with the fiction that they were just moving the old base from a bad spot to a better one. Okinawans didn’t buy it. In a recent referendum on the new construction, more than 70% voted to stop it. The current governor, the former governor and a majority of the prefecture’s elected officials have used every legal means at their disposal to try to stop the base from being built. Elderly villagers have laid their bodies down in front of enormous earth-moving vehicles to slow the construction.

One hope for the people of Okinawa is for the US Congress to acknowledge their basic human right to have a say in the matter and pull the plug.  Another is for President Trump to sit down with Okinawa’s Governor Denny Tamaki and cut his own deal.  Tamaki may be open to considering the financial compensation Trump wants in exchange for stopping the construction of a new base more than 70% of his constituents don’t want.

Who Breathes the Dirtiest Air from Vehicles in the Northeast and Mid-Atlantic?

UCS Blog - The Equation (text only) -

Photo: frankieleon/Flickr

Most people know that the cars, trucks, and buses on our highways and city streets are a significant source of air pollution. While pollution from transportation affects all communities in the region to some degree, the people who face the greatest exposure are those who live near highways, along major freight corridors, and in urban areas.

To help understand exactly which communities bear the greatest burden and breathe the highest concentrations of this dangerous air pollution, we used a computer model to estimate the amount of fine particulate matter air pollution (known as PM2.5) created by on-road vehicles that burn gasoline and diesel. The findings, which are not likely to be a surprise to many residents, are quite troubling: they show that people of color disproportionately breathe dirtier air than white people do:

  • On average, Latino, Asian American and African American residents are exposed to more PM2.5 pollution from cars, trucks, and buses than white residents of the region. These groups are exposed to PM2.5 pollution levels 75, 73, and 61 percent higher, respectively, than white residents.
  • Almost one-fifth of the region’s 72 million people live in census tracts where PM2.5 pollution levels are more than one-and-a-half times the average of the state where they live; more than 60 percent of the residents of those tracts are people of color.
What is PM2.5 and why is it important?

The science is clear: no level of particulate matter is safe to breathe, says the American Lung Association. While fine particulate matter – referred to as PM2.5 – is not the only air pollutant that adversely affects health, it is estimated to be responsible for approximately 95 percent of the global public health impacts from air pollution. Breathing PM2.5  is linked to increased illness and death, primarily from heart and lung diseases.

These minuscule particles are only visible to the naked eye when their concentration in the air is high, such as when a truck belches black smoke. They include particles smaller than 2.5 millionths of a meter in diameter – at least 20 times smaller than the diameter of a fine human hair – so they can penetrate deep into the lungs. The ultrafine particles – smaller than 0.1 millionths of a meter – are particularly dangerous, as some can enter the bloodstream.

Chronic exposure to PM2.5 increases death rates attributed to cardiovascular diseases, including heart attacks and strokes, and has been linked to other adverse impacts such as lung cancer, reproductive and developmental harm, and even diabetes and dementia. In children, chronic exposure to PM2.5 has also been linked to slowed lung-function growth and the development of asthma.

PM2.5 is formed in many ways, and a significant source is fuel combustion. The engines of cars burn gasoline and diesel; power plants burn natural gas and other fuels to produce electricity; and wood is burned for cooking, in residential fireplaces, and in wildfires. To make things worse, burning fossil fuels and biomass not only produces PM2.5 directly: combustion also emits gases such as nitrogen oxides, sulfur dioxide, and volatile organic compounds that go on to form additional PM2.5 through complex chemical reactions in the atmosphere.

Because particulate matter forms in so many ways, you may wonder whether some particles pose more health risks than others. Indeed, particles can bind with bacteria, pollen, heavy metals, elemental carbon, dust, and other materials, and so have a broad range of effects on human health. But size is one of the most important factors, and PM2.5 is responsible for a very heavy burden of disease, disability, and death.

Greater pollution for people of color

We estimated exposure to PM2.5 pollution using a recently developed model from the University of Washington and data from the US Census Bureau. This model allows us to calculate how vehicle tailpipe and refueling emissions ultimately lead to ground-level pollution exposure, so we can understand how exposure to PM2.5 varies among groups and locations.
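
To make that concrete, here is a minimal sketch of a population-weighted exposure calculation of the kind this analysis depends on. The tract values and column names below are hypothetical, purely for illustration; they are not inputs or outputs of the University of Washington model.

    import pandas as pd

    # Hypothetical tract-level inputs: modeled on-road PM2.5 (ug/m3) and
    # population counts by group in each census tract.
    tracts = pd.DataFrame({
        "pm25":       [11.2, 6.4, 3.1],
        "latino_pop": [3200, 900, 150],
        "white_pop":  [400, 2100, 3600],
    })

    def weighted_exposure(df, pop_col):
        """Population-weighted mean PM2.5 for one group across all tracts."""
        return (df["pm25"] * df[pop_col]).sum() / df[pop_col].sum()

    for group in ["latino_pop", "white_pop"]:
        print(group, round(weighted_exposure(tracts, group), 2))

In this toy example, the group living disproportionately in high-pollution tracts ends up with roughly twice the weighted exposure, which is the basic mechanism behind the regional disparities reported below.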

The results are clear: the PM2.5 pollution burden from cars, trucks, and buses is inequitable across racial and ethnic groups in the region. Looking at the region as a whole, Latino residents are exposed to PM2.5 concentrations 42 percent higher than the regional average. Asian Americans and African Americans experience concentrations 42 percent and 40 percent higher, respectively, than the average resident (Figure 1). At the same time, white residents have an average exposure that is 19 percent lower than the average for the region. This means that, on average, Latino, Asian American, and African American residents are exposed to PM2.5 concentrations 75, 73, and 61 percent higher, respectively, than white residents.
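
The jump from “42 percent above the regional average” to “75 percent higher than white residents” is simple arithmetic on the two reported ratios; here is a quick check using the rounded figures in the text:

    # Latino exposure is 42% above the regional average; white exposure is
    # 19% below it. Their ratio gives the gap relative to white residents.
    latino_vs_avg = 1.42
    white_vs_avg = 0.81

    print(f"{latino_vs_avg / white_vs_avg - 1:.0%}")  # ~75%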

Figure 1. Disproportionately High Exposure for People of Color in the Northeast and Mid-Atlantic Note: This analysis uses the following US Census Bureau–defined racial groups: White; Black or African American; American Indian or Alaska Native; Asian; Native Hawaiian or Other Pacific Islander; Hispanic; Latino; and Some Other Race. In the chart above, Latino includes census respondents who select Hispanic, Latino, or both; Other Race includes respondents who select Some Other Race as their only race.

When we zoom in to the census tract level, defined as an area with approximately 4,000 people, pollution inequity is just as evident as the inequity we see at the regional level (Figure 2). In census tracts with low pollution and cleaner air (where average annual PM2.5 concentrations are less than half of the state average), whites make up 85 percent of the total population, although they constitute less than two-thirds of the total population in the Northeast and Mid-Atlantic. In contrast, more people of color live in census tracts where pollution is more than one and a half times the state average. In these areas, people of color constitute slightly more than 60 percent of the population, compared with about 35 percent of the regional population.

Figure 2. PM2.5 Unequal Exposure in Different Pollution Areas. Note: Each column refers to census tracts in areas with similar PM2.5 pollution concentrations. The columns show the fraction of people belonging to each of eight racial groups living in those areas. The least polluted areas are on the left and the most polluted on the right. The 0–50% area refers to census tracts where PM2.5 pollution is less than half the regional average, the 50–100% area refers to tracts where pollution is from half the regional average to the regional average, etc. The column at the far right shows the region’s racial composition.

We were also curious about the inequities in air pollution relative to income distribution, and found that exposure inequities are more pronounced between racial and ethnic groups than between income groups. Disparities based on income are not significant because the fractions of people in each income bracket are distributed fairly evenly over areas with different pollution levels.

Pollution also varies across the region

Of all the states in the region, New York ranks highest in average PM2.5 concentration from on-road vehicles, followed by Maryland, Delaware, and New Jersey, all of which have averages higher than the regional average. Pennsylvania holds a close fifth place (Figure 3).

Figure 3. PM2.5 Exposure Varies Greatly across the Northeast and Mid-Atlantic. Note: Metropolitan areas in the District of Columbia, Maryland, New Jersey, New York, Pennsylvania, and Rhode Island have many areas with PM2.5 pollution at least twice as high as the regional average. There is much variability between exposure in urban and rural areas of all states.

But averages can be deceptive, so looking at the range of PM2.5 concentrations within each state paints a clearer picture. Even where a state average is low, pockets of racial and ethnic inequity appear frequently in the analysis, showing that very high concentrations can afflict particular areas, many of them located near junctions of major highways.

For example, New York State has the census tracts with the highest PM2.5 concentrations in the entire region. These tracts are in the Bronx, Queens, and Manhattan. The Philadelphia area also has very high PM2.5 concentrations compared with the Pennsylvania average: pollution in the state’s dirtiest census tracts is more than three times as high as Pennsylvania’s average. On the other hand, Washington, DC, has a higher average than New York State’s because it is entirely urban – but the most polluted air in the District of Columbia is only about two-thirds the concentration of the most polluted areas in New York State.

New Jersey, New York, and Pennsylvania, the region’s three most populous states with a total of 41.4 million people, have among the highest PM2.5 averages in the region. In other words, almost 58 percent of the region’s population lives in states where the average pollution from on-road vehicles ranges from 94 percent to almost 150 percent of the regional PM2.5 average.

In New York State, one-third of the population experiences PM2.5 pollution levels that are more than 150 percent of the state average. Because New York is the region’s most populous state, this higher level of pollution affects 6.3 million people, almost 70 percent of whom are people of color. The most polluted census tract in New York State is in Morris Heights in the West Bronx, at the junction of Interstates 95 and 87. This neighborhood is 70 percent Latino and 29 percent African American – and only 0.2 percent white.

In Pennsylvania, while the state is 78 percent white, the areas where this pollution is less than half the state average are 93 percent white; the areas where it’s more than twice the state average are only 42 percent white. Even though the state’s average pollution level is slightly lower than the average for the entire Northeast and Mid-Atlantic region, the state has the second most polluted census tract in the region, just below the pollution level of the worst census tract in the region, in the West Bronx of New York City.

Massachusetts is another state where the state average can be deceptive. Residents of Suffolk County, where Boston is located, experience pollution levels almost twice as high as the Massachusetts average. In the state’s two most polluted census tracts, located in downtown Boston and encompassing Chinatown, the inequity is blatant: 70 percent of the population are people of color.

There are many such pockets of inequity throughout the region.

What is to be done?

Clearly air pollution from on-road transportation such as diesel and gasoline vehicles places significant, inequitable and unacceptable health burdens on residents of the Northeast and Mid-Atlantic. This inequity reflects decades of local, state, regional, and national decisions about transportation, housing, and land use. Decisions concerning where to construct highways, where to invest in public transportation, and where to build housing have all contributed to a transportation system that concentrates emissions in communities of color. In many cases, transportation policies have left those communities with inadequate access to public transportation, divided by highways, and exposed to air polluted by congested highways serving suburban commuters.

We have the tools and the technologies to transform our transportation system away from diesel and gasoline and toward clean, modern, equitable solutions. With targeted actions in electrification and clean fuels, the region can save more than $30 billion by 2050 and thousands of lives.

Electrification of vehicles, both passenger and freight, could greatly reduce emissions. Battery-electric and fuel cell vehicles have no tailpipe emissions, producing only minor amounts of PM2.5 from tire and brake wear, and they also eliminate emissions associated with refueling. Generating the electricity used to charge these vehicles can produce some emissions, which vary depending on where the vehicle is charged, but even on a coal-heavy grid they are lower than the emissions of an average gasoline car. In the Northeast and Mid-Atlantic, the Regional Greenhouse Gas Initiative, along with investments in solar, wind, and other renewable electricity resources, has greatly reduced emissions from electricity generation.

Significant new funding is necessary to expand access to clean transportation in these communities, as are strong regulations that limit transportation emissions and put a price on carbon pollution. This matters because the communities most affected by transportation pollution often have the fewest resources available to address it.

In December 2018, nine states in the region and the District of Columbia agreed to create a regional, market-based program that would limit transportation emissions and invest in clean transportation.  They plan to use funds raised from pollution permits to make strategic investments in clean transportation. States should seek input from communities disproportionately burdened by transportation pollution and ensure that equity is a key consideration in both design processes and future investment decisions.

Specific investments that could reduce inequities include:

  • Investments in electric transit and school buses, with a priority on serving communities exposed to the highest levels of gasoline and diesel emissions
  • Expansion of electric vehicle rebate programs to provide financing assistance and larger rebates to low- and moderate-income residents
  • Utility investments in electric vehicle charging infrastructure, with a priority on serving communities exposed to the highest levels of gasoline and diesel emissions
  • State programs that provide aid to municipalities to support clean transportation, with a priority on serving communities exposed to the highest levels of pollution

While residents of the region can make a difference by choosing cleaner vehicles and driving less, much of today’s air pollution comes from sources outside the direct control of individuals. States need regulations, incentives, and other policies to reduce vehicle emissions, with equity and the meaningful involvement of affected communities as key considerations in designing policies and strategies to reduce pollution from vehicles.

States need to continue to reduce emissions, placing a high priority on actions that reduce the inequitably distributed burden of air pollution in the Northeast and Mid-Atlantic. This analysis provides important quantitative evidence of the need for and importance of such programs, and it can help inform and shape future actions to reduce pollution exposure and environmental inequities in the region.

Photo: frankieleon/Flickr. Sources: US Census Bureau 2018; EPA 2014.

Electric Airport Shuttle Buses Are Taking Off

UCS Blog - The Equation (text only) -

Photo: Jimmy O'Dea

Today, California is expected to pass a standard that will transition airport shuttle buses to zero-emission battery and fuel cell electric vehicles.

While California established a standard for zero-emission transit buses last year, airport shuttle operators are distinct enough from public transit agencies that a different policy is fitting.

The shuttle bus standard covers an estimated 950 vehicles operating at 13 airports. Transitioning these buses to zero-emission technologies by 2035 will reduce global warming emissions by an estimated 35,000 metric tonnes CO2e per year, the equivalent of taking 7,400 of today’s cars off the road each year.
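
As a rough check on that equivalence: EPA’s commonly used figure is about 4.6 metric tonnes of CO2 per typical passenger car per year, and dividing the projected reduction by a per-car figure in that range lands close to the number quoted (the exact result depends on the per-car assumption the rulemaking used):

    # 35,000 metric tonnes CO2e per year divided by a typical car's annual
    # emissions (~4.6 t CO2, per EPA) gives a cars-off-the-road equivalent.
    annual_reduction_t = 35_000
    t_per_car = 4.6

    print(round(annual_reduction_t / t_per_car))  # ~7,600 cars; the standard cites 7,400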

The operational characteristics of shuttle buses (i.e., fixed, short routes and stop-and-go operation) are well matched to today’s electric vehicle technology. There are already 14 companies that make over 30 different models of electric buses ranging from large transit-style buses to small shuttle buses.

And airports are beginning to adopt these vehicles. One hundred electric shuttle buses are on order or operating at 9 of the 13 airports in California that will be covered by the standard. Notably, San Jose recently unveiled 10 electric shuttle buses and Los Angeles is expected to receive 20 electric buses soon. There are already 16 off-airport electric shuttles taking customers between LAX and a nearby parking garage.

“Why bother?”

Some might say that 950 vehicles is small compared to California’s 1.9 million heavy-duty vehicles. Or that 35,000 metric tonnes of emission reductions isn’t that much compared to the state’s annual 430 million metric tonnes of global warming emissions.

It may look like a small step in the right direction, but there are several reasons this policy—and others like it—can be big leaps.

First, if we’re going to reduce carbon emissions and inequitable exposure to air pollution by electrifying as many of the vehicles on the road as possible, we have to start somewhere, and airport shuttles are well suited to be early adopters of electric technologies. In fact, the policy for airport shuttle buses is even stronger than the one now in effect for transit buses: every airport shuttle bus purchased beginning in 2023 must be zero-emission, compared with 2029 for transit agencies.

Second, we usually don’t get big policy shifts without passing small policies first. The good news is that bigger policies—covering all categories of heavy-duty vehicles—are in the works. Even when bigger policies are in place, it often takes smaller policies to further strengthen them.

Third, shuttle buses have a lot in common with trucks. Just look at the two vehicles on the top. One carries passengers and the other carries packages, but otherwise they are the same vehicle.

The same goes for the shuttle bus and box truck on the bottom. They have the same business in the front, just different parties in the back.

What this means is that electrifying shuttle buses will increase the availability and market for all electric trucks.

Finally, heavy-duty vehicles disproportionately contribute to global warming and air pollution compared to cars. Buses and trucks are large vehicles with large engines that consume more fuel per mile than cars. Electric buses offer zero tailpipe emissions and 75 percent lower global warming emissions on today’s grid in California compared to diesel and natural gas buses.

Replacing just one diesel or natural gas bus with an electric vehicle has the same effect as eliminating the emissions from several cars. As mentioned above, this policy’s transition of 950 buses to electric technologies will have the same effect (from a global warming perspective) as taking 7,400 of today’s cars off the road each year.

Policy details

The standard applies to shuttle buses serving all 13 major airports in California: Los Angeles (LAX), San Diego (SAN), San Francisco (SFO), Burbank (BUR), Oakland (OAK), Ontario (ONT), Santa Ana (SNA), Sacramento (SMF), San Jose (SJC), Fresno (FAT), Long Beach (LGB), Palm Springs (PSP), and Santa Barbara (SBA).

The standard applies to both public and private airport shuttle buses, but only those with fixed routes less than 30 miles long. Types of vehicles falling under the standard include buses operating between airport terminals, rental car sites, off-site parking lots, or airport hotels. Door-to-door charter services, taxis, and ridehails (i.e., Uber and Lyft) are not included in this policy.

Under the standard, any airport shuttle bus purchased after January 1, 2023, must be a battery or fuel cell electric vehicle. Fleets must also achieve the following percentages of zero-emission vehicles on the road by these dates (a minimal sketch of this schedule follows the list):

  • 33 percent by December 31, 2027
  • 66 percent by December 31, 2031
  • 100 percent by December 31, 2035
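
For illustration only, here is that schedule expressed as a simple lookup; the function and names are hypothetical, not taken from the regulation’s text:

    from datetime import date

    # (deadline, required zero-emission fraction), per the list above
    MILESTONES = [
        (date(2027, 12, 31), 0.33),
        (date(2031, 12, 31), 0.66),
        (date(2035, 12, 31), 1.00),
    ]

    def required_zev_fraction(on: date) -> float:
        """Return the minimum zero-emission share a fleet must meet on a date."""
        required = 0.0
        for deadline, fraction in MILESTONES:
            if on >= deadline:
                required = fraction
        return required

    print(required_zev_fraction(date(2028, 6, 1)))  # 0.33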

With fuel and maintenance savings expected from electric vehicles compared to diesel, natural gas, and gasoline vehicles, as well as expected declines in vehicle purchase costs, the standard is estimated to save $30 million across the state from 2020 to 2040.

Significant state funding is available to incentivize early action before 2023, providing savings above and beyond the estimated $30 million. California’s HVIP voucher program, for example, provides $25,000 to $160,000 in funding for the purchase of battery electric shuttle buses (depending on the vehicle size) to offset higher purchase costs.

This is just the beginning

With policies only for transit buses and airport shuttle buses, many types of heavy-duty vehicles remain ripe for electrification. Nearly every truck operating in an urban setting with a local operating radius is suited for electrification today.

California is currently working to: a) set standards for manufacturers to make electric trucks and buses, and b) set standards for fleets beyond transit and shuttle buses to purchase these zero-emission vehicles, such as refuse trucks, delivery trucks, and port drayage trucks.

UCS supports the standard for zero-emission airport shuttle buses. It is the result of more than two years of public meetings and significant analysis of the airport shuttle bus industry.

No single policy will solve all of our air quality and climate problems, but progress is the sum of all things, airport shuttle buses included.

Photos: Jimmy O'Dea, Thomas R. Machnitzki, and MobiusDaXter

The Best School Lunch News You Never Heard

UCS Blog - The Equation (text only) -

This spring, the US Department of Agriculture (USDA) released a groundbreaking new study showing that kids and schools alike have benefited enormously from new school nutrition standards adopted over the course of the last seven years. This is the first comprehensive assessment of how schools across the nation have fared since the standards were first rolled out in 2012-2013.

But if you missed the press release, it’s because there wasn’t one.

The report, which should have served as a glowing testament to the bipartisan Healthy, Hunger-Free Kids Act of 2010 and the USDA’s subsequent work implementing its improved nutrition standards, was unceremoniously posted in a quiet corner of the agency website, presumably meant to collect the cyberspace equivalent of cobwebs.

Why?

For starters, the report undermines statements made by USDA secretary Sonny Perdue to justify his recent rollbacks to some of those same nutrition standards. When Perdue first announced the rule that knocked down nutrition standards for milk, sodium, and whole grains in school meals, he declared, “It doesn’t do any good to serve nutritious meals if they wind up in the trash can.” That would be true—except the new report by his own department’s researchers found that food waste was essentially unchanged after these nutrition standards were adopted.

And with an update of the law behind school nutrition regulations on the horizon, this new research comes at an inconvenient time for an administration that might like to see its evidence-based public health protections eroded. Congress is now beginning child nutrition reauthorization, a legislative process that sets nutrition standards for federal programs providing school lunches, breakfasts, snacks, and summer and after-school meals, as well as grocery staples for low-income women, infants, and children. Although these laws are supposed to be reauthorized every five years, it has been nearly ten years since the passage of the Healthy Hunger-Free Kids Act, the landmark bipartisan legislation championed by Michelle Obama that finally brought school nutrition standards up to speed with the science-based Dietary Guidelines for Americans. Given the recent USDA report and research that has been published in the interim, it’s going to be difficult for the Trump administration to ignore the data that shows just how successful these standards have been.

So—in lieu of a press release—we’re highlighting the report for you. Here’s what you should know about the findings from the 2019 USDA School Nutrition and Meal Cost Study.

School meals have gotten healthier

The new USDA report compares the healthfulness of school lunches and breakfasts before the new nutrition standards were implemented (school year 2009-2010) and after (school year 2014-2015). Using a tool called the Healthy Eating Index (HEI), a measure of diet quality that scores eating patterns on a scale of 0 to 100, the researchers found that scores significantly improved for both school lunches and breakfasts. The mean total HEI score for lunches increased 41 percent, achieving a score of 81.5 out of 100 after the new standards were adopted, while the mean total score for breakfasts increased 44 percent, achieving a score of 71.3 out of 100. This is good news for the 30 million students eating school meals daily, and is particularly important for kids from low-income and food-insecure families who rely more on school breakfast and lunch to meet their nutritional needs.
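
A back-of-the-envelope calculation shows what those increases imply about where school meals started, assuming the reported percentages are relative increases over the 2009-2010 scores:

    # Implied pre-standards HEI scores, given the post-standards scores
    # and the reported percentage increases.
    print(round(81.5 / 1.41, 1))  # lunches: ~57.8 out of 100 in 2009-2010
    print(round(71.3 / 1.44, 1))  # breakfasts: ~49.5 out of 100 in 2009-2010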

First Lady Michelle Obama championed stronger nutrition standards in the Healthy, Hunger-Free Kids Act of 2010.

Healthier meals are linked to higher participation

One of the most encouraging results from the report? Healthier school meals seem to go hand-in-hand with higher participation—in other words, when schools are serving healthier meals, more kids are buying or receiving them. According to its authors, “There was a positive and statistically significant association between student participation in the [National School Lunch Program] and the nutritional quality of [National School Lunch Program] lunches.” When the researchers ranked the healthfulness of school meals using HEI scores, they found that schools in the top half of HEI scores had student participation rates of about 60 percent, compared to only 50 percent at schools in the bottom half of HEI scores. This is good for kids, who don’t seem to be dissuaded from purchasing healthier school food, but it’s also good for schools, which rely on enough students buying meals to keep their budgets balanced.

Kids aren’t wasting more food (but they’re still wasting too much)

Despite the claims made by Secretary Perdue, kids aren’t wasting more food under the new nutrition standards. The study found that plate waste, a measurement of the food thrown away or not eaten at mealtime, was “comparable to findings from studies that examined plate waste prior to implementation of the updated nutrition standards.” This confirms the findings of a 2015 study from the Rudd Center for Food Policy and Obesity.

That being said, it’s clear that there’s still work to be done to help kids eat (and enjoy) more of the healthy foods that schools are working hard to prepare. The USDA study found that nearly one third of all vegetables served on lunch trays went to waste, followed by milk (29 percent), fruits and fruit juice (26 percent), and side dishes containing grains and breads (23 percent). The study also found that the timing of lunch periods was associated with plate waste: the percentage of calories wasted was much lower in lunch periods starting at 12:00 PM or later than for lunch periods starting before 11:30 AM, meaning kids may be tossing food in part because they’re not hungry yet.

Schools are generally meeting nutrition requirements

Based on previous reports, we already knew that a vast majority of schools are complying with school nutrition standards. According to the USDA, in 2016, more than 99 percent of schools nationwide reported that they were meeting standards for breakfasts and lunches. But a closer look at what’s in the serving line revealed that not all meals would qualify as reimbursable. The new report shows that more than 90 percent of daily lunch menus met quantity requirements for fruits, meats and meat alternatives, and milk, while about 80 percent met requirements for vegetables and grains. Similarly, more than 80 percent of school breakfast menus met daily requirements. However, many of the meals fell outside the calorie ranges specified for different age groups—the study found that elementary and middle school lunches often had too many calories, and high school lunches often had too few.

Preparing to champion child nutrition

This USDA report delivered a lot of good news for students and schools. And as Congress prepares to rewrite the legislation that guides our child nutrition programs, it could not have come at a better time for policymakers and public health advocates.

The USDA research provides critical confirmation of what independent studies have been suggesting for several years now: bringing our school nutrition programs into better alignment with the Dietary Guidelines for Americans is not only possible, but it can be profitable, too.

Important as this research is, public health advocates and science champions should be prepared to push for it to feature prominently in forthcoming policy discussions: the Trump administration has a demonstrated track record of burying studies it finds unpalatable—most recently in its refusal to publicize government-funded studies on climate change—and this one may be no exception.

Congress Must Lead with National Energy Standards to Save the Climate

UCS Blog - The Equation (text only) -

Photo: Leaflet/Wikimedia Commons

It’s well past time for a national standard for low-carbon electricity.  In order to avoid the worst impacts of climate change we must rapidly decarbonize our power sector while rapidly electrifying as much of the transportation, industry, and buildings sectors as possible.  That means adding a lot more carbon-free electricity generation as quickly as possible, and renewables are by far our cheapest option.  A national standard for low-carbon electricity is our best opportunity to accelerate clean energy deployment without costs to ratepayers or taxpayers.

With today’s introduction by Senator Tom Udall (D-NM) of the Renewable Electricity Standard Act (RES), the Senate now has two proposals that would create a national standard for low-carbon electricity generation.  Senator Tina Smith (D-MN) kicked off the conversation earlier this year with the introduction of the Clean Energy Standard Act (CES), which would decarbonize the power sector by mid-century using a variety of low and zero carbon sources.  The Udall proposal is focused on ramping up renewables over the next 15 years (in every state), putting the US on a trajectory to decarbonize the power sector by mid-century.

These two proposals take different approaches toward the same goal of transitioning all electricity generation to carbon free sources in a time horizon consistent with the best available climate science, and they do so while preserving the voluntary markets and without interfering with state policy.

If you aren’t supporting one of these bills, you are standing on the sidelines of clean energy progress.

How do they compare?

  • The Smith CES runs through 2050, while the Udall RES runs through 2035 with a requirement to consider revising and extending the policy to decarbonize the power sector by 2050.
  • The Smith CES credits both new and existing renewables, nuclear, and fossil energy with carbon capture and sequestration (CCS), while the Udall RES mostly credits new renewables like wind and solar, with very little existing generation.
  • The Smith CES credits energy technologies through a carbon intensity standard that ratchets down over time, while the Udall RES credits only renewables, at full credit.
  • Both bills focus on “growth rates” rather than an overall national percentage target by a certain year.
  • The Smith CES would ask utilities to grow by 2.75% every year until they hit 60% carbon-free electricity (then 1.75% until they hit 90%), while the Udall RES requires utilities to grow at 2% through the next decade and 2.5% through 2035, until they hit at least 60% renewable electricity (see the sketch after this list).
  • Both bills tier the growth rates, with smaller utilities required to grow at lower rates than large utilities (with retail sales over 2 million and 1 million MWh per year respectively).
  • The Smith CES allows states to opt-out when they reach 90% carbon free generation, while the Udall RES allows states to opt-out if they have 60% renewables, or if the state’s RES or CES requires growth at or above the federal floor-setting standard.
  • The Smith CES incentivizes innovative, first-of-its-kind-projects as well as firm, dispatchable generation through additional credits (1.5 credits declining over time), while the Udall RES incentivizes renewable energy development on Native American lands, and in environmental justice and coal communities (2 credits).
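
To see how the two growth-rate rules play out, here is an illustrative sketch applying them to a hypothetical utility starting at 30% qualifying generation in 2020. It deliberately ignores the tiers, credit multipliers, and opt-outs described above, so treat it as a cartoon of the bills rather than a model of them:

    def smith_ces_step(share):
        """Annual growth required under the Smith CES rules listed above."""
        if share < 0.60:
            return 0.0275
        if share < 0.90:
            return 0.0175
        return 0.0

    def udall_res_step(year, share):
        """Annual growth required under the Udall RES rules listed above."""
        if share >= 0.60:
            return 0.0
        return 0.02 if year < 2030 else 0.025

    ces, res = 0.30, 0.30  # hypothetical 2020 starting shares
    for year in range(2020, 2036):
        ces += smith_ces_step(ces)
        res += udall_res_step(year, res)

    print(f"2035 shares -- Smith CES: {ces:.0%}, Udall RES: {res:.0%}")

Under these simplified assumptions, the hypothetical utility lands at roughly 69% under the Smith CES and 65% under the Udall RES by 2035, consistent with the similar 2035 outcomes discussed below.
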
These policies deliver similar outcomes through 2035

Because the two analyses used different models and different assumptions about natural gas prices, comparing the Smith CES (analyzed with E4ST by Resources for the Future) and the Udall RES (analyzed with NREL’s ReEDS) won’t yield much clarity on the relative benefits of each policy (apples to oranges). But UCS modeled a 95%-by-2050 Clean Energy Standard in ReEDS last year with similar assumptions for natural gas prices, which is a good proxy and should give us a rough, apples-to-apples estimate of how the Smith CES shakes out compared to the Udall RES.

2018 UCS ReEDS Modeling

The graph above shows the projected electricity generation mix under four scenarios out to 2035, including a 95%-by-2050 Low-Carbon Energy Standard (LCES), better known as a CES. Under this policy the US would achieve 50% renewables by 2035, about the same as under the Udall RES. Electricity from coal is projected to be less than one percent by 2035 under either policy. However, electricity from natural gas is projected to be about 24% by 2035 under the Smith CES, relative to 36% under the Udall RES.

While the two policies deploy roughly the same amount of renewables by 2035, the location of those renewables would likely differ given the different policy designs. The Udall RES would likely distribute renewables more evenly around the country, with better penetration in laggard clean energy states.

But the Smith CES reduces slightly more power-sector emissions by 2035 (53% in the LCES analysis relative to 46% for the Udall RES). The primary reason is that it requires utilities to grow at 2.75% right off the bat, a steep pace, especially for states that are adding renewables at or below the national average of 1% annually. Another reason is that the Smith CES helps preserve the carbon-free generation of existing nuclear power plants and delivers about 7% of electricity from natural gas with CCS by 2035.

So, pick your policy. Both are affordable and have considerable economic, public health, and consumer benefits. Renewables certainly have public health and environmental benefits that technologies like nuclear and fossil with CCS do not; that’s an important consideration for equity, given that economically vulnerable communities are more likely to be exposed to pollution from electricity generation. But the Smith CES assures that our power sector is virtually carbon-free by mid-century, is designed to optimize for least cost, and preserves important existing low-carbon nuclear generation (which also means preserving jobs).

Do you prefer a near-term policy to give us a boost and put us on the right trajectory, or a long-term policy that locks us into a carbon-free power sector by mid-century?  The good news is that both policies get us on a pathway to avoid the worst impacts of climate change.

I say, the best policy is the one that can pass both chambers of Congress and get signed into law by the president.

Photo: Leaflet/Wikimedia Commons
