Combined UCS Blogs

The Way We Talk About Geoengineering Matters


Photo: NASA

Solar geoengineering describes a set of approaches that would reflect sunlight to cool the planet. The most prevalent of these approaches entails mimicking volcanic eruptions by releasing aerosols (tiny particles) into the upper atmosphere to reduce global temperatures – a method that comes with immense uncertainty and risk. We don’t yet know how it would affect regional weather patterns, or what its geopolitical consequences would be. One way we can attempt to understand potential outcomes is through models.

Models are representations of complex phenomena that are used to approximate outcomes. While they have limitations, they are an important tool to help scientists and decisionmakers understand potential futures based on scientific, technological, and policy changes. Given both the potential and the profound risks and uncertainties of solar geoengineering, we need more expansive modeling research on these techniques – not only to understand possible environmental impacts and risks, but political and social consequences as well.

Without this broader range of outcomes, messaging about solar geoengineering can lead to simplifications and mischaracterizations of its potential in the media. In spaces where public familiarity is low and risks are high, scientists and journalists are both responsible for capturing the nuance and complexity around geoengineering – only a full picture will enable an informed public debate.

How we use modeling must evolve

In the case of solar geoengineering, models offer the opportunity to examine questions on a global scale – a scale at which real-world experiments aren’t yet feasible or ethical. A small set of researchers has been examining the potential outcomes of solar geoengineering through modeling impacts for several years. This research has been valuable in gaining a deeper understanding of the possible consequences of deploying solar geoengineering. However, many of the scenarios analyzed have been under idealized, or “best case,” conditions – in other words, we’re not comprehensively looking at what could go wrong.

And as we all know too well, the real world rarely imitates the best-case scenario. An example that comes to mind is DDT. Developed as an insecticide, DDT was extremely effective at reducing mosquito populations during and after World War II. However, widespread use of the chemical led to massive environmental harm because its broader impacts were never thoroughly investigated before it was deployed at scale.

With more attention being paid to solar geoengineering, researchers need to explore a more meaningful range of deployment scenarios to understand risks and tradeoffs under a much broader set of conditions. Modeling is most helpful when used not just to predict a particular outcome under best-case conditions, but to learn about many possible futures. With climate change, researchers have studied technical, economic, and political narratives to capture a more realistic set of outcomes, and a similar strategy is needed for geoengineering. Only when research is done to know what can go wrong – in addition to what can go right – can we have a clearer idea of what the use of solar geoengineering could entail.

In other high-risk fields, we require a high level of investigation about what could go wrong. Military war gaming exercises are a prime example: simulations of best- and worst-case scenarios are conducted by the government to see how politics, military strategy, and potential outcomes could interact in a myriad of ways – all before real combat takes place. Just as it’s the responsibility of a carpenter to “measure twice and cut once,” generals and admirals investigate war scenarios in order to save lives and minimize collateral damage. Solar geoengineering merits the same level of analysis.

Messaging and media portrayals can be dangerously misleading

Despite the risks of oversimplification, an optimistic new study titled “Halving warming with idealized solar geoengineering moderates key climate hazards” was recently published in Nature Climate Change. Its authors – scientists at Harvard, Princeton, MIT, and the Georgia Institute of Technology – modeled a simplified proxy of solar geoengineering deployed to offset half of future climate change (estimated as a doubling of carbon dioxide).

They found that under these specific conditions there could potentially be a decrease in some climate impacts (such as temperature, water availability, and tropical storm intensity) across regions. However, in addition to other limitations, the study used an idealized solar geoengineering system in the model – in other words, simply turning down the sun without the use of a particular approach, like aerosols. This can be helpful to understand aspects of solar geoengineering, but without a technology in place, it’s not realistic to make assertions about who might be worse off since use of that technology would come with its own set of risks.
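
To make concrete what an “idealized” scenario means here (essentially turning down the sun rather than simulating aerosols), a zero-dimensional energy-balance sketch can show roughly how much incoming sunlight would need to be cut to offset half the forcing from doubled carbon dioxide. This is only a back-of-the-envelope illustration, not the study’s method; the solar constant, albedo, CO2 forcing, and climate sensitivity values below are rough textbook numbers, not parameters from the paper.

```python
# Back-of-the-envelope sketch of the "turn down the sun" idealization.
# All values are rough textbook numbers, not parameters from the study.

S0 = 1361.0      # solar constant, W/m^2 (approximate)
ALBEDO = 0.3     # planetary albedo (approximate)
F_2XCO2 = 3.7    # radiative forcing from doubled CO2, W/m^2 (approximate)
LAMBDA = 0.8     # climate sensitivity parameter, K per W/m^2 (illustrative)

def solar_reduction_needed(forcing_to_offset):
    """Fractional cut in incoming sunlight that offsets a given forcing.

    Globally averaged absorbed sunlight is S0 * (1 - albedo) / 4, so a
    fractional reduction f in S0 changes the energy balance by
    f * S0 * (1 - albedo) / 4.
    """
    return forcing_to_offset * 4.0 / (S0 * (1.0 - ALBEDO))

# Offsetting *half* of the CO2-doubling forcing, as in the idealized scenario:
half_forcing = F_2XCO2 / 2.0
fraction = solar_reduction_needed(half_forcing)
print(f"Sunlight reduction needed: {fraction:.2%}")            # roughly 0.8%
print(f"Warming offset: about {LAMBDA * half_forcing:.1f} K")  # roughly 1.5 K
```

Even in this toy calculation, nothing about aerosols, their distribution, or their side effects enters the arithmetic, which is exactly the limitation discussed above.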

With a lack of realistic exploration of solar geoengineering, the messaging behind the technology led to overstated conclusions and mischaracterizations of its impacts in the media. While the authors were upfront about their use of an idealized scenario in the title of the journal article, some media stories focused on the benefits of solar geoengineering with limited discussion of the modeling constraints. Researchers must be responsible for putting their results in the context of their overall significance. Failing to do so led to many article headlines framing the study as having much broader implications than merited. Some of these include:

  • “The Side Effects of Solar Geoengineering Could Be Minimal” – WIRED
  • “The case for spraying (just enough) chemicals into the sky to fight climate change: A new study says geoengineering could cut global warming in half — with no bad side effects.” – Vox
  • “Upon reflection, solar geoengineering might not be a bad idea” – The New York Times (subscription required)
  • “Radical plan to artificially cool Earth’s climate could be safe, study finds” – The Guardian
  • “Solar geoengineering could offset global warming without causing harm” – Axios

In an era characterized by 280-character tweets, headlines matter. These oversimplifications from reputable news organizations do a disservice to geoengineering discussions. If readers moved past the headlines, they’d find that while journalists and authors often qualified the findings, there were extremely mixed messages about the real meaning of these results. Just as importantly, we need studies that characterize a more realistic range of scenarios. As a newly emerging topic for public debate, it is crucial that solar geoengineering be presented accurately. False impressions will only harm us when society needs to make critical decisions on how to approach it.


Make Electric Vehicle Rebates Available at the Point of Purchase


New legislation proposed in Massachusetts would take a critical step toward making electric vehicles (EVs) affordable for more consumers by offering rebates at the point of sale.

While Massachusetts offers rebates for electric vehicles through its “MOR-EV” program, it does not currently offer them at the point of purchase. Instead, customers who buy an electric vehicle must fill out an application identifying the VIN, the purchase details, the dealership, and the salesperson. If there is still funding available when the purchase is made (and the program is constantly on the verge of running out of funding), the state sends the applicant a rebate check up to 120 days later.

Further, beginning in 2019, MOR-EV rebate levels were cut to just $1,500 for battery electric vehicles and $0 for plug-in hybrids. Massachusetts has been forced to cut rebate amounts because the state has not developed a sustainable funding source for MOR-EV. Even with the cutbacks, the program is set to run out of funding in June. Given the central role of EVs in achieving the state’s climate limits, this is a critical issue that the legislature must deal with immediately.

A budget amendment proposed by Representative Jonathan Hecht would address these problems by creating a new instant rebate for low- and moderate-income consumers. In addition, the Hecht amendment would restore MOR-EV rebate amounts to their 2018 levels ($2,500 for battery electric vehicles and $1,000 for plug-in hybrids). Taken together, Rep. Hecht’s legislation would make EVs a viable choice for most new vehicle purchasers.

For example, under the Hecht proposal, a middle-class customer interested in a Chevy Bolt at Quirk Chevrolet through Green Energy Consumers Alliance’s Drive Green Program might be able to lease the vehicle with no money down, at an effective rate of $150 per month on a 36-month lease. That is a great deal for a great car that will improve our environment, our public health, and our economy.

We need to make EVs affordable for more drivers

MOR-EV is an important program. Its goal of encouraging the electric vehicle market, so that economies of scale would improve quality and reduce price, remains well founded. Yes, many of the direct beneficiaries are early adopters, tech enthusiasts and people with high incomes. But those initial investments have driven down costs and made these vehicles more accessible.

Today, the challenge facing EVs is how to bring the technology to all drivers. Analysis conducted by the state agencies demonstrates that widespread electrification is necessary to hit the requirements of Massachusetts’s important climate law, the Global Warming Solutions Act. Passenger vehicles are responsible for over 20 percent of global warming emissions in the state. The Comprehensive Energy Plan requested by Governor Charlie Baker and conducted by the Executive Office of Energy and Environmental Affairs looked at several potential scenarios for meeting the state’s climate limits for 2030. It found that even in the least aggressive scenario, electric vehicles will have to make up 2 of every 3 passenger vehicles sold in Massachusetts by 2030. In the most aggressive scenario, electric vehicles are 7 of every 8 new vehicles sold!

A program that requires consumers to wait months before they receive their rebate is inadequate.

Many states offer EV rebates at the point of purchase

In contrast, most states that offer rebates for electric vehicles do so at the point of purchase. Most also offer larger total rebate amounts. The Delaware Clean Vehicle Rebate program offers rebates of $3,500 for consumers who purchase through participating dealerships at the point of purchase. Auto dealers who participate in Connecticut’s CHEAPR program or New York’s Drive Clean Rebate, both of which offer $2,000 for a battery electric vehicle, likewise do all the paperwork behind the scenes, giving Connecticut and New York consumers an immediate incentive without any paperwork. Colorado’s alternative fuel tax credit of up to $5,000 for a battery electric vehicle can be claimed by financing institutions at the point of purchase. New Jersey exempts EVs from the state’s sales tax, which effectively provides thousands in savings at the point of purchase.

California does not offer rebates at the point of purchase, although the state is working on pilot projects to preapprove income-eligible EV purchasers. However, California does offer much larger incentives for low- and moderate-income residents. California’s Clean Vehicle Rebate Program offers a rebate of up to $4,500 for the purchase or lease of a battery electric vehicle to low-income consumers statewide. People who live in the San Joaquin Valley or within the South Coast Air Quality Management District are further eligible for incentives to trade in an older, high-emissions car or truck for an electric vehicle or hybrid; taken together, these incentives “stack” to up to $14,000 for low-income consumers. California is also exploring providing financing assistance to low-income consumers.

Data from the Center for Sustainable Energy confirms that states such as New York and Connecticut that have introduced rebates at the point of sale do significantly better in stimulating the market for low- and moderate-income customers than Massachusetts.

Mass Save for vehicles

Making electric vehicle rebates available at the point of sale is one particularly obvious step toward bringing this technology to all consumers. But we need to figure out a larger and more comprehensive approach to vehicle electrification. The decision to purchase an electric vehicle can be complicated. It requires the consumer to consider a number of issues, from long-term cost savings to charging infrastructure to access to off-street parking. We need a program that will address multiple obstacles to vehicle electrification and help the consumer through the process of understanding this technology and making a purchase.

We have a great model for how to do that in the Bay State. It’s called the Mass Save program.

Thanks to Mass Save, all Massachusetts residents can get a free Home Energy Assessment. As part of that assessment, a person comes to your house, explains your options, and walks you through the incentives and programs available to support you. Mass Save also combines direct, upfront rebates with financing assistance, offering zero-interest loans for technologies such as heat pumps, insulated windows, and solar water heaters. Several programs provide greater incentives to low-income residents – or provide efficiency technologies to them for free. Mass Save is a big part of the reason why Massachusetts has been consistently rated the most energy-efficient state in the country, saving consumers hundreds of millions of dollars per year on their energy bills.

Mass Save is an awesome program because Massachusetts has devoted real resources to it from multiple dedicated funding streams. Massachusetts’ Three-Year Energy Efficiency Plan calls for $772 million in energy efficiency funding through Mass Save in 2019. MOR-EV, by contrast, has a 2019 budget of $8 million, which is projected to last the state through June. Nobody knows how the state will fund EV incentives in July. It is very difficult to build a bold or comprehensive program that addresses multiple barriers to EV adoption when MOR-EV is constantly on the verge of running out of money.

We need to do better than this, and we can. Representative Hecht’s budget amendment would represent a good step towards making MOR-EV a program that works for all consumers. We encourage the legislature to work with the Baker administration to make point-of-sale rebates for low- and moderate-income customers a priority, and to provide the kind of sustainable funding source that can allow our EV programs to reach a lot more consumers.


SNAP Rule Change Would Disproportionately Affect Trump Country


Photo: Frank Boston/Flickr/CC by SA 2.0

USDA Secretary Sonny Perdue has signaled he may be having second thoughts about a proposed rule that could force 755,000 work-ready adults off the Supplemental Nutrition Assistance Program (SNAP). The rule, which would restrict states’ ability to waive benefit time limits for adults struggling to find work, has faced substantial backlash since it was announced in late December.

Last week, at a House Agriculture Appropriations Subcommittee hearing, representatives raised concerns about the diverse population of adults the SNAP rule would affect, which includes veterans, homeless individuals, college students, young adults aging out of foster care, those with undiagnosed mental and physical ailments, and those not designated as caregivers, but who have informal caregiving roles. These individuals can be characterized as “able-bodied adults without dependents,” even though many face significant barriers to employment. The conversation prompted Secretary Perdue to respond that this definition “may need some fine-tuning.”

Indeed.

But is Secretary Perdue’s statement also a response to the dawning reality that many of the people and communities who would be affected by the rule change are the same who helped elect President Trump to office? If it’s not—maybe it should be.

SNAP proposal would disproportionately hurt counties that voted for Trump

I’ve written previously about how the administration’s proposed changes to SNAP would make it harder for unemployed and underemployed adults to put food on the table—and why that’s bad policy for all of us. According to new UCS analysis, the proposed rule would cause 77 percent of all US counties currently using waivers to lose them—a total of 664 counties from coast to coast. And my colleagues and I have crunched the numbers to show who would be hurt most. Layering in data from the 2016 election results, we found that more than three-quarters of the counties that would lose waivers went to then-candidate Trump in the presidential election. In total, that’s more than 500 counties (over half of them rural) that put their faith in a president who promised to bring prosperity to every corner of the country, and isn’t delivering.
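
The “layering” described here is, at its core, a county-level join of two datasets: the list of counties that would lose waivers and county-level 2016 presidential results. As a purely hypothetical sketch of that calculation (the file names and column names below are placeholders, not the actual UCS analysis or its data sources):

```python
import pandas as pd

# Hypothetical inputs; file and column names are placeholders.
waivers = pd.read_csv("counties_losing_waivers.csv")  # columns: fips, county, state
votes = pd.read_csv("county_presidential_2016.csv")   # columns: fips, trump_votes, clinton_votes

# Join the two datasets on the county FIPS code.
merged = waivers.merge(votes, on="fips", how="left")
merged["won_by_trump"] = merged["trump_votes"] > merged["clinton_votes"]

count = int(merged["won_by_trump"].sum())
share = merged["won_by_trump"].mean()
print(f"{count} of {len(merged)} counties losing waivers ({share:.0%}) voted for Trump in 2016")
```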

While the administration has boasted of low unemployment rates and high job creation during its tenure, these national figures belie the persistent need that still plagues an overwhelming number of communities. Since the 2008 recession, labor force participation has dropped, wages have remained stagnant, and hunger remains widespread: food insecurity rates in 2017 were still higher than pre-recession levels. Relying on unemployment data alone to determine whether states can receive waivers—particularly at the threshold specified in the rule—ignores critical considerations about what’s actually happening in communities, and why states are best suited to assess their populations’ needs.

Below are snapshots of three counties from around the country that would lose waivers under the proposed SNAP rule. Although each is unique, they are all difficult places to find stable employment—and they all voted for President Trump in 2016.

  • Murray County, Georgia, a mostly rural area on the state’s northwest border, had a population of 39,358 in 2016. In this mostly white county (83.7 percent in 2016), the 24-month unemployment rate between 2016 and 2017 was 6.8 percent, nearly three percentage points higher than the national rate, and the poverty rate was 18.8 percent, which is 34 percent higher than the US poverty rate. Manufacturing employed the largest share of workers in the county (38.5 percent), and recent reports indicate that Murray County’s unemployment has ticked up slightly, even though Georgia’s urban areas are seeing job growth.
  • Trumbull County, Ohio, is on the eastern border of the state, with a population of roughly 200,000 and a 24-month average unemployment rate of 6.8 percent from 2016 to 2017 and a poverty rate of 17.5 percent. Just over one in five workers here are employed in manufacturing. In fall 2018, GM announced that it would close its Lordstown assembly plant in Warren, OH.
  • Butte County, California, is a mostly urban county with roughly 220,000 residents in 2016. The county is home to a diverse set of organizations and businesses, including California State University Chico, United Healthcare, and Pacific Coast Producers (a cannery cooperatively owned by over 160 family farms in Central and Northern California), to name a few. Butte is also home to Paradise, a town severely impacted by the Camp Fire in 2018. The average unemployment rate in Butte County was 6.5 percent for the most recent 24-month period, and 3 percent of the population lived in poverty in 2017.

Although the comment period for the proposed SNAP rule closed on April 10, Secretary Perdue’s comments—and continued debate among lawmakers—suggest that the issue may not yet be settled. For hundreds of thousands of adults and the communities they live in, that’s a good sign.


Science and Transparency: Harms to the Public Interest from Harassing Public Records Requests


Photo: Bishnu Sarangi/Pixabay.

In my work as a professor and researcher in the Microbiology and Environmental Toxicology Department at the University of California, Santa Cruz, I investigate the basic mechanisms underlying how exposure to toxic metals contributes to cellular effects and disease. My lab explores how exposures to environmental toxins, such as lead, manganese, and arsenic, can cause or contribute to the development of diseases in humans. For example, learning deficits have been linked to elevated lead and manganese exposures in children, and Parkinsonism to manganese exposures in adults.

California condor in flight. Lead poisoning was a significant factor precluding the recovery of wild condors in California.

Over a career spanning 25 years, I have helped develop and apply scientific methods to identify environmental sources of the toxic metal lead in exposure and poisoning cases in children and wildlife. I helped develop laboratory methods for evaluating tissue samples, including a “fingerprinting” technique based on the stable lead isotope ratios found in different sources of lead, which enables matching the lead in blood samples to the source of the exposure.
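
In broad strokes, the fingerprinting step compares stable lead isotope ratios (commonly 206Pb/207Pb and 208Pb/207Pb) measured in a blood or tissue sample against the characteristic ratios of candidate lead sources. The sketch below is a deliberately simplified, hypothetical illustration of that matching idea (nearest candidate in isotope-ratio space); the published methods account for measurement uncertainty and source mixing, and the numbers here are invented for illustration.

```python
from math import dist

# Hypothetical isotope-ratio "fingerprints": (206Pb/207Pb, 208Pb/207Pb).
# Values are invented for illustration, not measured data.
sources = {
    "lead-based paint": (1.18, 2.44),
    "leaded gasoline":  (1.15, 2.40),
    "lead ammunition":  (1.21, 2.47),
}

def closest_source(sample_ratios, candidates):
    """Return the candidate source whose isotope ratios lie nearest the sample's."""
    return min(candidates, key=lambda name: dist(sample_ratios, candidates[name]))

blood_sample = (1.205, 2.465)  # hypothetical measurement from a condor blood sample
print(closest_source(blood_sample, sources))  # -> lead ammunition
```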

In the early 2000s, I collaborated with graduate students, other research scientists, and several other organizations to investigate the sources of the lead poisoning that was killing endangered California condors. Our research showed that a primary source of the lead poisoning condors was the ingestion of lead fragments in animals that had been shot with lead ammunition, and that this poisoning was a significant factor precluding the recovery of wild condors in California.

Our work provided important scientific evidence of the harm that lead ammunition causes to non-target wildlife, and it supported the passage of AB 821 in 2007 and AB 711 in 2013, which led to partial and then full bans on the use of lead ammunition for hunting in California.

Gun lobby attempts to discredit research

Because of our research, I and other collaborators received five public records requests under the California Public Records Act (CPRA) between December 2010 and June 2013 from the law firm representing the California Rifle and Pistol Association Foundation. In summary, they sought all writings, electronic and written correspondence, and analytical data, including raw data, related to my research on lead in the environment and animals over a six-year period. The very broad requests asked for any and all correspondence and materials that contained the words “lead,” “blood,” “isotope,” “Condor,” “ammunition,” or “bullet.” The requests essentially sought everything I had done on lead research during this time period.

One seeming goal of the requestors was to discredit our findings and our reputations, as made apparent on a pro-hunting website that attacked our peer-reviewed, published findings. We initially responded that we would not release data and correspondence relating to unpublished research, because of our concern that we would lose control of the data and risk having it and our preliminary findings published by others. As a result, the California Rifle and Pistol Association Foundation sued us in California Superior Court. Ultimately, the court ruled in favor of the university and researchers, narrowing the scope of the CPRA requests and limiting them to published studies and the underlying data cited.

Impacts and harms from overly broad public records requests

These very broad public records requests have had a significant impact on my ability to fulfill my research and teaching duties as a faculty member at the University of California, Santa Cruz. I personally have spent nearly 200 hours searching documents and electronic files for responsive materials; meeting with university counsel and staff; and preparing and sitting for depositions, court hearings, and testimony. Our efforts to provide responsive materials are ongoing.


While these requests have had a personal and professional impact on me as an individual, they have caused broader harms to the university’s mission of teaching and production of innovative research that benefits students, California residents, and the public at large. Impacts include:

  • Interfering with my ability to pursue research funding, conduct research, analyze data, and publish my research because of the time required to search and provide responsive materials that takes away from time invested in other duties.
  • Squelching scientific inquiry, research communications, and collaborations with colleagues or potential colleagues at other research institutions.

By chilling research and discouraging graduate students and collaborators from pursuing investigations into topics that could put them at odds with powerful interests, these types of expansive records requests deprive the public of the benefits that such research can bring, such as helping wildlife and endangered species survive and thrive by removing sources of environmental lead contamination.

Why I support modernizing the California Public Records Act

I chose to testify before the California Assembly Committee on the Judiciary in support of AB 700 and the effort to modernize the California Public Records Act to protect the freedom to research and to help streamline the ability of California public universities to process and manage public records requests. The bill establishes narrow exceptions for researchers to protect unpublished data and some peer correspondence, which would help prevent task diversion and reputational damage and would encourage inquiry and knowledge production at public universities across the state. AB 700 would also reduce the serious burden that expansive and overly broad records requests place on researchers and on the courts, and reduce the long backlog of records requests. I think this bill strikes the right balance between public transparency and privacy for research. Ultimately, the public will be better served if the state provides more clarity about what information should be disclosable under the California Public Records Act.

 

Donald Smith is Professor of Microbiology and Environmental Toxicology at the University of California, Santa Cruz. He received his PhD in 1991 and joined the faculty at UC Santa Cruz in 1996. He has over 20 years of experience and has published over 100 peer-reviewed articles in environmental health research, with an emphasis on exposures to and the neurotoxicology of environmental agents, including the introduction, transport, and fate of metals and natural toxins in the environment, exposure pathways to susceptible populations, and the neuromolecular mechanisms underlying neurotoxicity.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.


Clean Energy’s Progress, in One Simple, Uplifting Graphic


Photo: AWEA

Lea en español

News about global warming can be sobering stuff, and some visual presentations are particularly effective at conveying the bad news. As serious as climate change is, though, it’s important to remember that we have some serious responses. A new way of looking at US wind and solar progress helps make that eminently clear.

Sobering graphic

If you, like me, find yourself at times swinging between a sense of the challenge of climate change on the one hand, and the excitement of clean energy on the other, you might appreciate the need for balance and perspective.

The progression of our effects on the global climate is captured powerfully (and frighteningly) in a viral graphic from UK climate scientist Ed Hawkins that shows variations in global temperatures since 1850. While it varies by month and year, the trend shown in the GIF is really (really) clear: The passing years have brought higher and higher temperatures.

Serious, sobering stuff, given all that comes with that global warming.

So it seems like we need things to counterbalance graphics like that, at least in part—not to take the pressure off, but to remind ourselves of where we’re making important progress, and laying the groundwork for a whole lot more.

Graphical remedy

One option is to take a look at what’s going on with clean energy in the power sector—and wind and solar, in particular, which have been marvels to behold in recent years.

A new graphic does just that, looking at the shared contribution of wind and solar to our nation’s electricity generation, in much the same way as the Hawkins graphic does: month in and month out, as the years roll by. Here it is:

The graphic, from the Union of Concerned Scientists, draws on electric power sector data from the US Department of Energy’s Energy Information Administration (EIA), and includes wind power, large-scale solar, and (importantly, given that it too often gets ignored) the increasingly significant contribution from small-scale/rooftop solar.

And this little GIF has a lot to say. It begins with wind and solar’s humble status early last decade, when wind barely registered, and solar wasn’t a factor at all. From there the spiral sure picks up steam, as each year has brought online more wind turbines (now 58,000 and climbing) and more solar panels (on nearly 2 million American rooftops, and far beyond).

On a monthly basis, the contribution of wind and solar has shot past 3% (2010), past 6% (early 2013), past 12% (April 2018)—where every additional 1% is the equivalent of more than 4 million typical US households’ electricity consumption. And on an annual basis, that progress has translated into the electricity contribution from just those two technologies going from 1 in every 71 kilowatt-hours in 2008 to 1 in every 11 in 2018.
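
The percentage share, the “1 in every N kilowatt-hours” framing, and the household equivalent are all the same arithmetic. As a rough check using approximate placeholder figures (about 4,200 TWh of annual US electricity generation including small-scale solar, and roughly 10,400 kWh of annual consumption for a typical US household), the conversions look like this:

```python
# Approximate, illustrative figures; not the exact EIA numbers behind the graphic.
TOTAL_GENERATION_TWH = 4200   # rough annual US electricity generation, incl. small-scale solar
KWH_PER_HOUSEHOLD = 10_400    # rough annual electricity use of a typical US household

def one_in_every(share):
    """Convert a generation share (e.g. 0.09) to '1 in every N kWh'."""
    return round(1 / share)

def households_per_percent():
    """How many typical households' annual use equals 1% of total generation."""
    one_percent_kwh = 0.01 * TOTAL_GENERATION_TWH * 1e9  # TWh -> kWh
    return one_percent_kwh / KWH_PER_HOUSEHOLD

print(f"A {100 / 71:.1f}% share is 1 in every {one_in_every(1 / 71)} kWh (2008)")
print(f"A {100 / 11:.1f}% share is 1 in every {one_in_every(1 / 11)} kWh (2018)")
print(f"Each additional 1% of generation ~ {households_per_percent() / 1e6:.1f} million households")
```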

And the graphic clearly conveys the momentum poised to carry solar and wind far beyond. There’s a lot more progress coming, it declares—clean energy milestones to be watching out for (and making happen).

Credit: J. Rogers/UCS

Why it matters

To be clear, the new graphic and all that it represents shouldn’t cause us to lose sight of what really matters: from a climate perspective, what’s happening to overall carbon emissions, and the resulting temperature changes. It’ll take a lot more clean energy—a lot less fossil energy—in our electricity mix to help us deal with climate change.

But the progress on clean energy is really important because of the power sector’s still-substantial contributions to our carbon pollution, and the need for a lot more action. And that progress also matters because the power sector is crucial for cutting carbon pollution from other sectors, through electrification of stuff like transportation (think electric vehicles) and home heating (heat pumps!).

That’s why keeping our eyes on stats like these is key: We need to celebrate the progress we’re making, even as we push for so much more.

Sartorial solar splendor on its way?

Meanwhile, it turns out that the Hawkins graphic in stripe form has gone on to become the basis for a line of must-have clothing and more.

We can hope that the good news about the progress of US solar and wind becomes just as desirable a fashion accessory.


El progreso de la energía limpia, en un gráfico sencillo e inspirador


Photo: AWEA

Read in English

Las noticias sobre el calentamiento global pueden ser alarmantes, y algunas presentaciones visuales son particularmente efectivas para transmitir las malas noticias. A pesar de lo serio que es el cambio climático, sin embargo, es importante recordar que tenemos respuestas serias. Un nuevo gráfico sobre el progreso de la energía eólica y solar en los Estados Unidos ayuda a evidenciarlas claramente.

Un gráfico sombrío

Si tú, como yo, te encuentras a veces alternando entre un sentido realista del desafío que representa el cambio climático por un lado, y la emoción del progreso de la energía limpia por el otro, tal vez puedes apreciar la necesidad de equilibrio y perspectiva.

La evolución de nuestro impacto en el clima global fue capturada poderosamente (y aterradoramente) en un gráfico viral del científico climático británico Ed Hawkins. El gráfico muestra las variaciones en las temperaturas a nivel global desde 1850. Mientras varía por mes y año, la tendencia mostrada en el GIF es sumamente clara: El paso de los años ha traído temperaturas más y más elevadas. Algo muy serio, dado todo lo que viene con ese calentamiento global.

Dado eso, parece que necesitamos herramientas para contrarrestar esta clase de gráficos, por lo menos en parte. No para eliminar la presión que sentimos de actuar, sino para acordarnos de las áreas en que estamos logrando un progreso importante, y creando una base sólida para mucho más.

Remedio gráfico

Una opción es observar lo que está pasando con la energía limpia, especialmente con las energías eólica (del viento) y solar, que han estado progresando de forma impresionante en los últimos años.

Un nuevo gráfico hace justamente eso, viendo la contribución que han hecho las turbinas eólicas y los paneles solares a la generación de electricidad en los EE. UU., de forma parecida al gráfico Hawkins: mes tras mes, con el paso de los años. Aquí está:

El gráfico, elaborado por la Union of Concerned Scientists, representa datos del Departamento de Energía de los EE. UU. e incluye la energía eólica, la energía solar a gran escala y también la energía solar a pequeña escala (lo cual es importante dado que muchas veces es ignorada).

Y ese pequeño GIF tiene mucho que decir. Empieza con el estado humilde en que se encontraban las energías eólica y solar al principio de la década pasada, cuando la energía eólica apenas si se podía reconocer en las cifras y el efecto de la energía solar no era distinguible. De allí la espiral se acelera, con la conexión cada año de más turbinas eólicas (ahora 58,000 y creciendo) y más paneles solares (en casi 2 millones de techos estadounidenses, y mucho más allá).

Desde el punto de vista de las cifras mensuales, la contribución de las dos tecnologías llegó al 3% en 2010, al 6% en la primavera del 2013 y al 12% en abril de 2018, con cada 1% adicional equivalente al consumo eléctrico de más de 4 millones de hogares típicos estadounidenses. Y desde el punto de vista anual, ese progreso se ha traducido en que su contribución a la matriz de generación eléctrica pase de 1 de cada 71 kilovatios-hora en 2008 a 1 de cada 11 en 2018.

Y el gráfico transmite claramente el impulso que está listo para llevar a la solar y la eólica mucho más allá. Hay mucho más progreso en ruta, declara, representado en hitos de energía limpia que esperamos (y que nos esperan a nosotros para llevarlos a cabo).

Credit: J. Rogers/UCS

¿Por qué es importante?

Este nuevo gráfico y todo lo que representa no debe hacernos perder de vista lo que realmente nos importa desde el punto de vista del clima: lo que está pasando con las emisiones de dióxido de carbono (CO2), y el calentamiento global. Vamos a necesitar mucha más energía limpia y mucha menos energía fósil en nuestra matriz eléctrica para ayudarnos a enfrentar el cambio climático.

Pero el progreso que estamos logrando con la energía limpia es muy importante dada la contaminación por CO2 por la cual el sector eléctrico sigue siendo responsable, y la necesidad que tenemos de mucha más acción. Y ese progreso es aún más importante porque el sector eléctrico es crucial para lograr reducir las emisiones de CO2 también en otros sectores a través de la electrificación del transporte (los vehículos eléctricos), por ejemplo, y la calefacción (las bombas de calor).

Es por eso que es clave seguir prestando atención a cifras como éstas: Tenemos que celebrar el progreso que estamos logrando, mientras empujamos a la vez para lograr mucho más.

La ropa que provoca

Mientras tanto, resulta que el gráfico Hawkins en forma de rayas se ha convertido también en la base de una variedad de ropa y otros accesorios “imprescindibles”.

Esperemos entonces que las noticias sobre el progreso de las energías eólica y solar se conviertan también en un deseado accesorio de moda.


California’s Wildfire Costs are Just the Tip of the Iceberg


Photo: NASA

As California’s electric utilities grapple with the aftermath of record-breaking wildfires, the potential impact on customer bills is starting to come into focus. While it is still unclear who will end up paying for wildfire damages, one thing is clear: extreme wildfires are here to stay, and they will likely keep getting worse. With climate change increasing not only the risk of wildfires, but also threatening many other economic and human health impacts, the costs of preventing extreme climate change pale in comparison to the costs of inaction.

Wildfires in California

To cover the costs of only the 2017-2018 wildfires, one estimate indicates that residential utility bills for customers of the state’s largest utility, Pacific Gas and Electric (PG&E), would need to increase by $300 annually. However, another estimate indicates that, if wildfires in California continue to inflict as much damage as they have over the past two years, PG&E bills would need to double to cover the recurring costs, while bills for electricity customers across all of California would need to increase by 50%. Unfortunately, the last two years of wildfires have not just been an extraordinary fluke.

Over the past few decades in the Western US, the number of large wildfires has been rising and the fire season has been getting longer. While there are multiple factors driving these changes, climate change is increasing the risk of wildfires. As climate change drives up temperatures and changes precipitation patterns, California can expect more frequent wildfires and more acres burned in the future.

Costs of climate change inaction

But the costs of climate change will not just show up in higher electricity bills.

A recent report from scientists at the Environmental Protection Agency calculated the costs of climate change by the end of the century under different scenarios. While the report found that climate change will cost the US economy hundreds of billions of dollars annually, it also showed that a slow response to climate change, or worse, inaction, will cost us far more in dollars, property losses, public health and human lives.

If we limit global warming to two degrees Celsius, tens of billions of dollars in damages could be avoided every year by the end of the century – which works out to savings of $250 to $600 per person per year. This just goes to show how costly it will be not to address climate change.
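
That per-person figure is simply aggregate annual avoided damages spread across the US population. As a hypothetical back-of-the-envelope calculation (the population and damage totals below are round placeholder numbers, not figures taken from the EPA analysis):

```python
# Round placeholder numbers for illustration only.
US_POPULATION = 330_000_000

def per_person(avoided_damages_billion):
    """Annual avoided damages, in dollars per US resident."""
    return avoided_damages_billion * 1e9 / US_POPULATION

for total in (85, 200):  # hypothetical annual avoided damages, in billions of dollars
    print(f"${total}B avoided per year is about ${per_person(total):.0f} per person")
# Roughly $260 and $610 per person -- in the ballpark of the $250 to $600 range cited above.
```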

The US economy can avoid billions of dollars in damages by reducing global warming emissions to stay on a lower-emissions trajectory. Figures are from the Fourth National Climate Assessment.

A vicious cycle

This brings us back to PG&E, which is grappling with massive wildfire costs. If these costs end up being passed on to electricity customers, it could ultimately hinder California’s ability to prevent further climate change. If electricity prices go up significantly, people who own electric vehicles or have all-electric homes will face much higher costs. Since vehicle and building electrification are key components of California’s strategy to reduce global warming emissions, substantially higher electricity costs would disincentivize electrification and make emissions reductions more difficult to achieve.

There is a vicious cycle at play here:

  • Climate change is increasing the risk of wildfires.
  • Wildfire costs might increase the cost of electricity.
  • Higher electricity prices would disincentivize electrification, which is one of California’s main tools for preventing climate change.
  • Maintaining or, even worse, increasing our global warming emissions trajectory will lead to more climate change impacts, such as extreme wildfires.

In short, climate change may make it more difficult for California to prevent climate change.

You have to spend money to save money

At the end of the day, this problem is not going to solve itself. We will need to make all sorts of investments to prevent further climate change and to adapt to the climate change we have already locked in.

Encouragingly, the governor of California is taking climate change prevention and adaptation very seriously. The governor’s office recently released a report that details a wide array of policy options meant to address the climate change and wildfire problems faced by California’s electric utilities.

While some of those policy changes will no doubt be necessary, California also needs to continue investing heavily in solutions that we know are necessary for the transition to a clean energy economy. Renewable energy, electric vehicles, energy efficiency, and many more solutions are critical to the state’s emissions reduction goals, and California needs to continue making these investments even in the face of expensive disasters exacerbated by climate change.

These investments will not just be out of the goodness of our hearts. With hundreds of billions of dollars in climate-change-caused damages on the line, putting money into climate change prevention is a wise investment.


What to Expect When You’re Expecting the 2020-2025 Dietary Guidelines


Photo: Peter Merholz/Flickr

Pregnancy Advice: Caffeine’s ok. Some caffeine is ok. No caffeine.

Breastfeeding Advice: Start solids at 4 months. Start solids at 6 months. Exclusively breastfeed for one year.

First Foods Advice: Homemade baby food. Store-bought baby food. Spoon feeding. Baby-led weaning.

My experience of being pregnant and having a baby in modern times has meant getting conflicting advice from the different sources I consulted, specifically surrounding nutrition. Depending on the Google search I ran or the midwife I spoke to, I heard different daily amounts of caffeine suitable while pregnant. Depending on the lactation consultant who popped into my hospital room, I heard different levels of concern about the amount I was feeding my newborn. And now that I’m about to start solid foods with my six-month-old, I have heard conflicting information about when, how, and what to start feeding my child. How is it so difficult to find what the body of evidence says about these simple questions that parents have had since the dawn of time? When I discovered that past editions of the Dietary Guidelines didn’t address the critical population of pregnant women and infants from birth to two years, I wondered how it was possible that there was this huge gap in knowledge and guidance for such an important developmental stage. That’s why I’m very excited that the Dietary Guidelines Advisory Committee (DGAC) will be examining scientific questions specific to this population to inform the 2020-2025 Dietary Guidelines, and that it has recently begun that process.

In the meantime, I will be starting my daughter on solids this week and have been trying to find science-supported best practices. It has been shockingly hard to navigate, and I was reminded of the interesting world of the baby food industry that I became acquainted with as I researched and wrote about added sugar guidelines for the 2016 UCS report, Hooked for Life.

The history of baby food and nutrition guidelines

Amy Bentley’s Inventing Baby Food explains that the baby and children’s food market as we know it today is a fairly new construction, stemming from the gradual industrialization of the food system over the last century. Early in the history of baby food marketing, a strong emphasis was placed on convincing parents and the medical community of the healthfulness of baby food through far-reaching ad campaigns and industry-funded research. The Gerber family began making canned, pureed fruits and vegetables for babies in 1926 and in 1940 began to focus entirely on baby foods. At the time, introducing solid foods to babies before one year was considered a new practice. In order to convince moms of the wholesomeness of its products, Gerber commissioned research touting the health benefits of canned baby foods in the Journal of the American Dietetic Association (ADA), and the company launched advertising campaigns in the Journal and in women’s magazines. Gerber’s popularity and aggressive marketing quickly correlated with a decrease in the age at which solid foods were introduced as a supplement to breast milk. Earlier introduction of foods meant an expansion of the baby food market, which meant big sales for Gerber.

All the while, there were no federal dietary guidelines for infants. Gerber took advantage of this gap in 1990 when it released its own booklet, Dietary Guidelines for Infants, which glossed over the impacts of sugar consumption by telling readers, for example, that “Sugar is OK, but in moderation…A Food & Drug Administration study found that sugar has not been shown to cause hyperactivity, diabetes, obesity or heart disease. But tooth disease can be a problem.” The FDA study that Gerber referred to was heavily influenced by industry sponsorship, and the chair of the study later went on to work at the Corn Refiners Association, a trade group representing the interests of high-fructose corn syrup manufacturers. In fact, evidence has since linked excessive added sugar consumption with the incidence of chronic diseases including diabetes, cardiovascular disease, and obesity.

Today, the American Academy of Pediatrics (AAP), the World Health Organization, and the American Academy of Family Physicians all recommend exclusive breastfeeding until six months, using infant formula to supplement if necessary. The AAP suggests that complementary foods be introduced around 4 to 6 months, with continued breastfeeding until one year. But what foods, how much, and when is a little harder to parse out. Children’s food preferences are predicted by early intake patterns but can change with learning and exposure, and flavors from the maternal diet influence a baby’s senses and early life experiences. Research shows that early exposure to a range of foods and textures is associated with their acceptance later on. And of course, not all babies and families are alike, and that’s okay! There are differences related to cultural norms in the timing of introduction of food and the types of food eaten. Infants are very adaptable and can handle different ways of feeding.

There’s a lot of science out there to wade through, but it is not available in an easy-to-understand format from an independent and reliable government source. That’s what the 2020 Dietary Guidelines have to offer.

2020-2025 Dietary Guidelines: What to expect

The Dietary Guidelines for Americans is the gold standard for nutrition advice in the United States and is statutorily required to be released every five years by the Department of Health and Human Services (HHS) and the U.S. Department of Agriculture (USDA). These guidelines provide us with recommendations for achieving a healthy eating pattern based on the “preponderance of the scientific and medical knowledge which is current at the time the report is prepared.” Historically, the recommendations have been meant for adults and children two years and older and have not focused on infants through age one and pregnant women as a subset of the population.

The freshly chartered DGAC will be charged with examining scientific questions relating to the diets of the general child and adult population, but also about nutrition for pregnant women and infants that will be hugely beneficial to all moms, dads, and caregivers out there looking for answers.

Credit: USDA

While I was pregnant, my daughter was in a lower percentile for weight, and I was told by one doctor to increase my protein intake and by another that it wouldn’t matter. I would have loved to know with some degree of certainty whether there was any relationship between what I was or wasn’t eating and her growth. One of the questions to be considered by the DGAC is the relationship between dietary patterns during pregnancy and gestational weight gain. I also wonder about the relationship between my diet while breastfeeding and whether there’s anything I should absolutely be eating to give my daughter all the nutrients she needs to meet her developmental milestones. The DGAC will be looking at that question (for both breast milk and formula), as well as whether and how diet during pregnancy and while nursing affects the child’s risk of food allergies. The committee will also be evaluating the evidence on complementary feeding and whether the timing, types, or amounts of food have an impact on the child’s growth, development, or allergy risk.

At the first DGAC meeting on March 29-30, the USDA, HHS, and DGAC acknowledged that there are still limits to evaluating the science on these populations due to a smaller body of research. Unbelievably, there’s still so much we don’t know about breast milk and lactation, and in addition to government and academic scholarship, there are really interesting mom-led research projects emerging to fill that gap.

The Dietary Guidelines are not just useful for personal meal planning and diet decisions; they also feed directly into the types of food made available through the USDA programs that serve pregnant women and infants, like the Supplemental Nutrition Assistance Program (SNAP); the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); and the Child and Adult Care Food Program (CACFP). Having guidelines for infants on sugar intake in line with the American Heart Association’s recommendation of no added sugar for children under two years old would mean some changes to the types of foods offered as part of these programs.

Nutrition guidelines will be a tool in the parent toolbelt

But if there’s one thing I’ve learned as I’ve researched and written about this issue and now lived it, it’s that while the scientific evidence is critical, there are a whole lot of other factors that inform decisions about how we care for our children. Guidelines are, after all, just that. As long as babies are fed and loved, they’ll be okay. What the guidelines are here to help us figure out is how we might make decisions about their nutrition that will set them up to be as healthy as possible. And what parent wouldn’t want the tools to do that?

As I wait anxiously for the report of the DGAC to come out next year, I will do what all parents and caregivers have done before me which is do the best I can. I have amazing resources at my disposal in my pediatrician, all the moms and parents I know, and local breastfeeding organizations. Whether my daughter’s first food ends up being rice cereal, pureed banana, or chunks of avocado, it is guaranteed to be messy, emotional, and the most fun ever, just like everything else that comes with parenthood.


US Winter 2018-2019: Bomb Cyclones, Arctic Outbreaks, Abundant Snowfall, Flooding, and an Unseasonably Warm Alaska


A car ventures out in the polar vortex on January 30, 2019. Photo: Down Dickerson/Flickr

As the northern hemisphere moves beyond the spring equinox, it’s time for a look back at the US winter season. With the arrival of spring, days stretch longer and bud bursts dazzle passersby, and we almost forget what the winter brought us. But as counter-intuitive as it may seem at times, winter is still very much a part of a warming world, and it is characterized by the changing behavior of the most unwelcome part of any season: extreme weather.

Here we’ll review five notable patterns from this past cold season.

Cold season pattern #1: damaging ‘bomb cyclones’

The National Weather Service defines the winter season for the US as December through February and the cold season as November through March. The 2018–2019 winter season storm period kicked off with an exceptionally early Thanksgiving blizzard, and even now the storms aren’t quite over as Winter Storm Wesley is likely to break many April snow records.

Like other storms that are given names as they make headlines, Wesley is a rare bomb cyclone, a mid-latitude storm that undergoes a sudden and extreme drop in barometric pressure over 24 hours that leads to rapid intensification. (Note that the exact pressure drop over 24 hours that qualifies is based on the latitude.) As of this writing, Wesley was expected to bring blizzard conditions from Denver to the Minneapolis area, as well as hail to Kansas and eastern Nebraska.
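
For readers curious about that latitude adjustment, the commonly cited Sanders and Gyakum criterion defines “bombing” as a central pressure drop of at least 24 hPa × sin(latitude)/sin(60°) over 24 hours. A minimal sketch of that check (the example numbers are invented, not observations from Wesley or Ulmer):

```python
import math

def bomb_threshold_hpa(latitude_deg):
    """24-hour pressure drop (hPa) needed for 'bombogenesis' at a given latitude,
    per the commonly used Sanders-Gyakum criterion (24 hPa at 60 degrees latitude)."""
    return 24.0 * math.sin(math.radians(latitude_deg)) / math.sin(math.radians(60.0))

def is_bomb_cyclone(pressure_drop_hpa, latitude_deg):
    """True if the observed 24-hour pressure drop meets the latitude-adjusted threshold."""
    return pressure_drop_hpa >= bomb_threshold_hpa(latitude_deg)

# Invented example: a 26 hPa drop in 24 hours for a storm centered near 40 N.
print(f"Threshold at 40 N: {bomb_threshold_hpa(40):.1f} hPa")  # about 17.8 hPa
print(is_bomb_cyclone(26, 40))                                 # True
```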

It is rare to have a bomb cyclone within the continental United States, yet two have occurred this season. Along with Wesley, Winter Storm Ulmer (March 12-14) brought blizzard conditions, high winds, sudden melting of snow cover, and subsequent devastating flooding that destroyed levees. Unfortunately, record-breaking bomb cyclone Ulmer made the US list of billion-dollar weather and climate disasters. During Ulmer, Denver set an all-time record for low mean sea-level pressure (MSLP) and recorded its strongest non-thunderstorm gust on record (80 mph); gusts of 96 mph hit Colorado Springs; high winds derailed a train in New Mexico; and multiple stations logged gusts of more than 100 mph (San Augustin Pass, NM; Cloudcroft, NM; Pine Springs, TX).

Cold season pattern #2: Arctic outbreaks to the Lower 48

Repeated Arctic outbreaks of cold air into the Lower 48 were another feature of this cold season. These are tied to a weakening of the stratospheric polar vortex. Marking an early start to the cold season, Kansas City, Missouri, logged the coldest November temperature on record, with much of the Lower 48 registering below average or much below average minimum November temperatures. Illinois set an all-time record low of -38 degrees Fahrenheit on January 31, 2019.

Imagine what these cold temperature records (as registered on an official thermometer) actually felt like if you were outside and exposed to blowing winds, with increased chances for hypothermia under severe windchill. One Arctic outbreak included a winter storm that brought snow from the northern plains to the Great Lakes and the Northeast and the coldest temperatures in years to the Midwest; it closed schools, cancelled flights, and, tragically, brought an associated weather-related death toll.

Figure 1. Examples of cold outbreaks (January 29 and April 11, 2019). Image source: ClimateReanalyzer.org.

Cold season pattern #3: abundant snowfall

Many winter sports enthusiasts enjoy their favorite activities after a fresh snowfall, and this cold season brought abundant snow from November through February. The repeated Arctic cold outbreaks helped: when precipitation occurred, it often fell as snow rather than rain, whereas rain falling instead of snow has been a problem cropping up more frequently in recent seasons. The ski industry welcomed a much-needed boost in visitors and winter gear sales.

Figure 2. Inches of snowfall for November 2018 through February 2019. Image source: NOAA.

Cold season pattern #4: intense precipitation and flooding

This winter season also featured an emerging El Niño, a phase of a natural Pacific Ocean cycle that can bring wet conditions to the southern US during winter. When storm tracks carry moisture from lower-latitude ocean regions (such as the Pacific or the Gulf of Mexico), they can dump intense precipitation over the US. Such events were also a feature of this season and brought devastating flooding to many communities.

Unfortunately, this is another pattern emerging over recent years in a warming world, and one that was on full display in Louisiana, where the Bonnet Carré Spillway had to be opened for only the thirteenth time since its construction to protect the city of New Orleans from floodwaters of the Mississippi River. We notice (and I have seen it opened myself during a winter visit to the city) that it has been opened more frequently in recent years, and this season marks the first time in the spillway's 88-year history that it has been opened in consecutive years.

Cold season pattern #5: unseasonably warm Alaska

To top off this look back at the cold season, we find that Alaska was unseasonably warm. The state set records for warmth relative to historical cold-season trends, so much so that traditional dog sled races had to be cancelled. Perhaps that is no surprise: in recent years, headlines have remarked on the truckloads of snow being brought to Anchorage to allow the start of the Iditarod, Alaska's most famous dog sled race.

It all adds up to record-breaking extreme weather for the US during this past winter season. As the Intergovernmental Panel on Climate Change special report on extreme weather has noted, one of the signatures of climate change is more extreme weather events. Winter 2018-2019 in the US was no exception.


Figure 3. The entire state of Alaska this winter had above or much above average maximum temperature. Source: NOAA

Photo: Down Dickerson/Flickr

Uncharted Territory: The EPA’s Science Advisors Just Called Out Administrator Wheeler

UCS Blog - The Equation (text only) -

EPA Administrator Andrew Wheeler Photo: USDA/Flickr

Yesterday the EPA Clean Air Scientific Advisory Committee (CASAC) published a letter to Administrator Andrew Wheeler making recommendations on the agency’s approach to updating the ambient air pollution standard for particulate matter (PM). Chiefly, the science advisors have now acknowledged the group has inadequate expertise to conduct the review.

We are now in uncharted territory with the EPA in a tough position on both its PM and ozone pollution standard updates. Here are some key highlights from the letter and their implications.

CASAC asks that the particulate matter review panel be reinstated

“The CASAC recommends that the EPA reappoint the previous CASAC PM panel (or appoint a panel with similar expertise) as well as adding expertise…,” the committee members write in their consensus comments.

This is significant. Last October, then Acting Administrator Wheeler left CASAC high and dry by disbanding the particulate matter review panel—a group of experts that boosted the span of expertise CASAC and the EPA had access to in their review. The ~20-person pollutant review panels have for decades augmented CASAC’s expertise, helping to review the EPA’s science assessment on particulate matter and health and welfare effects. Since that time, I (and many, many, many others) have repeatedly called on the EPA to reinstate the panel.

In November, a letter from former ozone review panel members asked EPA leaders to reinstate the panel. In December, a letter from former PM panel members asked the same, and this group sent a second letter in March. A separate letter from former CASAC chairs asked for the panel as well. And an additional letter  from 206 air pollution and public health experts asked that the panel be brought back. This is on top of many other public comments echoing similar concerns from scientists, scientific societies, and other experts in the air quality and health arena. Since November, CASAC members themselves have been saying they need more expertise, but the CASAC Chair had ignored these pleas, until now.

The fact that the committee now agrees it needs the panel is important. It sends a clear signal to the EPA administrator that the process for the review of the science informing the PM standard is inadequate. And the committee lays this out in no uncertain terms, declaring that, “Additional expertise is needed for [CASAC] to provide a thorough review of the [PM] National Ambient Air Quality Standards (NAAQS) documents. The breadth and diversity of evidence to be considered exceeds the expertise of the statutory CASAC members, or indeed of any seven individuals.” This is of course what we’ve been saying all along.

This acknowledgement of needed expertise puts agency leadership in a tough spot, given that just last week EPA Administrator Andrew Wheeler claimed that CASAC had a “good balance of expertise” despite having disbanded the panel. With an administrator who directly contradicts his agency’s science advisors, what’s the EPA to do? One thing is clear: this is an atypical process and it is sure to face legal challenges.

Chair Cox’s views on causality have been cast aside

CASAC Chair Dr. Tony Cox released a draft of this letter on March 7, which included eyebrow-raising language asking the EPA to throw away the time-tested and scientifically backed weight-of-the-evidence approach it has long used to assess the links between air pollutants and health effects. (More on Cox’s proposal and why it’s problematic in my Science magazine piece here.)

In the final letter, this language on manipulative causality has thankfully been outsourced to Cox’s individual comments and a few places where it notes that “some CASAC members think…” This is considerably dampened from the draft letter where it appeared as a consensus recommendation for upending EPA’s weight-of-the-evidence process for developing a science-based PM standard.

The committee still cannot agree on scientific facts

The final letter has maintained language noting that CASAC could not come to agreement on the relationship between fine particle pollution (PM2.5 ) and early death, writing, “CASAC did not reach consensus on the causality determination of mortality from PM2.5 exposure.” This is striking given that the link between fine particulate matter exposure and early death is well-documented. It has been repeatedly demonstrated in different scientific studies, in different locations, and at different concentrations. Past CASACs and PM panels as well as some members of the divided current committee have acknowledged this relationship, and yet some members of the current committee are breaking with past science advisors and the greater scientific community.

CASAC criticizes the EPA’s science assessment

As in the draft letter, CASAC continues to be highly critical of the EPA’s science assessment, insisting on the document’s “Lack of comprehensive, systematic review.” (But who are they to judge if they’ve already admitted they aren’t the appropriate advisors?) To be clear, it is expected and desired that the committee would have suggestions and criticisms of the science assessment and would want to see a revised draft. (This, after all, is the hallmark of peer review.) However, the tone and extent of the criticism in this letter takes it up a notch.

By contrast, a group of 17 scientists from the disbanded panel, while detailing a number of revisions needed for improving the science assessment, stated, “We commend EPA staff for development of an excellent first draft of the ISA that provides comprehensive and systematic assessment of the available science relevant to understanding the health impacts of exposure to particulate matter.”

Given the committee’s own admission that the group is inadequate to conduct the review, this does raise questions about whether the group is qualified to offer some of the detailed technical criticisms it does, such as on the adequacy of non-threshold models to estimate health associations at low concentrations and the need for study exclusion criteria.

What’s next on PM and ozone reviews?

The EPA will now decide what to do with this science advice. Will it revise the science assessment and send it back to CASAC or simply declare it does not need a second review? Will the PM panel be reconvened to review a second draft science assessment? What about the timeline for the PM standard update? We know the administration is working on an expedited schedule. Administrator Wheeler has made this clear.

And what about the ozone process? If CASAC has concluded it has inadequate expertise for the PM review, it is difficult to imagine they will feel qualified to conduct the upcoming ozone review, given it relies on a similar breadth of scientific disciplines. (The EPA is set to release the ozone science assessment this spring). EPA leadership failed to convene an ozone review panel last October so CASAC is again poised to review a massive scientific assessment with one hand tied behind its back. The agency could decide to plow forward in the PM standard update process, ignoring CASAC’s advice.

Regardless of what the agency does next, it is clear the process is broken, and its science advisors know it too.

Photo: USDA/Flickr

Prodded by Coal Industry, the EPA Decides Mercury Is Fine, Just Fine. Remind Them: It’s Not.

UCS Blog - The Equation (text only) -

Lyntha Scott Eiler/Flickr Photo: Lyntha Scott Eiler/Flickr

From the gaping maw of coal baron greed slithers another brazen ploy.

This time: guiding our nation’s Environmental Protection Agency (EPA) to arrive at the stunning discovery that mercury spewed from coal plants is actually A-Okay.

That’s right. Under the direction of (former Murray Energy coal lobbyist) Administrator Andrew Wheeler, the EPA is now proposing to find that mercury, a potent neurotoxin that can ruin a person’s fair shake at life before they’re ever born, is neither appropriate nor necessary to regulate from coal plants—by far mercury pollution’s largest source.

Which is awfully convenient news for the desperate heads of coal mining corporations that are existentially dependent on power plants consuming more coal. For them, this regulatory turn would usher in a new refrain: puff away, coal plants, puff away! And with it, too, the devastating confirmation that today’s EPA is officially Not Okay.

This brash attack on the health and welfare of untold millions in favor of the fortunes of a coal-laden few is underpinned by an analytical sleight of hand buried deep in the regulatory fine print. It’s obscure, it’s dull—and it’s incredibly effective. The pernicious combination has polluters hoping to slip a game-changing precedent through without garnering the level of attention warranted by the staggering ramifications therein.

And so we go, once more unto the breach.

Public comments on this proposed rule are incredibly important, to officially record objections to an outright decimation of the value of public health in favor of polluter preference.

The Union of Concerned Scientists has made it easy for you to submit your own comments. For all the details and background, we also wrote a technical guide to help inform discussion, introduced by my colleague Rachel here; below, I’ll offer context and highlight four key points. The deadline for public comments is April 17.

Mercury protections, and mercury attacks

At immediate issue is the “appropriate and necessary” finding underpinning the 2012 Mercury and Air Toxics Standards (MATS) for coal- and oil-fired power plants.

In the 1990 Clean Air Act Amendments, Congress directed the EPA to regulate hazardous air pollutants—including mercury, as well as things like nickel, arsenic, and chromium—from coal-fired power plants, provided the agency first found such regulations to be “appropriate and necessary.”

Which the agency did. Repeatedly. And unsurprisingly, given the devastating health effects of mercury, the dominating contribution of coal plants to mercury pollution, and the fact that effective controls readily existed and were already installed on approximately 60 percent of the existing coal fleet when MATS was released.

And by 2016, in line with deadlines, virtually all covered coal plants were in compliance. Far under expected cost, with no negative effects on grid reliability, and achieving a 96 percent reduction in annual emissions of hazardous air pollutants—including an 86 percent, or 25-ton, drop in mercury—by 2017. Which means that a lot less mercury is now in the air, settling on the ground, entering the food chain, and accumulating in our bodies, not to mention the bodies of all those still to come.

So why, why, why this new proposed reversal by Wheeler’s EPA?

Because coal consumption has taken a hit, and Robert Murray—fervent supporter of President Trump and Founder, Chairman, President, and CEO of Murray Energy, a coal mining empire wholly dependent on domestic consumption of coal—is hitting back, leading the coal industry charge in an attempt to tear down every hurdle in sight.

And because some polluters have long complained about the costs of pollution standards (compared to previously polluting for free), and this rule provides a chance to permanently change the math.

Which brings us to this action, and a spectacular kowtowing to both: a bold hand-out to Robert Murray and his coal company cohorts, coupled with the establishment of precedent to permanently tip the regulatory scales in favor of polluter profits over public health.

How to make a good rule look bad

So how does Administrator Wheeler pull it off?

By assuming a dark and dismal view, where human health matters not and polluter preference matters lots. Here, the agency’s four-step approach to making all the compelling reasons for regulation go away:

First, refuse to consider co-benefits. Co-benefits are benefits that occur because of a rule but were not the principal target of the rule. When power plants burn coal, lots of pollutants are released, so attempts to limit any one pollutant often mean a lot of other pollutants are reduced, too. This is a good thing! It means efficiency, and cost-effective health improvements. Long-standing regulatory guidance has been to ensure that these co-benefits count. But not according to the EPA’s refreshed perspective, which wipes these co-benefits right off the map, excluding an estimated annual reduction of 11,000 premature deaths, 130,000 asthma attacks, and 4,700 heart attacks, valued on the order of $37 billion to $90 billion each year.

Second, ignore benefits which are known to occur but can’t be easily monetized. Although mercury and hazardous air pollutants have been recorded as causing or contributing to a range of severe negative effects, including neurological damage in developing brains, chronic respiratory diseases, and various cancers, at the time of the EPA’s 2011 evaluation, the agency was only able to fully quantify a single effect, for a single exposure pathway, for a single pollutant. In the past, the EPA acknowledged this significant omission, but because these unquantifiable benefits further supported the agency’s conclusions that regulation was appropriate, the lack of quantification was not a problem. Now, because they’re counter to its tack, the EPA “acknowledges the importance of these benefits,” then dismissively waves them away.

Third, disregard new information. The EPA last performed a quantitative analysis in 2011. Lots of new research has been performed since that time, including to help quantify previously unquantifiable benefits, as well as to identify new benefits that were not previously known. What’s more, because MATS already went into effect, the actual—as opposed to industry-projected—costs of compliance are now known. And they are much, much lower than previously guessed. Which all suggests a major shift to the ledger: benefits orders of magnitude higher, costs orders of magnitude lower. Or at least, it should suggest. But the EPA? It now insists that the agency should only consider what it knew back in 2011, however wrong or incomplete that knowledge may well be.

Fourth, pretend it’s all a lark. Throughout the proposed rule, EPA insists that it does not intend to rescind MATS itself, just the entire regulatory framework upon which it stands. This, of course, is patently absurd. The EPA can’t have it both ways, and it knows it: by employing such an approach, the agency is positioning challengers to be able to knock the whole thing down, while attempting to avoid the firefight of undoing MATS outright.

And quod erat demonstrandum: the previously inconceivable. Where once, twice, three times the EPA found that the towering benefits of limiting toxic pollution from coal plants were well worth the costs, now, in the alpenglow of the deregulatory agenda, it appears that mercury pollution is fine, just fine.

Fight back!

Mercury is bad. Really bad. For human health, and for the environment.

Or at least it was, until this proposed rule “discovered” otherwise.

Don’t stand for it! Speak up, speak out, and make the many count more than the favored few—for this vital public health protection, and all the health protections still to come.

Submit your comment today!

Photo: Lyntha Scott Eiler/Flickr

Nuclear Weapons in the Reiwa Era

UCS Blog - All Things Nuclear (text only) -

Japan will soon have a new emperor and a new dynastic name to mark the traditional Japanese calendar: Reiwa (令和). Interminable commentary on the significance of the name is just beginning, but in the end it will be defined not by words but by deeds. One of the most important acts the Japanese people may be compelled to take as the new era begins is to decide whether to allow their government to introduce US nuclear weapons into Japan. They may have to choose between continuing to honor the legacy of Hiroshima and the warnings of the hibakusha or abandoning Japan’s longstanding role as a leading voice for peace and nuclear disarmament.

Prime Minister Abe and the foreign policy elite of his Liberal Democratic Party (LDP) are pushing the United States to increase the role of US nuclear weapons in Asia. They told US officials they want to alter Japan’s Three Non-Nuclear Principles to permit the introduction of US nuclear weapons into Japan. They also want to revise Article 9 of Japan’s post-war constitution, in which the Japanese people “forever renounce war as a sovereign right of the nation and the threat or use of force as a mean of settling international disputes.” The Abe government’s desire to re-write the constitution and re-arm Japan is well known and hotly debated. But its efforts to bring US nuclear weapons into Japan are a closely guarded secret, known only to a small group of officials in Japan’s foreign policy establishment.

UCS obtained a document that contains a detailed description of the Japanese foreign ministry’s requirements for US nuclear weapons. Multiple conversations with the Japanese official who presented this document to his US counterparts not only confirmed its content, they also revealed this small group of hawkish officials wants to train Japanese military personnel to deliver US nuclear weapons. They would even like the United States to grant Japanese leaders the authority to decide when to use them. Japanese officials refer to this arrangement as “nuclear sharing.”

This information is not being kept from the Japanese people for security reasons. The responsible officials believe it is important for China to know Japan has the authority to make such a decision and the capability to carry it out. Preparations to make “nuclear sharing” a reality are being kept secret because these officials are afraid the Japanese public would oppose it. Their covert nuclear weapons wish list blatantly violates both the letter and the spirit of Japan’s constitution and the Three Non-Nuclear Principles.

Public opinion polls indicate many Japanese people would like to make the use or threat to use nuclear weapons illegal, which is the purpose of the recently adopted UN Treaty on the Prohibition of Nuclear Weapons (TPNW). A large majority of their elected representatives, even within Abe’s ruling LDP, want to uphold Japan’s Three Non-Nuclear Principles, which forbid “nuclear sharing.” Many Japanese people take pride in the belief that their country plays a leading role in advancing nuclear disarmament.

The gap between the public’s aspirations and the private machinations of its current leaders is difficult to reconcile.

Prime Minister Abe, like US President Trump, governs his country with a mix of nationalism and authoritarianism. His political opponents seem incapable of mounting a serious challenge to his leadership or his policies. But the absence of effective opposition is not an indication of popular support. Abe’s approval rating is not that much better than Trump’s. And like the current US president, he holds on to power with a dedicated minority of loyalists, disingenuous manipulation of the mass media and the resignation of a dispirited majority who see no compelling alternative.

Abe appears to have injected his nationalist agenda into the selection of the name for the new era. Press reports highlight that Reiwa (令和) is the first Japanese dynastic name not taken from the Chinese classics. The collection of Japanese poetry that inspired Abe’s selection was popular among the military officers of Imperial Japan who led their nation into World War II. Critics panned Reiwa as a cold expression of Abe’s authoritarian tendencies, but it seemed to be well-received and gave an immediate lift to the popularity of a man on track to become the longest serving prime minister in Japanese history.

Abe told the press Reiwa suggests a period when “culture is born and nurtured as the people’s hearts are beautifully drawn together.” His cabinet secretary told the world that Reiwa should be translated into English as “beautiful harmony.” So it may be that the initial appeal of the new name is more in line with the widespread public support for Japan’s pacifist constitution and the spirit of international cooperation than with Abe’s atavistic appeals to the chauvinist ambitions that led to Pearl Harbor and Hiroshima.

Only time will tell. Japanese attitudes towards nuclear weapons may be the most important window into the ultimate meaning of Reiwa. Making sure the Japanese people know what their government is saying and doing about nuclear weapons may be the best way to ensure that window is clear.

Also: today we’re releasing a short documentary that we filmed in Hiroshima last year. It covers some of the issues around the Japanese Foreign Ministry and US nuclear weapons, as well as firsthand accounts of the bombing.

Three Things EPA Administrator Andrew Wheeler Doesn’t Understand About Ambient Air Pollution Standards

UCS Blog - The Equation (text only) -

Photo: Eltiempo10/Wikimedia Commons

Last week, EPA Administrator Andrew Wheeler talked to Congress. Members had questions about his recent changes to the National Ambient Air Quality Standards updates for particulate matter and ozone. Wheeler’s comments last week and earlier make clear that he either doesn’t understand or isn’t being honest about how the EPA is proceeding as it sets health-protective air pollution standards. Here’s the reality around three points that Administrator Wheeler isn’t clear on.

1. CASAC doesn’t have the expertise it needs

The Clean Air Scientific Advisory Committee (CASAC) concluded at its most recent meeting that it does NOT have the expertise needed to adequately provide science advice to the EPA on development of the particulate matter standard. The committee’s conclusion directly conflicts with Administrator Wheeler’s comments on the hill this week after the CASAC meeting. Rather than listen to CASAC’s conclusion that it does not have the expertise, Wheeler doubled down on his earlier comments to Congress in insisting the committee has “a very good balance of talents.” It seems someone should give Wheeler the notes from EPA’s own committee’s meeting. Instead of denying this need for additional expertise, Wheeler could and should reconvene the particulate matter review panel that he disbanded last October.

The administrator also appeared confused about what expertise does exist on CASAC. When asked about epidemiologic expertise on the committee, he said, “I believe one person had to resign who I believe was an epidemiologist who we — we weren’t able or we — we haven’t yet replaced that person, if I’m remembering the right board. It was either the Science Advisory Board or the CASAC.” Since the administration appointed new CASAC members last October, there has not been an epidemiologist on the committee—a huge gap given how central epidemiologic evidence is to assessing the health outcomes of ambient air pollutant exposure. Given this shortcoming, on top of the lack of pollutant review panels, it is no wonder that CASAC itself recognized its need for more expertise on its teleconference two weeks ago.

2. Pollutant review panels don’t slow down the process

In his comments to Congress, Wheeler said that the particulate matter review panel was disbanded because pollutant review panels were slowing down the process of reviewing ambient air pollution standards. “We took a hard look at what was causing the delay because the agency had never met the five-year timeframe for ozone or PM,” he told the Senate Appropriations Committee. This is objectively false and runs counter to Wheeler’s previous statements where he insisted that the panels were unnecessary. The process of ensuring a robust scientific review of air pollution standards is of course not the fastest process in the world. Just as the peer-review process tends to be slow, so too is review of thousands of pages characterizing the state of the science on a pollutant and its health and welfare effects by a group of the top experts in the field. But the pollutant review panels simply augment the expertise of CASAC. The panel’s review of the documents happens in the very same meetings that CASAC already has, and must have, according to Federal Advisory Committee Act rules. Sure, the additional experts in the room from inclusion of the panel might mean longer discussions, or an extra conference call, but this is far from a huge slowdown of the process. Instead, a bigger reason that ambient air quality standard updates aren’t speedy is the limited capacity on the EPA side. If the agency were given more resources to conduct and prioritize reviews, this could speed up the process—if this were, in fact, the goal of this Administration.

Wheeler claims to be concerned about whether or not the review happens within the Clean Air Act mandated five-year window. It is true that reviews are often not completed within five years, but the courts have generally recognized the need for thorough scientific reviews in standard updates. Instead, the administration is insisting that both the particulate matter and ozone reviews happen by the end of 2020.

3. Science advisors should be chosen based on diversity of expertise not geography

In his testimony, Wheeler asserted that “CASAC members and the members of the Science Advisory Board were selected in large part for geographic diversity, geographic diversity of — of — of viewpoints and backgrounds.” This one should be intuitive. If you took a chemistry class in Cleveland, you probably learned the same thing as a chemistry student in Miami. There is, of course, no reason geography should matter when it comes to understanding of science. Universities, academic journals, and scientific conferences don’t curate activities through a geography lens, and neither should the EPA. Instead what the EPA should do, and always has done, is select members of scientific advisory committees for diversity of expertise. To get the best science advice, the agency should make sure the committee includes experts in diverse areas. For CASAC, that means including experts in atmospheric science, medicine, toxicology, epidemiology, etc. Yet, the current CASAC excludes key areas of expertise like epidemiology.

Wheeler blames the selection of CASAC members on EPA staff, saying that he, in fact, did not pick the members; EPA staff did. He told Congress last week, “I didn’t hand-select any of the people on the CASAC. They were recommended to me … by the career staff and … and the Science Advisory Board Office.” This is curious given that it is the EPA administrator who decides committee membership. EPA staff always make recommendations to the administrator for who would be good candidates for a committee, given balance of expertise, but there has never been a committee like this, with so little membership from active researchers in the field and instead heavily weighted toward regulators. It is hard to imagine that EPA staff would select such a committee without input from political-level staff.

Sacrificing both quality and speed

Wheeler’s need for speed has not yielded results. Thus far, the PM review has not been faster than recent reviews that included the review panel. Currently, CASAC is finalizing its letter to the EPA recommending how the agency should revise its science assessment on particulate matter. This letter will confirm that the committee agrees it doesn’t have the needed expertise and make specific recommendations to EPA staff on the document. The EPA can then move forward without the expert advice its science advisors say it needs, or it can delay the process and reconvene the robust particulate matter review panel necessary for a science-informed process.

As of now, Administrator Wheeler is getting neither speed nor quality out of the particulate matter review. It is looking more and more like he won’t get what he wants out of the particulate matter standard update. But if the EPA fails to set a PM standard based on science, the public won’t get what it needs either.

Photo: Eltiempo10/Wikimedia Commons

Electric Utilities Can Accelerate Electric Truck and Bus Deployment

UCS Blog - The Equation (text only) -

Photo: Greensboro Transit Authority

Today, in my inaugural blog post, I am excited to share a set of recommendations for electric utility investments in electric truck and bus charging programs.

Swapping diesel trucks and buses for electric models is a critical strategy for both reducing greenhouse gas emissions to mitigate climate change and reducing local air pollution to improve public health. The good news is that high-performance electric trucks and buses are becoming increasingly available for many vehicle uses, notably medium-duty delivery vehicles, cargo equipment, transit buses, and school buses. The challenge is that widespread deployment of those vehicles requires a large-scale, coordinated effort by policymakers, private investors, and—you guessed it—electric utilities.

For their part, electric utilities are an important early investor in charging programs for all EVs, including trucks and buses, for several reasons. First, grid-related investments to support electricity demand from EVs are well within utilities’ wheelhouse. Second, utilities’ expertise in managing the grid makes them an important partner in managing electric truck and bus loads to maximize potential benefits to the grid. For example, smart charging of EVs can make renewable energy easier to incorporate into the grid. Finally, utilities have access to debt and capital to make investments that kick-start the market for private investment.

Utilities across the country are starting to take a serious look at EV programs to support the growing demand for electric cars, trucks, and buses.  Many utilities are moving forward with vehicle electrification proposals to state utility regulators, some of which include consideration for heavy-duty vehicles. Proactive state regulators and electric utilities can take advantage of the growing availability of models to accelerate electric truck and bus deployment to help realize the health, climate, and grid benefits from medium and heavy-duty vehicles.

UCS has laid out the principles for how electric utilities should invest in EV charging. The recommendations we release today, Utility Investment in Truck and Bus Charging: A Guide for Utilities, build on those principles by providing high-level guidance on the design of utility programs for truck and bus charging.

How should utilities go about designing programs, and what should state regulators look for when evaluating programs?

Consider various strategies to address barriers to truck and bus charging. 

Different electric truck and bus uses may require different program strategies, depending on vehicle model availability and the business case for electrification in a specific service territory. For charging infrastructure, this means utilities may need to make use of a variety of ownership models in order to effectively accelerate EV deployment. These ownership models extend beyond “business as usual” up to “end-to-end” utility ownership from the customer meter to the charger (see figure).

 


Models of Utility Investment in Electric Vehicle Charging Infrastructure

Set fair commercial rates that account for truck and bus charging and provide incentives for grid services.

Operating costs are one of the most important factors vehicle operators, particularly those who operate fleets, consider when deciding whether to switch to electric models.  Fair, sensible rates for commercial EV charging will ensure that vehicle operators have an opportunity to save on fuel costs and provide an incentive for charging at beneficial times for the electric grid.
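As a rough illustration of why rate design matters, a minimal sketch comparing daily fuel costs for a hypothetical electric transit bus under off-peak and on-peak energy rates, and against diesel, follows; every input (efficiency, rates, fuel price, mileage) is an illustrative assumption rather than a figure from the policy brief:

# Illustrative only: all inputs below are assumptions for this sketch, not figures
# from the UCS brief or any specific utility tariff.
BUS_KWH_PER_MILE = 2.2      # assumed electric bus efficiency
DIESEL_MPG = 4.0            # assumed diesel bus fuel economy
DIESEL_PRICE = 3.00         # assumed diesel price, $/gallon
OFF_PEAK_RATE = 0.10        # assumed overnight energy rate, $/kWh
ON_PEAK_RATE = 0.30         # assumed peak-period energy rate, $/kWh
DAILY_MILES = 150           # assumed daily route mileage

daily_kwh = BUS_KWH_PER_MILE * DAILY_MILES
print(f"Off-peak charging: ${daily_kwh * OFF_PEAK_RATE:,.2f} per day")
print(f"On-peak charging:  ${daily_kwh * ON_PEAK_RATE:,.2f} per day")
print(f"Diesel equivalent: ${DAILY_MILES / DIESEL_MPG * DIESEL_PRICE:,.2f} per day")
# With these assumptions, off-peak charging costs far less than diesel, while on-peak
# charging erodes much of the savings; demand charges (not modeled here) can erase them
# entirely, which is why rate design matters so much for fleet operators.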

Scale up programs based on their potential impact and the readiness of vehicles for electrification.

Vehicle applications such as transit buses, medium-duty delivery trucks, and cargo equipment have the potential to positively impact climate emissions and public health and are highly ready for electrification. As such, those vehicles are ready for large-scale utility programs. Utilities can also advance more nascent vehicle applications through pilot projects.

Prioritize serving communities overburdened by air pollution.

Diesel pollution and the consequential human health impacts are not distributed uniformly. Utility programs can have maximum impact for each charger deployed by focusing on areas that suffer disproportionately large amounts of diesel pollution. However, prioritizing overburdened communities is not just a best practice for cost-effectiveness. Because low-income communities and communities of color are overrepresented in overburdened areas, prioritizing charger and EV deployment in these areas is an important way to reduce public health inequities.

Coordinate and leverage multiple funding sources.

While utilities are well-suited to be an early investor in the EV charging space, other funds for EV charging are available. As UCS has previously discussed, the VW settlement and other funds fall short of providing the scale of investment needed for widespread electrification of trucks and buses. Even so, those funds are an important resource for accelerating EV adoption. Utilities can maximize the reach of their own programs by coordinating with and leveraging other funding sources.

Consider fleet programs that accelerate electrification across vehicles classes.

Utilities can identify opportunities to include trucks and buses alongside passenger vehicles in fleet programs to make the most of synergies in information sharing between the utility and fleet customers.

Consult with truck and bus fleet managers when developing programs.

Utilities’ customer relationships with fleet managers can become strategic partnerships for the development of utility charging programs.  Utilities can collaborate with fleet operators to understand the use and charging needs of electric trucks and buses in order to inform infrastructure programs and rate designs.

Set minimum charging system capabilities to enable managed charging.

Managed charging of truck and bus loads is critical to realizing the greenhouse gas benefits and fuel cost savings those vehicles can offer. A “smart” system in which chargers can communicate with a network system is necessary to enable managed charging. Requiring such capabilities for chargers supported by utility programs will enable managed charging, while also making it easier to upgrade charger software over time.
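To make managed charging concrete, here is a minimal sketch of the kind of scheduling a networked charging system enables: it fills a depot's overnight energy need from the cheapest hours first, subject to a charger power limit. The hours, prices, and limits are illustrative assumptions, not part of the recommendations:

def schedule_charging(energy_needed_kwh, hourly_prices, max_power_kw):
    # Allocate energy to the cheapest hours first, up to the charger's power limit
    # in each hour (one hour at max_power_kw delivers max_power_kw kWh).
    plan = {}
    remaining = energy_needed_kwh
    for hour, price in sorted(hourly_prices.items(), key=lambda kv: kv[1]):
        if remaining <= 0:
            break
        kwh = min(max_power_kw, remaining)
        plan[hour] = kwh
        remaining -= kwh
    return plan

# Hypothetical overnight price signal ($/kWh) from a utility or charging network:
prices = {22: 0.18, 23: 0.12, 0: 0.08, 1: 0.07, 2: 0.07, 3: 0.09, 4: 0.14, 5: 0.20}
print(schedule_charging(energy_needed_kwh=600, hourly_prices=prices, max_power_kw=150))
# Charging lands in the cheapest overnight hours instead of starting the moment the
# buses return to the depot, which is the behavior managed charging is meant to produce.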

Future-proof investments by preparing charger sites for additional deployments.

It is important to take a long-term view of electric truck and bus deployment when designing programs. Utilities can future-proof “make-ready” investments—the upgraded panels, new conduit and wires to make the site ready for chargers—by considering expected future charging demand when determining the capacity of the make-ready installation.

I am encouraged to see some utilities already stepping up to support truck and bus electrification. We need many more to follow suit with significant investments to make timely progress on climate and public health. These recommendations will help make utility investments more effective in meeting these urgent goals.

For a fuller discussion of each recommendation, including program examples, be sure to check out the full policy brief.

Photo: Greensboro Transit Authority

Will Congress Extend the EV Tax Credit? A New Bipartisan Bill Gives me Hope

UCS Blog - The Equation (text only) -

Photo: John Brighenti/Flickr

Electric vehicles (EVs) are our best choice for significantly reducing emissions from cars and light trucks.  Here at UCS, we spend a lot of time thinking about EVs, how they work, what they do for the environment, how to get more consumers to think about buying one, how to make sure the benefits of electrification are widespread and equitable, and how to best incentivize these vehicles for consumers.

Numerous polls and studies show that reducing the upfront cost of EVs is key to accelerating adoption.  The purchase prices of EVs are currently higher than those of their conventional gasoline-powered counterparts, so the federal $7,500 tax credit for plug-in EVs helps make them cost competitive and is critical for deployment.  The credit is structured differently than most other tax credits: the full credit is available until an auto company hits 200,000 EV sales; after a manufacturer exceeds that number, there is a year-long phase-down period during which buyers receive a partial tax credit.
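As a rough sketch of how that phase-down works under the rules described here (as I understand them, the full credit runs through the quarter after the quarter in which the 200,000th sale occurs, then drops to 50 percent for two quarters and 25 percent for two more before reaching zero), the logic looks something like this, with quarter bookkeeping simplified for illustration:

FULL_CREDIT = 7500.0

def credit_multiplier(quarters_after_cap_quarter):
    # 0 or 1 quarters after the quarter of the 200,000th sale: full credit;
    # the next two quarters: 50 percent; the two after that: 25 percent; then zero.
    if quarters_after_cap_quarter <= 1:
        return 1.0
    if quarters_after_cap_quarter <= 3:
        return 0.5
    if quarters_after_cap_quarter <= 5:
        return 0.25
    return 0.0

for q in range(7):
    print(q, credit_multiplier(q) * FULL_CREDIT)
# A buyer's credit falls from $7,500 to $3,750 to $1,875 to $0 over roughly a year
# after the manufacturer crosses 200,000 cumulative sales.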

Why does this matter?

Two U.S. manufacturers have hit the 200,000 sales mark and are currently in the phase down – Tesla and General Motors.  Nissan will likely be the next manufacturer to hit the cap.  As consumers are shopping for a new EV, they will find that they will not be able to take the tax credit for vehicles made by these manufacturers, which creates a disincentive to buy EVs from these companies.  With about 40 EV models on the market (compared to nearly 300 models for conventional vehicles), the already restricted consumer choice on EVs shrinks even more.  Further, many of these EVs are only available in select markets, so depending on where you live, you may have far fewer EV models to choose from. This also penalizes the companies that have been leading the way on electrification as they are now competing with companies that have been slower to market and whose vehicles are still eligible for the tax credit.

It’s not that this is a bad structure, but the biggest problem with the current tax credit is that 200,000 vehicles isn’t considered scale in the auto industry.  For example, in 2018 over 240,000 Jeep Cherokees, 325,000 Honda Civics, and 909,000 Ford F-series trucks were sold.  These vehicles are all being produced at scale, but not a single EV model has had sales anywhere close to these numbers over their many years on the market.

As battery costs decline and manufacturing scale increases, these vehicles will become cost-competitive with conventional vehicles – both our analysis and new analysis from ICCT show that we can expect to see price parity in the mid-2020s.  We strongly support expanding or modifying the tax credit for a defined period before EVs are cost-competitive with conventional vehicles.  There are a number of ideas on how to do this; some change the credit to be a more conventional tax credit and allow for it to be used for a set number of years.  Others increase the number of vehicles (the “manufacturer cap”) that are eligible for the tax credit.  We are open to evaluating any of these solutions.

We are nearing a tipping point in the next decade where electrification will be mainstream — costs for batteries are coming down, and manufacturers are nearing deployment of EVs in every class of vehicle. But it will take bipartisan support and investment to make that vision a reality, if the US is to lead the world towards a more sustainable transportation future.

What’s new this week?

Last night, the first bipartisan and bicameral piece of legislation that would increase the tax credit was unveiled.  It has the support of 60 organizations, including the auto companies (all of them – this is no small feat), utilities, auto suppliers, environmental groups, health groups, business groups, and security groups.  In other words, this is legislation that has widespread support and could potentially become law.

In the Senate, Senators Debbie Stabenow (D-MI), Lamar Alexander (R-TN), Gary Peters (D-MI), and Susan Collins (R-ME) are the primary architects of the Driving America Forward Act.  Representative Dan Kildee (D-MI-5) is the lead sponsor in the House of Representatives.  This proposal would increase the per manufacturer cap to 600,000 and reduce the tax credit value for the additional 400,000 units to $7,000 per vehicle (it’s currently a maximum $7,500 per vehicle).  The bill also extends the tax credit for hydrogen fuel cell electric vehicles for 10 years, which will incentivize the development and deployment of additional low carbon, zero tailpipe emissions options, which UCS also supports.

What will the bill really do?

Some relatively simple math shows the benefits of EVs. The average EV driving on electricity in the US will generate 3.3 tons FEWER CO2e (CO2 equivalent) emissions per year than an average gasoline-powered car (which right now gets about 30 mpg).  If I could wave a magic wand and replace 400,000 conventional vehicles with EVs tomorrow, the reduction would be 2 million metric tons of CO2e emissions per year, roughly the same emissions as from the electricity use of almost 350,000 homes in  a year.
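For readers who want to scale the per-vehicle figures themselves, here is a minimal sketch; the per-EV savings are the ones quoted in this post (the gasoline figure appears a little further down), the fleet size is whatever you choose, and the totals simply scale linearly with both inputs:

# Back-of-the-envelope scaling of per-vehicle savings to a fleet of EVs.
CO2E_SAVED_PER_EV_TONS = 3.3       # tons CO2e avoided per EV per year, as quoted in this post
GASOLINE_SAVED_PER_EV_GAL = 480    # gallons of gasoline avoided per EV per year, as quoted below

def fleet_savings(n_evs):
    return {
        "tons_co2e_per_year": n_evs * CO2E_SAVED_PER_EV_TONS,
        "gallons_gasoline_per_year": n_evs * GASOLINE_SAVED_PER_EV_GAL,
    }

print(fleet_savings(100_000))   # every additional 100,000 EVs adds the same increment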

These climate benefits are real and are only going to get better as the grid gets cleaner.  My colleagues have been looking at the emissions impacts of driving an EV in different parts of the country for years now, and we have already seen a dramatic shift in the several years since we first started this work.  In 2009, we found that 45 percent of people lived in areas where an EV would produce the same global warming emissions as a conventional vehicle that gets 50 mpg.  By our more recent analysis in 2018, that number was up to 75 percent (the toggle function on the map in this blog is really fun).  In large parts of the country, EVs emit much less than even the most efficient conventional vehicle.  That’s a significant change over a relatively short time period.  Unlike gasoline, electricity is a transportation fuel that can get (and has gotten!) significantly cleaner over time – as the grid gets cleaner, the emissions from EVs charged on that grid automatically go down.
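For readers curious how that kind of 50-mpg comparison can be computed, here is a minimal sketch that converts a region's grid carbon intensity into the gasoline fuel economy with the same emissions per mile; the efficiency and emissions factors are illustrative assumptions, not the inputs UCS used in its analyses:

GASOLINE_KG_CO2E_PER_GALLON = 11.2   # assumed well-to-wheels emissions per gallon of gasoline
EV_KWH_PER_MILE = 0.30               # assumed EV efficiency, including charging losses

def mpg_equivalent(grid_kg_co2e_per_kwh):
    # Gasoline fuel economy that would produce the same CO2e per mile as the EV.
    ev_kg_per_mile = EV_KWH_PER_MILE * grid_kg_co2e_per_kwh
    return GASOLINE_KG_CO2E_PER_GALLON / ev_kg_per_mile

for intensity in (0.7, 0.45, 0.2):   # dirtier grid, mid-range grid, cleaner grid (kg CO2e/kWh)
    print(f"{intensity} kg CO2e/kWh -> ~{mpg_equivalent(intensity):.0f} mpg equivalent")
# As the grid gets cleaner (lower kg CO2e/kWh), the mpg equivalent rises automatically,
# which is exactly the shift described above.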

In addition to the climate benefits, this bill would also result in lower oil use – to the tune of about 480 gallons per year per car.  In my magic wand scenario above, that would be nearly 200 million gallons of gasoline that are not used.  That’s a lot of oil.  Speaking of oil – you know who isn’t going to like this bill?  The Koch brothers and the oil industry.  We have been keeping an eye on their lobbying activities around the EV tax credit – I’m sure it won’t be terribly surprising to learn that they are actively trying to abolish it.   This means that the oil companies think that EVs pose a real threat to their business.  To me, that means we’re on the right path, but we can’t afford to deviate now.  We must keep moving forward, and that means increasing EV sales and making sure that charging infrastructure is available so we can dramatically reduce emissions from transportation.

EVs may be a threat to the oil industry, but they are critical to the auto industry

US leadership in a critical industry is also riding on our ability to deploy EVs domestically.  Globally, there is really no question that we are moving towards electrification.  The International Council on Clean Transportation has written several reports on the global EV market and what other countries are doing to incentivize EV purchases – not surprisingly, China is setting itself up to eat our lunch.

In 2018, 64 percent of the EVs sold in the US were made domestically.  GM, Tesla and Nissan EVs have been rolling off assembly lines in MI, CA, and TN, for example. That’s a pretty good news story.  But if we, as a country, do not continue to invest in electrification, we are not going to be able to keep posting these numbers.  We are going to wind up importing more EVs, and maybe more importantly, the intellectual capacity on innovation and leadership in the advanced automotive industry is going to shift elsewhere.  As ICCT put it “Economies like Japan, Germany, and the United States, among others where there is major automobile manufacturing, have the most to lose if they do not lead in the transition to electric vehicles.  China, on the other hand, is now the leading automobile market and has the most to gain from staking out a leadership position in the shift to electric.”  If we don’t stay at the table, we can’t win.

It would be great for more Senators to support the bipartisan bill to extend the EV tax credit – you can ask your Senators to co-sponsor the bill by taking this action.

Photo: John Brighenti/Flickr

Yes, EPA: Regulating Mercury Pollution Is “Appropriate and Necessary”

UCS Blog - The Equation (text only) -

Photo: Mrs. Gemstone/Flickr

It doesn’t take a health care professional, public health expert or environmental scientist to understand the value of clean air and the need for regulatory safeguards that protect our families and communities from toxic air pollution. While killer smog may seem like a historical artifact, air pollution exacts a significant toll globally and on our own nation’s health and economy.

The scientific evidence and health data are clear: exposure to toxic and hazardous air pollutants can result in premature death and cause a host of cancers, lung and heart diseases, adverse reproductive outcomes, birth defects, and neurological and cognitive impairments that can have lifetime impacts. In addition to pain, suffering and disability, these health impacts have significant economic, social and emotional costs for patients, their families and their caregivers—from doctors’ appointments, emergency department visits, and medications, to lost workdays, missed school days, and restrictions in daily living.

And it doesn’t take an advanced degree to know that mercury is an especially bad actor—a toxin particularly hazardous to pregnant women, to the neurological development of their fetuses and to young children—causing impairments that can last a lifetime.

None of this is really news. What may be news is that the Environmental Protection Agency (EPA) has proposed a change to its Mercury and Air Toxics Standards (MATS) in the form of a new formula for calculating the human health benefits of reducing some of the most hazardous air pollutants from power plants: chemicals that in even relatively small quantities are potent carcinogens, mutagens, teratogens and neurotoxins. Congress specifically recognized these hazards when it enacted Section 112 of the Clean Air Act.

In a somewhat wonky sleight of hand and one that does not bode well for future clean air protections, the agency has proposed a revision to its own finding on MATS. Incredibly, the agency now asserts that it is no longer “appropriate and necessary” to regulate mercury and hazardous air pollution emitted from power plants under the Clean Air Act. In the U.S., power plants are the largest source of mercury, chromium, arsenic, nickel, selenium and the acid gases hydrogen fluoride, hydrogen cyanide and hydrogen chloride. These are highly hazardous pollutants that cause serious harm to humans, wildlife and the environment. And the human health damage is borne disproportionately by people of color and the poor.

In 2012, the EPA estimated that its MATS rule would prevent 11,000 premature deaths and over 100,000 asthma and heart attacks each year, as a result of the co-benefits of the reduction in particulate matter pollution that occurs when plants reduce their mercury emissions. The agency estimated the health benefits of reductions in all air pollutants associated with MATS would range from $37 billion to $90 billion, with compliance costs to industry estimated at $7.4 billion to $9.6 billion annually.

But the EPA has now decided that the health benefits of controlling MATS emissions are only $4 million to $6 million max, if you don’t (and the agency opines that you shouldn’t) count the benefits of controlling the related emissions. This despite the fact that scientists have concluded that the EPA’s 2011 regulatory impact assessment greatly underestimated the monetized benefits of reducing mercury emissions from power plants. Also notable is the fact that most coal plants have already come into compliance with MATS by installing the necessary pollution control technology.

In suggesting that it is no longer “appropriate and necessary” to regulate mercury and air toxics from power plants, EPA Administrator Andrew Wheeler is basically saying that the health benefits are too paltry to justify the costs.

As public health professionals, we strenuously disagree. We suspect that most members of the public will, too.

So, what to do? The EPA proposal is open for public comment through April 17, 2019. It is critical that scientists, health professionals, economists, community advocates, public interest organizations and concerned members of the public express their strong opposition to this drastic narrowing in how the agency evaluates the costs and benefits of critical public health protections.

The American Public Health Association has joined professional medical societies and public health groups in taking legal action to protect limits on MATS pollution and has filed an amicus brief in a related court case. The Union of Concerned Scientists has produced a public comment guide to support and encourage submission of detailed and specific comments during this open comment period. Both organizations will be submitting comments as well.

Make no mistake. Though the current proposal focuses specifically on MATS, it directly challenges the very foundation of clean air regulations. That’s why we and our organizations are speaking out—for the health of our families and our communities today and into the future. Our individual and collective voices matter. We urge you to join us.

 

This post was co-authored with  Georges C. Benjamin, MD, Executive Director of the American Public Health Association, and originally appeared on Scientific American.

Photo: Mrs. Gemstone/Flickr

Fires in Texas Spark Interest in Chemical Safety

UCS Blog - The Equation (text only) -

LadyDragonflyCC/Flickr

Watching the news last week as clouds of thick black smoke billowed over Houston, I worried about my family. They are surrounded by chemical plants. Hearing state and local officials say there was no air quality issue, and then order everyone to “shelter in place,” terrified me. In truth, the monitors either weren’t working or were under maintenance, and there didn’t seem to be an evacuation plan. Why not? The law requires one.

In the past month, there have been at least two major chemical fires or explosions at EPA Risk Management Plan (RMP) facilities. The Union of Concerned Scientists has been extensively writing on the RMP rule and its provisions and participated in the victorious court case that required the EPA to implement the Obama era rule.

RMP standards aim to provide additional information to communities surrounding facilities, and to require facilities to coordinate with first responders on evacuation plans in case of an emergency and to research safer technology alternatives that may make their facilities less prone to catastrophic incidents.

Despite these important provisions, the Trump Administration has moved forward with rolling back this rule, and in doing so it has proposed to “remove all preventative measures.” What is a risk management plan if it doesn’t lower risks? The rule is now being finalized, and given the concerns we had with the proposed version, we expect the final rule to weaken standards. The two major chemical fires and explosions in the past month should demonstrate to the EPA that implementing the RMP protections is the least it could do for environmental justice communities, first responders, and workers to protect their public health and safety.

The explosion at the KMCO chemical manufacturing plant in Crosby, Texas, occurred on April 2 when isobutylene ignited. This facility was no stranger to incidents: a previous explosion in 2010 caused worker injuries and a worker death. The facility has also been cited for lacking an appropriate emergency action plan, benzene leaks, and lack of monitoring. This time around, the explosion killed one person and injured two others. Communities surrounding the facility are still seeking information from TCEQ and from the facility directly on potential health hazards and air quality monitoring in the wake of the explosion.

The Intercontinental Terminals Company (ITC) fire in Deer Park, Texas, on March 17 burned for days; once the fires from the various containers were put out, a new shelter-in-place order was issued for two additional days due to excessive benzene levels detected. Congressional members representing sections of the Houston area came together to call on TCEQ to provide more information on air monitoring and information sharing after this fire.

Unfortunately, explosions like these add insult to injury for many communities. People living near petrochemical facilities like these already face disproportionate exposure to toxic emissions from the facilities on a regular basis, in addition to other nearby sources of air pollution, like increased truck traffic around the facility and other industrial and transportation-related pollution and stressors. Preventable chemical disasters only add to the burden faced by these communities on a daily basis.

Yvette Arellano from the Texas Environmental Justice Advocacy Services (T.E.J.A.S.), an environmental justice group working on the ground in Manchester and Greater Houston Area, stated “While regulatory agencies protect these facilities from acts of terrorism, who protects us from these facilities which terrorize us on a daily basis? The simple daily acts of life from brushing our teeth in the morning to going to sleep are made traumatic by these events, and the ITC disaster is yet to be over. We never asked to live a life in which we are scared of being at home, forced to live with plastic on the windows and doors, with no ventilation in a city where temperatures regularly skyrocket to over 100. We are suffering out of sight, made silent, and forced into the shadows-living under dark clouds, not of our making. This is not just, it is not freedom or liberty this is an act of terror on our lives.”

I wish I could be clinical and detached from this issue, but I can’t. My family lives in one of the largest concentrations of these RMP facilities outside of Houston, Texas. Every time I hear about another incident I think of my nieces and nephew and whether they were outside at school when this happened. I worried about them while they were locked in their homes in a shelter in place during the ITC fire and subsequent benzene leak. Like all young children, they deserve to be free to run outside without fear of a chemical cloud keeping them indoors.

These incidents at chemical facilities in Texas are unfortunately perfect examples of why the Risk Management Plan standards should not only be maintained by the Trump Administration but should also be strengthened during this rulemaking process. Congress should hold the EPA accountable and call on it to issue a strengthened RMP rule that gives the communities outside these facilities better access to information about the chemicals on site and better coordination with first responders to create safety plans that aren’t limited to sheltering in place.

Chemical facilities need oversight and high safety standards in order to protect environmental justice communities, first responders, and workers. Companies should be held accountable, particularly those with multiple incidents and fines like the two facilities above. They need to be willing to share information with communities more readily, so families like mine can make the best decisions in case of a fire or explosion, and they need to take strong measures to reduce the risk of these incidents happening in the future.

LadyDragonflyCC/Flickr

4 Questions for the New Census of Agriculture

UCS Blog - The Equation (text only) -

Photo: Lance Cheung/USDA

Every five years, number-hungry analysts eagerly await the release of the US Census of Agriculture to get a fresh glimpse at the state of the US food and farm system. The newest version, which contains data reflecting conditions in 2017, is expected to be released on April 11, marking the first update to the crucial dataset since 2012. In addition to offering updated data for many characteristics that have been monitored for decades, this Census included some new questions expected to offer critical insights for a rapidly changing world.

What makes the Census of Agriculture so special?

Since 1840, the Census has been used to create a rich dataset that tracks trends on the nation’s farmlands and rangelands, such as shifts in demographics, farming practices, economics, and more. This comprehensive and consistent survey is conducted by the USDA’s National Agricultural Statistics Service (NASS) and covers all states and counties in the nation, and US farms and ranches of all shapes and sizes. The survey is mandatory for operations with expected sales of at least $1,000. This time around, the survey was mailed to 3 million operations, and 72 percent of those surveyed responded (nearly 75 percent responded in 2012 and 78 percent in 2007).

Given its breadth, Census data is a key resource used by decision-makers—including farmers, ranchers, community leaders, legislators, and companies—to understand and plan for the future of agriculture. Census data influences decisions about programs and funding for research, safety nets, infrastructure investments, and more. As we count down the days to the release, here are some things to think about.

Four questions we’ll be asking of the new US Census of Agriculture

One of the first things I’ll be looking for on April 11 is to see whether some key trends from 2007 to 2012 have continued into 2017. I’ll also be curious to see what can be learned from new survey questions incorporated in 2017. My top questions include:

  1. Are farmers getting older? According to the 2012 census, the average US farmer was 58 years old, male, and white, suggesting an aging and homogeneous workforce. These demographics have drawn concern, especially given the growing global population, continued rates of food insecurity in the US and abroad, and the increasingly urgent need for a more sustainable agriculture. The recent growth of organizations such as the National Young Farmers Coalition has been encouraging, and changes in the 2018 Farm Bill suggest momentum in support of new and more demographically diverse farmers. But how much have things changed since 2012?
  2. Has farm consolidation increased? Farm characteristics in recent years have trended toward higher production expenses, increasing concentration of value in the largest operations, and other factors that make earning a living on a farm hard to fathom for most. Are there any signs of change on these fronts as of 2017, or is there even more work ahead of us?
  3. Are conservation and other healthy soil practices catching on? It will also be interesting to see whether farming practices have shifted since 2012. In the past few years, there has been a growing dialogue on healthy soils among US farmers and ranchers, alongside an expanding body of reports outlining the threats climate change poses to them. In response, there’s evidence that farmers and ranchers are adopting more conservation practices, such as cover cropping, that can help them build resilience to extreme weather. The release of the 2017 Census will be an important opportunity to gauge whether change is actually happening at a larger scale.
  4. What insights can we glean from new questions in the 2017 census? In addition to giving us more data on trends tracked in the past, the 2017 Census will provide benchmarks in some new categories, such as military veteran status. It also asks new questions about farm demographics, decision making, and more. These new questions could provide a foundation for new learning and inquiry in future years.

More data worth waiting for

Some of the data from this round of the Census is yet to come, so there will still be plenty to look forward to even after April 11. Upcoming data will include results from Puerto Rico and other US territories, as well as the 2018 Irrigation and Water Management Survey and the 2018 Census of Aquaculture. And we’ll have to wait even more patiently for the Organic Survey, the Census of Horticultural Specialties, and the Local Food Marketing Practices Survey, which will be available beginning in late 2020.

As rich as the Census has become, it’s also true that it can’t be expected to capture everything. Therefore, some of the data I’d personally love to see (like soil carbon content, details of cover crop diversity, and so on) will still only be on my wish list—at least in the short term.

Protecting the past and future of historical data and research integrity

The most recent edition of the nation’s most comprehensive agricultural dataset is guaranteed to give us a valuable look at the state of farming and ranching in the US. But what seems less guaranteed is the future of the agencies that we rely on to collect and make sense of it all. These include NASS and the USDA’s Economic Research Service (ERS), both of which are at risk of tighter budgets, and the latter of which is facing potential relocation.

So, as you prepare to dig in, consider taking a moment to think about the names behind the numbers, and what you can do to maximize and protect this national treasure trove of information.


Farmers Are Excited About Soil Health. That’s Good News for All of Us.

UCS Blog - The Equation (text only) -

Before switching to no-till, farmer Gary Hula described the soil as having the consistency of flour. Just four years later, the improvement in the soil’s structure and moisture is undeniable. Photo courtesy USDA/Flickr

“When we think about the challenges in agriculture, carbon—and how to sequester it—is near the top.” So said Roger Johnson, the president of the National Farmers Union (NFU), in opening the grassroots organization’s 2019 annual convention in March. Storing carbon in farm soils is an important climate change solution, but building the health of those soils is also critical for ensuring clean water for communities and helping farmers be productive while coping with the consequences of a climate that is already changing. And throughout the NFU’s three-day gathering, the phrase “soil health” and talk about strategies to achieve it seemed to be on everyone’s tongue.

Though it is hard to quantify, surveys suggest that many US farmers are already taking steps to build soil health and store carbon in their soils. Science has shown that practices such as no-till farming (in which soil isn’t disturbed by plowing), cover crops, extended crop rotations, perennial crops, and integrating crops and livestock deliver myriad benefits. These can include preventing erosion, suppressing weeds, reducing the need for pesticides and added fertilizers, increasing wildlife habitat and beneficial insects, and creating “spongier” soils that drain and hold water better, increasing resilience to both floods and droughts.

Recognizing these benefits, legislatures in many states have passed or introduced bills aimed at increasing soil health. For example, Nebraska state legislators recently introduced the Soil Health & Productivity Incentive Act, which seeks to increase adoption of cover crops and other soil-building farming techniques. In Maryland, the state’s Healthy Soils Program—launched in 2017—offers incentives, including research and technical assistance, to farmers to implement farm management practices that promote soil health.

NFU President Roger Johnson: We’re seeing more and more focus on soil health in the context of the farm bill. and that’s a good thing. When we think about the challenges in agriculture, carbon and how to sequester it is near the top. #NFU2019 pic.twitter.com/0p5TIZGieB

— National Farmers Union (@NFUDC) March 4, 2019

Congress also acknowledged the healthy soil movement in last December’s new farm bill, creating a voluntary Soil Health and Income Protection Pilot Program to assist farmers in converting less-productive cropland to carbon-storing, water-holding grassland. The new farm bill also includes a new Soil Health Demonstration Trial to provide incentives for new soil carbon sequestration practices and to establish protocols for measuring soil carbon levels. (Unfortunately, Congress also simultaneously cut funding for the existing USDA program that is best equipped to facilitate healthy soil practices on many of the nation’s farms and ranches.)

Farmers share their soil health stories

At the NFU convention, multiple presentations focused on soil health and the regenerative agriculture practices known to further it. And outside the workshops, farmers were eager to share stories of changes they’ve made and the impressive results they’ve garnered. Just a couple of examples:

Living roots and plant diversity help overcome climate challenges in Oklahoma. Russ and Jani Jackson run a diverse farm operation on 4,100 acres of crop- and grassland in Kiowa County, Oklahoma. Until 2006, they plowed their fields extensively and planted, as Russ says, “wheat, wheat, and wheat.” That monocrop system led to significant erosion problems, and something had to change. “We learned that it’s better to keep a living root in the ground to feed the biology in the soil.” Now they grow as many as 10 crops: canola, cotton, grain sorghum, soybeans, and more. They double-crop and plant off-season cover crops. And while they’ve long raised cattle on their grassland, they now also set the cows loose on cropland to graze down those cover crops.

The Jacksons’ innovation is paying off in a climate in which their farm can go 120 days with less than a quarter-inch of rain…and then get a 4-inch deluge in an afternoon. With technical assistance from the non-profit Noble Research Institute, they’ve measured changes in the soil’s structure and its water-filtering capacity. On one clay soil field, a light rainfall—less than 6/100th of an inch per hour—used to lead to runoff. Now, with all the organic matter the Jacksons have added with cover cropping and other practices, that same field can handle a steady rain of 2.7 inches per hour before it becomes saturated, a 45-fold increase in what is called the infiltration rate. Even on fields with less-dramatic increases, the difference between the Jacksons’ property and their neighbor’s is plain to anyone passing on the highway after a significant rainfall.
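For the number-minded, the 45-fold figure follows directly from the two rainfall rates quoted above. Here is a minimal sketch in Python of that arithmetic (the rates are the rounded figures reported in this post, not the Noble Research Institute’s raw measurements):

    # Rough check of the infiltration-rate improvement described above.
    # Rates are the rounded figures quoted in the post.
    old_rate_in_per_hr = 0.06  # light rain that used to cause runoff (~6/100 inch per hour)
    new_rate_in_per_hr = 2.7   # steady rain the same field can now absorb (inches per hour)

    improvement = new_rate_in_per_hr / old_rate_in_per_hr
    print(f"Infiltration capacity improved roughly {improvement:.0f}-fold")  # prints ~45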

In Russ Jackson’s field on the left, green cover crops protect the soil and increase its capacity to filter and hold rainwater. The neighboring field on the right is bare and flooded after rain. Photo courtesy of Jim Johnson, Noble Research Institute

Science-based pasture practices help a Minnesota dairy farm keep water and nutrients in the soil. On his family dairy farm near Franklin, Minnesota, James Kanne had always had the pasture at the top of the hill, and he’d long planted a grass swale downslope to catch the inevitable runoff before it could muddy the stream. So when his daughter came home from college and said, “Dad, you’re doing the pasture wrong,” the sixth-generation dairy farmer didn’t bat an eye. Instead, he took on her suggestions for a new rotation system, informed by her study of biology and environmental science and her visits to farms in New Zealand. Within two or three years, the grass swale was no longer necessary, and the pastures were more productive.

Why? The grazing rotation they adopted—moving the cows from one small area of the pasture to another, keeping them on each for as little as a single day—was keeping the grass trimmed to the optimal height for plant growth. And the cow pies, which he says previously would dry and harden where they were dropped (“until you could throw them like frisbees”), were now being dismantled in a matter of days by dung beetles, and those precious nutrients moved deep into the soil to fertilize the grasses from the roots.

In the process, the beetles and other organisms created pores through which rainwater could percolate, rather than running down the hill. And the deep-rooted perennial grasses that are part of the farm’s new customized pasture mix can draw that stored water back up when needed between rains. Informed by science, it’s a near-perfect ecological system that virtually ended the farm’s contribution to the pollution that today continues to drain from farm fields across the Midwest, fouling local drinking water supplies and devastating fisheries in the Gulf of Mexico.

Finding soil solutions that work for farmers—from deals with food companies to a Green New Deal?

Though farmers like James Kanne and the Jacksons are already taking action, there’s a critical need for solutions to help improve soil health and combat climate change on farms nationwide. Some of these solutions will need to come from Congress, and groups like NFU and UCS will need to help ensure that those will work for farmers.

Of course, it’s not all up to Congress. The private sector also has a role to play in driving the shift to regenerative, healthy-soil practices in agriculture. And some companies are beginning to step up. Cereal-maker General Mills hosted a panel at the NFU convention, showcasing farmers with whom the company has agreements to purchase ingredients produced using regenerative practices. The company—which posted a $2 billion+ net profit in 2018—is touting its commitment to healthy soil, with a pledge to shift 1 million farm acres to such practices by 2030.

And while talk of a “Green New Deal” is gathering steam and could lead to bold, unifying action to avert a climate catastrophe, the resolution recently voted down by Senate Republicans didn’t spell out concrete policy measures. Farmers haven’t been involved in such conversations to date, and despite their enthusiasm for healthy soil and climate action, they are understandably cautious. Still, farmers have signaled that they want and need to be at the table as new policies are developed, and scientists who study agroecology have called for a food and agriculture platform to be part of any future Green New Deal.

.@Kriss4Wisconsin: Farmers don't want to be on the menu (on climate change). We want to be the chefs. #NFU2019 #GreenNewDeal

There are Faster, Cheaper, Safer and More Reliable Alternatives to the Energy Department’s Proposed Multibillion Dollar Test Reactor

UCS Blog - All Things Nuclear (text only) -

Department of Energy (DOE) Secretary Rick Perry recently announced the launch of the Versatile Test Reactor (VTR) project, flagging it as one of the department’s top priorities. The project, which would be the first new DOE test reactor in decades, would differ from the DOE’s operating test reactors because it would be cooled by liquid sodium instead of water, enabling it to produce large numbers of “fast” neutrons. The DOE says that such a facility is needed to develop new reactors that use fast neutrons to generate electricity. US nuclear plants today are light-water reactors, which use slow (“thermal”) neutrons.

The Union of Concerned Scientists (UCS) questions the need for a dedicated fast neutron test reactor and, more generally, has serious concerns about fast reactor safety and security, detailed in a critique it released last year. Fast reactors pose nuclear proliferation and terrorism risks in part because they commonly use fuels containing plutonium, a nuclear weapon-usable material. Most fast reactor concepts also involve reprocessing of their spent fuel, which separates plutonium in a form that is vulnerable to theft.

Perry reported that the DOE has determined the “mission need” for the reactor, the first milestone for any new large department project, but the mission-need statement fails to make the case that the VTR is needed for fast-reactor development. Regardless, the Nuclear Energy Innovation Capabilities Act, signed into law in September 2018, directed the department to build the VTR by the end of 2025. For its part, the department does not believe it can be completed before the end of 2026.

A Hefty Price Tag

Missing from Secretary Perry’s February 28 announcement was any estimate of the cost. In response to a Freedom of Information Act (FOIA) request, UCS has learned that a “rough order-of-magnitude” estimate for the VTR’s construction and startup is $3.9 billion to $6.0 billion. To build the reactor over the next seven years would require the DOE to spend, on average, $550 million to $850 million annually, which is comparable to the department’s total fiscal year 2019 budget for nuclear technology development of approximately $740 million. The DOE has requested $100 million for the project (which it now refers to as the Versatile Advanced Test Reactor) in fiscal year 2020.
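To see how those annual figures follow from the rough order-of-magnitude estimate, here is a minimal back-of-the-envelope sketch in Python (the dollar amounts and seven-year window are taken from the estimate and schedule described in this post, not from official budget documents):

    # Annualize the rough VTR construction/startup estimate over a seven-year build.
    low_estimate_billion = 3.9   # low end of the rough order-of-magnitude estimate
    high_estimate_billion = 6.0  # high end of the same estimate
    build_years = 7              # roughly now through the end of 2026

    low_annual_millions = low_estimate_billion / build_years * 1000
    high_annual_millions = high_estimate_billion / build_years * 1000

    print(f"Average annual spending: ${low_annual_millions:.0f}M to ${high_annual_millions:.0f}M")
    # Prints roughly $557M to $857M, which the post rounds to $550M to $850M per year,
    # comparable to the ~$740M FY2019 nuclear technology development budget.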

Cheaper, Faster Alternatives

The FOIA documents also reveal that the DOE’s determination of mission need misquotes its own 2017 user needs assessment to justify the new test reactor. In fact, there are ways to simulate the range of neutron speeds typical of a fast reactor in an already existing test reactor, such as the Advanced Test Reactor at Idaho National Laboratory or the High Flux Isotope Reactor at Oak Ridge National Laboratory. This could be accomplished by using neutron filters and possibly a different type of fuel. Going that route would be significantly cheaper: A 2009 DOE assessment suggests that this approach could achieve the minimum requirements necessary and would cost some $100 million to develop (in 2019 dollars), considerably less than the VTR project’s projected price tag.

Equally important, using one of the two currently operating test reactors could likely provide developers with fast neutrons more quickly than the VTR project. The proposed test reactor would not be operational before the end of 2026, according to the DOE’s proposed schedule, which it describes as “aggressive.” A recent DOE study estimated that it would take about 10 to 13 years for such a reactor to begin operation. Moreover, after the VTR startup, it would need to operate for some time—perhaps a few years—before it could be reliably used for testing, assuming there will be at least a few unforeseen problems. Thus it could be well over a decade before the VTR would become available. In contrast, the DOE estimated it would take seven years for the alternative system to become available at an operating test reactor.

The VTR mission-need statement also exaggerates the technical capabilities needed by the reactor developers who would use a fast test reactor. One of the main objectives of a test reactor is to bombard fuels and other materials with neutrons to study how they withstand radiation damage. This damage can be measured by a unit called “displacements per atom”—that is, the average number of times each atom in a material sample is knocked out of its position by neutron collisions. The more displacements per atom that a test reactor can provide per year, the faster a given test can be completed. The mission-need statement claims that reactor developers need a facility that can achieve at least 30 displacements per atom per year, although the reference it cites, a 2017 user needs assessment, only calls for a minimum of 20.

This difference is significant because researchers have shown that using the cheaper filtering approach in the High Flux Isotope Reactor at Oak Ridge National Laboratory could provide about 20 displacements per atom annually, so that the higher rate provided by a new reactor would not be needed. The mission-need statement did not assess whether the modest additional capability that the VTR could provide—that is, 30 instead of 20—would be worth the substantial additional cost. If a reactor developer wanted to test a fuel sample up to 200 displacements, it would take 6.7 years in the VTR at a rate of 30 per year, compared to 10 years at the High Flux Isotope Reactor at a rate of 20 per year. But given that it might take more than a decade for the VTR to become available, it is far from clear it would help developers to achieve their goal any sooner than the alternative approach, which could be available in seven years.
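The tradeoff in that last comparison is simple arithmetic, sketched below in Python (the rates and the 200-displacement target are the figures quoted above; treat this as an illustration, not an analysis):

    # Time to accumulate 200 displacements per atom at each facility's quoted rate.
    target_dpa = 200
    vtr_rate = 30   # displacements per atom per year claimed in the mission-need statement
    hfir_rate = 20  # rate achievable with neutron filtering at the High Flux Isotope Reactor

    vtr_years = target_dpa / vtr_rate    # about 6.7 years
    hfir_years = target_dpa / hfir_rate  # 10.0 years

    print(f"VTR test time:  {vtr_years:.1f} years")
    print(f"HFIR test time: {hfir_years:.1f} years")
    print(f"Testing advantage of the VTR: {hfir_years - vtr_years:.1f} years")
    # That ~3.3-year advantage is smaller than the gap in availability:
    # about seven years for the filtered-HFIR approach versus well over a
    # decade for the VTR.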

A Better Test Reactor

Perhaps the clearest statement casting doubt on the need for the VTR was made by Westinghouse, one of the potential commercial users of fast reactor technology that the DOE surveyed in its user needs assessment. The originator of the pressurized light-water reactor (LWR), Westinghouse is now interested in developing a liquid lead-cooled fast reactor. However, in its response to the DOE survey, it states that any new test reactor should include capabilities for light-water reactor testing because “LWR technology will continue to be the backbone of nuclear energy for decades to come.”

UCS agrees with that statement. If the United States needs a new test reactor—and it may soon, given existing test reactors are many decades old—it would make more sense to build a new thermal neutron test reactor with the capability of generating fast neutrons if necessary, not the other way around. The technology is well-established, and the commercial need for a source of thermal neutrons is far more likely than for fast ones.
