Combined UCS Blogs

Rick Perry Rejects Facts in Favor of Coal and Nuclear Bailouts

UCS Blog - The Equation (text only)

Photo: Greg Goebel/Wikimedia Commons

Much has been written on coal and coal miners since the president began campaigning in earnest in 2016. Since taking office, he has continued that dishonest and dangerous rhetoric—and has directed his agencies to do something. Anything. Except, of course, anything that represents real solutions for coal miners and their communities, instead proposing (initially at least) to cut federal programs that invest in those communities.

The president continues to push for a misguided federal bailout of the coal industry—a blatant political payoff to campaign donors using taxpayer money with no long-term solutions for coal workers. The latest shiny object masquerading as reasoning? National security. But as we know, bailing out uneconomic coal plants only exacerbates the real national security issues brought on by climate change, while continuing to saddle our country with the public health impacts of coal-fired electricity—which hurt real people in real communities.

As is typical with this administration, substance and science and evidence are inconsequential compared to ideology, and their attempts to bail out money-losing coal and nuclear plants are no exception. Here’s a quick take on how we got here and what to expect next.

Let’s see what sticks…

The administration didn’t exactly hit the ground running after the 2016 election—no one bothered to show up at the Department of Energy until after Thanksgiving of 2016, even though career staff were readily available and prepared to brief the incoming administration on the important work of the agency. But by the spring, it had become clear that Energy Secretary Rick Perry would lead the charge for a federal bailout of coal and nuclear plants. His shifting rhetoric and poor justifications for using consumers’ money to prop up uneconomic coal plants suggest that he and his inner circle are desperate to find an argument that sticks and survives legal challenges.

Briefly:

So, the game of whack-a-mole continues.

False arguments

In short, the administration is proposing to use emergency authorities to force grid operators and consumers to buy electricity from uneconomic coal and nuclear plants. Let’s break down the arguments one by one.

Reliability. Despite claims to the contrary, there is no reliability crisis. Lost in the rhetoric around the need for baseload resources is the fact that grid operators already have systems in place to ensure there is adequate supply of electricity when needed. The North American Electric Reliability Corporation (NERC) projects more-than-adequate reserve margins in almost every region of the country—a few areas of concern but certainly no crisis. PJM Interconnection has repeatedly stated there is no threat to reliability from plant retirements (because it studies the system impacts of every proposed plant retirement before approving it):

“Our analysis of the recently announced planned deactivations of certain nuclear plants has determined that there is no immediate threat to system reliability. Markets have helped to establish a reliable grid with historically low prices. Any federal intervention in the market to order customers to buy electricity from specific power plants would be damaging to the markets and therefore costly to consumers.”

Resilience. Despite claims to the contrary, coal and nuclear plants do not offer additional system resilience because of onsite fuel stockpiles. It turns out, coal piles and equipment can freeze in cold weather, and flooding from extreme rain can affect power plant operations and prevent delivery of coal. A new report suggests that the administration is focusing on the wrong thing (fuel stockpiles) and should instead be focusing on performance-based metrics looking at the overall electricity system. For starters, between 2012 and 2016, a whopping 0.00007 percent of electricity disruptions were caused by fuel supply problems.

National Security. Arguments are now being couched in terms of national security and cyberattacks on the grid. The thing is, coal and nuclear facilities are vulnerable to cyberattacks just like other parts of the electricity grid, a fact completely absent from the leaked memo. Obviously, everyone cares about national security, but there is zero evidence to support the idea that keeping uneconomic power plants online will make us safer.

What we don’t know

Much uncertainty remains about Perry’s latest attempt to make something stick. UCS will keep an eye out for answers to important questions that remain, like:

  • Who will pay? Will DOE ask Congress to authorize funding for the bailout, meaning taxpayers get to foot the bill? Or will DOE ask FERC to use its power under the Federal Power Act, implying an additional cost to consumers? In either case, hold on to your wallet.
  • How much will it cost? In short, no one really knows, because DOE’s plan is light on details. Estimates range from $17 to $35 billion (with a “b”) per year according to recent studies.
  • Which power plants will qualify? Will every single coal and nuclear plant qualify for a handout? Only those that are “near” military installations and could somehow be tied to the administration’s national security rationale? Only the ones that are losing money? Only the ones that donated to the president’s campaign?
  • How will qualifying plants get paid? How would the bailout be structured and how exactly would owners of money-losing plants get compensated?

Perry charges ahead—and we must be relentless in our opposition

As we continue to wait for additional details about DOE’s bailout proposal, we are gearing up for a fight. Led by Secretary Perry, the administration continues to make false and misleading arguments about the purported need to keep uneconomic plants from retiring early—and this issue will be with us as long as the current president is in office. Perry has long since dropped any pretense of caring about market economics or actual information to inform his proposals. In response to Congressional questioning last fall, Perry remarked:

“I think you take costs into account, but what’s the cost of freedom? What does it cost to build a system to keep America free?” -Secretary Rick Perry, 12 October 2017

When the facts don’t support your argument, you’re forced to rely on empty bumper-sticker statements like this one to make your point.

And stack the deck by putting biased people who support your ideas in decision-making positions. We’ll be watching closely as the process unfolds for nominating someone to fill the vacant seat at FERC.

At UCS, we’re going to continue the fight to hold the administration accountable and stop this misguided and disastrous proposal from being implemented. The facts are on our side—there is no grid reliability crisis and no grid resiliency crisis, but there is a climate crisis, and bailing out coal plants will only add to the climate crisis with real adverse consequences to the economy and public health. Stand with us.

 


Science Prevails in the Courts as Chlorpyrifos Ban Becomes Likely

UCS Blog - The Equation (text only)

Photo: Will Fuller/CC BY-NC-ND 2.0 (Flickr)

Today, children, farmworkers, and the rest of us won big in the Ninth Circuit Court of Appeals, as the court ordered EPA to finalize its proposed ban of the insecticide chlorpyrifos. Ultimately, the judge determined that EPA’s 2017 decision to refuse to ban the chemical was unlawful because the agency failed to justify keeping chlorpyrifos on the market even though the scientific evidence clearly linked chlorpyrifos exposure to neurodevelopmental damage in children, along with further risks to farmworkers and users of rural drinking water.

Under the Federal Food, Drug, and Cosmetic Act (FFDCA), the EPA is required to remove pesticide tolerances (effectively banning the pesticides) when it cannot find with “reasonable certainty” that they are safe. The judge found that when former Administrator Pruitt refused to ban the chemical, he contradicted the work of the agency’s own scientists, who found the chemical posed extensive health risks to children. His failure to act accordingly violated the agency’s mandate under the FFDCA.

This attack on science was fueled by close relationships that Scott Pruitt and President Trump have with Dow Chemical Company, which makes chlorpyrifos. Unfortunately, this was just one of many recent EPA actions that not only lack justification and supporting analysis, but actively undermine the agency’s ability to protect public health—and in this case specifically, the health of children. Acting Administrator Wheeler should learn from this particular case that EPA’s decisions must be grounded in evidence, and that the public will continue to watch and demand as much.

The petition was filed by a coalition of environmental, labor, and health organizations. The EPA now has 60 days to ban chlorpyrifos.


Massachusetts Clean Energy Bill 2018: Continuing the Journey

UCS Blog - The Equation (text only)

Photo: J. Rogers

On Massachusetts’ journey toward a clean, fair, and affordable energy future, the energy bill that just passed is an important waystation. But it can’t be an endpoint—not by a long shot, and not even for the near term; we need to get right back on the trail. So here are the successes to celebrate, the shortcomings to acknowledge, and why we need to saddle up for next year.

The good

The Massachusetts legislature ended its two-year session last week with a flurry of activity. So, where’d we end up, in terms of clean energy? The best news is that we got a bill (even that was in doubt). And there’s certainly stuff to celebrate in An Act to advance clean energy:

The bill also includes various other pieces. Some clarifications about what kinds of charges Massachusetts electric utilities are (or aren’t) allowed to hit solar customers with (given bad decisions earlier this year). A new “clean peak standard” aimed at bringing clean energy to bear (and avoiding the use of dirty fossil fuel-fired peaking plants) to address the highest demand times of the year. A requirement that gas companies figure out how much natural gas is leaking out of their systems.

An RPS increase and another move on offshore wind—two pieces worth celebrating (Credit: Derrick Z. Jackson)

The… less good

Equally notable, though, is what’s not in the final bill. And particularly stuff that was in the bills passed by one chamber or the other:

  • A strong RPS – The leading states are a whole lot more ambitious, RPS-wise, than Massachusetts, even with the bump-up: California, New York, and New Jersey all have requirements of 50% renewables by 2030. In Massachusetts, the senate’s version had a strong target, increasing the requirement 3% per year, which would have surpassed even those states’. But the final bill ended up basically where the house was: 2% per year. (And, though 2030 is a long way away, having the “compromise” bump the annual increase back down to 1% after 10 years is particularly irksome.)
  • A strong storage requirement – It’s good to have something storage-related on the books, something stronger than “aspirational”. The top states for energy storage, though, are up at 1,500 megawatts (MW) by 2025 (New York) or 2,000 MW by 2030 (New Jersey), and the NJ level of ambition is what the Massachusetts senate’s bill would have gotten us. Note, too, the units in the final Massachusetts bill: megawatt-hours (MWh), not megawatts. If we’re looking at storage being available for something like several hours at a pop, simply dropping the “-hours” piece—having the requirement be in MW, not MWh—would have made the 1,000 number a much stronger target (see the quick arithmetic after this list).
  • Solar fixes – Rooftop solar systems larger than residential ones (think businesses, municipalities) are stuck in much of the state because of the caps placed on how much can “net meter”, set as a percentage of each electric utility’s peak customer demand. There are also issues and opportunities around expanding solar access to lower-income households. Here again, the senate included great language on both… and that’s where it ended: with various barriers to solar firmly in place.
  • Appliance efficiency standards – Helping our appliances do more with less was the subject of a good bill that passed the house, but also fell by the wayside en route to the negotiated final.
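
To see why those units matter, here is the arithmetic under one illustrative assumption (a 4-hour storage duration, a common figure for batteries; the real mix of technologies would vary):

```latex
% Illustrative only: assumes 4-hour storage systems.
% An energy target (MWh) implies a power capacity of energy divided by duration:
\[
1{,}000\ \text{MWh} \div 4\ \text{h} = 250\ \text{MW}
\]
% whereas the same number expressed in power units implies far more energy:
\[
1{,}000\ \text{MW} \times 4\ \text{h} = 4{,}000\ \text{MWh}
\]
```

Under that assumption, a 1,000 MW requirement would have been roughly four times as ambitious as the 1,000 MWh one.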

Then there’s the fact that the clean peak standard is an untested concept, toyed with in a couple of other states but never actually implemented. Trailblazing isn’t always bad, and dirty peaks are an issue, but there are probably better/simpler ways to tackle the problem (see, e.g., “A strong storage requirement”, above).

And there’s all kinds of important stuff around carbon pollution in sectors other than electricity, and around climate change more broadly, that didn’t make it. The new clean energy bill missed the chance to tackle transportation, for example, which accounts for 40% of our state’s emissions.

Why put the brakes on solar (and solar jobs)? (Credit: Audrey Eyring/UCS)

When measures come up short

There are certainly things to celebrate in the “good” list above. But it’s also true, as the “less good” list shows, that Massachusetts could have done much better. UCS’s president Ken Kimmell said, “Massachusetts scored with the energy bill passed today, but this game is far from over.” Other reactions were less favorable (see here and here, for example).

Ben Downing, the former state senator who was an architect of Massachusetts’s impressive 2016 energy bill, had some choice words on the process itself, the end-of-session crunch that he points out gives “special interests defending the status quo… an outsized voice.” Those special interests were certainly reflected in the bill’s “measured approach” to energy progress.

It’s telling that the chair of the house’s climate change committee, Rep. Frank Smizik, who has been a solid voice for climate action for at least the dozen years I’ve known him (but is retiring), couldn’t bring himself to vote for the bill. He cast the lone dissenting vote the bill received in either chamber, and he was less than flattering in his characterization of it.

But the senate climate champions who worked out the compromise (and were clearly not pleased with what got left on the cutting room floor) had comments that were particularly on point in terms of next steps.

Sen. Michael Barrett, a champion of carbon pricing, made it clear that this bill isn’t the endgame—and that energy now needs to be an every-session kind of thing.

And Sen. Marc Pacheco, the lead author of the very strong senate bill that fed into the compromise, promised that “The day after the session ends, my office will be beginning again to pull together clean energy legislation for the next session.”

Next stop? (Don’t stop.)

And those points should be the main takeaway. We’ve got what we’re going to get from the Massachusetts legislature for the 2017-2018 session, and we’re glad for what did make it through the sausage-making. The successes are a testament to UCS supporters, who sent thousands of messages to their legislators, and to the work of our many allies in this push, in the State House and far beyond.

But we should all be hungry for a whole lot more. Every single time the legislature meets. Including next year.

The energy sector is evolving quickly, and climate impacts are too (including in Massachusetts); if we’re standing still, we’re losing ground. There’s no way we should accept any suggestion that, because the legislature dealt with energy in one term, it shouldn’t the next term.

At the state level, as elsewhere, progress on climate and clean energy is a journey, not a destination. There’ll be waypoints along the way, steps forward—like the new Massachusetts energy bill. But none of those should be invitations to take off our boots and kick back. This stuff is too important to leave for later.

We’re not done till we’re done, and there’s no sign of doneness here. Saddle up.

Pipe Rupture at Surry Nuclear Plant Kills Four Workers

UCS Blog - All Things Nuclear (text only)

Role of Regulation in Nuclear Plant Safety #7

Both reactors at the Surry nuclear plant near Williamsburg, Virginia, were operating at full power on December 9, 1986. Around 2:20 pm, a valve in a pipe between a steam generator on Unit 2 and its turbine inadvertently closed due to a re-assembly error following recent maintenance. The valve’s closure resulted in a low water level inside the steam generator, which triggered the automatic shutdown of the Unit 2 reactor. The rapid change from steady-state operation at full power to zero power caused a transient as systems adjusted to the significantly changed conditions. About 40 seconds after the reactor trip, a bend in the pipe going to one of the feedwater pumps ruptured. The pressurized water jetting from the broken pipe flashed to steam. Several workers in the vicinity were seriously burned by the hot vapor. Over the next week, four workers died from their injuries.

Fig. 1 (Source: Washington Times, February 3, 1987)

While such a tragic accident cannot yield good news, the headline for a front-page article in the Washington Times newspaper about the accident (Fig. 1) widened the bad news to include the Nuclear Regulatory Commission (NRC), too.

The Event

The Surry Power Station has two pressurized water reactors (PWRs) designed by Westinghouse. Each PWR had a reactor vessel, three steam generators, and three reactor coolant pumps located inside a large, dry containment structure. Unit 1 went into commercial operation in December 1972 and Unit 2 followed in June 1973.

Steam flowed through pipes from the steam generators to the main turbine shown in the upper right corner of Figure 2. Steam exited the main turbine into the condenser where it was cooled down and converted back into water. The pumps of the condensate and feedwater systems recycled the water back to the steam generators.

Fig. 2 (Source: Nuclear Regulatory Commission NUREG-1150)

Figure 2 also illustrates the many emergency systems that are in standby mode during reactor operation. On the left-hand side of Figure 2 are the safety systems that provide makeup water to the reactor vessel and cooling water to the containment during an accident. In the lower right-hand corner is the auxiliary feedwater (AFW) system that steps in should the condensate and feedwater systems need help.

The condensate and feedwater systems are non-safety systems. They are needed for the reactor to make electricity. But the AFW system and other emergency systems function during accidents to cool the reactor core. Consequently, these are safety systems.

Both reactors at Surry operated at full power on Tuesday December 9, 1986. At approximately 2:20 pm that afternoon, the main steam trip valve (within the red rectangle in Figure 2) in the pipe between steam generator 2C inside containment and the main turbine closed unexpectedly.

Subsequent investigation determined that the valve had been improperly re-assembled following recent maintenance, enabling it to close without either a control signal or a need to do so.

The valve’s closure led to a low water level inside steam generator 2C. By design, this condition triggered the automatic insertion of control rods into the reactor core. The balance between the steam flows leaving the steam generators and the feedwater flows into them was upset by the stoppage of flow through one steam line and the rapid drop from full power to zero power. The perturbations from that transient caused the pipe to feedwater pump 2A to rupture (location approximated by the red cross in Figure 2) about 40 seconds later.

Figure 3 shows a closeup of the condensate and feedwater systems and where the pipe ruptured. The condensate and condensate booster pumps are off the upper right side of the figure. Water from the condensate system flowed through feedwater heaters where steam extracted from the main turbine pre-warmed it to about 370°F en route to the steam generators. This 24-inch diameter piping (called a header) supplied the 18-inch diameter pipes to feedwater pumps 2A and 2B. The supply pipe to feedwater pump 2A featured a T-connection to the header, while a reducer connected the header to the 18-inch supply line to feedwater pump 2B. Water exiting the feedwater pumps passed through feedwater heaters for additional pre-warming before going to the steam generators inside containment.

Fig. 3 (Source: Nuclear Regulatory Commission NUREG/CR-5632)

Water spewing from the broken pipe had already passed through the condensate and condensate booster pumps and some of the feedwater heaters. Its 370°F temperature was well above 212°F (the boiling point at atmospheric pressure), but the 450 pounds per square inch pressure inside the pipe kept it from boiling. As this hot pressurized water left the pipe, the lower pressure let it flash to steam. The steam vapor burned several workers in the area. Four workers died from their injuries over the next week.
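
A rough steam-table calculation shows both why the water stayed liquid and how much of it flashed. The property values here are approximate textbook numbers, not figures from the NRC reports:

```latex
% Water at 370 F has a saturation pressure of roughly 170 psia, well below the
% ~450 psi inside the pipe, so within the pipe the water remained subcooled liquid.
% Released to atmospheric pressure (14.7 psia, saturation temperature 212 F),
% the excess enthalpy boils off a fraction x of the escaping stream:
\[
x \approx \frac{h_f(370^{\circ}\mathrm{F}) - h_f(212^{\circ}\mathrm{F})}{h_{fg}(212^{\circ}\mathrm{F})}
  \approx \frac{343 - 180}{970} \approx 0.17
\]
% (enthalpies in Btu/lb). Roughly a sixth of the escaping water flashed
% instantly to steam, filling the area with scalding vapor.
```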

As the steam vapor cooled, it condensed back into water. Water entered a computer card reader controlling access through a door about 50 feet away, shorting out the card reader system for the entire plant. Security personnel were posted at key doors to facilitate workers responding to the event until the card reader system was restored about 20 minutes later.

Water also seeped into a fire protection control panel and caused short circuits. Water sprayed from 68 fire suppression sprinkler heads. Some of this water flowed under the door into the cable tray room and leaked through seals around floor penetrations to drip onto panels in the control room below.

Water also seeped into the control panel for the carbon dioxide fire suppression system in the cable tray rooms, actuating that system. An operator was trapped in the stairwell behind the control room, unable to exit because the failed card reader system had locked the doors. Experiencing trouble breathing as carbon dioxide filled the space, he escaped when an operator inside the control room heard his pounding on the door and opened it.

Figure 4 shows the section of piping that ruptured. The rupture occurred at a 90-degree bend in the 18-inch diameter pipe. Evaluations concluded that years of turbulent water flow through the piping gradually wore away the pipe’s metal wall, thinning it via a process called erosion/corrosion to the point where it was no longer able to withstand the pressure pulsations caused by the reactor trip. The plant owner voluntarily shut down the Unit 1 reactor on December 10 to inspect its piping for erosion/corrosion wear.

Fig. 4 (Source: Nuclear Regulatory Commission 1987 Annual Report)
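
Why gradual thinning ends in sudden rupture can be sketched with the thin-wall (Barlow) hoop-stress formula. The numbers below are illustrative assumptions, not values from the Surry metallurgical evaluation:

```latex
% Hoop stress in a thin-walled pipe: sigma = P*D / (2*t), where P is internal
% pressure, D is pipe diameter, and t is wall thickness.
% For an 18-inch pipe at 450 psi:
\[
\sigma = \frac{P\,D}{2t} = \frac{450\ \mathrm{psi} \times 18\ \mathrm{in}}{2t}
       = \frac{4{,}050}{t}\ \mathrm{psi} \quad (t\ \text{in inches})
\]
% At a nominal t = 0.5 in, sigma is about 8,100 psi; eroded to t = 0.1 in, it
% is about 40,500 psi. Stress scales as 1/t, so a slowly thinning wall
% eventually has no margin left for pressure pulses like the one that
% followed the reactor trip.
```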

Pre-Event Actions (and Inactions?)

The article accompanying the damning headline above described how the NRC staff had produced a report in June 1984—more than two years before the fatal accident—warning about the pipe rupture hazard, and it criticized the agency for taking no steps to manage the known risk. The article further explained that the NRC’s 1984 report was in response to a 1982 event at the Oconee nuclear plant in South Carolina where an eroded steam pipe had ruptured.

Indeed, the NRC’s Office for Analysis and Evaluation of Operational Data (AEOD) issued a report (AEOD/EA 16) titled “Erosion in Nuclear Power Plants” on June 11, 1984. The last sentence on page two stated “Data suggest that pipe ruptures may pose personnel (worker) safety issues.”

Indeed, a 24-inch diameter pipe that supplied steam to a feedwater heater on the Unit 2 reactor at Oconee had ruptured on June 28, 1982. Two workers in the vicinity suffered steam burns that required overnight hospitalization. As at Surry, the pipe ruptured at a 90-degree bend (elbow) due to erosion of the metal wall over time. Oconee had a maintenance program that periodically examined the piping ultrasonically.

That monitoring program identified pipe wall thinning in two elbows on Unit 3 in 1980; both were replaced. Monitoring performed in March 1982 on Unit 2 identified substantial erosion in the piping elbow that ruptured three months later. But the thinning was accepted because it was less than the company’s criterion for replacement. It has not been determined whether prolonged operation at reduced power between March and June 1982 caused more rapid wear than anticipated or whether the ultrasonic inspection in March 1982 missed the thinnest section of the wall.

Post-Event Actions

The NRC dispatched an Augmented Inspection Team (AIT) to the Surry site to investigate the causes, consequences, and corrective actions. The AIT included a metallurgist and a water-hammer expert. Seven days after the fatal accident, the NRC issued Information Notice 86-106, “Feedwater Line Break,” to plant owners. The NRC issued the AIT report on February 10, 1987, followed by Supplement 1 to Information Notice 86-106 on February 13, 1987, and Supplement 2 on March 18, 1987.

The NRC did more than warn owners about the safety hazard. On July 9, 1987, the NRC issued Bulletin 87-01, “Thinning of Pipe Walls in Nuclear Power Plants,” to plant owners. The NRC required owners to respond within 60 days describing the codes and standards to which safety-related and non-safety-related piping in the condensate and feedwater systems was designed and fabricated, as well as the programs in place to monitor this piping for wall thinning due to erosion/corrosion.

And the NRC issued Information Notice 88-17 to plant owners on April 22, 1988, summarizing the responses the agency received to Bulletin 87-01.

UCS Perspective

Eleven days after a non-safety-related pipe ruptured on Oconee Unit 2, the NRC issued Information Notice 82-22, “Failures in Turbine Exhaust Lines,” to all plant owners about that event.

The June 1984 AEOD report was released publicly. The NRC’s efforts did call the nuclear industry’s attention to the matter, as evidenced by a report titled “Erosion/Corrosion in Nuclear Plant Steam Piping: Causes and Inspection Program Guidelines” issued in April 1985 by the Electric Power Research Institute.

Days before the NRC issued the AEOD report, the agency issued Information Notice 84-41, “IGSCC [Intergranular Stress Corrosion Cracking] in BWR [Boiling Water Reactor] Plants,” to plant owners about cracks discovered in safety system piping at Pilgrim and Browns Ferry.

As the Washington Times accurately reported, the NRC knew in the early 1980s that piping in safety and non-safety systems was vulnerable to degradation. The NRC focused on degradation of safety system piping, but also warned owners about degradation of non-safety system piping. The fatal accident at Surry in December 1986 resulted in the NRC expanding efforts it had required owners take for safety system piping to also cover piping in non-safety systems.

The NRC could have required owners to fight the piping degradation in safety systems and non-safety systems concurrently. But history is full of wars lost because they were fought on two fronts. Instead of taking that risk, the NRC triaged the hazard: it focused first on safety system piping and then followed up on non-safety system piping.

Had the NRC totally ignored the vulnerability of non-safety system piping to erosion/corrosion until the accident at Surry, this event would reflect under-regulation.

Had the NRC compelled owners to address piping degradation in safety and non-safety systems concurrently, this event would reflect over-regulation.

Because the NRC pursued resolution of all known hazards in a timely manner, this event reflects just right regulation.

Postscript: The objective of this series of commentaries is to draw lessons from the past that can, and should, inform future decisions. Such a lesson from this event involves the distinction between safety and non-safety systems. The nuclear industry often views that distinction as also being a virtual wall between what the NRC can and cannot monitor.

As this event and others like it demonstrate, the NRC must not turn its back on non-safety system issues. How non-safety systems are maintained can provide meaningful insights on maintenance of safety systems. Unnecessary or avoidable failures of non-safety systems can challenge performance of safety systems. So, while it is important that the NRC not allocate too much attention to non-safety systems, driving that attention to zero will have adverse nuclear safety implications. As some wise organization has suggested, the NRC should not allocate too little attention or too much attention to non-safety systems, but the just right amount.

* * *

UCS’s Role of Regulation in Nuclear Plant Safety series of blog posts is intended to help readers understand when regulation played too small a role, when it played an undue role, and when it played just the right role in nuclear plant safety.

Farmers Markets and SNAP: Thanks, New York…Your Move, Congress

UCS Blog - The Equation (text only)

This National Farmers Market Week, we have some things to celebrate. There’s peak summer produce, of course…I mean, who doesn’t like a perfectly ripe tomato? And now, we may be a little bit closer to a day when that lovely red orb is accessible to anyone who wants one on a hot day in August. But first, let’s talk about a crisis averted.

Late last month, the state of New York and the New York Farmers Market Federation came to the rescue of thousands of farmers markets—and the shoppers that rely on them. The emergency? A host of technical and financial problems threatened the sudden collapse of the systems that allow farmers markets to accept food stamp benefits electronically. And although that didn’t happen (thanks, New York!) the events that unfolded over the past month illustrate the need for more extensive and permanent infrastructure to connect low-income consumers with farmers and help local food systems thrive.

SNAP EBT problem solved, for now

Since 1997, the Supplemental Nutrition Assistance Program (SNAP, or food stamps) has provided benefits via Electronic Benefits Transfer (EBT) cards, offering convenience and minimizing stigma in transactions at grocery stores. But EBT posed a challenge for farmers markets held outdoors without secure data lines, leading companies like the Austin-based Novo Dia Group to develop mobile software solutions. The firm now uses wireless technology to process some 40 percent of SNAP transactions at farmers markets nationwide.

But in early July, Novo Dia announced it was unable to fulfill the remainder of its contract with the USDA and would end its service within the month. This would have affected a significant number of the nation’s 8,720 farmers markets, at which low-income shoppers redeemed more than $22.4 million in SNAP benefits last year.

The factors contributing to the shutdown remain somewhat unclear; Novo Dia cites high operational costs associated with its wireless platform, compounded by the company’s exclusion from a new contract between the USDA and Financial Transaction Management (FTM)—a new and relatively unknown company that won the bid to provide equipment to farmers markets in March 2018. (For clarity: USDA previously contracted with the Farmers Market Coalition for this service, which then subcontracted with Novo Dia.)

But regardless of the reasons, the consequences would have been devastating. A shutdown would have left some 1,700 farmers markets across the country without a way to redeem SNAP EBT—meaning SNAP participants would lose access to fresh, nutritious, affordable food and farmers would lose customers and revenue—right smack in the middle of the season.

Enter the state of New York, which jumped in last week to provide financing that will keep Novo Dia’s system operating nationwide through early 2019. Novo Dia’s financial viability aside, questions remain about why the USDA selected FTM—a little-known company that will replace Novo Dia with other unknown subcontractors—to serve the nation’s farmers markets. Yes, it could be an unremarkable outcome of a routine government bidding process. But the USDA’s mid-July press statement describing the situation isn’t wholly reassuring.

Congress can build lasting solutions

While New York bails out Novo Dia, there’s much more that Congress can do to enable long-term solutions; namely, by connecting farmers with consumers to support the growth of economically vibrant local food systems in communities throughout the country. Using data from our 50-State Food System Scorecard, UCS health analyst Sarah Reinhardt wrote recently that in states whose farmers grow more fruits and vegetables—and who can rely on better infrastructure to get that healthy food onto people’s plates—diet and health outcomes are better.

And this is where Congress comes in. This month, the 2018 farm bill is entering a critical stage of negotiations, as leaders in the US Senate and House of Representatives come together to finalize this massive 5-year legislative package, which shapes everything about how we eat in this country and who can afford nutritious food. Earlier this summer, the Senate passed a version of the bill that would make much-needed investments in local food systems, creating an innovative Local Agriculture Market Program (LAMP). The bipartisan provisions of this program would help communities expand access to fresh, nutritious food for many consumers, grow the customer base for small and midsize farmers, and offer a much-needed boost to struggling rural economies. That’s a win all-around.

However, the House version of the farm bill passed up the opportunity to make such investments. And now, members of the House and Senate are coming together in a conference committee to hash out their differences and negotiate a final bill. It’s time to insist that negotiators prioritize smart local food policies and include LAMP provisions in the bill that goes to the president’s desk.

What you can do: Local food programs that connect producers to consumers can support profitable farms, enable more people to afford healthy food, and keep food dollars in rural communities. So while you’re celebrating farmers markets this week with a slice of tomato mayo toast (my latest obsession), take a moment to sign our petition to support healthy local food solutions in the farm bill TODAY!

Extreme Heat and Wildfire in California

UCS Blog - The Equation (text only)

There are currently 11 active wildfires burning in California, including the largest fire in the state’s modern history.

California is burning (again). As a climate scientist living in California, I have found the state’s wildfires over the past few years startling but not particularly surprising. This is, after all, what scientists have been predicting for a very long time. But there’s a profound difference between a clear-headed understanding of predictions and the existential nausea of knowing that this is the reality we have created for ourselves and our children.

Our hearts ache for those who have lost their homes to wildfire or have had to evacuate and live with uncertainty. They ache as we drive on winding roads through forests devastated in years past by fires, by bark beetles. They ache as we imagine the majesty of Yosemite Valley cloaked in smoke and closed indefinitely, thousands of potential visitors unable to look up in wonder at Half Dome.

For my children, who have grown up in this state, it is normal to see ash raining down from metallic yellow skies in July. It is normal to have recess indoors because the air is unfit for little lungs. And while we seek out pristine forests for vacations, they see it as normal to drive through acres of blackened trees. Knowing that this is their normal, and that the future does not look much brighter, gives me a sinking feeling in the pit of my stomach.

With all that heartache in mind, let’s take a step back and put this year’s fire season into perspective.

Are fires like this the new normal for California?

The length of the fire season globally has been increasing since the 1980s.

As of today, there are 11 active wildfires burning in California. The largest of these, the Mendocino Complex Fire, which is made up of the River and Ranch Fires, is over 290,000 acres in area, making it the largest wildfire in the state’s modern history. That means that the two largest fires in the state’s modern history have both occurred within the last eight months–a troubling signal. Farther north, the Carr Fire, in the Redding area, is affecting over 160,000 acres and has destroyed over 1,000 structures.

Calling the current state of wildfire in California a “new normal” isn’t quite right because this particular point in time is, unfortunately, part of a long-term trajectory toward increasingly devastating fires. Globally and for the western US, wildfire trends are very clear. Whether measuring by the length of the fire season, the area burned, the number of large fires, or a variety of other metrics, it is clear that wildfire activity has been increasing since about 1980. On a global basis, a study from earlier this year found that the length of the fire season has increased by nearly 20% since 1979.

In California, interestingly, data provided by CalFire shows that the number of fires in the state has been declining since the 1980s, but the number of acres burned has been slowly increasing, which means that the average fire size has increased over time.

It’s important to note that these statistics are only one measure of how destructive wildfires are. While 2017 does not stand out in this data as particularly exceptional, it was the most destructive season on record, with over 10,000 structures damaged or destroyed, which speaks to the fact that land use and development play strong roles in determining overall wildfire risk.

The number of fires in California has been slowly declining, but the acreage burned and the average fire size have been increasing.

Human-caused climate change plays a role in driving wildfire

Average temperatures over the past three months have been above average over most of the state.

There are many factors that contribute to overall wildfire risk, some climate-related (such as temperature, drought, soil moisture) and others not (such as vegetation type, land use, and fire suppression practices). Because of the range of variables involved in determining wildfire risk, attributing trends in wildfire activity to human-caused climate change is more difficult than for other types of extreme events such as heat waves or coastal flooding.

Recent studies, however, have attributed over half of the recent trends in the aridity of wildfire fuels and forest fire areas directly to climate factors. In particular, warming temperatures–particularly in spring and summer–earlier snowmelt, and drying soil are contributing to heightened wildfire risk across the western US.

This summer has been a scorcher for California. Nearly statewide, both average and daily minimum temperatures have been well above average, and many cities have seen their high temperature records broken. Warmer minimum temperatures in particular are changing the pace at which wildfires are growing.

Minimum temperatures across the state have been higher than normal over the last three months, allowing wildfires to continue to grow even at night.

While there has not yet been any research to specifically attribute California’s hot weather this summer to human-caused climate change, in general, such extreme heat has been linked to climate change. High temperatures dry vegetation, which then becomes fuel for wildfires. So while we can’t say for certain yet that human-caused climate change is responsible for the devastating wildfires in recent years, it’s clear that warming temperatures are increasing the likelihood of such events.

More extreme heat on the way for Californians

The Carr Fire in Northern California broke out just days after the area was under an excessive heat warning issued by the National Weather Service. In the week leading up to the onset of the fire, temperatures were running as high as 110 degrees F–up to 10 degrees above the historical average.

California is bracing for another round of extreme heat this week, with broad swaths of the state expecting temperatures of 100 degrees Fahrenheit or hotter. Coupled with the seasonally typical lack of rain and extremely dry vegetation, these conditions could pose additional wildfire risks, further tax our exhausted firefighters, and present health risks to residents already weary of this season’s heat and smoke.

The future of heat and wildfire

As temperatures continue to rise in the coming decades, the risk of wildfire is expected to grow. Wildfire is far from the only danger of rising temperatures, however. Extreme heat can cause a wide range of impacts to human health and society, from increased heat mortality to deterioration of asphalt roadways.

The excessive heat warnings for California this week alert residents of the increased potential for heat-related illnesses, particularly for children, the elderly, and those without access to air conditioning.

In Shasta County, California, where the Carr Fire is burning, a preliminary analysis by UCS suggests that the number of extreme heat days in the county is poised to rise as global temperatures continue to warm. While wildfire risk increases as humidity decreases, direct heat-related impacts to human health tend to increase when hot temperatures and high humidity occur together. Looking at the heat index—a combination of heat and humidity that the National Weather Service relies on when issuing heat advisories—the number of days of extreme heat in Shasta County is poised to increase significantly in the coming decades.
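
For readers curious how heat and humidity combine into a single number, below is a minimal sketch of the Rothfusz regression that underlies the National Weather Service’s heat index tables. It is a simplification (the full NWS procedure adds corrections at very low and very high humidity), and the function name and example values are ours, for illustration only:

```python
def heat_index(t_f, rh):
    """Approximate NWS heat index via the Rothfusz regression.

    t_f: air temperature in degrees Fahrenheit
    rh:  relative humidity in percent
    Valid roughly when the result is 80 F or higher.
    """
    return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
            - 0.22475541 * t_f * rh - 6.83783e-3 * t_f ** 2
            - 5.481717e-2 * rh ** 2 + 1.22874e-3 * t_f ** 2 * rh
            + 8.5282e-4 * t_f * rh ** 2 - 1.99e-6 * t_f ** 2 * rh ** 2)

# A 96 F day at 50 percent relative humidity already "feels like" well over 100 F:
print(round(heat_index(96, 50)))  # about 108
```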

With a business-as-usual emissions scenario, the number of days with a heat index above 100 degrees Fahrenheit will go from an average of 4 days per year historically in Shasta County to 17 days per year by mid-century and over 30 days per year by the end of the century. That’s an additional two weeks to a month of conditions that will pose additional health, infrastructure, and wildfire challenges to the region.

About 38,000 people in Shasta County were forced to evacuate because of the Carr Fire, and while many of them have since been allowed to return to their homes, continued heat and dangerous levels of air pollution make living conditions far from safe.

What can we do to prevent extreme heat and wildfire?

Climate change is here. It’s now. But we are not powerless to help those who are in need now or to help our future generations. May we pour our hearts into helping those affected, and may we pull ourselves out of the pit of climate despair to advocate for the policies that will be critical to limiting future harm. Increasing renewable energy and decreasing coal use, bringing clean vehicles to the forefront of our transportation system, and implementing proactive, science-based forest management practices are all part of creating the kind of future we want for ourselves and our children.

Preliminary analyses provided by Juan Declet-Barreto.

Photos and figures: Daria Devyatkina/Flickr; Jolly et al. 2018; CalFire 2018; Western Regional Climate Center.

Obstruction of Injustice: Columbia Generating Station Whitewash

UCS Blog - All Things Nuclear (text only)

There’s been abundant talk recently about obstruction of justice—who may or may not have impeded this or that investigation. Rather than chime in on a bad thing, obstruction of justice, this commentary advocates a good thing—obstruction of injustice. There’s an injustice involving the Columbia Generating Station in Washington that desperately needs obstructing.

Raising the White Flag

The NRC dispatched a Special Inspection Team to the Columbia Generating Station in Richland, Washington in late 2016 after a package containing radioactive materials was improperly shipped from the plant to an offsite facility. The NRC team identified nine violations of federal regulations for handling and transport of radioactive materials, the most serious warranting a White finding in the agency’s Green, White, Yellow, and Red classification scheme. This White finding moved the Columbia Generating Station into Column 2 of the Reactor Oversight Process’s Action Matrix in the first quarter of 2017.

Columbia Generating Station would remain in Column 2 until the first of two things happened: (1) the NRC determined that the problems resulting in the improper transport of radioactive materials were found and fixed justifying a return to Column 1, or (2) additional problems were identified that warranted relocation into Columns 3 or 4.

Check that: There’s a third thing that happened to improperly transport Columbia Generating Station back into Column 1—the injustice that needed obstructing.

Raising the Whitewash

After the plant owner notified the NRC that the causes of the radioactive material mishandling had been cured, the NRC sent a team to the site in late 2017 to determine if that was the case. On January 30, 2018, the NRC reported that its investigation confirmed that the problems had been resolved and returned the Columbia Generating Station to Column 1 and routine regulatory oversight after closing out the White finding.

In response, an NRC staffer submitted a Differing Professional Opinion (DPO) contending “that the decision to close the WHITE finding was not supported by the inspection report details.” The DPO originator provided two dozen very specific reasons for the contention.

The NRC formed a three-person panel to investigate the DPO. The DPO Panel issued its report on June 28, 2018, to the Regional Administrator in NRC Region IV (Fig. 1).

Fig. 1 (Source: Unknown)

The DPO Panel recommended that the NRC either re-open the WHITE finding or revise the January 30, 2018, report to include an explanation for why it was closed even though the problems resulting in the WHITE finding had not been remedied.

In other words, the DPO Panel agreed with the contention raised by the DPO originator. En route, the DPO Panel substantiated 20 of the 24 specific reasons provided by the originator.

Detailing the Whitewash

On July 21, 2017, another DPO Panel released a report validating 18 concerns raised by the DPO originator with how the NRC allowed Palo Verde Unit 3 to continue operating with a broken backup power generator far longer than permitted by the law, established policies, and common sense. Despite agreeing with essentially every concern raised by the DPO originator in that case, the DPO Panel somehow concluded the NRC had properly let Palo Verde continue to operate.

This time, the DPO Panel also agreed with the DPO originator’s concerns and also agreed with the DPO originator’s conclusion that the NRC had acted improperly. To quote the DPO Panel:

“…the Panel concluded that NRC Inspection Report 05000397/2017-011, dated January 30, 2018 (ML18032A754), does not depict all the bases to support the conclusion that the objectives of the IP [inspection procedure] were met and thus does not support closure of the WHITE finding.”

A common thread among the DPO originator’s concerns was the Root Cause Evaluation (RCE) developed by the plant owner for the problems resulting in the WHITE finding. The RCE’s role is to identify the causes for the problems. Once the causes are identified, appropriate remedies can be applied. When the RCE identifies the wrong cause(s) and/or fails to identify all the right causes, the remedies cannot be sufficient. Through interviews with NRC staff involved in the inspection and its review of materials collected during the inspection, the DPO Panel reported “… a belief by the 95001 inspection team and other NRC staff with oversight of this inspection that the licensee’s written root cause evaluation (RCE), even in its seventh revision, was poorly written and lacked documentation of all the actions taken in response to this event.”

In case this verbiage was too subtle, the DPO Panel later wrote that “… the licensee’s “documented” RCE was grossly inadequate, which was confirmed through interviews by the Panel” [emphasis added].

And the DPO Panel stated “… the root cause evaluation could not have been focused on the right issue and the resulting corrective actions may not be all inclusive.”

Later the DPO Panel reported “… it is not clear how the inspectors concluded that what the licensee did was acceptable.”

A few paragraphs later, the DPO Panel stated “…the Panel could not understand the rationale for finding the licensee’s extent of condition review appropriate.”

A few more paragraphs later, the DPO panel reported “What appears confusing is that interviewees told the Panel that the licensee’s written RCE was grossly inadequate, yet the inspectors were able to accept it as adequate, without requiring the licensee to address the discrepancies through a revised RCE.”

Later on that page, “The Panel found that the report does not discuss the licensee’s corrective actions.” The inspection team found the root cause evaluation “grossly inadequate” and did not even mention the corrective actions the RCE was supposed to trigger.

The DPO Panel reported “… the inspectors concluded that the licensee met the inspection objectives of IP 95001. However, this appears to the Panel to be a leap of (documentation) faith that appears counter to the inspection requirements and guidance of IP 95001 as well as IMC [inspection manual chapter] 0611.”

Still not out of bricks, the DPO Panel concluded “It is difficult to imagine that the licensee’s definition of the problem statement, extent of condition and cause, and corrective actions are appropriate.”

The DPO Panel also stated “…the Panel can only conclude that the 95001 report justified closure of the WHITE finding based on significant verbal information that was not contained in the final RCE and not discussed in the 95001 report.”

That’s contrary to the NRC’s purported Principles of Good Regulation—Independence, Openness, Efficiency, Clarity, and Reliability, unless they are like a menu and Region IV is on a diet skipping some of the items.

As noted above, these findings led the DPO Panel to recommend that the NRC either re-open the WHITE finding or revise the January 30, 2018, report to explain why it was closed even though the problems resulting in the WHITE finding had not been remedied. So far, the NRC has done neither.

UCS Perspective

This situation is truly appalling. And that’s an understatement.

The NRC identified nine violations of federal regulatory requirements in how this plant owner was handling and transporting radioactive materials. Given this demonstrated poor performance, the NRC properly issued a WHITE finding and moved the reactor into Column 2 of the ROP’s Action Matrix, where additional regulatory oversight was applied.

By procedure and standard practice, the WHITE finding is to remain open until a subsequent NRC inspection determines its cause(s) to have been identified and corrected.

Yet, the NRC inspectors found the root cause evaluation by the owner to be “grossly inadequate.”

And the NRC inspectors did not mention the corrective actions taken in response to the “grossly inadequate” root cause evaluation.

So, the NRC closed the WHITE finding—an injustice plain and simple as amply documented by the DPO Panel.

Where’s obstruction of injustice when it’s needed?

The DPO Panel found it “difficult to imagine” that the plant owner’s efforts were appropriate without “a leap of faith.” This is not like fantasy football, fantasy baseball, or fantasy NASCAR. Fantasy nuclear safety regulation is an injustice to be obstructed. If NRC Region IV wants to go to Fantasyland, I’ll consider buying them a ticket to Disneyland. (One-way, of course.)

The NRC’s Office of the Inspector General should investigate how the agency wandered so far away from its procedures, practices, and purported principles.

The NRC Chairman, Commissioners, and senior managers should figure out what is going terribly awry in NRC Region IV. If for no other reason than to obstruct Region IV’s injustices from corrupting the other NRC regions.

Americans deserve obstruction of injustice when it comes to nuclear safety, not fantasy nuclear safety regulation.

As States Target University Students for Voter Suppression, Student Groups are Fighting Back

UCS Blog - The Equation (text only)

Photo: KOMUnews/Flickr

As the 2018 general midterm election approaches, college student voting rights are under attack.  Students are being specifically targeted for voter suppression in a number of states by excluding student identification as an acceptable form of voter identification, tightening up residency requirements, and selectively spreading misinformation. Fortunately, in several states, campus-wide and student-led movements are organizing and mobilizing college voters in a recognition of the historic role that students have played in the civil and voting rights movements in the United States and abroad.

New hurdles for students

Many states still do not allow no-excuse absentee voting, often preventing students studying outside their home states from casting a ballot. More specifically, and more frequently since the Supreme Court overturned sections of the 1965 Voting Rights Act in 2013, dozens of states have implemented voter identification requirements, some of which either exclude student identification as a valid form of ID, or require proof of residency with forms (electric bills, etc.) that students living on campus are less likely to have. A University of Michigan study has demonstrated the recent decline of driver’s license ownership among college students, a form of identification frequently used in states with strict voter ID laws.

Perhaps most notoriously, the state of New Hampshire recently legislated the equivalent of a poll tax on out-of-state students, a residency requirement that includes registering one’s vehicle with the state and getting a New Hampshire driver’s license, which can cost several hundred dollars. New Hampshire students have one chance, this November, to overturn the law: because it does not go into effect until 2019, they have an opportunity to mobilize and change the leadership in the legislature, a legislature that intentionally targeted them.

Direct, intentional targeting of students to suppress their votes is not solely the province of legislatures. For example, in 2016, campuses in Maine were targeted with flyers providing false information about voting and registration requirements, and past elections have seen campuses targeted by organizations that fraudulently register students without completing their registration.

Nevertheless, students organizing to protect voting rights  in other states have achieved significant victories.  North Carolina’s strict voter ID law, which excluded student IDs as valid, was struck down in 2016 before the general election, after a group of college students, along with the Department of Justice, the North Carolina NAACP, the ACLU, and the League of Women Voters filed lawsuits against the state of North Carolina.  Just this month, U.S. District Judge Mark Walker sided with students and struck down Florida’s ban on early voting sites on college campuses as “facially discriminatory on account of age.”

Of course, the struggle continues.  A newly refurbished voter ID law is actually back on the ballot this November in North Carolina, with no mention of whether student IDs would be a valid form of identification.

Voter ID is also back on the ballot in Arkansas, and restrictive election laws already on the books weaken electoral integrity and threaten to disenfranchise voters across the United States.

Student organizing brings ballot access

For decades, student movements have been critical to expanding and defending ballot access throughout the United States. We need students to fulfill their historic role as agents of change in the expansion and protection of voting rights, from the Women’s Suffrage movement to Selma, where the Student Nonviolent Coordinating Committee (SNCC) had been organizing since 1963.

Organizing is already under way.  Campaigns like The Big Ten Voting Challenge, an extension of the TurboVote Challenge, are working to increase the number of eligible registered students across the country.  Student groups like Turn Up Turnout have a straightforward, non-partisan approach to political action: sign students onto TurboVote; organize turnout initiatives; administer workshops to explain the importance and effectiveness of voting, especially in midterm and local elections; and provide workshop materials that other campuses can use to develop their own initiatives.  The goal is not just to increase on-campus voting, but “that students should vote where they want—at home or at school—that it should be their choice” according to professor Edie Goldberg, who helped initiate the group at the University of Michigan.

The freedom to choose.  That is ultimately what students are fighting for this election cycle.  Not just the right to free, fair, and competitive elections, but the ultimate ends of political action: the health of their communities, the health of the planet, and the science that supports the sorts of policies that will get us there.  Science Rising is one such effort, a coalition of scientists, students, and activists fighting to ensure that knowledge generated for the public good continues to play a central role in policymaking, despite recent attacks on science and the scientific community.

Organizers and activists are creating windows of opportunity this election season. Opportunities to ensure that our democracy has an adequate supply of its two basic components: the energy of citizens, equally empowered to associate and express their collective goals; and the knowledge required to make informed, ethical and humane choices about what those goals are.  Ultimately, democracy depends on “strong people,” in the words of SNCC organizer, advisor, and mentor Ella Baker: “Strong people don’t need leaders…we were strong people.  We did strong things.”

Preliminary data look promising. Record turnout among young people in Virginia helped to unseat dozens of incumbent state lawmakers in 2017.  While Millennials are still less likely to register than older Americans, their share of newly registered voters has increased significantly, especially in battleground states.  Additionally, the number of young people running for office is surging across the country at every level of government. But we won’t know until November just how many students are stepping up to take on their historic responsibility as agents of change, and showing it at the ballot box.

Photo: KOMUnews/Flickr

Transition to Renewable Energy: Legislation Puts Clean Air and Vulnerable Communities First

UCS Blog - The Equation (text only) -

A number of California’s natural gas power plants are located in low-income communities of color. For decades, these communities have unjustly carried the burden of powering our state and paid the highest price — their health — for dirty energy. The good news is that, according to an analysis just released by the Union of Concerned Scientists, California can retire a significant amount of natural gas generation because it is no longer needed. The bad news is that as California increases its reliance on renewable energy, an unintended consequence is that existing natural gas plants could get dirtier.

The UCS analysis also found that without an alternative to natural gas to meet evening electricity needs when the sun goes down and solar generation subsides, plants not retired are likely to turn on and off much more frequently.  This “cycling” can result in up to 30 times more nitrogen oxide pollution (NOx, a cause of smog) per hour than continuous operation. So even as total greenhouse gas emissions drop because we rely on more renewable energy, communities near natural gas plants could see increased pollution that poisons their air. It’s crucial that air quality regulations limit these poisonous spikes, that we prioritize clean energy alternatives, and that we phase out all fossil fuel power plants as soon as possible.

California Senate Bill 64 (SB 64, Wieckowski), currently under consideration by the California Legislature, is a step in the right direction. It would require the California Air Resources Board to publish data on the hourly change in emissions, startups, and shutdowns of natural gas plants in the state. SB 64 would also require state agencies to work together to identify ways to reduce global warming and air pollution emissions, placing a priority on reducing emissions in communities most impacted by air pollution. While we need to phase out natural gas power plants by bringing more renewable energy onto our electricity grid, we must not allow the transition to unjustly impose further burdens on communities that have already been polluted by our energy system. SB 64 is a necessary safeguard to help prevent low-income communities of color from continuing to sacrifice their health for the greater good.

Lily Bello, a youth leader at CAUSE, is one of many youth who testified before the California Energy Commission in October 2017 to oppose the building of the Puente Gas Plant in Oxnard.  She said, “I’m somebody who lives in Oxnard, who spends their entire day here, and I have asthma. I’ve missed out on so much of my childhood because I could not breathe. I just recently found out that power plants cause asthma. It’s a reality. It affects us…. my life should be a little bit more important than the [money] put in a billionaire’s pocket.”

Like Lily, I also grew up in Oxnard. As a young girl I often visited a beach near my home over which a behemoth power plant loomed. Decades later, I fought as an attorney to stop the construction of Puente – yet another power plant planned on this “industrial beach” – together with Oxnard’s youth who refused to allow their community to serve as a dumping ground. After years of struggle, the proposal for the Puente power plant was suspended in January 2018 by the California Energy Commission and effectively died, granting the community a chance to reclaim their beach – and breathe easier.

We must continue to hold the line and say no more to air pollution from gas-fired power plants, and yes to a future that is clean, renewable, rooted in the leadership of our communities, and that creates a worker-centered transition to a new energy economy. The good news is we have the solutions in hand, illustrated by our recent victories in bringing rooftop solar energy to multifamily affordable housing (SOMAH) and net metering.

Two recent California Public Utilities Commission (CPUC) actions have also put us on the right path. One is an analysis the CPUC conducted as part of the Integrated Resource Plan proceeding, which concluded there is no need for new gas-fired capacity through 2030. The second is a CPUC ruling that ordered PG&E to seek proposals for solar, demand response, or energy storage rather than renew contracts with three Calpine gas plants to meet local grid reliability needs.  This is a key step that can minimize the need to depend on gas in certain areas for local grid needs, providing energy instead from cleaner sources and battery storage. Now is the time for us to work toward retiring old plants and to stop building new ones.

The California Environmental Justice Alliance (CEJA) and our member organizations work with the communities most affected by natural gas power plants: they must be prioritized in the clean energy economy. That’s why at CEJA we promote community-owned, resilient energy systems as a critical step toward retiring natural gas plants, through both large-scale solar-and-storage models and decentralized, distributed energy systems. These systems can create opportunities to build wealth through ownership, supply clean and renewable energy to the neighborhood without the need to fire up a gas plant, and generate badly needed jobs implementing these solutions. We must increasingly demand public investment in, and prioritization of, community-owned energy resiliency systems such as microgrids, solar and storage, and emergency energy systems in disadvantaged communities.

Our transition away from fossil fuels and toward a renewable energy economy must be smart and just. SB 64 is an important step to ensure that environmentally disadvantaged communities, such as Oxnard, also benefit from our state laws that seek to improve the quality of life of all Californians. By creating more transparency about our natural gas plants, we can better hold our regulators and energy producers accountable and work toward cleaner air for communities of color.  Indeed, by passing SB 64, California as a whole will continue on its path of being a national leader in taking on climate change.

Gladys Limon is Executive Director of the California Environmental Justice Alliance (CEJA), where she brings 15 years of experience in legal, policy, and community-based work for environmental justice and civil rights. CEJA is a statewide, community-led alliance that works to achieve environmental justice by advancing policy solutions, uniting powerful local organizing in the low-income communities of color most impacted by environmental degradation, and growing the statewide movement for environmental health and social justice. Previously, Ms. Limon was an attorney at Communities for a Better Environment, pursuing high-stakes environmental justice cases in southern California, and at the Mexican American Legal Defense and Educational Fund, litigating cases concerning anti-immigrant laws, racial discrimination, and the rights of low-income immigrant workers.

How Can We Turn Down the Gas in California?

UCS Blog - The Equation (text only) -

Southern California Edison's Mountainview Gas Plant. Photo: David Danelski

California’s deep commitment to addressing climate change and transitioning away from fossil fuels has helped establish the state as a worldwide hub for clean energy investment and innovation. Thanks in large part to the Renewables Portfolio Standard or “RPS”—a policy first enacted in 2002 and ramped up over time—renewables now meet about 30 percent of California’s electricity needs, and the state is on track to reach its 50 percent renewable target by 2030.

But California also has a lot of natural gas-fired power plants that release greenhouse gas emissions and pollute our air. After the state deregulated its electricity market in 1998, a combination of market manipulation and price caps led to skyrocketing electricity prices and rolling blackouts in 2000 and 2001. To make sure the state would never be left in the dark again, utilities and independent power plant owners built more natural gas power plants.

By the time this new generation capacity came online, the Great Recession was underway, the economy was slowing down and the state had also committed to greater reliance on clean, renewable electricity to address air pollution and climate change concerns. In 2002, renewables supplied just 11 percent of the state’s electricity needs. But in 2011, the state passed a law to reach 33 percent renewables by 2020, and in 2015 increased that to 50 percent by 2030. Today, clean energy advocates, including UCS, are supportive of 100% clean energy goals that would push renewables even higher.

For a while, renewable energy generation and natural gas power plants largely coexisted on the California grid. In some cases, it’s been a symbiotic relationship. The generation patterns of wind and solar are weather-dependent, making it necessary to find additional sources of power to meet energy and grid reliability needs when those resources are not around. In the past, California has used natural gas to play that role.

But as the state doubles down on reducing global warming pollution throughout all sectors of its economy, it must transition away from fossil-fueled electricity generation. Clean electricity will be a central strategy for reducing emissions associated with traditional electricity use as well as emissions associated with transportation and buildings. Replacing vehicles currently running on gasoline and diesel with vehicles powered by renewable electricity will significantly reduce air pollution. In addition, the state now depends on natural gas to heat most homes and buildings; affordable renewable electricity will also provide a cleaner fuel source for those needs.

California still uses a little bit of coal, but most of it will be phased out by 2020. In 2017, natural gas still supplied about a third of state electricity demand. If our goal is to decarbonize the electricity sector and reduce air pollution, we must continue to ramp up renewables while turning down the gas.

How much gas could California retire?


To understand what a cost-effective transition away from natural gas might look like, UCS analyzed the operations of the 89 combined-cycle (CCGT) and peaker natural gas plants located in the territory of the California Independent System Operator (CAISO), the grid operator that manages the electricity flow for about 80 percent of the state. UCS used an investment optimization model to identify how much gas generation could be economically retired between 2018 and 2030 while meeting the state’s mandated global warming emission reduction target and maintaining grid reliability.

According to the UCS analysis, California does not need to build any additional gas generation capacity in the CAISO territory to meet 2030 energy or grid reliability needs. In fact, nearly 24 percent of both CCGT and peaker capacity could be retired without negatively affecting grid reliability. This translates into 28 natural gas plants retiring by 2030. Notably, 12 of these plants are located in communities that are disproportionately burdened by air pollution.

In addition, the UCS analysis found that many more peaker plants could be retired if clean energy investments—like energy storage—were strategically located in certain areas on the grid that need local generation capacity to keep the grid reliable during power plant or transmission line failures.

What happens to the remaining plants?

UCS also analyzed how the operations of the remaining natural gas plants would change over time through 2030.  UCS’s results indicate that even as natural gas generation decreases while California ramps up renewable generation, many natural gas power plants may turn off and on much more frequently in 2030 than they did in 2018, potentially increasing emissions of criteria air pollutants like nitrogen oxides (NOx). This is because more solar generation will be available in the middle of the day, reducing the need for natural gas generation.  But in the evening, as the sun sets, gas plants will have to be turned back on to meet electricity demand, unless cleaner technologies, such as other renewables, energy storage, and load shifting to reduce demand, are substituted. In the most likely 2030 scenario, 16 of the 23 remaining CCGT plants modeled would go from starting and stopping close to zero times in 2018 to at least 200 times per year by 2030.

Many combined-cycle natural gas plants will start and stop much more frequently in 2030 compared with today. Some plants will go from close to zero starts today to starting once nearly every day of the year.
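
To make “starts per year” concrete, here is a minimal sketch of how startups might be tallied from hourly plant output data. The data format, the on/off threshold, and the toy generation profile are illustrative assumptions; this is not the UCS optimization model itself.

```python
import pandas as pd

def count_startups(hourly_mw: pd.Series, threshold_mw: float = 1.0) -> int:
    """Count off-to-on transitions in an hourly generation series."""
    on = hourly_mw > threshold_mw                 # unit considered "on"
    starts = on & ~on.shift(1, fill_value=False)  # off -> on edges
    return int(starts.sum())

# Toy example: a plant that sits idle all day and runs only during the
# evening ramp registers roughly one start per day, or ~365 per year.
hours = pd.date_range("2030-01-01", periods=8760, freq="h")
evening_only = pd.Series(
    [200.0 if 17 <= t.hour <= 22 else 0.0 for t in hours], index=hours
)
print(count_startups(evening_only))  # -> 365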

This increase in natural gas cycling could be a problem for air quality and the communities living near these plants because the NOx emissions associated with starting up a natural gas power plant can be much higher than a plant constantly running in steady-state operations. More analysis is required to understand how changing gas plant operations will impact air pollution and public health, in order to avoid a potential unintended consequence of more air pollution from natural gas plants as the state strives to reach future renewable energy and climate change goals.

To start getting a handle on how the natural gas fleet is changing in real time, UCS is co-sponsoring Senate Bill 64, which would make the data associated with the hourly startup, shutdown, and cycling of California’s gas fleet more accessible and require local air districts to analyze how changing power plant operations may be affecting air quality. SB 64 would also require state energy agencies to conduct a study that plans for how the state will reduce natural gas generation and accelerate the eventual retirement of gas plants, placing a priority on reducing natural gas generation in communities most impacted by air pollution.

Steps California can take today to reduce reliance on natural gas generation
  • Shift more evening electricity demand to daytime hours and target energy efficiency to lower evening demand.
  • Invest in more energy storage that saves excess solar generation for use after sundown.
  • Invest in a more diverse portfolio of renewable generation technologies to spread clean energy generation evenly throughout all hours of the day to reduce evening ramp needs and the need to cycle in-state gas plants.
  • Allow California’s grid operators greater access to clean energy generation resources outside the state to help further reduce the need to cycle in-state gas plants.
  • Target specific locations for clean energy investment so that new generation resources can meet local capacity needs, which can hasten the retirement of natural gas.

If we’re truly committed to reducing carbon emissions, cutting air pollution and avoiding the worst impacts of climate change, then we must move away from natural gas in a way that both makes sense for the communities most affected and keeps everyone’s lights on. All eyes are on California to show the world how to wean millions of people and an enormous economy off fossil fuels. It’s imperative we get this right.

Photo: David Danelski

New Defense Bill Strengthens the Military’s Flood Readiness and Saves Taxpayer Dollars—All While Addressing Climate Change

UCS Blog - The Equation (text only) -

This week President Trump is expected to sign into law H.R. 5515, the John S. McCain National Defense Authorization Act (NDAA) for Fiscal Year 2019.

The National Defense Authorization Act for Fiscal Year 2019 (NDAA FY 2019) builds for the future and reflects the reality of climate change, providing a useful roadmap for Congress as it considers different infrastructure proposals.

The Armed Services Committee deserves recognition for their leadership in ensuring that taxpayer dollars are spent wisely and that new military construction is built to be flood-ready and resilient to future environmental conditions including climate change.

The Department of Defense and military branches, such as the Navy, have made clear that climate security is national security. Recent examples include:

  • In 2017, the Naval Facilities Engineering Command (NAVFAC) Headquarters released a climate adaptation planning handbook and tools (Appendix F and Appendix G) to help military installation planners implement viable strategies to address climate change impacts.
  • In 2016 the Department of Defense led a multi-agency team of researchers to develop an online database and tool to help more than 1,700 military installations worldwide plan for different sea level rise scenarios and timeframes based on each planners’ risk tolerance.

The NDAA for Fiscal Year 2019 provides support for the Department of Defense to continue these types of climate-ready activities while also representing a good step in the right direction by Congress on climate preparedness.

Climate and energy resiliency

The NDAA for Fiscal Year 2018 required the Pentagon to report on how military installations and overseas staff may be vulnerable to climate change over the next 20 years. The language in that bill (see Section 335, “Report on effects of climate change on Department of Defense”) recognized that climate change is a direct threat to the national security of the U.S. and “is impacting stability in areas of the world both where the United States Armed Forces are operating today, and where strategic implications for future conflict exist.”

The climate and energy resilience language in the NDAA FY 2019 is a smart next step following the NDAA FY 2018 that required vulnerability assessments of military installations. The NDAA FY 2019 (see Section 2805) requires the Defense Department to direct the different branches to implement multiple climate and energy resiliency measures and standards. Section 2865 also includes important preparedness language that authorizes the Secretary of Defense to use funds to repair and mitigate the risk to highways if they have been impacted by recurrent flood events and fluctuations in sea levels.

The bill defines “energy and climate resiliency” as the:

“anticipation, preparation for, and adaptation to utility disruptions and changing environmental conditions and the ability to withstand, respond to, and recover rapidly from utility disruptions while ensuring the sustainment of mission-critical operations.”

Climate resiliency: Ensuring new construction is flood-ready

The NDAA FY 2019 (Section 2805) requires the Defense Department to direct the different military branches to implement flood standards that help avoid building in the floodplain when possible and, when that isn’t possible, ensure that new projects are more resilient to future floods.

The language in section 2805 requires smart, flood-ready resilient measures including:

  • disclosure of whether a proposed project will be sited within or partially within a 100-year floodplain;
  • a specific risk mitigation plan if the project is sited within or partially within a 100-year floodplain;

The bill also requires the Secretary of Defense to submit a report to the congressional defense committees on proposed projects that are to be sited within or partially within the 100-year floodplain.  Included in this report must be:

  • An assessment of flood vulnerability for the proposed project;
  • Information on alternative construction sites considered and an explanation as to why those sites do not satisfy mission requirements; and
  • A description of planned flood mitigation measures.


Finally, and perhaps most importantly, it also sets minimum base flood elevation requirements for new construction in the 100-year floodplain. Base flood elevation (BFE) is the height flood water is expected to reach during a base flood. Non-mission-critical buildings and facilities must be built 2 feet above the BFE; mission-critical buildings, 3 feet above the BFE.
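
In code form, the rule is a one-line formula. A minimal sketch follows; the 10-foot BFE used in the example is an assumption for illustration, not a figure from the law.

```python
# Freeboard rule sketch: required lowest-floor elevation = BFE + 2 ft
# (non-mission-critical) or BFE + 3 ft (mission-critical).
def required_elevation_ft(bfe_ft: float, mission_critical: bool) -> float:
    freeboard_ft = 3.0 if mission_critical else 2.0
    return bfe_ft + freeboard_ft

# Assumed example: a site where the base flood elevation is 10 feet.
print(required_elevation_ft(10.0, mission_critical=False))  # 12.0
print(required_elevation_ft(10.0, mission_critical=True))   # 13.0
```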

At least 600 communities, ranging from big cities and mid-size regions like Hampton Roads, VA, to small towns, are already implementing these commonsense flood-ready standards.  These base flood elevation standards ensure that structures are built from 1 to 3 feet (“freeboard”) above the “100-year flood” level.

Section 2865 of the NDAA FY19 is also critical to advancing preparedness on military installations. It gives the Secretary of Defense the flexibility to use funds to repair highways and mitigate their flood risk when access to a military installation has been impacted by recurrent flood events and fluctuations in sea levels.

Why make new construction flood-ready? Connecting the dots…

#1  Climate Change: Climate change is driving rising seas & more extreme precipitation

This commonsense standard will help to protect defense facilities but will also save taxpayers money by ensuring the newly built structures can withstand rising seas and riverine flooding, both of which are becoming more frequent and costly.

US Naval Station Norfolk

Rising Seas: The military is at the frontlines of sea level rise given its large coastal presence, its interconnectedness with the surrounding communities and its important role in maintaining our national security. With higher water, high tides will reach further inland, tidal flooding will become more frequent and extensive, and storms will have more water to drive ashore.

In Hampton Roads, VA, for example, installations will be challenged with getting military personnel to their bases when the roads are inundated, and with servicing ships at Naval Station Norfolk, the largest naval base in the world. If the electricity at the piers must be turned off due to extra-high tides, deployment and operations will be affected.

Extreme Precipitation:  UCS’s fact sheet “Climate Change, Extreme Precipitation and Flooding: The Latest Science” summarizes how global warming is shifting rainfall patterns, making heavy rain more frequent in many regions.  This extreme rainfall, along with human alteration of the land and development in floodplains, is placing more and more places at risk of destructive and costly floods.  We know that climate change is worsening extreme weather events, making them more frequent and severe, as we are seeing with extreme rainfall.

North Carolina Army National Guardsmen and local emergency services assist with evacuation efforts in Fayetteville, N.C., Oct. 08, 2016. Heavy rains caused by Hurricane Matthew led to flooding as high as five feet in some areas.

#2: Dollars & Sense: Investing now means savings down the road

Dollars: As downpours become more frequent and intense and as seas rise, we’re seeing a toll on our nation’s coffers. Last January, NOAA found that 2017 was the costliest year on record for weather and climate disasters.

Underwater, UCS’s recent analysis of properties at risk of chronic inundation due to sea level rise, indicates that within the lifetime of a mortgage, 300,000 homes worth $117.5 billion and 14,000 commercial properties worth $18.5 billion are at risk of this type of tidal flooding.  By the end of the century, these numbers grow to a collective 2.5 million homes and businesses worth $1 trillion. These numbers do not capture the value of coastal military installations or other infrastructure such as roads, bridges, urban drainage, or water or energy utilities, and therefore provide only a glimpse of what’s at stake. Inland flooding and storms will only exacerbate this risk and the costs of future impacts.

Sense: While the cost assessment of natural disasters for just the last year alone is daunting, we can be smarter about how we spend federal taxpayer dollars while also increasing the nation’s resilience. An assessment of federal investments to reduce the risk of multiple natural hazards makes just this case.  The National Institute of Building Sciences (NIBS) issued Natural Hazard Mitigation Saves: 2017 Interim Report, which found that every $1 invested in disaster mitigation saves taxpayers $6.  The findings are even better for riverine flooding: for preventive measures like the Department of Defense’s flood standard, there is a $7 benefit for every dollar invested.  NIBS also found that the Gulf Coast and other regions in the United States benefit even more from standards that ensure building above the legally mandated height.

Energy resiliency

The Armed Services Committee also deserves praise for addressing energy resiliency in the NDAA FY 19. The law incentivizes emissions reductions of new buildings and facilities by requiring that the Secretary of Defense provide an energy study or life cycle analysis for each requested military construction. Life cycle analysis can help inform better decisions by providing data on the projected energy needs, the costs and related environmental impacts over the life span of that project.

Finally, the law requires the Secretary of Defense to incorporate data from authorized sources on the projections of changing environmental conditions during the design life of existing or planned new facilities or infrastructure in the overarching facilities criteria and any other subsequent regulations.

Examples of these conditions and of “reliable and authorized” sources include:

  • The Census Bureau for population projections;
  • The National Academies of Sciences for land use change projections and climate projections;
  • The U.S. Geological Survey (USGS) for land use change projections; and
  • The U.S. Global Change Research Program (USGCRP) and the National Climate Assessment (NCA) for climate projections.

Time to keep up the momentum on flood and climate readiness

The NDAA FY 2019 moves the needle on flood and climate readiness in many ways.  It reflects the reality of climate change and the real challenges we face as a nation, particularly when it comes to the impacts of sea level rise and flooding.  It also recognizes the importance of informing policies and plans with the most recent science.  Finally, it provides a solid example of policies for planning for and mitigating these risks, including the need to: 1) disclose flood risk; 2) avoid placing new development in risky areas, particularly floodplains; and 3) implement mitigation measures to reduce flood risk.

The NDAA FY 2019 provides a valuable and badly needed example of bicameral, bipartisan leadership on flood and climate readiness and on using federal taxpayer dollars wisely.  Hopefully, Congress will continue to move the needle on flood and climate readiness to ensure our communities and military are more resilient to extreme weather events and climate change.

Photo: Ian Swoveland/North Carolina National Guard/FEMA

At the Trump USDA, the “D” Stands for “Dow”

UCS Blog - The Equation (text only) -

Photo: USDA/Flickr

Everywhere you look in the Trump administration, there’s the Dow Chemical Company. Or rather, DowDuPont, as the company has been known since a 2017 corporate merger. The influence of this multinational chemical and agribusiness conglomerate is being felt in regulatory decisions involving Dow’s products, and the administration has pulled multiple Dow executives and lobbyists through the revolving door into high-level government positions.

The latest example of the latter? Meet Scott Hutchins, the career Dow exec and pesticide booster nominated last month to oversee science at the USDA.

Hutchins is a scientist…but is that enough?

To be fair, Hutchins is a vast improvement over the White House’s first choice (remember this guy?) for the job of USDA under secretary for research, education, and economics (REE), a position that encompasses the role of the department’s chief scientist. A trained scientist with a PhD in entomology, Hutchins clearly meets the criteria Congress set for this position in 2008. And to be sure, scientific training and experience with agriculture is critical for the person who will manage the USDA’s four science agencies and its $3 billion annual investment in science to support farmers and protect and enhance our food supply. It was the main reason UCS deemed the previous nominee unacceptable and more than 3,100 independent scientists urged the Senate to reject him.

But do Dr. Hutchins’ scientific credentials alone make him the right person for the job? I don’t think so.

Most of the USDA’s scientific work is carried out in four agencies that Hutchins would directly administer, and their work affects all of us every day. For example:

  • Some 2,000 scientists at the department’s Agricultural Research Service conduct research, often in collaboration with universities, that helps keep our food safe and shapes farmers’ decisions about what to grow and how to grow it.
  • Researchers at the Economic Research Service analyze the state of the agricultural economy, track food prices, and evaluate the economic impacts of farm pollution and efforts to curb it.
  • Number-crunchers at the National Agricultural Statistics Service conduct the census of agriculture every five years, providing consistent, comparable, and detailed agricultural data for every US county, and analyze other data to identify trends in food and farming.
  • And the National Institute for Food and Agriculture awards grants to scientists working across the country to meet many of our greatest challenges, from fighting hunger and food insecurity to reducing agriculture’s greenhouse gas emissions and preparing the next generation of scientists and farmers.

As the REE under secretary oversees all this work, he or she needs to have an expansive view of our food and agriculture system. And this is what concerns me most about this nomination: There are many ways we can address the system’s challenges, but Scott Hutchins has spent his whole career on just one of them.

A career steeped in Big Ag

Hutchins is a pesticide guy. Since 1987, he has worked to develop and refine marketable chemical solutions to farm pests at Dow AgroSciences’ pesticide and seed division, a unit renamed Corteva Agriscience last year when it was spun off from the newly-merged DowDuPont. If he joins the USDA, he will leave the position of Corteva’s global leader of integrated field sciences; previously, he was Dow AgroSciences’ global director for crop protection R&D.

More than 30 years’ worth of ties to Dow and other agribusiness corporations will be difficult for Hutchins to fully disentangle, as his public financial disclosure form and ethics agreement illustrate. He will receive a severance payment and a prorated 2018 bonus from Corteva/DowDuPont upon his resignation, and the company will continue to pay for his and his wife’s health insurance, for life, under its retiree plan. For two years after the severance and for as long as he participates in the health insurance plan, he’s committing to recuse himself from participating “personally and substantially in any particular matter” involving DowDuPont…though there’s a loophole that allows for a written waiver, which other conflicted USDA officials have received. And anyway, who’s to say that any given decision he’d make at the USDA would have no effect on a company as embedded in the agriculture system as Dow?

Hutchins has also pledged to divest a lot of personal stock holdings—copious Dow stock but also that of Big Food companies including Coca-Cola and Nestlé. And he will resign from the Board of Directors of AgriNovus of Indiana (described as “an industry sector initiative formed by the Central Indiana Corporate Partnership,” which in turn involves 55 corporations). That’s a whole lot more industry ties he will officially sever but inevitably bring with him, in some way, to the USDA.

Another day, another betrayal at the USDA

This is part of a troubling pattern. We’ve already documented the ways USDA Secretary Perdue—who literally applauded the Hutchins nomination—has catered to large agribusiness corporations at the expense of farmers and the public, just in his first year. And the list of industry-friendly actions just keeps coming.

Take the president’s trade war. A July op-ed by Alicia Harvie of the nonprofit Farm Aid is a good reminder that Perdue’s rationale for the trade war—China’s theft of patented GMO seeds from US farm fields—isn’t really about farmers at all:

We should remember that farmers are not the ones who reap benefits from patented seed technologies. Those profits go to the patent-holding company itself, which these days is one of ever-fewer multinational seed conglomerates, while farmers watch their seed prices skyrocket.

The supposed reason for this trade war with China, then, is not to protect farmers — it’s to shelter multinational seed and chemical giants, like Bayer-Monsanto, Dow-Dupont and Syngenta-ChemChina, and other agribusiness giants who benefit from free trade regimes that put corporate profits before people. 

Now, the administration is trumpeting a $12 billion bailout (but don’t you dare call it that) for farmers caught in the crisis the president himself manufactured. But some farmers are rightly pessimistic that the money will end up in their pockets rather than in agribusiness coffers. The Trump USDA’s betrayal of farmers continues unabated, it seems.

Dow and the Trump administration are cozy, and getting cozier

Even among huge agribusiness corporations, Dow is particularly tight with the Trump administration. That relationship began with a million-dollar gift from Dow CEO Andrew Liveris to the president-elect’s inauguration fund. The new president then tapped Liveris to lead his short-lived manufacturing council. (President Trump abruptly disbanded the council last summer after some of its members—though not Liveris—resigned in protest of the president’s response to racist violence in Charlottesville. But that’s another story.)

As Bloomberg reported in April 2017, a pre-merger Dow Chemical nearly tripled its lobbying expenditures between 2008 and 2016.

Source: Center for Responsive Politics, https://www.opensecrets.org/lobby/clientsum.php?id=D000000188&year=2016

In 2017, as the Trump administration got underway, the newly-merged DowDuPont ramped up its lobbying even further.

Source: Center for Responsive Politics, https://www.opensecrets.org/lobby/clientsum.php?id=D000069022&year=2018

Clearly, Dow saw the Trump era as a promising one for its policy priorities, and it appears they were right. The company’s investment in the Trump administration started paying off in March 2017, when the EPA suddenly reversed its planned ban on the pesticide chlorpyrifos in an apparent gift to its manufacturer…Dow. Not satisfied with that win, the company has continued to press the administration and its allies in Congress to weaken pesticide regulations in ways that would harm endangered fish and wildlife.

And now former Dow officials and lobbyists are literally holding the reins of government. The Hutchins nomination brings the number of Dow employees appointed to high-level USDA jobs to three. If that doesn’t sound like a lot, note that there are only 13 Senate-confirmable positions at USDA. Hutchins would join former Dow AgroSciences lobbyist Ted McKinney, who was confirmed last year as USDA under secretary for trade, and Ken Isley, who was appointed (without need for Senate confirmation) to head the Foreign Agricultural Service. Like Hutchins, the two spent many years (19 and nearly 29, respectively) at Dow and its subsidiaries. There’s also Rebekah Adcock, an advisor to Secretary Perdue who was a lobbyist at CropLife America, a pesticide lobby group that counts Corteva among its members, and who got caught last fall opening the department’s door a little wider for her former pesticide industry colleagues.

Add to all this the pending nomination of former Dow lawyer Peter Wright as assistant administrator of the EPA office that manages the Superfund program and other chemical hazards programs—my colleague Genna Reed recently blogged about why that’s so troubling.

Doubling down on Dow

It’s clear that DowDuPont already wields significant influence in the Trump administration. Moreover, Dow and a small number of other multinational agribusiness conglomerates have enormous control over US agriculture and our food system, a situation that pre-dates Trump, of course. The trend toward corporate consolidation has increasingly detrimental effects on farmers (as our allies at the Organization for Competitive Markets explain), and DowDuPont is emblematic of that trend.

For example, with the DowDuPont merger completed and the recently-approved merger of Bayer and Monsanto underway, it’s been estimated that the resulting two mega-companies sell three-quarters of all corn seeds planted by US farmers, and nearly two-thirds of all soybean seeds. Globally, Bayer-Monsanto, DowDuPont, and Switzerland-based Syngenta now sell 59 percent of the world’s seeds and 64 percent of its pesticides.

This is the world Scott Hutchins inhabits, a world in which giant corporations develop and patent a few tools and products that make up the bulk of our agriculture system. It’s a world, and a mindset, that is incompatible with the kinds of ecologically-sophisticated, knowledge-based solutions farmers say they want and scientists urge the USDA to invest in. It’s also incompatible with what eaters are increasingly looking for: healthy and sustainable food. Yet perversely, the Trump USDA is embracing that model as the nation’s official stance on what agriculture should be. With yet another Dow exec in a position of power at the USDA, that model would be reinforced further.

Bottom line: taxpayer-funded research of the kind Hutchins would oversee at the USDA should focus on solutions in the public interest, not Dow’s interest. Ultimately, that’s why UCS is decidedly less-than-enthusiastic about Scott Hutchins. But he is the nominee, and the Senate must now vet his appointment and give its advice and consent. In another post, I’ll share my thoughts on key questions Senators should ask him.

My No-Regrets, Enthusiastic Transition to Driving an EV

UCS Blog - The Equation (text only) -

It is summertime.  I want to take a brief respite from the horrific news that dominates the headlines and public debate, and let you in on what for many may be a secret:

Electric cars rock.

In February, I took out a three-year lease on an all-electric Chevy Bolt EV (confusingly named, because its cousin, the Chevy Volt, is a hybrid electric/gas car).  I’ve had the car for about six months.  Here is what I have learned.

It is really fun to drive.  Because an electric motor delivers power smoothly across a much wider range of speeds than its gasoline counterpart, it does not need gears. This means it ramps up very quickly and very linearly—no peaks and valleys as you drive up a highway ramp and merge into traffic, for example.  Similarly, the car uses “regenerative braking”—the motor takes the kinetic energy from slowing down or stopping and transfers it back to the battery.  This means that the car comes to a gradual, smooth stop when you take your foot off the pedal, and when you get used to this “one-pedal system,” you use the brake only for the relatively rare unexpected need to stop quickly.  Driving an EV is like ice skating rather than walking or running—easier to get torque when you start out, more like a glide when you are at cruising speed, and a much smoother stop.

It is more, not less, convenient than a gasoline-powered car.  I installed a “Level Two” charger in my garage.  This required an electrician to wire an outlet for 220 volts (the same as what a typical dryer requires), and install a charger which I purchased.  I charge the car once or twice a week overnight.  Plugging it in takes about five seconds, and the charging takes between 4-8 hours.  When I wake up, the battery is full.  No more trips to the gas station.  It will go between 200-280 miles on a full charge, depending upon weather (cold New England days decrease the range) and driving (highway driving uses more energy than city driving).  Because of the long range, I rarely need to use public charging stations while on the road.  I’ve used them five times since I leased the car, typically to add about fifty miles of range.  This takes about twenty minutes of charging time, which I use to stretch, get a cup of coffee or answer e-mails.   For the most part, these public charging stations are strategically located along major roadways, and easy to locate and use with apps on my phone.

It is more affordable than you think. The Chevy Bolt lists for about $37,000, which is too high a price for many to afford.  However, there is a federal tax credit of $7500.  Officially, that tax credit only applies if you buy rather than lease the car, but many dealers will pass the value of the tax credit on in a lease.  In some states, like Massachusetts where I live, there is an additional rebate of $2500, which applies to purchases or leases.  The bottom line is that I paid $2500 down for a three-year lease, got that down payment back from the MA rebate program, and now pay $240/month in lease payments.  At the same time, I am saving about $60/month in fueling costs, as the electricity cost per mile is less than half that of gasoline, even in a state like Massachusetts that has relatively high electricity costs and relatively low gas prices.  And not paying for oil changes, air filters, belts, brake pads, and many other maintenance expenses for a gas-fired car also saves money.
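
For the curious, here is the per-mile fuel math as a back-of-the-envelope sketch. All four inputs are rough, Massachusetts-like assumptions for illustration, not figures from this post.

```python
# Back-of-the-envelope fuel cost comparison (all inputs are assumptions).
ELECTRICITY_USD_PER_KWH = 0.20   # assumed residential electricity rate
EV_MILES_PER_KWH = 4.0           # assumed EV efficiency
GAS_USD_PER_GALLON = 2.70        # assumed gasoline price
GAS_MILES_PER_GALLON = 25.0      # assumed comparable gasoline car

ev_per_mile = ELECTRICITY_USD_PER_KWH / EV_MILES_PER_KWH    # $0.050/mile
gas_per_mile = GAS_USD_PER_GALLON / GAS_MILES_PER_GALLON    # $0.108/mile

print(f"EV:  ${ev_per_mile:.3f}/mile")
print(f"Gas: ${gas_per_mile:.3f}/mile")
# With these assumptions, electricity costs about half as much per mile as
# gasoline; at roughly 1,000 miles a month that is ~$58/month saved,
# close to the ~$60/month described above.
```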

It feels really good not to cause unnecessary pollution.  It is a challenge to identify things you can do in your personal life to lower your carbon footprint.  For example, carbon-intensive airplane travel is a must for me, as my job requires a lot of travel.  And some other options, such as rooftop solar, are not viable for me due to the orientation of my house and surrounding foliage.  But according to the Department of Energy, the electricity I use for driving my car generates about 3500 pounds of greenhouse gases per year, while an average gas-fired car generates 11,500.  That is a difference of about 4 tons per year of emissions.  To put that in perspective, the average resident of Massachusetts has a carbon footprint of about 10 tons per year.  Simply shifting to an electric car drops my carbon footprint by approximately forty percent.  Or to put it another way, according to UCS data, I am driving the equivalent of a gas-fired car that gets over 100 miles per gallon.
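
The arithmetic behind that forty percent figure, as a quick check using only the numbers cited above:

```python
# Footprint comparison using the figures cited in the post.
LBS_PER_TON = 2000

ev_lbs_per_year = 3500      # EV charging emissions (from post)
gas_lbs_per_year = 11500    # average gasoline car (from post)
ma_footprint_tons = 10      # average Massachusetts resident (from post)

saved_tons = (gas_lbs_per_year - ev_lbs_per_year) / LBS_PER_TON
print(f"Saved: {saved_tons:.0f} tons/year")                    # 4 tons
print(f"Footprint cut: {saved_tons / ma_footprint_tons:.0%}")  # 40%
```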

A lot more work needs to be done to scale up and broaden access to EVs

As UCS’s president, I was obviously very motivated to drive an electric car.  But going through all the steps made me realize that we must do a lot more to persuade and assist people who might not have my level of motivation or income.  Here are some of the most important barriers we must remove.

Affordability.  While the Chevy Bolt EV is not a luxury car, the price is still out of reach for many Americans.  And the $7500 tax credit, while very helpful for some, fully benefits only those who pay taxes of $7500 or more (i.e., families that earn more than about $63,000 per year).  Moreover, that tax credit only applies to new cars, so it typically does not reach lower-income buyers, who more often buy used cars.  And it is set to start expiring for carmakers, such as GM, that have sold 200,000 or more electric cars.  A high priority must be to extend that tax credit at the federal level and expand upon it at the state level to effectively encourage moderate- and lower-income drivers to make the switch.  Many expect that by the mid-2020s no such subsidy will be needed, as falling battery costs will allow EVs to reach cost parity.  But we are not there yet.

Practicality.  What makes an electric vehicle super convenient for me is that I can plug it in at night in my garage.  Yet many don’t have this option. To make electric cars suitable for a larger range of drivers, we need to make a significant investment in very fast public charging stations, located in convenient places such as shopping malls, libraries, town centers, apartment buildings, and workplaces, among others.  Funding for some of those investments will come from the Volkswagen settlement, in which VW agreed to settle claims over its fraudulent emissions testing by spending over $2 billion on EV infrastructure, outreach, and education.  Some utility companies are also starting to invest in charging infrastructure, recognizing that the growth of an electric vehicle market will add business opportunities for them.  But governments will also need to play a role in making EVs accessible, and will need to identify new sources of funding, such as revenues from a cap-and-invest program for transportation.

Education and One Stop Shopping.  It took a lot of time to sort out all the practical aspects of owning an EV.  While there are helpful websites, I still had to do significant research to figure out, for example, what type of charger I would need for my home and how to find a good electrician.  I was motivated enough to overcome those obstacles, but for someone who doesn’t have my level of motivation, having to figure all this out might make an EV a non-starter.  To overcome the hassle factor, we need to make EV ownership very easy.  For example, just as utility companies provide one-stop shopping energy efficiency services, they can be tasked with installing home charging stations.

The past six months of EV driving have been illuminating for me.  They have shown me that the technology is here now, and that it is a pleasure to take advantage of it.  But what is needed now is a surge of political will to make the necessary investments to scale up usage, and a stepped-up commitment from automakers to build and market a wide range of EVs that are affordable and meet the needs of all drivers.

Photo: Dave Reichmuth

Opposition to Trump’s New Low-Yield Nuclear Warhead

UCS Blog - All Things Nuclear (text only) -

And the “consensus” on rebuilding the US nuclear stockpile

The Trump administration’s program to deploy a new, low-yield variant of the W76 warhead carried by U.S. submarine-launched ballistic missiles has faced relatively strong opposition in Congress, with almost all Democrats and several Republicans supporting legislation to eliminate or curb the program.

Indeed, the low-yield warhead is clearly outside the “bipartisan consensus” that supporters have often claimed exists for the Obama administration’s 30-year, $1.7 trillion program to maintain and replace the entire U.S. nuclear stockpile and its supporting infrastructure. Importantly, as I’ll get to later, such a consensus never really existed in the first place.

Congressional roadblocks  

Two Pantex production technicians work on a W76 while a co-worker reads the procedure step-by-step. (Photo NNSA)

But let’s start with the new warhead. The attempts to stop it have been noteworthy. A list of most of the votes and amendments on the low-yield option can be found here. Although the final FY19 National Defense Authorization Act (NDAA) that the Senate passed yesterday approves the low-yield warhead, the Appropriations committees—on a bipartisan basis—have generally funded the program but also consistently sought more information on it.

Most recently, on June 28, the Senate Appropriations Committee approved by voice vote an amendment from Sen. Jeff Merkley (D-OR) that would prohibit deployment of the proposed new warhead until Secretary of Defense James Mattis provides Congress with a report that details the implications of fielding it. The Department of Energy (DOE) would still be able to produce the low-yield variant, work that would take place as a part of the ongoing Life Extension Program for the W76 warhead that is scheduled to be completed in Fiscal Year 2019. The W76 warheads have a yield of 100 kilotons; the lower-yield variant will have a yield of 6-7 kilotons.

If nothing else changes, Defense Secretary Mattis should be able to produce the required report in time for deployment to proceed. Although the Navy’s precise timing for deployment is classified, officials have hinted that it should not take more than a year or two. In other words, if the program proceeds as planned, the new warhead could be deployed while President Trump is still in office. Fielding a new weapon in three years or less would be remarkably fast.

But note that phrase “if nothing else changes.” An election is going to happen. There is a chance that Democrats could take the House and (less likely) the Senate. If so, then deployment of the low-yield warhead – and perhaps more pieces of the enormous nuclear rebuilding plan – could come into question.

A rapid response to Trump’s warhead plan

The proposal for the low-yield warhead was included in the Trump administration’s Nuclear Posture Review (NPR), one of two “supplements” to the already ambitious program to revamp the entire nuclear arsenal developed by the Obama administration. (The second supplement is a nuclear-armed sea-launched cruise missile that is many years off.)  The NPR described the first supplement as a “near-term” effort to “modify a small number of existing SLBM warheads to provide a low-yield option.”

Democratic opposition to the proposal was swift. When a near-final version of the NPR was leaked to the press in January 2018, sixteen senators wrote a letter to President Trump expressing opposition to the low-yield warhead.

More recently, in May, broader opposition emerged when more than 30 former officials, including former defense secretary William Perry, former secretary of state George Shultz, and former vice chairman of the Joint Chiefs of Staff Gen. James Cartwright (USMC Ret.) wrote a bipartisan letter to Congress calling the new warhead “dangerous, unjustified, and redundant.”

Shortly after that letter was sent, 188 members of the House, including all but seven Democrats and five Republicans, voted in favor of an amendment to the annual NDAA that would have withheld half the funding for the low-yield warhead until Secretary Mattis submitted a report to Congress assessing the program’s impacts on strategic stability and options to reduce the risk of miscalculation. While the amendment failed, it is notable that, in addition to overwhelming Democratic support, five Republicans voted for it.

Then in June, an amendment to the House Energy & Water Development Appropriations Act showed even stronger opposition to the low-yield warhead. Rep. Barbara Lee (D-CA) proposed eliminating all the funding for DOE’s work on the program, in effect killing it outright. This much more aggressive approach received 177 votes, including all but 15 Democrats. Moreover, this vote came after Rep. Lee succeeded in getting the Appropriations Committee to include language requiring Mattis to submit a report on “the plan, rationale, costs, and implications” of the new warhead.

While the Senate has not had any votes on the low-yield warhead on the floor, several Democrats have attempted to cut or fence money for the program in both the Appropriations and Armed Services Committees, culminating in the successful effort by Senator Merkley to prohibit deployment until Secretary Mattis produces a report about the implications of doing so, as highlighted above.

Indeed, both the Senate and House appropriations committees expressed concern that the administration has not provided enough information to make an informed decision about the new weapon.

Will the “bipartisan consensus” unravel?

In the House, it’s clear that a “bipartisan consensus” does not exist for the Obama program to revamp the arsenal, at least not for the program in its entirety. While the recent vote against the Trump administration’s low-yield warhead reflected almost unified opposition to a new weapon by the Democrats, there was similar opposition to the planned Long-Range Stand-Off (LRSO) weapon – the new nuclear-armed air-launched cruise missile – even though it was put forward by the Obama administration.  In 2014, 179 House members voted to eliminate funding for the program, including all but 18 Democrats. More recent votes to cut the program back have also enjoyed strong Democratic support.

On the other side of Congress, it has been several years since the Senate has had a floor vote on any nuclear weapons program, so it is harder to judge the level of support for revamping the entire arsenal. Notably, Sen. Jack Reed, the ranking member on the Senate Armed Services Committee, has generally voiced support for the Obama administration’s plan to date. But this year, he led an attempt in the Armed Services Committee to fence funding for deployment of the low-yield warhead, an effort that failed along party lines but became the model for the successful Merkley amendment in the Appropriations Committee, on which Sen. Reed also serves. Sen. Reed also supported a separate Merkley amendment in the Appropriations Committee to eliminate all funding for the low-yield warhead, an attempt that failed largely along party lines.

Clearly, the low-yield warhead is not a part of any “bipartisan consensus.” The question becomes whether the debate over it could be the tipping point that leads to more concerted opposition to some of the new weapons systems in the larger plan, including the LRSO.

That question takes on increased salience when one considers the possibility that Democrats could take the House in elections this fall. While the low-yield warhead likely will be produced in Fiscal Year 2019, its deployment could become a major battle in the new Congress. If that is the case, the supposed “bipartisan consensus” in support of the Obama administration’s plan to replace the entire U.S. nuclear arsenal with a suite of new warheads and delivery vehicles could potentially come unraveled.

Chronic Flooding and the Future of Miami

UCS Blog - The Equation (text only) -

Photo: Ben Grantham/Flickr


The Union of Concerned Scientists (UCS) recently released a report analyzing the impacts of chronic tidal flooding on U.S. coastal properties in the lower 48 states. The number of homes and businesses at risk, their value, the tax base they represent, and, most importantly, the people affected are startling. The report found that by 2045, 311,000 homes, worth $117.5 billion by today’s market values, could be at risk of chronic flooding driven by climate change. By 2100, 2.4 million homes, worth approximately $912 billion, and 4.7 million people will be at risk. Nowhere are these realities felt more than in Florida, which bears 40 percent of the risk, and they will only grow as sea levels continue to rise. Ultimately, the impacts of climate-driven chronic flooding point to a greater potential crisis for low-income communities.

In my time at UCS, I served as an on-the-ground researcher and advocate for low-income communities. I was lucky to be able to work in my hometown examining how climate-driven chronic flooding is changing Miami. I worked with local residents to better understand how they were already dealing with chronic flooding and how they could plan for an uncertain future.

The two faces of climate gentrification

Longtime freedom fighter and community leader Paulette Richards, of Miami's Liberty City, introduced me to the concept of climate gentrification, a term she coined in response to the issues her neighborhood was facing. When I met with her in 2014, she talked about how her community was changing under the pressure of expedited foreclosures. She understood that she was on higher ground and knew that this was becoming a desirable location for developers looking to capitalize on the public's growing awareness of sea level rise. She said, "I've seen gentrification, but I call this climate gentrification." Over the years, Paulette and her fellow community leaders in Miami have tracked this shift. As longtime Liberty City residents are edged out of the community they built, they are being directed to the southernmost part of the county, relocating to an area that is at a much lower elevation and neighbors the Turkey Point nuclear power plant.

Now the media and others are taking notice and verifying what Paulette has known for many years. In 2017, Scientific American published a story called "Higher Ground Is Becoming Hot Property As Sea Level Rises." The article details how climate-driven gentrification is happening in Miami and how more people, including academics, are finally tracking these changes.

Only four miles from Paulette's home sits the neighborhood of Shorecrest, part of the City of Miami. It's a low-elevation, mixed-income community that shows what chronic inundation looks like on the ground. Much like Miami Beach, Shorecrest consistently floods during high tides. However, unlike high-end Miami Beach, it does not have the same access to the kind of funds needed to properly address chronic flooding. In front of a strip of affordable rental apartments, I met with residents who told me about their frustration with chronic flooding. Many have seen their jobs affected by the sometimes-impassable floods; there have been problems accessing transportation, disruptions to garbage collection, and possible health impacts from wading through waters that have tested high for bacteria. Low-income residents of low-elevation communities without the resources to adapt to climate change face the toughest questions: Do we leave? Where will we go? Can we afford to leave? How much time do we have to make these decisions? Local officials face difficult questions too. The development projects that would build adaptively and could bring in the tax revenue needed to fund community-wide adaptation are often the same projects that lead to gentrification.

How can coastal areas find the resources to adapt while maintaining the integrity of their communities?

The City of Miami recently passed a bond measure that provides almost $200 million to deal with sea level rise. While this is a step in the right direction, the task of equitable, sustainable adaptation will require much more funding and more collaborative support from state and federal agencies. In sheer square miles, the City of Miami is roughly four times the size of Miami Beach, yet Miami Beach alone has allocated $500 million to deal with its sea level rise issues. Add to that the complex canal and drinking water systems and the varied needs of many different communities, and the challenge is daunting.

What can be done?

Will my hometown still be the diverse, multicultural, thriving metropolis that I know and love, or will it slowly become a monochromatic string of islands catering to the recreational fancies of the rich? Rising seas stand to wipe out not only miles of coastline and property, but decades and even centuries of culture, while displacing millions of people by the end of this century.

Our greatest hope of avoiding the worst is that we, as a global community, adhere to the Paris Agreement, which holds warming to less than 2°C. In that case, the vast majority of homes at risk in Florida (93%) would be spared.

Additionally, not just property owners but all residents and businesses in coastal communities must be aware of their vulnerability and of when this type of tidal flooding will become disruptive to their daily lives. UCS provides a mapping tool for gathering this information.

Finally, elected officials must make equity a priority when designing and planning for the future to ensure resources are distributed not just to the wealthy but to those who have fewer resources to plan and implement solutions.

For more information about the study, including a discussion of solutions, please visit Underwater: Rising Seas, Chronic Floods, and the Implications for US Coastal Real Estate.

Nicole Hernandez Hammer is a sea-level researcher, climate change expert and environmental justice advocate. A Guatemalan immigrant, Ms. Hernandez Hammer works to address the disproportionate impacts of climate change on communities across the US. Most recently, Ms. Hernandez Hammer served as the climate science and community advocate at the Union of Concerned Scientists. She was the Florida field manager for Moms Clean Air Force, and an environmental blogger for Latina Lista. Before that, she was the assistant director of the Florida Center for Environmental Studies at Florida Atlantic University, and coordinated the Florida Climate Institute’s state university consortium.

She has co-authored a series of technical papers on sea level rise projections, impacts and preparedness. Her activism and initiative on climate change earned her an invitation from First Lady Michelle Obama to be her special guest at the 2015 State of the Union address.

Nicole speaks across the country on climate change issues. Most recently, she presented at the 2018 National Hispanic Medical Association Conference and the MIT Cambridge Science Festival. She has done extensive media work and has been featured in National Geographic’s The Years of Living Dangerously,  Amy Poehler’s Smart Girls, The New Yorker, MSNBC, the Miami Herald, Telemundo News, Univision.com, The Huffington Post, PRI Science Friday, The New York Times, The Washington Post, Grist, NPR and other major news sources.


Why Republican Farm Bill Negotiators Should Think Twice About Attacks on SNAP

UCS Blog - The Equation (text only) -

Photo: US Department of Agriculture

This September, after Congress returns from its August recess, we can expect to see the first public meeting of the farm bill conference committee.

The committee—currently composed of a healthy 47 appointees (or “conferees”) from the House and nine from the Senate—will have the difficult task of reconciling two vastly different versions of the bill. The House bill received sharp criticism for its proposed changes to the Supplemental Nutrition Assistance Program (SNAP), including extreme and unjustified work requirements that would reduce or eliminate benefits for millions of people. The Senate, by contrast, passed a bipartisan bill that left the structure of SNAP largely intact and made additional investments in healthy and sustainable food systems.

Based on what we’ve seen so far, it wouldn’t surprise us if House Republican conferees continue to push for changes that will make it harder for people to access SNAP. But based on the data, this strategy seems pretty misguided.

We looked at household SNAP participation among the counties represented by the 28 House Republican conferees and found that restricting SNAP would not only harm many of their constituents—it would harm them disproportionately compared to counties represented by House Democrats.*

Will conferees push SNAP changes at the expense of their own voters?

Evidence shows that SNAP is one of the most effective public assistance programs we have. In 2016, it lifted 3.6 million people out of poverty and provided many more with temporary assistance between jobs or in crisis. And as we’ve shown, it benefits people of every zip code and political persuasion across the country, helping families put food on the table and set aside money for other critical expenses. Yet SNAP has become an intensely partisan issue, and its work requirements are now the most polarizing piece of the farm bill debate.

But dogma and data don’t always converge—and this could prove particularly troublesome for the House Republican conferees.

We looked at the average household SNAP participation among counties represented by the 28 House Republicans and the 19 Democrats appointed to the farm bill conference committee. Here's what we found (a rough sketch of the computation follows the list):

  • On average, households in counties represented by Republican conferees are more likely to participate in SNAP than those in counties represented by Democratic conferees. The average household participation across the nearly 600 Republican counties is 13.9 percent, compared to an average of 12.3 percent across 135 Democratic counties.
  • Nationwide, about 14.3 percent of households in a given county participate in SNAP. Nearly half of all counties represented by Republican conferees exceed this average (288 out of 597)—compared to just a quarter of counties represented by Democratic conferees (34 out of 135).
  • For some Republican conferees, a vast majority of the counties they represent have above-average household SNAP participation:
    • All but one of the 13 counties Rep. Mike Rogers (R-AL-3) represents have above-average SNAP participation. In Macon County, nearly a third of households participate in SNAP.
    • Likewise, 23 of the 24 counties represented by Rep. Austin Scott (R-GA-8) have above-average SNAP participation. In Atkinson County, Ben Hill County, and Turner County, more than a quarter of households participate in SNAP.
    • A vast majority of counties (147 out of 174 in total) represented by eight other Republican conferees exceed the national average for household SNAP participation. Those counties are represented by Rep. Rick Crawford (R-AR-1), Rep. Bruce Westerman (R-AR-4), Rep. Neal Dunn (R-FL-2), Rep. Rick Allen (R-GA-12), Rep. James Comer (R-KY-1), Rep. Ralph Abraham (R-LA-5), Rep. David Rouzer (R-NC-7), and Rep. Mark Walker (R-NC-6).

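For readers who want to reproduce this kind of comparison, here is a minimal sketch of the calculation in Python. The file name (county_snap.csv) and column names (party, snap_pct) are hypothetical stand-ins for the ACS 2011–2015 five-year estimates described in the footnote below, not the actual files behind our analysis:

```python
import csv
from collections import defaultdict

NATIONAL_AVG = 14.3  # percent of households per county, per the post

# Assumed CSV layout: one row per county, with the party of the conferee
# representing it and the county's household SNAP participation rate.
rates = defaultdict(list)
with open("county_snap.csv", newline="") as f:
    for row in csv.DictReader(f):
        rates[row["party"]].append(float(row["snap_pct"]))

for party, values in rates.items():
    average = sum(values) / len(values)
    above = sum(1 for v in values if v > NATIONAL_AVG)
    print(f"{party}: {len(values)} counties, "
          f"average participation {average:.1f}%, "
          f"{above} above the {NATIONAL_AVG}% national average")
```
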
The data raise the question: are House Republicans unaware of the extent to which SNAP helps people in the counties they represent, or are they just indifferent?

A is for August (and Action)

As we mentioned, it’s August recess (sort of—Principal McConnell cut summer break short in the Senate), which means your Senators and Representatives are probably spending some time at home. If one of the farm bill conferees (see the full list of House Republicans and Democrats) represents your district, pay them a visit.

If your Representative isn’t on the committee, there’s still plenty you can do to be vocal about your priorities:

  • Sign onto our national action alert urging farm bill conferees to adopt the Senate version of the bill. (If you’re a public health expert, we’ve got something special for you.)
  • Tweet, email, or snail mail a “thank you” to your Representative if they voted no on the House bill—or a “no thank you” if they voted yes.
  • Take a look at the Senate list, too. Though they managed to pass a bipartisan bill the first time around, they’ll need our support more than ever if they hope to engage in successful negotiations with the House.

*Includes full and partial counties. Does not include Virgin Islands. 5-year estimates of county-level SNAP participation provided by the 2011-2015 American Community Survey.

Trump Fuel Efficiency Rollback Is an Attack on Science and the Public Interest

UCS Blog - The Equation (text only) -

Today, the Environmental Protection Agency and Department of Transportation released their long-awaited revisions to federal fuel economy and greenhouse gas standards. To no one’s surprise, their preferred alternative is to essentially eliminate the standards—a predetermined outcome that the administration is now trying to defend with bogus analysis.  The current standards were created in collaboration with California and the entire automotive industry and have directly made new cars and trucks cleaner and cheaper to drive. EPA and California Air Resources Board scientists spent years studying the standards, as was required, and concluded last year they are technologically feasible and cost-effective.

Millions of vehicle owners, transportation experts, public health officials and consumer advocates are rightfully outraged.

Thanks to the Clean Air Act, California has a waiver from the EPA to maintain tougher state emission standards despite a national rollback. However, with the current proposal the administration intends to make real its threat to revoke California's authority to set its own standards. California Attorney General Xavier Becerra says California will take any step necessary to protect our planet and people; recently, Representative DeSaulnier (CA) introduced a resolution aimed at protecting state authority, and Senator Harris (CA) is expected to do the same in the Senate. While threatening to revoke the waiver has led to much consternation, actually revoking it would surely lead to years of litigation and regulatory chaos.

A battle between the Trump administration and California sounds like it's made for Hollywood, but it's also the story the administration is using to distract us. Why? Because it's easier to paint California as a rogue state of pushy progressives than to defend a policy decision that ignores scientific evidence and relies wholly on industry talking points. The rollback is not just an attack on our state, but on the 12 other states that choose to follow California's more protective standards and on all the other states that have the right to follow California's standards if they wish, as Colorado is now moving to do. It's also more than a fight over authority; it's an attack on our values and part of a larger strategy by this administration to push science and the public interest aside.

To justify this policy, the agencies are twisting themselves in knots and ignoring their own analyses, which show that safe, cost-effective technologies exist to continue to improve efficiency, cut emissions, and save consumers money at the pump. They are dragging up tired old arguments that efficiency standards make vehicles less safe, contrary to actual evidence. And the cherry on top: they point out that the U.S. is pumping more oil than ever. So the days of needing to conserve energy have passed? Using more oil is not going to make our country stronger or safer, nor is it going to be good for consumers.

Americans like clean cars

Multiple polls show an overwhelming majority of Americans favor clean car standards because no matter what size car or truck they buy, drivers want more efficient, cleaner vehicles. The standards have delivered cleaner cars of every size and class to consumers every year. Additionally, vehicle standards benefit lower income individuals who tend to purchase used cars and for whom gasoline costs are a much larger share of their income.

In the U.S., transportation accounts for about 27 percent of the greenhouse gas emissions that cause climate change – in California it’s nearly 40 percent. The vehicle standards directly curb these carbon emissions. The standards are the most effective climate policy the United States has on the books today and an example of how scientists and industry can work together to create good public policy that protects everyone.

If the standards are rolled back as proposed, the U.S. will pump out an extra 2.2 billion metric tons of global warming emissions and consume 200 billion more gallons of fuel by 2040. If this happens, it will be impossible to meet our obligations under the Paris climate agreement, and the world's chances of holding global warming to two degrees Celsius will be significantly damaged. The automakers want a compromise between leaving them alone and a total rollback. But a compromise would mean we significantly veer off the path the country and the planet need to be on to avoid the worst impacts of climate change during our lifetimes.
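
As a quick back-of-the-envelope check (our arithmetic, not a figure from the agencies), those two numbers imply an emissions intensity of about 11 kilograms of CO2-equivalent per gallon, which lines up with the wells-to-wheels footprint of gasoline:

```python
# Implied emissions intensity of the two headline figures above.
extra_emissions_tonnes = 2.2e9  # metric tons of global warming emissions by 2040
extra_fuel_gallons = 200e9      # additional gallons of fuel by 2040

kg_per_gallon = extra_emissions_tonnes * 1_000 / extra_fuel_gallons
print(f"{kg_per_gallon:.1f} kg CO2e per gallon")  # -> 11.0

# For reference: burning one gallon of gasoline emits roughly 8.9 kg of CO2
# at the tailpipe; counting upstream extraction, refining, and distribution
# brings the total to roughly 11 kg CO2e per gallon, so the two headline
# numbers are mutually consistent on a wells-to-wheels basis.
```
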

These rollbacks hurt progress

With the undeniable signs of climate change increasing each season, forcing consumers to use more fossil fuel, even when fuel efficiency technology is available and cost-effective, is at best short-sighted and at worst cynical and destructive.

New cars and trucks aren’t cleaner and more efficient by accident or because of automakers’ goodwill. They are more efficient because forward-looking and scientifically sound public policies require them to be. California and the twelve other states with clean car standards cover more than one third of the new car market. These states have a critical role to play in defending cleaner cars. We must take every legal and legislative step necessary to make sure the Trump administration does not take us backward. But don’t fall for the headlines or the simplistic rhetoric from Washington DC. It’s not just California under attack – it’s science and the public interest that they are targeting.


8 Ridiculous Things in the Trump Rollback of Clean Car Standards (And 1 Thing They Get Right)

UCS Blog - The Equation (text only) -

President Trump has followed through on his promise to roll back Obama-era fuel economy and emissions standards for passenger cars and trucks, proposing to freeze standards at 2020 levels.  Given the tremendous benefits of these rules to date and the promising future for 2025 and beyond, you can imagine that justifying this rollback requires contortions that would qualify the administration for Cirque du Soleil…and you would be right.  Here are just a few of the ridiculous assertions found in the proposal to justify rolling back such a successful policy:

Absurdity #1: Consumers will benefit from the rollback

Consumers, of course, stand to be the biggest losers from this rollback.  To date, these rules have saved consumers over $64 billion in fuel costs. Every class of vehicle is seeing record fuel economy levels, with the most popular vehicle classes showing the greatest improvement since the rules went into effect.  This rollback threatens to put all of that in jeopardy, limiting consumer choice.

Absurdity #2: More efficient vehicles will be less safe

Last year, a study (one the administration even cites!) showed that fuel economy standards have reduced fatalities since their inception by narrowing the average weight disparity between vehicles in a crash. That finding refutes the oft-trotted-out nonsense from groups like the Heritage Foundation and the Competitive Enterprise Institute, which hysterically claim that making more efficient cars kills people in an effort to eliminate the rules.  Rather than sticking with the science, the administration is borrowing this ideological argument to market its rollback agenda as safety policy.

Lightweight materials were first deployed for safety reasons, and manufacturers have been using high-strength steel and aluminum and other lightweight materials to significantly reduce the weight of the biggest vehicles on the road, like the F-150.  This is good for society, reducing the lethality of the largest and least efficient vehicles.  The National Highway Traffic Safety Administration’s latest data confirms this, of course—but the agencies instead fudge the economics of their model to spit out the answer that the boss in the White House wants.

Absurdity #3: The fleet will get older and travel more without the rollback (and therefore be less safe)

The people who wrote this proposal somehow came up with ridiculously high fatality numbers, which they use to justify rolling back these incredibly popular and consumer-friendly standards.  About 99% of the increase in fatalities has absolutely nothing to do with the safety of new vehicles; it comes instead from an economic model that claims older, less safe vehicles will stay on the road longer and that there will be a massive increase in total miles traveled if the standards stay in place (more miles = more crashes = more fatalities).

There is no consistency to this logic—they claim that these newer and more efficient vehicles will be so great that everyone will travel more, but not so great that people will want to buy them.  Never mind, of course, that manufacturers are on pace for 17 million in annual sales for the fourth consecutive year, extending an industry record; that the primary source of vehicle travel is commuting, which is fixed; and that there is little evidence of as large an increase in “rebound,” or additional travel, as the agencies claim.

Absurdity #4: “Energy dominance” means we don’t have to worry about conserving energy

Ignoring the absurdity of “energy dominance” itself, the notion that increasing domestic oil production means we don’t care about energy conservation doesn’t just defy the Energy Policy and Conservation Act requirements of the fuel economy program—it defies basic economics.

Oil is one of the most fungible commodities in the world—that means prices are set on a global market by the basics of supply and demand.  As such, the best way to insulate yourself from global uncertainty (and I think we can all agree there is plenty of that) is simply to decrease demand, which puts downward pressure on market prices and helps buffer against volatility.  The decoupling of economic growth from oil demand is not just good for the environment—it’s good for consumers and national security.

Absurdity #5: Zero emission vehicle standards are inherently fuel economy standards

Perplexing to anyone who understands California’s air quality challenge and the history of the Zero Emission Vehicle (ZEV) program is the Trump administration’s assertion that the ZEV program is connected to fuel economy.  In fact, the ZEV program predates California Assembly Bill 1493, which is what pushed the state to adopt global warming emissions standards for vehicles (it should of course be noted here that those, too, are under attack, despite also not being fuel economy standards).

California’s ZEV program is designed to improve air quality in the state and is a critical part of the state’s plan to meet federal air quality requirements.  Other states have adopted the standard because they see ZEVs as a critical part of their sustainable transportation future, including for air quality reasons.  The administration suggesting that a regulation aimed squarely at eliminating tailpipe pollution is pre-empted by fuel economy standards is not just legally dangerous—it’s bad for anyone who breathes air in the states adopting those standards.

Absurdity #6: Manufacturers will improve fuel economy without regulations

The real impacts of the rollback would look too bad on paper, so the administration cooked the books by claiming that manufacturers will overcomply with the 2020 standards by nearly 3 miles per gallon, out of the goodness of their hearts and because they know people will buy fuel-efficient vehicles (no, the administration does not seem to sense the irony here in claiming that people will buy these fuel-efficient vehicles but not even more efficient vehicles).

Historically, fuel economy has only improved when standards have been tightened. (Values shown are lab test values—the “sticker” value is about 20 percent lower today.)

This of course smacks of ignorance—fleet-wide fuel economy did not increase absent regulation over the past 40 years; in fact, in the 1990s fuel economy actually went down as a result of flatlined standards (see figure).  To pretend that fuel economy improvements will magically happen without regulation defies the historical record.

Absurdity #7: Manufacturers will put on more technology than necessary to meet the standards

A corollary to the rollback modeling’s magical free fuel economy improvements is the ridiculous amount of technology applied to meet the standards, which drives up the estimated costs of the standards. The administration has crafted a modeling approach so insane that the output shows manufacturers putting on more technology than is needed to meet the standards—so much, in fact, that the manufacturers will overcomply and earn credits that will expire before they can be used!

This is completely at odds with how manufacturers actually plan to comply with the regulations.  Generally, manufacturers try to target an individual vehicle’s performance to the average standard over the car’s product lifecycle.  Since cars generally don’t change much over a 5-year span, a vehicle will tend to perform better than the standard initially, generating credits, which can then be used to compensate for the vehicle’s underperformance relative to the standard in the latter years.

In the agencies’ modeling, manufacturers improve their vehicles so quickly and to such a strong degree that they end up banking credits that never get used!  That is beyond economically inefficient—it’s just dumb.  And it’s yet another cynical ploy by the administration to inflate the estimate of the cost of the current regulations.

Absurdity #8: Everyone’s going to need to drive “turbo hybrids” in 2025 if the standards aren’t rolled back

The end result of these ridiculous technology assumptions is a modeled vehicle fleet that no manufacturer would possibly design.  There are many issues with the technical assumptions in the rule, but perhaps my favorite is a concept the agencies have introduced called a “turbo hybrid.”  Their modeling claims that initially, manufacturers will adopt turbocharged engines (which we’re seeing—about ¼ of vehicles on the road incorporate a smaller, boosted engine), and then eventually they will be forced to become hybrids like the Prius…but they will maintain that turbocharged engine in the hybrid.  This is not how manufacturers would design a car.

This idea is so ludicrous that the only examples I could find of anything close were the incredibly complex engines found in Formula 1 race cars.  The reason an auto manufacturer would never make this choice is that the electric motor on a hybrid car already provides power supplemental to the engine—you don’t need to turbo boost it as well.  There are plenty of examples of car companies taking advantage of the extra power provided by a hybrid and pairing it with a smaller engine: the Toyota Prius uses the much more efficient Atkinson cycle, and the new Honda Insight gets away with just a 1.5L engine.  There’s no reason a manufacturer is going to add cost and complexity when it doesn’t have to.  But by pretending that this is how manufacturers would comply, the agencies have been able to artificially inflate the costs of the current standards.

The one thing they get right: These standards are going to cost jobs

Surprising to me was that the agencies acknowledge in their analysis that this rollback is bad for the automotive sector.  According to their analysis, the industry stands to lose $200-$250 billion in revenue, cut investments in technology by $40 billion, and cut jobs in the automotive sector by 60,000 in 2030.  We pointed out how bad this rollback will be for the economy as a whole, as consumers are forced to spend more money on oil and less money on sectors with greater job growth potential, which will cut overall job growth by 125,000 in 2035 and nix $8 billion from national GDP—but this is confirmation that this rollback is terrible for the industry that asked for it.

The industry asked for this rollback—now it’s up to them to stop it

The industry opened up Pandora’s box by requesting the administration take another look at standards that are working for the American people.  Now we are getting a clearer picture of what that action means for consumers at the pump, the economy, the environment, and the auto industry—in short, it’s terrible.

It is up to the auto industry to try to fix what they broke.  I have little faith that this administration is interested in the facts—industry voices, on the other hand, may carry a lot more weight.

So, auto companies—are you willing to go to bat for the American people?  Or are you going to sit on the sidelines and watch this disaster of your own making unfold?

Photo: Ryan Searle/Unsplash

Containment Design Flaw at DC Cook Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

Role of Regulation in Nuclear Plant Safety #6

Both reactors at the DC Cook nuclear plant in Michigan shut down in September 1997 until a containment design flaw identified by a Nuclear Regulatory Commission (NRC) inspection team could be fixed. An entirely different safety problem, reported to the NRC in August 1995 at an entirely different nuclear plant, began toppling dominoes until many safety problems at both plants, as well as at many other plants, were found and fixed.

First Stone Cast onto the Waters

On August 21, 1995, George Galatis, then an engineer working for Northeast Utilities (NU), and We The People, a non-profit organization founded by Stephen B. Comley Sr. in Rowley, Massachusetts, petitioned the NRC to take enforcement actions because irradiated fuel was being handled contrary to regulatory requirements during refueling outages on the Unit 1 reactor at the Millstone Power Station in Waterford, Connecticut.

Ripples Across Connecticut

The NRC’s investigations, aided by a concurrent inquiry by the NRC’s Office of the Inspector General, substantiated the allegations and also revealed the potential for similar problems to exist at Millstone Units 2 and 3 and at Haddam Neck, the other nuclear reactors operated by NU in Connecticut. The NRC issued Information Notice No. 96-17 to nuclear plant owners in March 1996 about the problems they found at Millstone and Haddam Neck. The owner permanently shut down the Millstone Unit 1 and Haddam Neck reactors rather than pay for the many safety fixes that were needed, but restarted Millstone Unit 2 and Unit 3 following the year-plus outages it took for their safety margins to be restored.

Ripples Across the Country

The NRC sent letters to plant owners in October 1996 requiring them to respond, under oath, about measures in place and planned to ensure that: (1) applicable boundaries are well-defined and available, and (2) reactors operate within those legal boundaries. In other words, owners had to prove to the NRC that their reactors were not in the same condition as the NU reactors.

The NRC backed up its letter-writing safety campaign by forming three NRC-led teams of engineers contracted from architect-engineer (AE) firms (e.g., Bechtel, Stone & Webster, Burns & Roe) to visit plants and evaluate safety systems against applicable regulatory requirements. The NRC’s Frank Gillespie managed the AE team inspection effort. The NRC issued Information Notice No. 98-22 in June 1998 about the results from the 16 AE inspections conducted to that time. Numerous safety problems were identified and summarized by the NRC, including the ones that caused both reactors at the DC Cook nuclear plant to be shut down in September 1997.

Ripplin’ in Michigan

The AE inspection team sent to the DC Cook nuclear plant in Michigan was led by NRC’s John Thompson and backed by five consultants from the Stone & Webster Engineering Corporation.

Sidebar: UCS typically does not identify NRC individuals by name as we have here for Gillespie and Thompson. But both received unfair criticism from an NRC senior manager for performing their jobs well. Gillespie, for example, told me that the manager yelled at him, “We didn’t send teams out there to find safety problems!” NRC workers doing their jobs well deserve praise, not reprisals. Thanks, Frank and John, for jobs very well done. The senior manager will go unnamed and unthanked for a job not done so well.

DC Cook had two Westinghouse four-loop pressurized water reactors (PWRs) with ice condenser containments. Unit 1 went into commercial operation in August 1975 and Unit 2 followed in July 1978. The NRC team identified a design flaw that could have caused a reactor core meltdown under certain loss of coolant accident (LOCA) conditions.

A LOCA occurs when a pipe connected to the PWR vessel (reddish capsule in the lower center of Figure 1) breaks. The water inside a PWR vessel is at such high pressure that it does not boil even when heated to over 500°F. When a pipe breaks, high pressure water jets out of the broken ends into containment. The lower pressure inside containment causes the water to flash to steam.

Fig. 1 (Source: American Electric Power July 12, 1997, presentation to the NRC)

In ice condenser containments like those at DC Cook, the steam discharged into containment forces open doors at the bottom of the ice condenser vaults. As shown by the red arrow on the left side of Figure 1, the steam flows upward through baskets filled with ice. Most, if not all, of the steam is cooled down and turned back into water. The condensed steam and melted ice drops down to the lower sections of containment. Any uncondensed steam vapor along with any air pulled along by the steam flows out from the top of the ice condenser into the upper portion of containment.

Emergency pumps and large water storage tanks not shown in Figure 1 initially replace the cooling water lost via the broken pipe. The emergency pumps transfer water from the storage tanks to the reactor vessel, where some of it pours out of the broken pipe into containment.

The size of the broken pipe determines how fast cooling water escapes into containment. A pipe less than about 2 inches in diameter causes what is called a small-break LOCA. A medium-break LOCA results from a pipe up to about 4 inches in diameter, while a large-break LOCA occurs when larger pipes rupture.
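
Expressed as a simple classification (using the approximate thresholds above, not a formal NRC definition):

```python
def classify_loca(break_diameter_inches: float) -> str:
    """Classify a loss-of-coolant accident by broken-pipe diameter,
    per the approximate thresholds described in this post."""
    if break_diameter_inches < 2.0:
        return "small-break LOCA"
    elif break_diameter_inches <= 4.0:
        return "medium-break LOCA"
    return "large-break LOCA"

for diameter in (1.5, 3.0, 10.0):
    print(f"{diameter}-inch break: {classify_loca(diameter)}")
```
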

Before the storage tanks empty, the emergency pumps are re-aligned to take water from the active sump area within containment. The condensed steam and melted ice collects in the active sump. The emergency pumps pull water from the active sump and supply it to the reactor vessel where it cools the reactor core. Water spilling from the broken pipe ends finds its way back to the active sump for recycling.

The NRC’s AE inspection team identified a problem in the containment’s design for small-break LOCAs. The condensed steam and melted ice flows into the pipe annulus (the region shown in Figure 2 between the outer containment wall and the crane wall inside containment) and into the reactor cavity. The water level in the pipe annulus had to rise to nearly 21 feet above the floor before water could flow through a hole drilled in the crane wall into the active sump. The water level in the reactor cavity had to rise even farther above its floor before water could flow through a hole drilled in the pedestal wall into the active sump.

Fig. 2 (Source: American Electric Power July 12, 1997, presentation to the NRC)

For medium-break and large-break LOCAs, the large amount of steam discharged into containment flooded both of these volumes and then the active sump long before the storage tanks emptied and the emergency pumps swapped over to draw water from the active sump. Thus, there was a seamless supply of makeup cooling water to the vessel to prevent overheating damage.

But for small-break LOCAs, the storage tanks might empty before enough water filled the active sump. In that case, the flow of makeup cooling water could be interrupted and the reactor core might overheat and meltdown.
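
A toy water balance, with entirely hypothetical volumes, illustrates the flaw and why the fix described in the next section works:

```python
def sump_ready(tank_gal, ice_melt_gal, dead_volume_gal, needed_gal):
    """True if enough water reaches the active sump by the time the
    storage tanks empty and the emergency pumps swap over.

    In this toy model, water entering containment (spilled coolant plus
    melted ice) must first fill the "dead" volumes -- the pipe annulus
    and reactor cavity -- before spilling through the wall holes into
    the active sump. All volumes below are illustrative, not plant data.
    """
    water_in_containment = tank_gal + ice_melt_gal
    return water_in_containment - dead_volume_gal >= needed_gal

# Large break: lots of steam melts lots of ice, flooding the sump in time.
print(sump_ready(350_000, 400_000, 500_000, 100_000))  # True
# Small break: little ice melt, so the dead volumes soak up the inventory.
print(sump_ready(350_000, 50_000, 500_000, 100_000))   # False
# The fix (holes drilled low in the crane and pedestal walls) effectively
# shrinks the dead volume that must fill first:
print(sump_ready(350_000, 50_000, 50_000, 100_000))    # True
```
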

Calmed Waters in Michigan

The owner fixed the problem by drilling holes through lower sections of the crane and pedestal walls. These holes allowed water to fill the active sump in plenty of time for use by the emergency pumps for all LOCA scenarios. Once this and other safety problems were remedied (and a $500,000 fine paid), both reactors at DC Cook restarted.

UCS Perspective

The event in this case is the August 1995 notification to the NRC that the Millstone Unit 1 reactor was being operated outside its safety boundaries, together with the regulatory ripples from that notification that led to the identification and correction of containment flaws at DC Cook. For that event sequence, the NRC response reflected just-right regulation.

The NRC asked and answered whether the August 1995 allegations were valid—finding that they were.

Once the initial allegation was substantiated, the NRC asked and answered whether that kind of problem also affected other reactors operated by the same owner—finding that it did.

Once the extent-of-condition determined that multiple reactors operated by the same owner were affected, the NRC asked and answered whether similar kinds of problems could also affect other reactors operated by other owners—finding that they did.

In seeking the answer to that broader extent-of-condition question, the NRC AE inspection team identified a subtle design flaw that had escaped detection for two decades. And slightly over two years elapsed between the NRC’s initial notification and both reactors at DC Cook being shut down to fix the design flaw. While neither a blink of an eye nor a glacial pace, that’s a pretty reasonable timeline given the number of steps needed and taken between these endpoints.

Had the NRC put the blinders on after receiving the allegations about Millstone Unit 1 and not considered whether similar problems compromised safety at other reactors, this event would have fallen into the under-regulation bin.

Had the NRC jumped to the conclusion after receiving the allegations about Millstone Unit 1 that all other reactors were likely afflicted with comparable, or worse, safety problems and ordered all shut down until proven affliction-free, this event would have fallen into the over-regulation bin.

By putting the Millstone Unit 1 allegations in proper context in a timely manner, the NRC demonstrated just-right regulation.

* * *

UCS’s Role of Regulation in Nuclear Plant Safety series of blog posts is intended to help readers understand when regulation played too little a role, too much of an undue role, and just the right role in nuclear plant safety.

At Long Last, President Trump is Expected to Appoint a Science Adviser

UCS Blog - The Equation (text only) -

Multiple outlets (Nature, Science, the Washington Post) are reporting that President Trump is set to appoint meteorologist Kelvin Droegemeier to lead the Office of Science and Technology Policy (OSTP). He is an experienced scientist with an impressive record of public service. When the appointment happens, the Senate should move quickly to vet and consider his nomination so that the vacuum of science advice within the White House can begin to be filled.

Importantly, the OSTP director has typically also served as the science advisor to the president, reporting directly to the president (except during the George W. Bush administration, when the science advisor was demoted to report to the White House chief of staff). If you want to go deep on presidential science advice, here’s one book for you.

Presumed science adviser nominee Kelvin Droegemeier could be a moderating force within the White House. Photo: Oklahoma State University

Direct access to the president matters. Just think of all of the issues the president deals with that have a science and technological component: pandemics, disaster response, economic competitiveness, health care, drug abuse, energy, food systems, resource extraction and more. Imagine how much better prepared the president could have been in talks with North Korea with a trusted advisor on the nation’s nuclear capacity.

Dr. Droegemeier is an extreme weather expert, a knowledge base that is becoming more and more important with climate change loading the dice as extreme weather becomes more prevalent, costly, and deadly. Science advisors can be moderating forces by providing road maps and showing what is possible, and working behind the scenes to stop dangerous proposals from moving forward.

Hopefully, Dr. Droegemeier would help the president and his advisors make decisions that are more scientifically justifiable and reflective of scientific evidence. He would also serve the country well by supporting efforts that protect federal scientists from political interference in their work.

Some will doubt that the president will have any inclination to listen to science advice and incorporate it into his erratic behavior. But not all policy comes from the mouth of the president, and at this point any mainstream scientific presence in the White House should be considered a step forward.
