Combined UCS Blogs

Truck and Bus Legislation to Watch in California

UCS Blog - The Equation (text only)

Today’s the last day of the California legislative session. It gets hectic in Sacramento this time of year, but here are two bills I’m paying attention to that could help reduce air pollution and global warming emissions from heavy-duty vehicles.

As a reminder, heavy-duty vehicles make up just 7 percent of vehicles in California but contribute disproportionately to global warming emissions and air pollution, accounting for 20 percent of the transportation sector’s global warming emissions, for example. And as we work to improve public health, we must also remember that communities of color are disproportionately exposed to pollution through proximity to roadways, ports, warehouses, and other sources of emissions.

Cleaning up state-owned trucks and buses

That’s what Assembly Bill 739 by Assemblymember Ed Chau would do. This bill sets a target for zero-emission trucks and buses purchased by the state: 15 percent of purchases made in 2026-2030 and 30 percent of purchases made in 2031 and later. This is an achievable target with eight years’ worth of technology development and agency planning to enable its implementation.

The target would apply to vehicles with gross vehicle weight ratings (the maximum weight at which a fully loaded vehicle is rated to operate) above 19,000 lbs. For a sense of scale, think transit buses, large U-Haul-type trucks, garbage trucks, etc. The bill only applies to state-owned vehicles, which includes everything from buses at the Cal State universities to work trucks operated by the Department of Parks and Recreation and Caltrans. The purchase goals do not apply to vehicles with special performance requirements necessary for public safety, such as fire trucks operated by the Office of Emergency Services.

This bill walks the talk. There’s been a lot of planning and workshops on how to get zero-emission trucks and buses on the road in California, from the Sustainable Freight Action Plan to standards for trucks, buses, and airport shuttles. This bill holds the state fleet to a similar standard.

It is important to note that the 15 percent and 30 percent targets in this bill apply only to purchases, not the overall composition of the state’s fleet. Suppose a given type of vehicle typically lasts 14 years. This means roughly 7 percent of those vehicles are turned over each year. A 15 percent purchase target in this case corresponds to 1 percent of the total fleet (15 percent of 7 percent).
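
To make the turnover arithmetic concrete, here is a minimal sketch in Python. It assumes steady-state turnover (each year, one-fourteenth of a 14-year fleet is replaced); the 14-year lifespan is the illustrative figure from above, not a statutory number.

```python
# How a zero-emission purchase target translates into annual fleet share,
# assuming steady-state turnover (1/lifespan of the fleet replaced per year).

def zev_fleet_share_per_year(purchase_target, lifespan_years):
    annual_turnover = 1 / lifespan_years      # e.g., 1/14 ≈ 7% of the fleet
    return purchase_target * annual_turnover

share = zev_fleet_share_per_year(0.15, 14)    # AB 739's 15% purchase target
print(f"≈ {share:.1%} of the total fleet converted per year")   # ≈ 1.1%
```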

There are many zero-emission heavy-duty vehicles already commercially available today and more on the way. Cummins recently unveiled an electric truck and Tesla will reveal its electric truck with a 200-300 mile range at the end of next month. Many other major companies have also signaled their interest in zero-emission trucks, including Daimler, Peterbilt, and Toyota.

Large scale funding for clean vehicles

That’s what recent amendments to Assembly Bill 134 (the budget bill) would do. The legislation proposes $895 million in funding for clean vehicles using revenue from the state’s cap and trade program. If that sounds like a lot of money, it is, compared to previous years ($680 million over the last four years combined). But it’s not, compared to the level of action needed for the state to meet its air quality and climate goals.

Oversubscribed incentive funding programs that offset the upfront purchase cost of electric trucks, buses, and cars for businesses and consumers receive much-needed funding in this bill, including $180 million for the Hybrid and Zero-Emission Truck and Bus Voucher Incentive Project (HVIP). This program provides rebates for medium- and heavy-duty vehicles, with zero-emission trucks and buses receiving larger incentives than combustion technologies. The $35 million in HVIP designated for zero-emission transit buses alone could allow half of the roughly 700 buses purchased in California over the next year to be battery electric vehicles.
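
A quick back-of-envelope check on that transit bus claim (the per-bus figure below is the implied average incentive, not an official HVIP voucher amount):

```python
# Implied per-bus incentive if $35 million covers half of ~700 annual purchases.
hvip_transit_funds = 35_000_000     # dollars designated for zero-emission buses
annual_bus_purchases = 700          # rough statewide purchases over the next year
buses_funded = annual_bus_purchases * 0.5              # "half of the roughly 700"
implied_incentive = hvip_transit_funds / buses_funded  # dollars per bus
print(f"{buses_funded:.0f} buses at ≈ ${implied_incentive:,.0f} each")  # $100,000
```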

The budget bill also includes $140 million for the Clean Vehicle Rebate Program (CVRP), which provides consumers with rebates for plug-in hybrid electric, battery electric, and fuel cell electric passenger cars. This program has helped put over 200,000 clean cars on the road in California since 2010. There’s a lot more in the budget bill for clean vehicles ($575 million), but the CVRP and HVIP programs are ones UCS has been especially involved with.

These two bills are very different in scale: AB 739 applies to a fraction of state-owned vehicles, while the budget bill provides incentives to businesses and consumers for vehicles across the light-, medium-, and heavy-duty classes. But to reach the end goal of clean air for all Californians and dramatically reduced climate emissions, we need actions that span all scales.

A Caltrans diesel dump truck. Photos: Jeff Turner/CC BY 2.0 (Flickr); California Department of Transportation.

North Korea’s Sept. 15 Missile Launch over Japan

UCS Blog - All Things Nuclear (text only)

North Korea conducted another missile test at 6:30 am on September 15, Korean time (early evening on September 14 in the US). Like the August 28 test, this test appears to have been a Hwasong-12 missile launched from a site near the Pyongyang airport. The missile followed a standard trajectory—rather than the highly lofted trajectories North Korea used earlier this year—and it flew over part of the northern Japanese island of Hokkaido (Fig. 1).

Fig. 1. Approximate path of the launch.

The missile reportedly flew 3,700 kilometers (km) (2,300 miles) and reached a maximum altitude of 770 km (480 miles). It was at an altitude of 650 to 700 km (400 to 430 miles) when it passed over Hokkaido (Fig. 2).

Fig. 2. The parts of Hokkaido the missile flew over lie about 1,250 to 1,500 km (780-930 miles) from the missile launch point.

The range of this test was significant since North Korea demonstrated that it could reach Guam with this missile, although the payload the missile was carrying is not known. Guam lies 3,400 km from North Korea, and Pyongyang has talked about it as a target because of the presence of US forces at Andersen Air Force Base.

This missile very likely has poor enough accuracy that North Korea would find it difficult to destroy this base, even if the missile were carrying a high-yield warhead. Two significant sources of inaccuracy for an early generation missile like the Hwasong-12 are guidance and control errors early in flight during boost phase, and reentry errors due to the warhead passing through the atmosphere late in flight. I estimate the inaccuracy of the Hwasong-12 flown to this range to be likely 5 to 10 km, although possibly larger.

Even assuming the missile carried a 150 kiloton warhead, which may be the yield of North Korea’s recent nuclear test, a missile of this inaccuracy would still have well under a 10% chance of destroying the air base. (For experts: This estimate assumes the air base would have to fall within the warhead’s 5 psi air blast radius, which is 3.7 km, and that the CEP is 5 to 10 km.)
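
For readers who want to reproduce the flavor of that estimate, here is the textbook single-shot formula for a circularly distributed miss, P = 1 − 0.5^((r/CEP)²). It treats the base as a point target, so it gives an upper bound on the chance of destroying an extended target like an air base; the estimate in the post presumably folds in additional factors.

```python
# Probability that the warhead lands within radius r of its aim point,
# for a circular error distribution with the given CEP.
# Point-target model: an upper bound for destroying an extended base.

def p_within(r_km, cep_km):
    return 1 - 0.5 ** ((r_km / cep_km) ** 2)

blast_radius = 3.7  # km, 5 psi air blast radius of a 150 kt warhead (from the post)
for cep_km in (5, 10):
    print(f"CEP {cep_km:>2} km: hit probability ≤ {p_within(blast_radius, cep_km):.0%}")
# CEP 10 km gives ≈ 9%; requiring more of the base to fall inside the blast
# radius pushes the real probability well below this point-target bound.
```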

Heating of the reentry vehicle

As I’ve done with some previous tests, I looked at how the heating experienced by the reentry vehicle (RV) on this test compares to what would be experienced by the same RV on a 10,000 km-range missile on a standard minimum-energy trajectory (MET). My previous calculations were done on North Korea’s highly lofted trajectories, which tended to give high heating rates but relatively short heating times.

Table 1 shows that in this case the duration of heating (τ) would be roughly the same in the two cases. However, not surprisingly given the difference in ranges and therefore in reentry speeds, the maximum heating rate (q) and the total heat absorbed (Q) by the RV on this trajectory are only about half those of the 10,000 km trajectory.

Table 1. A comparison of RV heating on the September 15 missile test and on a 10,000 km-range trajectory, assuming both missiles have the same RV and payload. A discussion of these quantities can be found in the earlier post.
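
A rough way to see where the “about half” comes from, using the standard minimum-energy range equation and crude scalings (peak heating rate roughly as the cube of reentry speed, total absorbed heat roughly as its square). This is a consistency sketch, not the actual calculation behind Table 1.

```python
import math

G = 9.81           # m/s^2
R_EARTH = 6.371e6  # m

def min_energy_speed(range_km):
    """Burnout (≈ reentry) speed on a minimum-energy ballistic trajectory."""
    phi = range_km * 1e3 / R_EARTH        # ground-range angle in radians
    s = math.sin(phi / 2)
    return math.sqrt(2 * G * R_EARTH * s / (1 + s))

v_test = min_energy_speed(3_700)    # ≈ 5.3 km/s
v_long = min_energy_speed(10_000)   # ≈ 7.2 km/s

print(f"q ratio ≈ {(v_test / v_long) ** 3:.2f}")   # ≈ 0.4 (peak heating rate)
print(f"Q ratio ≈ {(v_test / v_long) ** 2:.2f}")   # ≈ 0.5 (total heat absorbed)
```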

So while it seems likely that North Korea can develop a heat shield that would be sufficient for a 10,000 km range missile, this test does not demonstrate that.

Why Does the Cost of Offshore Wind Keep Dropping?

UCS Blog - The Equation (text only)

The latest costs for new offshore wind farms are mighty impressive. How come the cost of offshore wind just keeps going down?

Records were meant to be broken

The UK just held its latest auction for power from future projects based on a range of low-carbon technologies beyond the usual suspects like solar and land-based wind.*

The UK auction results were quite something: The winning bids included not one but two offshore wind projects whose developers agreed to a contract price of £57.50 per megawatt-hour (2012 prices)—around 7.7 US cents per kilowatt-hour. That’s half the cost for offshore wind projects in a round of bidding in the UK just two years ago, and within striking distance of—or lower than—the cost of almost any source of new “conventional” power.
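
(For anyone checking the currency and unit conversion, here is the arithmetic; the exchange rate is an assumption for illustration and moves over time.)

```python
# £/MWh to US cents/kWh, at an assumed exchange rate.
bid_gbp_per_mwh = 57.50      # winning contract price, 2012 prices
usd_per_gbp = 1.34           # assumed exchange rate

usd_per_mwh = bid_gbp_per_mwh * usd_per_gbp    # ≈ $77 per MWh
cents_per_kwh = usd_per_mwh / 10               # 1 $/MWh = 0.1 ¢/kWh
print(f"≈ {cents_per_kwh:.1f} US cents per kWh")   # ≈ 7.7
```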

So how does this happen? Why does the cost of offshore wind keep getting lower, and so quickly?

Bigger, stronger, faster

Those latest record breakers, the proposed Moray and Hornsea Two offshore wind projects, offer some strong clues about possible paths to lower costs:

  • Larger turbines. The two new projects might use 8-megawatt wind turbines, as did one project that just came online. That’s a big step up from the standard of just a few years ago. And larger turbines are likely on the way (and maybe even much larger ones). Larger turbines mean more power from each installation—each footing, each tower, each trip to install pieces of it, and then to maintain it.
  • Larger projects. Moray will be a really impressive 950 megawatts. Hornsea Two will be a stunning 1386 megawatts—likely the largest offshore wind project in the world when it goes online (and enough to power more than 1.4 million UK homes; see the rough arithmetic just after this list). Larger projects mean likely economies of scale on lots of pieces, making better use of the installation crews and equipment, covering more ground (or water) with given maintenance personnel, and spreading all the project/transaction costs over more megawatts.
  • Faster project timelines. Both of these new projects are supposed to come online by 2022/23, which is amazingly quick (and not just by US standards). Faster timelines mean less zero-revenue time before the blades start turning and the electrons start flowing (and the dollars/pounds start coming in).
  • Lots of offshore wind projects in place already. The latest projects will join a national mix that includes 5100 megawatts of offshore wind providing 5% of the UK’s electricity. Plenty of experience offshore means there’s a developed and growing industry in the UK and much of the necessary infrastructure for manufacturing components, moving them into place, and getting the electricity to shore.
  • Comfortable investors. With all the UK experience to date, investors know what they’re getting into. The UK government, offering these contracts, is about as solid a guarantor for the revenue stream as investors could ever hope to see. Comfortable investors = lower financing costs = lower prices for consumers.
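
As promised in the “larger projects” bullet, here is the rough arithmetic behind “more than 1.4 million UK homes.” The capacity factor and household consumption figures are illustrative assumptions, not project data:

```python
# Annual energy from Hornsea Two vs. typical UK household consumption.
capacity_mw = 1386
capacity_factor = 0.45        # assumed, plausible for a modern offshore site
home_mwh_per_year = 3.8       # assumed typical UK household electricity use

annual_mwh = capacity_mw * capacity_factor * 8760   # 8760 hours in a year
homes_powered = annual_mwh / home_mwh_per_year
print(f"≈ {homes_powered / 1e6:.1f} million homes")  # ≈ 1.4 million
```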

Lots of tailwinds for offshore wind. So what might be pushing things in the other direction—counterbalancing (partly) all those cost gains?

Two factors have to do with project sites. As near-shore sites get taken, projects end up farther from land, meaning more shipping time to get personnel and materials to the project site, longer power lines to get the electrons back to land, and higher associated costs. New sites might also be in deeper water, which means more tower costs (or even floating turbines!).

UK wind farms and instantaneous output (Source: The Crown Estate).

On the plus side, better wind speeds are also a factor in cutting offshore wind costs, and being farther out can mean even better winds.

The UK doesn’t seem to be in danger of running out of suitable sites, in any case, and technologies seem to be evolving to keep up with changing site characteristics.

Meanwhile, back in the U.S. of A.

What’s this latest offshore wind news mean for those of us on this side of the pond? The biggest takeaway, maybe, is that we can do more when we do more.

As UCS and plenty of others have argued, we really benefit by offering the US offshore wind industry a clear path not just to one or two projects, but to the robust levels of installation and clean energy that we know we need. That long-term outlook can allow them to make the kind of investments (and attract the investors) to build not just projects, but an industry.

And with each project, it becomes easier to envision the next one. Massachusetts has structured its 1600-megawatt offshore wind requirement with multiple tranches to take advantage of this effect. The first round, maybe 400 megawatts (for which bids are currently being prepared), is likely to pave the way for a cheaper second round, and a third round that’s cheaper still.

New York is offering a path to even larger scale, with its recent commitment to 2400 megawatts of offshore wind.

As the experience in the UK and elsewhere is showing, more and bigger projects, larger overall targets, and greater clarity for the industry can lead to economies of scale, more local manufacturing and stronger local infrastructure, and more comfortable investors for US markets.

And that can all add up to more cost-effective offshore wind for us all.

*Can I just say how great it is to be in a place where solar and wind are “usual suspects”? We are definitely making progress.

The Good, Bad, and Ugly Self-Driving Vehicle Policy

UCS Blog - The Equation (text only)

A Waymo self-driving car on the road in Mountain View, CA, making a left turn. CC-BY-2.0 (Wikicommons).

Automakers and their advocates have been busy in the halls of Congress and the Department of Transportation. The U.S. House of Representatives passed legislation that will make it easier for self-driving cars to hit the road, the Department of Transportation replaced an Obama-era self-driving vehicle policy with a more industry-friendly approach, and the Senate held a hearing on a bill that would also speed the deployment of self-driving vehicles, including trucks.

The Good News

The bill that passed the House and the bill being considered in the Senate include some positive provisions. For example, each establishes an expert committee that will be tasked with identifying how self-driving vehicles could affect: mobility for the disabled and elderly, labor and employment issues, cybersecurity, the protection of consumer privacy, vehicle safety, and emissions and the environment. Establishing a structure for a Department of Transportation-led committee to examine these issues is important for informing future self-driving vehicle policy that can help this technology create positive outcomes and avoid potential harms.

Both bills also draw a brighter line between federal and state authority related to vehicle safety. The way this division works for regular cars today is that the federal government regulates the vehicle and states regulate the drivers. But this distinction doesn’t quite work with self-driving vehicles, because who is the driver? The person sitting in the driver’s seat, eating pita chips and watching Netflix while the car drives itself? Or is it the vehicle itself?

To better clarify the distinction between federal and state authority, both the House and Senate bills give control over the design, construction, and performance of self-driving vehicles and self-driving technology to the federal government. States retain their right to enact laws related to how these vehicles are registered, who can use them, and how they interact with state or local roads and infrastructure. However, states would be preempted from enacting any law that can be read to be an “unreasonable” restriction on the design, construction, or performance of a self-driving vehicle.

Self-driving vehicles are set to hit the road sooner than you may think. Companies like Google, Uber, Ford, and Tesla are all rushing to get the best self-driving vehicle on the market. Image via: https://commons.wikimedia.org/wiki/File:Driving_Google_Self-Driving_Car.jpg

The last bit of good news is that the bills require automakers to submit detailed cybersecurity and safety evaluation reports to the Department of Transportation. The bills also note the need to inform consumers of the capabilities and limitations of self-driving vehicle systems, so that users better know when the system can be engaged or needs to be turned off. In fact, the National Transportation Safety Board recently found that Tesla’s autopilot lacks appropriate safeguards to prevent drivers from using it improperly.

The Bad News

It wouldn’t be federal legislation if there wasn’t something bad tucked in, and both the House and Senate self-driving vehicle bills have some potentially dangerous provisions.

Both bills allow self-driving vehicles to be granted exemptions from federal motor vehicle safety standards (FMVSS). Any vehicle, whether self-driving or not, can be granted an exemption from FMVSS, and the law currently allows up to 2,500 exemptions per manufacturer per year.

Self-driving cars will surely need FMVSS exemptions. They might not have a steering wheel, for example, so they couldn’t possibly comply with the FMVSS for steering wheels and, as a result, couldn’t be tested or sold in the U.S. The whole FMVSS playbook will likely need to be updated by the Department of Transportation to respond to self-driving vehicle technology. But before then, self-driving vehicle makers will look for exemptions to sell their product.

The problem is the number of exemptions that the House and Senate bills are offering self-driving vehicle manufacturers. Both bills would grant a single manufacturer up to 100,000 exemptions from FMVSS after a couple of years. (The Senate bill starts with 50,000 in year 1, for example.) This means that an automaker could make a self-driving vehicle and exempt it from any safety regulation that would “prevent the manufacturer from selling a motor vehicle with an overall safety level at least equal to the overall safety level of nonexempt vehicles.” Given that self-driving vehicles will likely have similar, if not better, safety ratings than regular vehicles, I can see automakers leaning heavily on this language to get the Department of Transportation to approve exemption requests.

Exempting self-driving cars from FMVSS for testing purposes makes sense, but the quantity of exemptions allowed in the House and Senate bills is excessive. Once self-driving cars are on the road, there’s no putting the self-driving genie back in the bottle. Transportation analysts, academics, the government, and the public need to better understand the safety, congestion, labor, and other impacts that self-driving vehicles will create before automakers get a free pass to each put 100,000 self-driving vehicles on the road.

Keeping the number of FMVSS exemptions closer to the current cap of 2,500 per manufacturer would introduce self-driving vehicles at a pace that allows regulators and researchers to understand how they function in actual driving conditions, not just on the test track (or test city). In addition, several groups and two former heads of the National Highway Traffic Safety Administration have expressed skepticism that the agency even has the resources to process additional FMVSS exemptions or conduct adequate oversight in this area.

The Ugly News

In 2016, the Obama-led Department of Transportation put together a thoughtful, lengthy memo that detailed where the Department was headed on self-driving vehicle regulation. Earlier this week, the Department tossed that out the window and replaced it with a streamlined set of voluntary guidelines that self-driving companies should seek to follow.

Like the Obama-era guidance, nothing in the new federal guidance is mandatory. But unlike the previous guidance, the new guidance isn’t very specific. Consumer advocates like Consumer Watchdog and Consumers Union lambasted this approach as being a handout for industry, and they have a point. The guidance “encourages” the industry to do a lot of things, like collect data on when self-driving vehicles malfunction or crash, or submit a “voluntary” safety self-assessment that isn’t subject to any sort of federal approval.

Overall, the tone and vagueness of the document, combined with the choice to just throw out, and not build upon, the previous self-driving vehicle guidance puts this move by the Department of Transportation squarely in the ugly category.

Why Did Hurricane Irma Leave so Many People in the Dark?

UCS Blog - The Equation (text only)

The National Hurricane Center issued its final advisory for Irma on Monday night, September 11, but for millions of people left in the storm’s wake, the disaster remains far from over. One stark reminder? Power outages. Everywhere.

Across the Caribbean, through the entirety of Florida, up into Georgia, and spreading into the Carolinas, Irma ripped power from the people.

Seventeen million people, at its peak.

Which means 17 million people without air conditioners in the sweltering heat and humidity, 17 million people without refrigerators keeping food and medicine safe, 17 million people without lights at home or along the roads, 17 million people without internet to stay informed, 17 million people suffering business interruptions and loss, and 17 million people without the assurance of critical infrastructure dependent on power—first responders, hospitals, drinking water, sewage—being able to keep their operations going. We’ve already seen the tragedy that can occur when these systems fail, with the loss of eight lives at a nursing home unable to cope, powerless in the oppressive Florida heat.

Following herculean round-the-clock efforts of the largest assembly of restoration workers in history, the lights are starting to flicker back on across the Southeast. But questions about these outages—how many, why, for how long, and critically, could it have gone better—abound. Here, a quick run-down of what we know, what we don’t, and what we’ll be looking to see in the days, weeks, and months to come.

How big was this power outage and how long will it last?

Current estimates place the number of people impacted by outages from Irma at more than 16 million across the southeastern US. When you add in outages across the Caribbean, where homes and infrastructure have seen even more severe damage, the count climbs higher to 17 million. It will take some time to get final official numbers, but the rough-cut already confirms a mind-bogglingly high number of people got left in the dark.

Just how high? When we compare customer outage counts from some major recent storms (customers are different from people; utilities tally each account as one “customer,” but an account can represent multiple people living in the home or working in the business behind the meter), Irma’s preliminary 8.956 million across five states, Puerto Rico, and the US Virgin Islands looks like it will probably top the list:

  • Sandy (2012): 8.66 million customers
  • Irene (2011): 6.69 million customers
  • Gustav (2008): 1.1 million customers
  • Ike (2008): 3.9 million customers
  • Katrina (2005): 2.7 million customers
  • Wilma (2005): 3.5 million customers
  • Rita (2005): 1.5 million customers

But here’s a critical point. In many ways, the duration of an outage determines the severity of its consequences. Lights out for a night? For most: an inconvenience. Lights out for several days, a week, or even longer? The triggering of a cascade of disastrous and potentially life-threatening consequences. And in a comparison of the 2005 and 2008 hurricane seasons below, we can see clearly that across storms, the initial magnitude of peak outages does not necessarily align with the subsequent duration borne by large numbers of people:

A comparison of peak outages, and outage durations, from a series of 2005 and 2008 hurricanes. Credit: DOE OE/ISER.

Right now, we know that the peak number of customers experiencing outages from Irma tops those tallied in the storms above, but we don’t yet know how long all of these outages will last. Already utilities have returned millions of people to power across the Southeast—Wednesday evening’s situation report had the total number without power at over 4.2 million, down steeply from its peak yet still high—and are predicting that many more will be restored by the end of this weekend. Still, the utilities have flagged that they expect some segment of customers will remain without power for yet another week, or a full two weeks after the storm initially blew through.

One thing to watch? Who’s left in the dark the longest. The order in which customers get returned to power can have life-threatening consequences. Tragically, lives have already been lost from these outages. As coordination between utilities and local governments grows, in addition to prioritizing critical infrastructure, it is imperative to identify those populations most in need of attention—including the elderly, those with disabilities, and low-income populations—to help ensure prioritized and equitable attention for those who are least able to cope with the aftermath of severe weather events.

What caused these widespread outages?

Severe storms can present many and varied threats to the electricity system, from high winds, trees, and flying debris taking down power lines, to storm surge and inland flooding laying siege to substations, transformers, buried power lines, and even power plants. And in a centralized grid, where electricity from large power plants gets routed along transmission and distribution lines until it finally reaches a customer at the end of the wire, outages occurring along any part of the system can ripple down the line.

We know from a previous UCS analysis of the electricity grids of southeastern Florida and the Charleston, South Carolina, Lowcountry that critical electrical infrastructure is located in areas highly susceptible to flooding from storm surge. However, Irma ended up sparing some places significant storm surge, yet still the power went out. Why?

Wind, for one. Heavy winds can snap poles, send trees crashing onto wires, loft dangerous flying debris, and otherwise rip lines from homes and businesses. But flooding almost certainly contributed in places as well, especially in locations where storm surge and rainfall were worse. And finally, in some places, utilities themselves may have caused the outages by pre-emptively cutting power to parts of the grid to better protect potentially inundated infrastructure.

Hurricane Irma restoration in Fort Lauderdale, FL, on Sept. 11, 2017. Credit: Florida Power and Light.

Depending on the causes of failure, and whether there existed many scattered problems versus several centralized disturbances, the length of repairs—and thus the time until restoration—can vary.

We will be waiting to review the utilities’ system assessments following the restoration effort to see, in particular, where the major vulnerabilities in the system were concentrated, which can help us understand what went wrong, what went right, and where more attention must be focused in the future—so stay tuned for updates here.

Utilities in Florida invested billions to storm-harden the grid. Do these outages mean it was a waste?

Following the catastrophic 2004 and 2005 hurricane seasons, Florida took steps to require its utilities to more closely consider storm preparedness. This resulted in several new requests from the state’s Public Service Commission, including a requirement for utilities to adhere to a vegetation management plan (i.e., requiring diligent, intentional tree-trimming schedules), and a requirement that utilities present an annual accounting of storm hardening efforts across their systems.

In response, Florida Power & Light (FPL), the largest utility in the state, has invested on the order of $3 billion since then, with other utilities in the state following suit. In FPL’s case, this has meant replacing thousands of wooden poles with concrete, burying dozens of main power lines, upgrading hundreds of substations with flood-monitoring equipment to pre-emptively shut off power (and thus avoid far worse outcomes than if such equipment were inundated while energized), and installing smart-grid devices throughout the system to help pull back the curtain on where outages are and how to work around them.

So how, then, do we square these $3 billion in investments with the fact that over the course of this storm, a staggering 4.45 million out of 4.9 million FPL customers were affected by outages? Were all the investments, borne on the backs of ratepayers, in vain?

Almost certainly not. For one, where FPL’s investments in grid hardening overlapped with increases in system resilience—or the development of a grid that is flexible, responds to challenges, and enables quick recoveries—these upgrades can help the utility restore power faster. That’s critical for lessening the impact of outages, especially for vulnerable populations, even if it doesn’t lessen the initial scope.

Still, there will be lots to consider after the restoration process is over, and once we have had a chance to see where outages persisted, and why. We will also then be able to study how this restoration evolved compared to previous efforts, and where attention should be focused in the future. At the same time, we already know that utilities have been insufficiently factoring climate change into their current infrastructure plans, leaving today’s investments vulnerable to tomorrow’s conditions. And that, we know, must change.

Is this the future we must accept, or are there things we know we can do better?

In addition to tragic loss of life and property, Hurricane Irma has also forced a reckoning with a new round of questions relating to storm preparedness in a warming world. On the one hand, it is impractical to perfectly protect our electricity infrastructure against all possible power outage threats, and though it’s too soon to tell the degree to which the widespread power outages following Irma could have been avoided, it is reasonable to accept that such a large storm would have caused at least some. (And it’s worth noting that Irma itself could have been far more devastating to parts of the coastal grid had the storm’s path not changed—the performance here should not be taken as evidence of the worst that can happen, as we know a future storm could lay bare other paths of exposure.)

At the same time, we know that prolonged power outages can have catastrophic consequences. In particular, the critical infrastructure upon which we all depend, and the vulnerable populations for whom lasting outages can have the most severe effects, simply cannot be left to chance. We should not, cannot, accept that lives will be lost because the power stayed out.

So where do we go from here?

We put a focus on resilience. Now this is a big conversation, and one demanding attention on many fronts, not just the electricity sector. Because yes, it’s about improving the resilience of the power grid—about which we’ll be writing more in the time to come—but it’s also about advancing complementary measures that get people out of harm’s way to begin with. It’s about climate change, and equity, and infrastructure, and planning—it’s all about the future, and how we best position ourselves to face it.

And that means looking forward, not looking back. So in the time ahead, we’ll be looking to see how the federal government, states, and utilities move forward, and do our best to make sure that all parties are preparing for the future, not the past, because tomorrow’s storms won’t look like today’s.


Tennessee Valley Authority’s Nuclear Safety Culture Déjà vu

UCS Blog - All Things Nuclear (text only)

The Nuclear Regulatory Commission (NRC) issued a Confirmatory Order to the Tennessee Valley Authority (TVA) on July 27, 2017. An NRC team inspecting the Watts Bar Nuclear Plant in fall 2016 determined that TVA failed to comply with elements of another Confirmatory Order that NRC had issued to TVA on December 22, 2009. Specifically, the 2009 Confirmatory Order required TVA to implement measures at all its nuclear plant sites (i.e., Watts Bar and Sequoyah in Tennessee and Browns Ferry in Alabama) to ensure that adverse employment actions against workers conformed to the NRC’s employee protection regulations, and to evaluate whether those actions could negatively impact the safety-conscious work environment. The NRC inspection team determined that TVA was not implementing several of the ordered measures at Watts Bar.

To be fair to TVA, the agency did indeed develop the procedures to ensure adverse employee actions did not violate NRC’s employee protection regulations.

To be fair to NRC, its inspectors found that TVA senior management simply did not use those procedures when taking adverse employee action against several TVA employees and contractors.

To say that TVA has a nuclear safety culture problem is like saying the sun is hot.

After determining that TVA failed to implement measures mandated in its December 2009 Confirmatory Order, the NRC issued another Confirmatory Order to TVA in July 2017.

How many Confirmatory Orders will it take to get TVA to establish and sustain proper nuclear safety cultures at its nuclear power plants?

I don’t know. But at least we are now one Confirmatory Order closer to that magic number. Perhaps before too many more years roll by, workers at Watts Bar, Sequoyah, and Browns Ferry will actually be protected the way they are supposed to be by NRC’s regulations.

What Is Grid Modernization—and What’s the Role of Electric Vehicles?

UCS Blog - The Equation (text only)

Utilities around the country are creating “grid modernization” plans. What does this mean? Isn’t the grid “modern” already?

We get electricity reliably with the flip of a switch. It can power all manner of appliances and devices. The National Academy of Engineering (NAE) regards electrification as the greatest engineering achievement of the 20th century. Even so, NAE observes that the system could be even more economical and reliable with the right kinds of improvements.

Maintaining the current level of reliability requires investments of billions of dollars each year by utilities and grid operators, and regular attention by trained line workers and electrical engineers. The grid is over a century old in much of the country, and has been built up in a patchwork fashion over this time.

As a result, most states are looking at ways to improve the system, as seen in Figure 1. “Grid modernization” can mean different things depending on local needs. A state in the Midwest might focus on upgrading transmission lines to connect more wind turbines and deliver the power to urban centers. California or Hawaii may focus on pricing structures to reflect the abundance of solar power at mid-day. New York is working to address growing electricity demand in the Brooklyn-Queens “load pocket,” where constructing a new substation would be expensive and disruptive.

Figure 1: Grid modernization activities took place in 37 states, either in their legislatures or regulatory agencies, in the first quarter of 2017. Source: NC Clean Energy Technology Center, “50 States of Grid Modernization.”

A common feature of most of these grid modernization plans is communication. By sending real-time information about conditions such as power flow or the supply of renewable energy, and designing systems to respond automatically, we can reduce grid costs and maximize the use of clean power.

Smart charging of electric vehicles is an illustrative application of grid modernization that brings together many of its key elements. By varying the rate at which the vehicles draw electricity from the grid, we can manage short-term changes in wind and solar power output, or even compensate for outages at other power plants. This can be done without inconveniencing drivers. It requires communication between the vehicle and the grid, but is possible with existing technology.

Why grid modernization?

Grid modernization can deliver greater quantities of zero- to low-carbon electricity reliably and securely, including handling variable renewables like wind and solar power. It can support the electric vehicle revolution and increase grid resilience to withstand climate impacts. It can spread economic opportunity in rural and urban communities through electricity and transportation infrastructure investment and upgrades. And it can improve system efficiencies and reduce costs by reducing the need for expensive and dirty power plants that run only a few hours per year (these are called “peakers”).

The US obtained about 10% of its electricity generation from wind and solar in the spring of 2017 (counting distributed solar), with some regions much higher on individual days. A modern grid will allow higher levels of renewable energy by improving weather prediction, limiting the effects of local variations, and providing storage and load flexibility (electricity demand that has some leeway to adjust up or down) so that backup power plants won’t need to be kept running.

UCS modeling (in The US Power Sector in a Net Zero World) has shown that 55-60% of US electricity could be delivered from renewable energy by 2030, most of this from wind and solar. The US Department of Energy (DOE) explores a scenario of 20% wind power in 2030 in Wind Vision, and DOE’s National Renewable Energy Laboratory (NREL) illustrates a pathway to even higher levels of renewables in 2050 in the Renewable Electricity Futures Study. Modernizing our grid will help us best take advantage of those new wind and solar resources.

The technologies

Unlocking the promise of a low-carbon electricity system will require deploying new infrastructure and innovative technologies and changing the rules that govern our electricity system and markets.

Some established technologies have a valuable role to play in grid modernization. Transmission lines move power between different regions of the country. This can help manage large amounts of renewable energy. For example, wind power becomes more consistent when the wind farms are in many different places across a large area. Transmission lines also help regions cope with outages of other types of power plants, such as natural gas or nuclear.

Energy storage is finding new roles on the grid. Storage that uses hydropower has long helped match supply to daily changes in electricity demand, while recent years have seen tremendous advances in another technology: batteries. As costs come down and performance improves, batteries are increasingly viable for a broad range of electricity sector applications. They can store power produced during times of low electricity demand and discharge it to the grid when needed.

Batteries can also make renewable energy available on demand (such as solar power, as seen in Figure 2). In some cases, this is a lower-cost and cleaner solution than relying on other generators for power at night and on cloudy days.

Figure 2: Utility-scale solar array with batteries, Kaua’i, Hawaii. Source: Kaua’i Island Utility Cooperative.

The real-time communication aspect of grid modernization comes into view with “smart” systems on homes and businesses. Smart inverters on solar panels adjust solar power supply to help the grid provide electricity at the correct voltage and frequency. Smart charging systems allow electric vehicles to selectively charge at times of low cost or low emissions. Smart electric meters measure electricity usage at short intervals, such as every hour rather than every month, empowering consumers to shift their electricity use to times when power is less expensive. Smart thermostats learn the patterns of household heating and cooling demand to reduce energy costs.

Together, these technologies enable a more efficient use of our electricity resources to help reduce consumer costs as well as reduce emissions.

Some of these smart systems feature controllable loads. Instead of shifting supply to times of peak demand, as storage does, they can shift demand to times of abundant supply. A commercial air conditioning system might make ice during a period of low electricity demand and then use the ice to cool a building during the late afternoon, typically a time of high electricity demand. An electric vehicle parked overnight could vary its rate of charging to match the output of nearby wind farms. Controllable loads can help the grid manage variable energy resources. If consumers can control the demand, everyone can accept some variability in the supply. Controllable loads also can provide short-term demand response, contracting with a utility to reduce electricity consumption during times of very high demand.

Smart meters allow controllable loads to better align electricity demand with supply. This depends on some sort of information from the utility. Time-varying rates provide that information in the form of a price signal—identifying the best times to use power. These rates can move higher or lower over the course of the day, week, and/or season in accordance with true system costs, can save money, and can better match consumer demand with our supply of clean energy resources. Controllable loads such as water heaters and electric vehicles can benefit from time-varying rates, since they can be flexible in when they draw power from the grid. This is discussed in more detail in the UCS Issue Brief, Flipping the Switch for a Cleaner Grid.
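
As a toy illustration of how a time-varying rate rewards a controllable load (all rates and load figures here are made up for illustration, not any utility’s actual tariff):

```python
# Cost of running a 4 kW water heater for 3 hours under a flat rate
# vs. a time-of-use (TOU) rate, before and after shifting off-peak.
load_kw, hours = 4, 3
flat_rate = 0.20                         # $/kWh
tou = {"peak": 0.30, "off_peak": 0.10}   # $/kWh

energy_kwh = load_kw * hours
print(f"Flat rate:    ${energy_kwh * flat_rate:.2f}")
print(f"TOU, on peak: ${energy_kwh * tou['peak']:.2f}")
print(f"TOU, shifted: ${energy_kwh * tou['off_peak']:.2f}")  # the flexible-load win
```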

Smart charging as an example of grid modernization

Electric vehicles represent a growing source of electricity demand. A modern grid would minimize the impact of EV charging on the grid while also enabling (and taking advantage of) “smart charging.” This in turn would improve grid reliability and support greater renewable electricity deployment.

So, what exactly is smart charging?

An EV has flexibility in when you charge it. With a home EV charger, you might get 20 miles of range per hour of charging. If you drove 60 miles that day, then you would require 3 hours of charging. Now suppose you get home at 7 pm and don’t need to go out again—the vehicle will be parked for the next 12 hours. You probably don’t need to start charging right away at 7 pm, at the same time as everybody else is using stoves, ovens, microwaves, televisions, or other electrical loads. In fact, it would be less expensive for the utility if you waited until any time after 10 pm.

How might the utility encourage you to do that? It could charge you less for electricity in “off-peak” times. This would require a “smart meter” that can measure when power is used, not just how much is used in a month. That could be a new utility meter in your house, or the utility could use the systems embedded in the vehicle or the charger. Alternatively, the utility might give you a rebate for a charger that they can control within certain limits, while still ensuring you have an override option.
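
Putting that example into code, a minimal scheduler might look like the sketch below. The charge rate, arrival time, and off-peak window are the illustrative figures from the example above, not a real utility program:

```python
# Delay EV charging into an off-peak window: arrive at 7 pm, need 60 miles
# of range at 20 miles per hour of charging, off-peak runs 10 pm to 7 am.

def charging_plan(miles_needed, miles_per_charging_hour,
                  off_peak_start=22, off_peak_end=7):
    hours_needed = miles_needed / miles_per_charging_hour
    window_hours = (24 - off_peak_start) + off_peak_end
    if hours_needed > window_hours:
        raise ValueError("Not enough off-peak time; start charging earlier.")
    finish_hour = (off_peak_start + hours_needed) % 24
    return hours_needed, off_peak_start, finish_hour

hours, start, finish = charging_plan(60, 20)
print(f"Charge {hours:.0f} h, {start}:00 to {finish:.0f}:00")  # 3 h, 22:00 to 1:00
```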

The 2015 Kia Soul EV paired with a charger at the DC auto show.

Your local utility would like to know that you have an EV charger (since many high-powered chargers on the same neighborhood loop could strain the local transformer). But there is also the potential for you to use smart charging to benefit the utility, reducing system costs for everybody. It may become more practical to use the battery in a two-way “vehicle-to-grid” arrangement, where the battery is actually sending power to the utility. Although not widespread yet, V2G systems operate in several regions and offer a technically viable energy storage option.

Smart charging is not just an idea; such programs exist today. In BMW’s “ChargeForward” program, smart EV charging, combined with a bank of used batteries from older electric vehicles, provides demand response. When power is needed, BMW has its vehicles stop taking power from the grid, and has its battery bank start sending power to the grid. Vehicle owners are compensated for enrolling and participating in this program, and have the option to override and keep on charging during any demand response event.

Another smart charging program is EMotorWerks’ “JuiceNet Green” algorithm, which automatically aligns vehicle charging at residential or workplace charging stations with clean energy generation to minimize pollution. Other systems have been developed for public charging stations, such as those from ChargePoint or Greenlots. Utilities such as San Diego Gas & Electric, Consolidated Edison, Eversource, and others are investigating how smart charging can benefit their systems. Many such programs, and some “vehicle to grid” systems, are discussed in the UCS report Charging Smart.

EVs bring together many of the elements of grid modernization. Because the vehicles incorporate storage, they are a controllable load. They can provide services such as demand response and can benefit from time-varying rates. These rates require the use of smart meters. The meters also provide the utility with a large amount of data; with the right information systems, utilities can use this data to improve grid operations. Smart charging systems can communicate with the grid in real-time, including the local components, and make automatic adjustments. A charger might vary the power draw to improve the local “power quality,” or coordinate with other chargers to limit power spikes on a circuit, or increase power draw if a neighboring solar photovoltaic system is producing surplus power.

Finally, EV chargers in some cases incorporate additional energy storage in the form of stationary batteries; these can offer many benefits, such as allowing higher-powered charging where the local infrastructure could not otherwise accommodate it.

EVs aren’t the same thing as grid modernization. You could have one without the other (there have in fact been electric vehicles and large-scale energy storage on the grid for many decades). But considering the technologies and principles of grid modernization when making investments for electric vehicles can help ensure that the vehicles are an asset to the grid—increasing reliability, making greater use of renewable energy, and limiting the grid infrastructure investments needed to accommodate EVs.

What Scott Pruitt Still Gets Wrong About Chemical Safety Post-Hurricane Harvey

UCS Blog - The Equation (text only)

Photo: North Carolina National Guard/CC BY-ND 2.0 (Flickr)

In a recent interview with ABC’s “Powerhouse Politics” podcast, EPA Administrator Scott Pruitt was asked about the agency’s role in responding to Hurricanes Irma and Harvey and the devastation caused in Florida and Texas by the natural disasters. During the conversation, one of the hosts asked Administrator Pruitt about the EPA’s Risk Management Plan (RMP) rule and his decision to delay implementation of the amendments to modernize the standards for chemical facility safety (relevant because a chemical plant covered by this rule exploded after Hurricane Harvey hit Houston).

In the subsequent back and forth, Administrator Pruitt said something very concerning. He argued that one of the reasons the EPA delayed the new rule was not to limit the information available to communities living next to chemical plants, but to address national security concerns. He argued that the 12,500 chemical plants covered under the RMP are in fact soft targets for terrorists. He went on to say that if you had too much information in the RMP, terrorists could use it to cause harm to those communities.

But this is not an accurate or truthful portrayal of the RMP amendments. Administrator Pruitt is either willfully ignorant or simply confused about how the RMP works currently and the updated rule.

First responders, workers, and fenceline communities need easier access to information so that they can be better prepared for chemical facility disasters.

First, the new rule does not require the disclosure of any information beyond what must already be disclosed to the public under existing laws and regulations. What the updated rule does do is provide an easier avenue of access to RMP data for local communities and emergency responders. If the updated rule were implemented, facilities would have to disclose certain basic information (such as 5-year accident history, safety data sheets, planned emergency exercises, and evacuation information) directly upon request from the public. Under the current regulation, the public has to visit Federal Reading Rooms or file public records requests to gather this information, a time-consuming and overly burdensome process.

Second, the RMP amendments were developed in close consultation with the Department of Homeland Security (DHS) so that the final rule is consistent with anti-terrorism standards and ensures that national security concerns are appropriately addressed. In fact, in its own FAQs about the RMP amendments, the EPA highlights that it coordinated the rulemaking closely with DHS and security professionals so that the agency could “strike a balance between information sharing and security.” The EPA already struck a balance between national security concerns and the safety of workers, communities, and first responders when it finalized the RMP amendments.

EPA, Implement the RMP amendments

The bottom line is that Administrator Pruitt’s attempt to cloak his delay of a critical public safety and security safeguard in national security concerns is unjustified and flagrantly irresponsible. If he truly cares about public safety as he claims, the most immediate threat is already here. A lack of access to crucial chemical safety information left first responders needlessly exposed and caused mass confusion among the public and media as we scrambled to assess public risks.

By pivoting to national security, Administrator Pruitt is simply shifting the responsibility for his actions to gut a public protection that should have been implemented months ago. First responders in Houston are already suing Arkema for gross negligence (as a reminder, Arkema submitted comments opposing the RMP amendments, specifically raising concerns about sharing information with first responders and the public). This is exactly why we need to have the updated regulations implemented right away. The RMP amendments, among other things, would result in:

  • Better access to information for emergency first responders and communities, along with a specific charge to improve coordination between facilities and emergency personnel
  • Assurance that lessons are learned from serious accidents
  • A requirement that facilities with the worst accident records assess options for safer alternatives to remove hazards

The Union of Concerned Scientists has a long history of advocating for stronger chemical facility safety protections. We have submitted extensive comments to the EPA asking the agency to do more to secure chemical facilities and ensure safety for fenceline communities. We have worked in partnership with our colleagues at Texas Environmental Justice Advocacy Services (t.e.j.a.s.), based in Houston, on a report highlighting the impact chemical facilities have had on neighboring communities for several years, if not decades, and the need for stronger chemical facility safety regulations. And as a result of Administrator Pruitt’s decision to postpone implementation of the RMP amendments, we have joined community and environmental justice groups, including t.e.j.a.s., in litigation to force the agency to reverse its decision.

Before Administrator Pruitt says that this is not the right time to talk about the RMP and chemical facility safety (like he did with climate change), let me just say that it’s not just the right time to talk about the science of the risks these communities face; that conversation is actually long overdue. The New York Times reported that more than 40 sites have released approximately 4.6 million pounds of hazardous airborne pollutants due to Hurricane Harvey. The EPA needs to act to improve safety at chemical plants by immediately implementing the updates to the RMP rule. Fenceline communities in Houston and around the country, which are predominantly low-income communities and communities of color, cannot afford to wait any longer.

The Northeast Should Limit Pollution from Transportation

UCS Blog - The Equation (text only)

Vehicle pollution is a major issue for human health and the environment.

Over the past decade, the Northeast region of the United States has helped lead the country—and the world—in supporting and developing clean, renewable sources of electricity. Taken together, the policies of Northeast states, from Maine to Maryland, have generated billions of dollars in investment for solar, wind, and efficiency. One driving force behind this investment is a regional initiative that caps emissions from the electricity sector, charges power plants for the emissions they generate, and invests the funds generated by those fees into efficiency and clean energy programs. This initiative has helped fundamentally change the region’s electricity sector: we have achieved unprecedented penetration of renewables, nearly eliminated the use of coal, and reduced overall electricity use at a time of economic expansion.

The next big step for the states of the Northeast is to bring that same sense of commitment, ingenuity, and purpose to clean transportation.

Regional policies have helped drive down electricity-related emissions, while transportation-related emissions have been mostly stagnant.

Transportation is the largest source of pollution in the Northeast region, accounting for more than 40 percent of total regional global warming emissions. In addition to the health impacts associated with rising temperatures, soot and ground-level ozone from the region’s cars and trucks are responsible for more than 50,000 asthma attacks, 1,000 deaths, and other pollution-related illnesses that incur approximately $27 billion in total health costs every year. The health impacts of transportation affect all of us, but especially vulnerable are children, the elderly, and people in low-income communities (who often live in or near freight corridors).

Our transportation system pollutes because it is dirty, wasteful, and inefficient. It’s also expensive. Ninety-two percent of all transportation is powered by oil. Every year Northeast drivers send billions of dollars out of state to purchase fuel, enriching oil companies at the expense of our economy. Congestion, a growing problem in every Northeast metro area, is a waste of our time and a source of endless aggravation for Northeast drivers. Four of the five states with the longest commute times are located in the Northeast. At the same time, inadequate access to affordable transportation remains a major barrier to opportunity, particularly for poor and marginalized communities, rural residents, the disabled, and the elderly.

We can create a better transportation system

The good news is that we have the tools and the technologies to build a better, cleaner transportation system in the Northeast. Exciting technologies such as electric vehicles offer the promise of cars and trucks and buses that can operate without tailpipe emissions and that can be powered by clean energy. Thanks to the region’s relatively clean grid, EVs in the Northeast can achieve the emissions equivalent of a 100+ mpg vehicle.

New transportation modes such as ride-sharing and automated vehicles, if given the proper incentives, have the potential to challenge the dominance of personally owned, single-occupancy vehicles and open up new possibilities for greater system efficiency. Use of public transportation in the six largest transit systems in the Northeast has increased over 8% since 2008. And a younger generation is coming of age that shows ever greater interest in transit, cycling, and urban living.

Together, these present-day technologies and trends point towards a possible future still on the horizon. A transportation system that does more but costs less and pollutes less. Where a network of shared, electric vehicles, working in concert with a first-class public transportation system, gets everybody where they need to go without burning a gallon of gasoline or getting stuck for an hour in traffic.

A transportation system that doesn’t contribute to air pollution, doesn’t contribute to climate change, and doesn’t concern itself with the price of oil.

We need new policies to make this happen

This future won’t happen on its own. We need policies to get us there. Just as no single silver-bullet policy is responsible for the progress we have made in reducing emissions from electricity, reducing pollution from transportation will require a coordinated set of policies and regulations. It will require cooperation between local, state, regional, and federal governments, and between government and the private sector. Ultimately, it will require policy leaders to identify new sources of funding for clean transportation priorities.

The Union of Concerned Scientists (UCS) recommends that Northeast decisionmakers do the following to get the region on the path toward a cleaner transportation system:

  1. Create a regional limit on transportation emissions. The Northeast’s success in reducing electricity-related emissions lies in the Regional Greenhouse Gas Initiative (RGGI); under RGGI, which came into force in 2009, Northeast states established an overall limit on emissions from electricity consumed in these states. The RGGI process brought together key stakeholders in the business community and guided local, state, and regional policymakers’ decisions about clean energy and efficiency investments. Establishing a similar program for the region’s transportation sector would ensure that communities and governments take a comprehensive, coordinated approach to identifying and investing in clean transit solutions.
  2. Enforce this limit through regulations that hold oil companies accountable for their emissions. Under RGGI, the emissions cap is enforced by requiring power plants to purchase allowances for every ton of pollution they emit under the cap. By limiting the number of allowances available, the program guarantees overall emission reductions. Revenue from allowance sales is used to support a range of clean energy and efficiency initiatives that save consumers money and reduce pollution. This “cap-and-invest” strategy has successfully reduced the region’s electricity emissions while cutting costs for consumers. For the transportation sector, a cap-and-invest program could require polluters (in this case, oil companies serving the Northeast) to purchase allowances under a designated cap, and communities could use the funds generated from allowance sales for clean transportation programs. This strategy has been used successfully to reduce transportation-related emissions in California, and in Ontario and Quebec, Canada.
  3. Invest in clean transportation solutions for Northeast residents. There are many valuable projects and programs in the Northeast region that could help reduce consumer costs and expand clean mobility choices. For example, states could offer subsidies for lower-income residents who want to purchase an electric vehicle; several states outside the region offer such a program, including California, which offers low-income consumers up to $13,500 in rebates when they trade in a vehicle. States could also invest in infrastructure to make electric vehicle charging more convenient for drivers. Increasing our investments in affordable housing and transit can ensure that people who want to live in communities with multiple transportation choices, or who want to live car-free, can do so. And replacing older and less-efficient buses and trucks with electric models could significantly improve air quality in urban environments (Chandler, Espino, and O’Dea 2016).
  4. Engage communities and stakeholders in a broad conversation about clean transportation. We need to be thinking about how to provide clean transportation options to all communities in the region, from our big metro areas, to our medium-sized post-industrial “Gateway Cities,” to suburban and rural areas. It is especially important for states to think carefully about how a new investment in clean transportation solutions can benefit communities that are currently poorly served by our existing transportation system, including many communities of color, rural communities, the disabled, and the elderly. Engaging community groups early in the process can help policymakers understand the real transportation needs of Northeast residents, and shape resulting policies and programs for maximum benefit.

The Northeast has long been a leader in addressing pollution from fossil fuels, and its multistate initiative to reduce electricity sector emissions has set an example for other regions to follow. With the federal government abdicating responsibility for our environment and our climate, state and regional leadership is more important than ever. Working together, we can create a clean transportation system that works for all our residents, and the result will be a cleaner environment, a stronger economy, less spending on fuel, and a safer climate.

Sea Level Rise Threatens Cape May, New Jersey, and Its Vulnerable Visitors

UCS Blog - The Equation (text only) -

Red knots on Cape May: The population of the rufa subspecies of red knot has declined by an estimated 75 percent over the past two decades as sea level rise, shoreline stabilization, and Arctic warming shrink its habitat.

A case study from the report When Rising Seas Hit Home

On a sunny spring day, David La Puma, the director of the Cape May Bird Observatory, escorted 20 birders to a phenomenal site in the world of migrating birds. Scores of laughing gulls crowded at the shoreline at Reed’s. Not a speck of sand could be seen beneath them. Flittering around them, sneaking through whatever airspace they could find, were some of the prettiest of shorebirds, known for their robin-orange heads and breasts: red knots.

Despite the crowded shores, La Puma is concerned: this is nowhere close to the number of birds historically observed in this renowned migratory rest stop. This drop in population may spell trouble not only for the area’s biodiversity but also for its economy, as tourists from around the world come to Cape May to witness the bird migrations.

In 2014, the US Fish and Wildlife Service (FWS) listed the rufa subspecies of the red knot as threatened. Its numbers have declined by an estimated 75 percent over the last two decades to around 25,000 birds. The FWS reports that causes include “loss of habitat across its range due to sea level rise, shoreline stabilization, and Arctic warming.”

New analysis by UCS suggests that we may be in only the beginning stages of that habitat loss. Given a moderate scenario for sea level rise, many Cape May coastal communities could see 15 percent or more of their land flooded 26 times or more per year, or every other week, on average, by the middle of the century.

If emissions continue to rise through the end of the century, sea level is projected to rise more than 6 feet by 2100. In this scenario, the same areas of Cape May that were flooded by Hurricane Sandy’s storm surge would be inundated 26 times or more per year, or every other week on average. (Source: When Rising Seas Hit Home, 2017)
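For readers who want the arithmetic behind the “every other week” framing, here is a minimal sketch; the 26-floods-per-year threshold comes from the UCS analysis cited above, and the rest is simple unit conversion:

```python
# Convert an annual flood count into an average interval between floods.
floods_per_year = 26  # chronic inundation threshold used in the UCS analysis

days_between = 365 / floods_per_year   # ~14 days
weeks_between = days_between / 7       # ~2 weeks

print(f"One flood every {days_between:.0f} days (~{weeks_between:.0f} weeks)")
```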

“They’ll lose their beaches”

The birders watched as the red knots jostled each other, moving backwards in unison as waves came in, to peck at what the casual observer would assume was an ordinary strip of wrack pushed ashore. For the birds, the seaweed was the tablecloth for their version of a Michelin-starred meal; trapped inside the clumps were fatty horseshoe crab eggs.

Somewhere along evolution’s path, red knots coevolved with the egg-laying cycle of the horseshoe crab. One female crab can lay 100,000 eggs in the sand over the course of a few days, and many end up in the wrack. The birds on the beach in Cape May had arrived from thousands of miles away in South America, utterly spent of fat. Stocking up on the eggs allows them to regain their migrating weight and to continue on to the Arctic.

A red knot among a flock of laughing gulls.

“The red knot is the poster bird, but a whole host of birds will probably lose habitat. They’ll lose their beaches, they’ll be squeezed out of areas between low marsh and high marsh, there’ll be changes in water salinity, you could go on,” La Puma said. Among the birds he fears will be sensitive to sea level rise are various species of sparrows, rails, owls, terns, and other shorebirds, including the threatened piping plover.

An obvious way to help would be to preserve Cape May’s thousands of acres of natural habitat. That is a prime mission of the Wetlands Institute in the town of Stone Harbor. After Hurricane Sandy severely eroded beaches where horseshoe crabs lay eggs, the institute was a key partner in restoring critical spawning beaches in time for the next red knot migration.

The institute is involved in several conservation efforts, working to understand how best to preserve coastal resiliency and implementing projects to offset the effects of sea level rise on area wildlife, including horseshoe crabs, diamondback terrapins, and coastal birds. It also offers several education programs to promote marsh advocacy.

Protecting homes and habitats

Charles Krafczek, a member of the Stone Harbor Borough Council, and Lenore Tedesco, Executive Director of the Wetlands Institute, Cape May, NJ.

Executive Director Lenore Tedesco hopes that whatever measures the Cape May region takes to prepare for sea level rise—including fortifying the wetlands as a natural sponge and barrier—are done in a way that does no harm.

In many other resort communities around the country, a reflexive solution for rising seas and storm surges is to build waterfront homes higher, strengthen bulkheads at their foundations, and build higher seawalls.

With many Cape May communities facing chronic inundation within the next several decades, one can imagine wanting to take the same approach here. According to a study published last year by researchers at the University of Georgia and Stetson University, Cape May County could see 38,000 families displaced this century by a three-foot sea level rise and nearly 80,000 by a six-foot rise.

But more than half of Cape May County is protected open space, and Tedesco believes that open space holds much potential to protect both homeowners and wildlife.

“The country, not just Cape May, has to start asking a lot of fundamental questions,” Tedesco said. “How much can we keep ‘renourishing’ our beaches? How much can we anchor the beach for homes? Even though our wetlands are largely protected, sea level rise will likely chip away at them. As we continue to work to understand how local area marshes are responding to sea level rise, we need to work together to protect them while seeking community-wide solutions that work for both people and wildlife.”

La Puma hopes those solutions come soon, as there is no telling exactly when sea level rise will begin to cause a fatal disruption of migration patterns and food supplies for the birds. “The most alarming thing to me is that people might be looking for an obvious signal of what is happening with the birds, but it might be subtler than that year to year,” he said. “By the time we discover the damage, it’s hard to say if we can recover.”


Will New Scientific Breakthroughs Pave the Way for More Climate-Related Lawsuits?

UCS Blog - The Equation (text only) -

What can you do when the president of the United States says climate change is a hoax and Congress is gridlocked by fossil fuel industry-funded climate science deniers?

Look to the courts for redress—with a major assist from science.

Using sophisticated computer analyses, scientists can now determine what percentage of an extreme weather event can be attributed to climate change. This emerging field of “climate attribution” science offers courts a powerful new tool for apportioning responsibility in cases brought against municipalities and private real estate developers by victims of extreme weather events (Hurricane Harvey comes to mind) or of other climate-induced impacts, such as sea level rise, who claim those parties failed to protect them from foreseeable damages.

Likewise, companies responsible for producing and marketing fossil fuels—BP, Chevron, ExxonMobil and the like—may find themselves in legal crosshairs thanks to a first-of-its-kind study definitively linking global climate changes to carbon emissions directly associated with them.

Published last week in the journal Climatic Change, the study calculated the amount of sea level rise and global temperature increase resulting from carbon dioxide and methane emissions from products marketed by the largest coal, gas and oil producers and cement manufacturers as well as their extraction and production processes.

“We’ve known for a long time that fossil fuels are the largest contributor to climate change,” said Brenda Ekwurzel, lead author and climate science director at the Union of Concerned Scientists (UCS). “What’s new here is that we’ve verified just how much specific companies’ products have caused the Earth to warm and the seas to rise.”

Ekwurzel’s study builds on a groundbreaking study one of her co-authors, geographer Richard Heede, published in Climatic Change in 2014. Closely tracking the oil, gas, and coal extracted since the Industrial Revolution, Heede found that just 90 private and state-owned companies are responsible for two-thirds of human-caused carbon emissions over that period. What’s more, Heede’s research showed that more than half of these carbon emissions have occurred since 1988, when NASA scientist James Hansen sounded the alarm about climate change in well-publicized congressional testimony.

Ekwurzel et al.’s study quantified the climate change impacts of each company’s carbon dioxide and methane emissions during two time periods: 1880 to 2010 and 1980 to 2010. The latter period matters because internal industry documents show fossil fuel companies were well aware of the threat posed by global warming at least 25 years ago.
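To build intuition for how cumulative emissions translate into warming, here is a deliberately simplified sketch based on the transient climate response to cumulative emissions (TCRE). To be clear, this is not the model Ekwurzel et al. used (they ran a more detailed simple climate model); the coefficient and the example emissions total below are illustrative assumptions only.

```python
# Back-of-envelope attribution sketch; illustrative only, not the
# method or data of Ekwurzel et al. 2017.
TCRE_C_PER_1000_GTCO2 = 0.45  # assumed central value; IPCC range is roughly 0.2-0.7

def warming_contribution_c(cumulative_gtco2: float) -> float:
    """Approximate warming (deg C) from a quantity of cumulative CO2
    emissions, assuming warming scales linearly with cumulative
    emissions (the TCRE approximation)."""
    return TCRE_C_PER_1000_GTCO2 * cumulative_gtco2 / 1000.0

# Hypothetical producer whose traced emissions total 40 GtCO2:
print(f"{warming_contribution_c(40):.3f} deg C")  # ~0.018 deg C
```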

According to the new study, emissions traced to the 90 largest carbon producers contributed approximately 57 percent of the upsurge in atmospheric carbon dioxide, nearly 50 percent of the increase in global average temperatures, and about 30 percent of global sea level rise since 1880. Meanwhile, emissions attributed to just the 50 investor-owned carbon producers, including BP, Chevron, ConocoPhillips, ExxonMobil, Peabody and Shell, were responsible for roughly 16 percent of the global average temperature increase and around 11 percent of the global sea level rise from 1880 to 2010. Between 1980 and 2010, the same 50 companies contributed approximately 10 percent of the global average temperature increase and about 4 percent of the sea level rise.

State-owned companies also have played a significant role. Emissions linked to 31 majority state-owned companies, including Coal India, Russia’s Gazprom, Kuwait Petroleum, Mexico’s Pemex, Petroleos de Venezuela, National Iranian Oil Company and Saudi Aramco, were responsible for about 15 percent of the global temperature increase and approximately 7 percent of sea level rise from 1880 to 2010.

“Until a decade or two ago, no corporation could be held accountable for the consequences of their products’ emissions because we simply didn’t know enough about what their impacts were,” explained Myles Allen, a study co-author and professor of geosystem science at the University of Oxford in England. “Our study provides a framework for linking fossil fuel companies’ product-related emissions to a range of impacts, including increases in ocean acidification and deaths caused by heat waves, wildfires, and other extreme weather-related events. We hope the results of this study will inform the debate over how best to hold major carbon producers accountable for their contributions to the problem.”

As climate change impacts worsen and become more expensive to address, the question of financial responsibility will become more urgent. In New York City alone, local officials estimate that it will cost more than $19 billion to adapt to climate change. Globally, adaptation cost projections are equally astronomical. The U.N. Environment Programme calculates that developing countries will require $140 billion to $300 billion per year in 2030 and a whopping $280 billion to $500 billion per year by 2050.

“Fossil fuel companies could have taken any number of steps to address climate change, such as investing in clean energy or carbon capture and storage,” said Peter Frumhoff, a study co-author and director of science and policy at UCS. “Instead, many of them spent millions of dollars to try to deceive the public about climate science and block sensible limits on carbon emissions. Taxpayers alone, especially those living in vulnerable coastal communities, shouldn’t have to bear all the costs of these companies’ irresponsible decisions.”

Pending lawsuits by three California coastal communities could benefit immediately from Ekwurzel et al.’s findings. San Mateo and Marin counties and Imperial Beach, a city in San Diego County, filed complaints in July against 37 major coal, oil, and gas companies, including BP, Chevron, ExxonMobil, and Shell, claiming that higher sea levels triggered by the companies’ products are putting billions of dollars of property at risk. The study also may embolden other municipalities and states to take similar legal action in the absence of leadership from the Trump administration and Congress. It then will be up to the courts to do what too many of our elected officials have so far failed to do: acknowledge scientific reality.

Will Chevron’s New CEO Show More Vision on Climate?

UCS Blog - The Equation (text only) -

Chevron Refinery in South Africa. CC-BY-2.0 (Wikimedia Commons).

The surprise announcement that Chevron CEO John Watson will be stepping down next month caught me, like a lot of other people, off guard. I quickly had a flashback to the May 31 annual shareholder meeting that I attended and my one (and likely only) unsatisfying interaction with Chairman Watson.

Attending the Chevron shareholder meeting in Midland, Texas in May was one of the most surreal experiences of my life. It started with the black suits, dark sunglasses, and earpieces that met us at the entry gate and were stationed across the parking lot in strategic places. Then came the examination of my watch at the security checkpoint, after I had been essentially stripped of all personal belongings (save the 8-1/2 x 11 pages of my typewritten notes and one ballpoint pen). I had never been in a situation as potentially intimidating as that one.

I was recruited by UCS to travel to Midland on behalf of a shareholder who wanted to press Chevron to stop spreading and supporting climate science disinformation and do its fair share to address climate change as we transition to a carbon-constrained world.

While one of two shareholder proposals pertaining to climate change was withdrawn before the meeting, my impetus for attending remained: to press the company on its plan for action. And climate change was the 800-lb gorilla in the room. It came up at multiple points in the Chairman’s presentation and assorted remarks, but not in an authentic, satisfying way. It felt more like the company was checking the “climate change” box that companies are now obligated to check due to investors’ ongoing concerns and pressure.

My heavily rehearsed statement clocked in at just under the allotted two minutes (did I mention the giant countdown clock projected on the screen behind Chairman Watson?). At the end, I asked CEO Watson the following question:

My question, Mr. Watson, is: Since you have acknowledged that ‘Reducing greenhouse gas emissions is a global issue that requires global engagement and action,’ how will you make Chevron a global leader on climate change and strategies for a low-carbon future?

I went out of my way to give Mr. Watson an opportunity to speak about what Chevron was doing on the climate issue, how he and his company were taking leadership on one of the greatest challenges of our century.

I mentioned their March 2017 report Managing Climate Change Risks. But the response I received was a disappointing “it’s all in our report.” He could have elaborated on Chevron’s “actions and investments to manage greenhouse gases” (energy efficiency initiatives, high-efficiency power plants, biofuel research because they believe “second- and third-generation biofuels could help meet the world’s future energy needs”), but he chose not to do so. Instead, moving quickly to a fiduciary perspective, Mr. Watson assured stockholders that there would be no stranded assets since we will still need fossil fuels for decades, and that Chevron is not at risk of failing to provide a return on investment in the assets it develops in the future.

I remember leaving the meeting wondering why CEO Watson seemed incapable of providing a vision of Chevron in a world that is rapidly evolving to understand the risks of climate change. Maybe the answer to that question is that he did not have much of a long-term vision of himself at Chevron.

Under Mr. Watson, this is an oil and gas operation that is very, very reluctant to embrace the energy world of the future. It remains to be seen whether Chevron’s next CEO is cut from the same or different cloth.

More on Chevron:

* Climate Risk in the Spotlight of Chevron’s Annual Shareholder Meeting

* Chevron, ExxonMobil Face Growing Investor Concerns About Climate Risk

* Chevron Denies Climate Risk to Shareholders While Supporting the Spread of Climate Disinformation

Author’s Bio: Dr. Wendy Gordon is an environmental professional with more than 25 years of experience managing the natural resources of Texas, with an emphasis on science-based water policy and planning, endangered species conservation, and climate change impact analysis. Her career has spanned the private, nonprofit and government sectors. In her current role as an environmental consultant she focuses on water conservation, endangered species management, and climate change assessments. She has published her ecological, hydrologic, and climate change research in a variety of peer-reviewed journals from Global Change Biology and GIScience & Remote Sensing to Ecological Applications and Eos.  

Broken Valve in Emergency System at LaSalle Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

An NRC Special Inspection Team (SIT) conducted an inspection at the LaSalle Nuclear Plant this spring to investigate the cause of a valve’s failure and assess the effectiveness of the corrective actions taken.

The two units at Exelon Generation Company’s LaSalle County nuclear plant, about 11 miles southeast of Ottawa, Illinois, are boiling water reactors (BWRs) that began operating in the early 1980s. While most of the BWRs operating in the U.S. are BWR/4s with Mark I containment designs, the “newer” LaSalle units feature BWR/5s with Mark II containment designs. The key distinction for this commentary is that while BWR/4s employ a steam-driven high pressure coolant injection (HPCI) system to provide makeup cooling water to the reactor core in the event that a small pipe connected to the reactor vessel breaks, BWR/5s use a motor-driven high pressure core spray (HPCS) system for this safety role.

The Event

Workers attempted to refill the Unit 2 high pressure core spray (HPCS) system with water on February 11, 2017, following maintenance and testing of the system. The Unit 2 reactor was shut down for a refueling outage at the time and this downtime was used to inspect emergency systems, like the HPCS system.

The HPCS system is normally in standby mode during reactor operation. The system features one motor-driven pump that supplies a design makeup flow rate of 7,000 gallons per minute to the reactor vessel. The HPCS pump draws water from the suppression pool inside containment. In the event that a small-diameter pipe connected to the reactor vessel broke, cooling water would leak out, but the pressure inside the reactor vessel would remain too high for the array of low-pressure emergency systems (i.e., the residual heat removal and low pressure core spray pumps) to function. Water pouring from the broken pipe ends drains to the suppression pool for re-use. The motor-driven HPCS pump can be powered from the offsite electrical grid when it is available or from an onsite emergency diesel generator when the grid is unavailable.

Fig. 1 (Source: Nuclear Regulatory Commission)

Workers were unable to fill the piping between the HPCS injection valve (2E22-F004) and the reactor vessel. They discovered that the disc had separated from the stem of this double disc gate valve manufactured by Anchor Darling and blocked the flow path for filling the piping. The HPCS injection valve is a normally closed motor-operated valve that opens when the HPCS system is actuated to provide a pathway for makeup water to reach the reactor vessel. The motor applies torque that rotates a screw-like stem to raise (open) or lower (close) the disc in the valve. When fully lowered, the disc blocks flow through the valve. When the disc is fully raised, flow through the valve is unobstructed. Because the disc became separated from the stem in the fully lowered position, the motor might rotate the stem as if to raise the disc, but the disc would not budge.

Fig. 2 (click to enlarge) (Source: Nuclear Regulatory Commission)

Workers took a picture of the separated double disc after the valve’s bonnet (casing) was removed (Fig. 3). The bottom edge of the stem appears at the top center of the picture. The two discs and the guides they travel along (when connected to the stem) can be seen.

Fig. 3 (Source: Nuclear Regulatory Commission)

Workers replaced the internals of the HPCS injection valve with parts redesigned by the vendor and restarted Unit 2.

Background

The Tennessee Valley Authority submitted a report under 10 CFR Part 21 to the NRC in January 2013 about a defect in an Anchor Darling double disc gate valve in the high pressure coolant injection system at their Browns Ferry nuclear plant. The following month, the valve’s vendor submitted a 10 CFR Part 21 report to the NRC about a design issue with Anchor Darling double disc gate valves that could result in the stem separating from the discs.

In April 2013, the Boiling Water Reactor Owners’ Group issued a report to its members about the Part 21 reports and recommended methods for monitoring the affected valves for operability. The recommendations included diagnostic testing and monitoring the rotation of the stems. Workers performed the recommended diagnostic testing of HPCS injection valve 2E22-F004 at LaSalle during 2015 without identifying any performance issues. Workers performed maintenance and testing of HPCS injection valve 2E22-F004 on February 8, 2017, using the stem rotation monitoring guidance.

In April 2016, the Boiling Water Reactor Owners’ Group revised their report based on information received from one plant owner. Workers had disassembled 26 potentially susceptible Anchor Darling double disc gate valves and found problems with 24 of them.

In April 2017, Exelon notified the NRC about the failure of HPCS injection valve 2E22-F004 due to separation of the stem from the discs. Within two weeks, a Special Inspection Team (SIT) chartered by the NRC arrived at LaSalle to investigate the cause of the valve’s failure and assess the effectiveness of the corrective actions taken.

SIT Findings and Observations

The SIT reviewed Exelon’s evaluation of the failure mode for the Unit 2 HPCS injection valve. The SIT agreed that a part within the valve had broken due to excessive force. The broken part allowed the stem-to-disc connection to become steadily more misaligned until eventually the discs separated from the stem. The vendor redesigned the valve’s internals to correct the problem.

Exelon notified the NRC on June 2, 2017, of its plan to correct, during the next refueling outages of the two LaSalle units, 16 other safety-related and important-to-safety Anchor Darling double disc gate valves that may be susceptible to this failure mechanism.

The SIT reviewed Exelon’s justifications for waiting to fix these 16 valves. The SIT found the justifications to be reasonable with one exception: the HPCS injection valve on Unit 1. Exelon had estimated the number of times that the Unit 1 and Unit 2 HPCS injection valves had been cycled. The Unit 2 valve was original equipment installed in the early 1980s, while the Unit 1 valve had been replaced in 1987 following damage due to another cause. Exelon contended that the greater number of strokes by the Unit 2 valve explained its failure and justified waiting until the next refueling outage to address the Unit 1 valve.

Citing factors like unknown pre-operational testing differences between the units, slight design differences of unknown consequence, uncertain material strength properties, and uncertain differences in stem-to-wedge thread wear, the SIT concluded that “it was a matter of ‘when’ and not ‘if’ the 1E22-F004 valve would fail in the future if it had not already failed.” In other words, the SIT did not buy the delayed look at the Unit 1 valve.

Exelon shut down LaSalle Unit 1 on June 22, 2017, to replace the internals of HPCS injection valve 1E22-F004.

NRC Sanctions

The SIT identified a violation of Criterion III, Design Control, of Appendix B to 10 CFR Part 50 associated with the torque values developed by Exelon for the motors of HPCS injection valves 1E22-F004 and 2E22-F004. Exelon assumed the valves’ stem to be the weak link and established motor torque values that would not over-stress the stem. But the weak link turned out to be another internal part. The motor torque values applied by Exelon over-stressed this part, causing it to break and the discs to separate from the stem.
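The design-control lapse can be illustrated in miniature: a safe motor torque limit has to be bounded by the weakest part anywhere in the valve’s load path, not by a single part assumed to be the weak link. The sketch below uses invented part names and torque limits; none of the figures come from the inspection report.

```python
# Hypothetical illustration of the "weak link" sizing error; the part
# names and torque limits below are made up for illustration.
part_torque_limits_ft_lb = {
    "stem": 500,
    "stem_nut": 420,
    "disc_connection": 310,  # the actual weak link in this sketch
}

# Correct approach: bound motor torque by the weakest part in the load path.
safe_limit = min(part_torque_limits_ft_lb.values())   # 310 ft-lb

# Flawed approach (analogous to the violation): bound it by one
# assumed weak link only.
assumed_limit = part_torque_limits_ft_lb["stem"]      # 500 ft-lb

print(f"true safe limit: {safe_limit} ft-lb; assumed limit: {assumed_limit} ft-lb")
# Operating near 500 ft-lb repeatedly over-stresses the 310 ft-lb part,
# which is how a mis-chosen weak link can eventually break it.
```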

The NRC determined the violation to be a Severity Level III violation (in a four-level system in which Level I is the most serious), based on the valves’ failure preventing the HPCS system from performing its safety function.

But the NRC exercised enforcement discretion per its Enforcement Policy and did not issue the violation. The NRC determined that the valve design defect was too subtle for Exelon to have reasonably foreseen and corrected before the Unit 2 valve’s failure.

UCS Perspective

Exelon looked pretty good in this event. The NRC’s SIT documented that Exelon was aware of the Part 21 reports made by the Tennessee Valley Authority and the valve’s vendor in 2013. That Exelon was unable to use this awareness to identify and correct the problems with the Unit 2 HPCS injection valve is not a poor reflection on its performance. After all, its workers performed the measures recommended by the Boiling Water Reactor Owners’ Group for the two Part 21 reports. The shortcoming was in that guidance, not in Exelon’s application of it.

The only blemish on Exelon’s handling of the matter was its weak justification for operating Unit 1 until its next scheduled refueling outage before checking whether its HPCS injection valve was damaged or broken. But the NRC’s SIT helped Exelon decide to hasten that plan with the result that Unit 1 was shut down in June 2017 to replace the susceptible Unit 1 valve.

The NRC looked really good in this event. Not only did the NRC steer Exelon to a safer place regarding LaSalle Unit 1, but it also prodded the entire industry to get this matter resolved without undue delay. The NRC issued Information Notice 2017-03 to plant owners on June 15, 2017, about the Anchor Darling double disc gate valve design defects and the limitations in the guidance for monitoring valve performance. The NRC conducted a series of public meetings with industry and valve vendor representatives regarding the problem and its solution. Among the outcomes from these interactions are a resolution plan by the industry enumerating a number of steps with target deadlines no later than December 31, 2017, and a survey of where Anchor Darling double disc gate valves are used in U.S. nuclear power plants. The survey identified about 700 Anchor Darling double disc gate valves (AD DDGVs) in use at U.S. nuclear power plants, but only 9 valves characterized as high/medium-risk, multi-stroke valves. (Many valves are single stroke in that their safety function is to close, if open, or open, if closed. Multi-stroke valves may be called upon to open and close, perhaps several times, in fulfilling their safety function.)

Fig. 4 (Source: Nuclear Energy Institute)

There’s still time for the industry to snatch defeat from the jaws of victory, but the NRC seems poised to see this matter to a timely and effective outcome.

Florida’s Nuclear Plants and Hurricane Irma

UCS Blog - All Things Nuclear (text only) -

Will Florida’s two nuclear plants, Turkey Point and St. Lucie, be able to withstand Hurricane Irma?

Florida governor Rick Scott, the utility Florida Power & Light (FP&L), and the US Nuclear Regulatory Commission (NRC) have all provided assurances that they will. But we are about to witness a giant experiment in the effectiveness of the NRC’s strategy for protecting nuclear plants from natural disasters.

A review of the plans that the two plants have developed to protect against extreme natural disasters leaves plenty of room for concern. These plans were developed in response to new requirements that the NRC imposed in the years following the March 2011 Fukushima nuclear plant disaster in Japan. A prolonged loss of all electrical power—caused by an earthquake and subsequent tsunami that flooded the Fukushima site—resulted in three nuclear reactor meltdowns and a large release of radioactivity to the environment. (Even when reactors are shut down, they normally rely on electrical power to provide cooling water to the fuel in the cores and the spent fuel in storage pools, which remain hot.)

Fukushima made it clear that nuclear plants around the world were not sufficiently protected against natural disasters. Subsequently, the NRC imposed new requirements on US nuclear plants to develop strategies to cope with prolonged electric blackouts.

However, these new requirements were heavily influenced by pressure from a cost-conscious nuclear industry. As a result, they were limited in scope.

Moreover, these requirements are based on numerous assumptions that may not prove valid in the face of massive and powerful storms. In effect, the NRC is betting that no nuclear plant will experience conditions that don’t conform to these assumptions. Soon, the nation will find out whether the NRC wins or loses the next round with Mother Nature: Hurricane Irma.

The Plan for Turkey Point

Turkey Point Nuclear Plant (Source: NARA)

FP&L’s plan for Turkey Point, 25 miles south of Miami, contains many questionable assumptions.

To give just one example, its strategy to keep the two reactors cool if there is a total loss of electrical power (both offsite and on-site back-up power) includes initially drawing water from two water supply tanks (so-called condensate storage tanks), running the water through the reactors’ steam generators, and dumping the steam that is produced by the heat of the nuclear fuel in the reactor cores into the atmosphere (when the plant is operating, the steam is used to generate electricity).

But here’s the rub: These tanks were not designed to withstand objects thrown about by the high winds occurring during tornadoes or hurricanes.

Nevertheless, FP&L assumed—and the NRC accepted—that at least one of the two tanks on site would withstand any hurricane. They argued that this was a reasonable assumption because the two tanks are separated by a few hundred feet and there are structures between them. There seems to be a degree of wishful thinking at work here. If both tanks were damaged, the challenges in keeping the cores cool would be far greater.
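To make concrete why the tanks matter, consider a rough endurance estimate: decay heat from the shutdown cores boils off water at a rate set by the latent heat of vaporization, and a tank’s volume divided by that boil-off rate bounds how long this cooling mode can run before another water source is needed. Every number in the sketch below is a hypothetical placeholder, not a Turkey Point design value.

```python
# Rough feed-and-bleed endurance estimate. All values are hypothetical
# placeholders, not Turkey Point design numbers.
decay_heat_w = 25e6            # assumed decay heat shortly after shutdown
latent_heat_j_per_kg = 1.5e6   # approx. latent heat at steam generator pressure
tank_volume_gal = 250_000      # assumed condensate storage tank volume

GAL_PER_KG = 0.264             # gallons per kilogram of water

boiloff_kg_per_s = decay_heat_w / latent_heat_j_per_kg
boiloff_gal_per_min = boiloff_kg_per_s * GAL_PER_KG * 60
hours_per_tank = tank_volume_gal / boiloff_gal_per_min / 60

print(f"~{boiloff_gal_per_min:.0f} gal/min boil-off; "
      f"one tank lasts ~{hours_per_tank:.0f} hours")  # ~264 gal/min, ~16 hours
```

Under these assumed numbers, losing one of the two tanks roughly halves the time available before operators must line up another water source.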

Also, to deal with prolonged station blackouts—when both offsite and onsite back-up power is lost—the Turkey Point plan assumes that offsite assistance would be available after five days. The nuclear industry has set up two “National SAFER Response Centers,” one in Memphis, Tennessee and the other in Phoenix, Arizona. Each one contains additional emergency equipment and supplies to supplement those that each reactor owner is required to have on site. The NRC requires that every plant in the country have an agreement with one of the SAFER centers to provide equipment and assistance should it be needed.

But the functioning of this system depends on the ability of the SAFER centers to deliver the equipment in a timely manner, which might not be possible if there were a widespread and prolonged natural disaster.

Turkey Point’s plan requires that deliveries from the Memphis SAFER center be shipped to Miami International Airport and then hauled (if the roads are clear) to the site or to the Homestead Air Reserve Base and taken to the site via helicopter. But it doesn’t take too great a stretch of the imagination, given the potential impact of a massive storm like Irma, to see where this plan could go badly wrong. And looking at the current track of the storm, the Memphis SAFER center itself could well be in its path, causing problems at the shipping end as well as the receiving end.

Even if the Turkey Point plan were effective, it is not clear how much of it has been put into place on the ground yet. At the end of June, the plant reported to the NRC that it needed to make ten modifications to address the risk of storm surges that could exceed the flood level that the plant was originally designed to withstand.

But it isn’t clear how many of those modifications have been completed yet. And the NRC’s first inspection of the post-Fukushima measures at Turkey Point is not even scheduled until March 2018. So at this time all the public has to rely on is an assumption that FP&L has implemented the plan completely and correctly.

With one assumption piled upon another, it is very hard for observers to assess how prepared Turkey Point really is to deal with superstorms. Hopefully, the plant will pass the Irma test, but the NRC will need to reevaluate whether its new requirements can adequately address the potential for more severe storms in the future.

Recovery After Hurricane Harvey: Will There Be Justice for All?

UCS Blog - The Equation (text only) -

Photo: Coast Guard News/CC BY-NC-ND 2.0 (Flickr)

What happens to Houston after the media coverage storm subsides, when the country has moved on from the reality that is the aftermath of Hurricane Harvey? Will the people of Houston, who will be affected by this devastation financially and emotionally for years to come, soon become just yesterday’s headline? I would hope not. But recent history shows we should be concerned.

Hurricane Harvey dumped 33 trillion gallons of water (nearly double the volume of the entire Chesapeake Bay), roughly 275 trillion pounds, onto Houston and surrounding areas. Some media outlets called Hurricane Harvey an “equal opportunity” disaster, meaning it negatively affected both the rich and the poor. Equal opportunity, but is it? Time will tell. What will happen when the less financially secure, mostly minority communities try to rebuild? Will they still receive equal opportunities to recover? Will the air pollution and contaminants from hazardous facilities have a disproportionate long-term effect on the communities of color living in closer proximity to toxic sites?
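That pounds figure follows directly from the weight of fresh water, roughly 8.34 pounds per gallon; a quick check of the conversion (the 33-trillion-gallon total is the reported estimate, the conversion factor is standard):

```python
# Check the gallons-to-pounds conversion for Harvey's rainfall estimate.
gallons = 33e12          # 33 trillion gallons (reported estimate)
lb_per_gallon = 8.34     # approximate weight of fresh water

pounds = gallons * lb_per_gallon
print(f"{pounds / 1e12:.0f} trillion pounds")  # -> 275 trillion pounds
```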

I’m skeptical because I know the history. Last year I did a report looking at the disproportionate impacts of chemical risk and toxic chemical exposure in four Houston neighborhoods: Harrisburg/Manchester and Galena Park in east Houston, along the highly industrialized ship channel, and Bellaire and West Oaks/Eldridge in more affluent west Houston. The study found that 90 percent of the population in Harrisburg/Manchester and almost 40 percent of the population of Galena Park lives within one mile of a facility covered by the EPA’s Risk Management Program (RMP), compared to less than 10 and less than 15 percent of Bellaire and West Oaks/Eldridge residents, respectively. And far more accidents have occurred at the facilities in and around the east Houston neighborhoods.

As for the health impacts: residents of the Harrisburg/Manchester community have a 24-30 percent higher cancer risk, and those of Galena Park a 30-36 percent higher risk, compared to Bellaire and West Oaks/Eldridge, respectively. The potential for residents to suffer from respiratory illnesses was 24 percent higher in Harrisburg/Manchester and 43 percent higher in Galena Park than in Bellaire and West Oaks/Eldridge, respectively. It also bears noting that 97 percent and 86 percent of the respective east Houston communities’ residents are people of color, and the communities have up to ten times more poverty than the two in west Houston. Communities like Manchester have long been denied their piece of the pie; they need to get their share of Harvey recovery resources.

And already we are seeing evidence that Harvey’s impacts might not be so equal.

To quote my colleague Juan Declet-Barreto: “The communities that have been living around those facilities have been saying for years that the regulatory frameworks to protect people from the negative aspects of the petrochemical industry are not adequate.” The unprecedented rainfall and subsequent floodwaters were contaminated with waste from nearby toxic Superfund sites, leaving people exposed to a cocktail of harmful pollutants. The havoc wrought on Houston by Harvey was intensified by the leaking of more than 1 million pounds of toxic pollutants from refineries and chemical facilities, including the known carcinogens benzene and 1,3-butadiene and the respiratory irritants hydrogen sulfide, sulfur dioxide, and xylene. Hurricane survivors spoke of harsh fumes burning their throats and eyes, making it difficult to breathe under already difficult circumstances.

We knew better

While we certainly cannot stop hurricanes from hurtling in uninvited, we can mitigate the severity of their consequences. For instance, the explosions and fires at the Arkema facility could have been prevented with safer alternatives, and workers and communities could have had more information and better coordination with emergency responders. However, just this past January, amendments to strengthen the Risk Management Program, the oversight program designed to set safeguards on chemical facilities to protect public health and safety, were delayed until February 19, 2019. My colleagues and I delivered comments at the public hearing the EPA held in April on the rule to delay the RMP amendments. If certain communities are being hit harder and their cries for help have yet to be heard, is it fair to say this was an equal opportunity disaster?

Broad impacts

It isn’t just Manchester that is likely to experience disparate impacts from a hurricane. Extreme weather events will continue; warming sea surface temperatures are a key ingredient in the storm recipe. Floodwaters have not completely cleared, but media coverage is already being diverted to the next big storm. In the week since Harvey descended on Houston and surrounding areas, three more hurricanes have formed: Irma, Jose, and Katia. Harvey’s five-day deluge had all but ended before Caribbean islands were bracing for Hurricane Irma, which struck the island of Barbuda hard on Wednesday. More than 90 percent of the structures on the small, bucolic island (home to 1,800 people) are said to have sustained major damage, according to the Prime Minister of Antigua and Barbuda, Gaston Browne.

Irma is currently tearing through the Caribbean, leaving many in Puerto Rico without power, and possibly hitting Florida this weekend. Puerto Rico is, like Houston, laden with RMP facilities that can spew toxic fumes into the air and with Superfund sites that could potentially flood and contaminate drinking water.

EPA Brownfields, RMP, and Superfund program sites and FEMA-designated flood risk zones in Puerto Rico.

What’s next

For the last week, coverage detailing every aspect and angle of the situation in Houston has inundated news outlets, shining a spotlight on issues that are usually swept under the rug and ignored. I expect similar coverage for Irma. We mustn’t let this attention die down. We must demand a just recovery from Barbuda to the Buffalo Bayou.

UCS Experts’ View of Risk and Preparedness as the Impacts of Hurricanes Harvey and Irma Mount

UCS Blog - The Equation (text only) -

A GOES satellite shows Hurricane Irma, center, and Hurricane Jose, right, in the Atlantic Ocean, and Hurricane Katia in the Gulf of Mexico. U.S. Navy (Flickr).

We’ve witnessed the destruction done by Hurricane Harvey, and now, less than two weeks later, with clean-up in Texas and Louisiana scarcely underway, we see the path of Caribbean devastation Hurricane Irma is leaving as it heads toward the mainland US. The National Hurricane Center (NHC) is warning that Irma will bring dangerous storm surge to coastal Florida, and heavy rain and life-threatening flooding from Florida to North Carolina.

With preparedness efforts being marshalled to the Southeast US and specific risks coming into better focus, we offer a composite of what our team, experts at the intersection of science, policy, and social equity, are seeing.

The science snapshot: a punishing hurricane season

Lead author: Astrid Caldas, Senior Climate Scientist

As Hurricane Irma approaches Florida, it will pass over extremely warm water that could provide fuel for an already strong storm.

What conditions are fueling a punishing hurricane season?

We’re in the midst of a terrifying storm season. Back on May 25, the National Hurricane Center released its annual Atlantic Hurricane Season Outlook for 2017, which predicted a 45 percent chance of an above-normal season. On August 9, an update was published that forecast a more active season than previously predicted, with a 60 percent chance of an above-normal season. One of the reasons behind the update: warm ocean water temperatures.

Oceans have absorbed about 93 percent of global warming to date, and all that warmth can intensify hurricanes and increase the amount of rain they deliver. During the northern hemisphere’s warm months, sea surface temperatures rise, creating that unique hurricane fuel. And we are certainly seeing the forecast materialize with two (likely three) back-to-back major hurricanes.

Harvey underwent a very rapid strengthening from tropical storm to category 4 hurricane, moving over waters that were 2.7-7.2°F above average. The record rain it brought, over 50 inches in places, made the National Weather Service create new colors for its precipitation maps to show the unprecedented amounts. Irma quickly became a category 5 hurricane earlier this week, feeding off abnormally warm waters along its path across the northeast Caribbean. Its winds of more than 180 mph held for the longest time of any storm on record, setting a record for an Atlantic hurricane as the strongest ever recorded. And according to NOAA, the sea surface temperature where Irma was located as it thrashed Puerto Rico is “hot enough to sustain a category 5 storm.”

If Irma strikes the US coast as a Category 4 storm, it will be the first time in over a century that an Atlantic hurricane season produced two storms of that strength that made landfall on the US mainland.

The policy picture: Harvey requires staggering resources — Irma will too 

Lead Authors: Rachel Cleetus, Lead Economist and Climate Policy Manager, and Shana Udvardy, Climate Preparedness Specialist.

After Hurricane Ike in Texas. Photo: U.S. Air Force/Staff Sgt. James L. Harper Jr.

Major disasters like Harvey and Irma call for robust federal leadership and significant resources. Recent news reports indicate that FEMA, the lead federal agency involved in disaster response, is running alarmingly low on funding in the wake of a spate of recent disasters. With the damaging impacts of Irma (and potentially Jose and/or the next storm, as well as other disasters such as the current wildfires in the West) adding up, Congress needs to act quickly to ensure the agency has adequate resources to help protect communities and aid in their recovery.

And it’s not just FEMA—HUD, USACE, EPA and DOT are among the many federal agencies that work in close coordination with state and local authorities to get people back on their feet safely and ensure the repair and rebuilding of critical infrastructure. They too need to be well funded right now.

An important way to limit harms to people and the economy is to invest in preparedness well ahead of a disaster, which is why the Trump administration’s attempts to cut the budgets of agencies like FEMA, HUD, NOAA, and NASA, and the President’s Executive Order earlier this year rolling back the federal flood risk management standards, were so short-sighted and frankly dangerous. (No, Secretary Ben Carson, HUD cannot deliver on its mission properly while understaffed and with the $6 billion cut in its budget that President Trump had proposed.)

Even as Congress works to provide immediate disaster relief for those who are suffering the impacts of Harvey, Irma and other disasters, it’s vital that they look ahead to provide the resources and policies we need to help protect people from future catastrophes.

Our colleague Rob points out five things Congress should do right away. And we’ll add a sixth: It’s high time for Congress and the administration to stop denying basic scientific facts about climate change and its worsening impacts on Americans and work together on solutions that will help protect us.

Profile of the next storm: How Irma is different from Harvey

Lead Author: Kristy Dahl, Climate Scientist

Irma is record-breaking in ways that are distinct from Harvey and, depending on its path, has the potential to bring catastrophic storm surge to southeastern coastal states. The National Weather Service has already issued storm surge watches for both coasts of Florida south of Cape Coral and West Palm Beach, and initial storm surge estimates exceed 9 feet in parts of the Everglades.

As Jeff Masters of Weather Underground points out, though, the northern coast of Florida, Georgia, and southern South Carolina, because of its concave shape, could experience much larger storm surges. Based on modeling of hypothetical storms (i.e., not specifically Irma), a Category 3 storm could cause 17 to 23 feet of storm surge along the northern Florida and Georgia coasts. That would put Irma into Katrina’s league.

In addition, Irma’s winds and waves could greatly impact coastal areas. If Irma approaches Florida as a Category 4 storm, its sustained winds could range from 130-156 mph, though gusts could be higher. And because the relatively deep water off the coast of Miami allows large waves to develop, the coastline could experience heavy wave damage.

Residents in these states should monitor NWS’s potential storm surge flooding maps and heed local warnings.

A picture of mobilization, with Harvey’s victims to thank

Lead Author: Erika Spanger-Siegfried, Senior Analyst in the Climate and Energy Program 

While it raged over Texas and later Louisiana, Hurricane Harvey brought much of the region, including the nation’s fourth-largest city, to its knees, displacing over a million people and leaving more than 300,000 without power. The slow-moving catastrophe gripped the nation. Some rooftop calls for rescue were answered, with stories of heroism and care emerging; others went unanswered for days, while suffering deepened. We are shaken. Another Katrina, some say.

So when a new storm so quickly formed into Irma in the tropical Atlantic, we met it with vastly heightened concern: not the anxiety we can whip ourselves into over, say, a heavy snow forecast, but open-eyed, informed, rational dread.

Irma utterly devastated several Caribbean islands, destroying 90 percent of Barbuda’s buildings and leaving half of its population homeless, according to its prime minister. Next it is going to turn north and head toward Florida and the Southeast U.S. with potentially catastrophic impacts. Whatever specific path it follows, it is big enough to impact the entire state of Florida.

The risks aren’t lost on Southeast officials, and the response in these states has been swift and serious. Orders will change by the time you read this, but as early as Monday, September 4th, Florida Governor Rick Scott had declared a state of emergency, and by Thursday, the governors of Georgia, South Carolina, and North Carolina had as well. As of Thursday afternoon, mandatory local evacuation orders had been issued from the Florida Keys through locations like Savannah, Georgia. All 7,000 members of Florida’s National Guard, along with Guard members from other states, were being mobilized, and FEMA was mobilizing teams and resources.

Harvey, as the first major storm of the season, provided a service by shaping and elevating the level of concern across the nation. The attention Harvey focused on the very real threat of the 2017 hurricane season hopefully means that more people in the coastal southeast will have retreated to safety when Irma strikes. Thank you, Texas, for your sacrifice.

Snapshot of a State: This is Not Hurricane Andrew’s Florida

Lead Authors: Erika Spanger-Siegfried, Senior Analyst in the Climate and Energy Program, Kristy Dahl, Climate Scientist, Edwin Lyman, Senior Scientist, Global Security Program, and Steve Clemmer, Director of Energy Research and Analysis. 

Miami. Photo: Gunther Hagleitner (CC-BY-2.0, Flickr)

Much has changed in the 25 years since Hurricane Andrew struck Florida. Like Houston, which has added more than 800,000 new metro area residents since 2010, South Florida’s population has grown rapidly in recent decades. Many residents are new arrivals or were born after 1992 and therefore don’t have a frame of reference for the danger a major hurricane can bring.

Miami-Dade County grew nearly 8 percent in just the last five years, to 2.7 million people. But it is distinct from other urban areas in the millions of residents who are clustered so close to a coast that is, in turn, so close to sea level. Where there are people, there is housing: about 43,000 condos have been built in the last 15 years just in the strip of Miami-Dade County between I-95 and the Atlantic.

And there is infrastructure to support them. Our just-released map of energy and industrial infrastructure exposed to Harvey flooding illustrates how vast the public health, environmental and economic risks of hurricane-impacted infrastructure can be.

UCS’s 2015 Lights Out? analysis showed that storm surge from a Category 3 hurricane could expose nearly 40 electrical substations in Miami and southeastern Florida to flooding. Widespread and long-lasting power outages can occur when only a few pieces of critical electrical infrastructure are damaged, as we have seen happen with Hurricanes Harvey, Sandy, Katrina, and Rita. A stronger storm, along with high winds and heavy rainfall, could add to this flooding risk and cause significant damage to more substations and other electricity infrastructure, including power lines and power plants located in the area. The resulting potential for prolonged outages could present grave dangers for critical infrastructure dependent on reliable electricity supply, like hospitals, police and fire departments, and communication networks.

Despite the claims of Governor Rick Scott and FP&L, Irma could pose a threat to Florida’s two nuclear plants, Turkey Point and St. Lucie. Even after the plants are shut down in advance of the storm, the fuel in the reactor cores and the spent nuclear fuel in storage pools remain hot and highly radioactive, and must be continuously cooled to prevent them from overheating and melting.

The March 2011 Fukushima nuclear plant disaster in Japan occurred because massive flooding of the reactor site disabled on-site backup power supplies and shorted out electrical distribution systems. After Fukushima, the U.S. Nuclear Regulatory Commission ordered nuclear plants to reevaluate their vulnerabilities to natural disasters, and most plants in the U.S. found that they were subject to floods more severe than those they were originally designed to withstand. Florida’s nuclear plants were no exception. But the industry’s response to this new information has been too slow. Even today, more than six years after Fukushima, nuclear plant owners have not fully implemented measures to mitigate the new threats, and vulnerabilities will remain even after those measures are in place.

Looking at the region more broadly, the coastline between Key West, Florida, and Beaufort, South Carolina, is home to more than a dozen military installations, many of which could see more than 50 percent of their area flood in a Category 3 storm.

The big picture: People, once again, in the jaws of a storm

Lead Authors: Nicole Hernandez-Hammer, Climate Scientist and Community Advocate, and Rachel Cleetus, Lead Economist and Climate Policy Manager.

While thousands of people are already evacuating Florida, getting out of harm’s way can often be easier said than done. The Miami native on our team is hearing from folks on the ground that people are trying to find lodging out of reach of the storm, but many places within a day’s drive are already booked. Many are staying with family or finding hotels farther out of state. People are being forced to make tough decisions about when to leave and what to take along, and about how to safely evacuate children, the elderly, the sick, and others who may be vulnerable. Of Miami’s emergency shelters, only one currently allows pets. And we’re hearing from people who want to leave but can’t: they don’t have cars and were told the Greyhound buses are full. They are forced to remain and ride out the storm.

Evacuating also means leaving work, which is especially hard for those who live paycheck to paycheck: can they get the time off, and if so, can they afford it? Some businesses, like grocery stores and banks, want to stay open to provide supplies for storm preparation, which helps those remaining but hinders workers who wish to leave. Day laborers and hourly-wage earners can feel pressure to make as much money as possible in advance of the storm so they can afford either to evacuate or to buy supplies to shelter in place. All the while, South Floridians are dealing with the uncertainty of whether they’ll have a job, or even a home, after the storm passes.

In this moment, undocumented immigrants face compounded fears about seeking shelter and relying on federal authorities for help. News reports indicate that these fears are affecting immigrant communities in Houston. South Florida is home to approximately 450,000 undocumented immigrants who may be facing the same fears. FEMA has tried to allay concerns about ID checks by immigration officials at shelters, but understandably many have serious misgivings about an administration that so recently decided to end the DACA program, has tried to withhold funding from sanctuary cities, and has frequently employed blatantly racist and xenophobic rhetoric. These politics are an obstacle to people finding safety, and a disgrace.

In addition, our research and research from others show that disasters like these have a disproportionate impact on disadvantaged communities and communities of color. For example, a recent report of ours highlighted the plight of Opa-Locka and Hialeah, two low-income communities on the western edge of Miami-Dade County that have struggled for decades with the impacts of storms and flooding.

As with Harvey, ordinary people are rising to the occasion and doing their best to prepare for Irma. A grassroots effort to help with storm preparations is being coordinated by the Miami Climate Alliance, the CLEO Institute and the New Florida Majority, and an informal emergency operations center has sprung up in Miami-Dade and Broward counties to help low income, disabled and elderly people.

How to move forward? Building a legacy of hard-won lessons

Lead Authors: Erika Spanger-Siegfried, Senior Analyst in the Climate and Energy Program, and Rachel Cleetus, Lead Economist and Climate Policy Manager.

The devastating hurricanes we endure as a nation, as states, as communities — we have learned something profoundly important from each. From Andrew, we witnessed the painful pace of disaster recovery and learned how to better manage such processes and how to build stronger. In Katrina, we encountered our own hubris in the feebleness of our man-made defenses and in how many lives we had put at risk. With Sandy, we acknowledged the role of climate change and started learning to build and rebuild for a climate change future. And through all these storms, we’ve witnessed the injustices that some communities—particularly those that are economically and politically disempowered— have faced, where recovery has come much more slowly, if at all.

With Harvey and Irma, there will be smaller lessons: we knew the way we were planning, developing, and building was creating risk, and we kept doing it anyway. And there will be big lessons too. Not only have we built unnecessary risk into our built environment, we’ve added risk to the environment itself, with climate change exacerbating the threat each time hurricane season arrives, and there is no putting this angry genie back in the bottle.

A safety message on the base housing marquee sign reminds residents of the coming Hurricane Irma at Coast Guard Sector San Juan, Puerto Rico, Sept. 6, 2017. U.S. Coast Guard photo by Petty Officer 2nd Class Jonathan Lally.

In this moment between these massive storms, when we would each eagerly wish away a world where such fearsome scenarios are possible and so many people are under threat, we’ll instead need to face, not just the next storm, but the new reality. When these storms finally pass and people are safe, we’ll need to take stock with wiser eyes.

First, we’ll need to remember that recovery efforts will take time and need money. Long after these storms drop out of the headlines, we’ve got to ensure policymakers are focused on the needs of affected communities. And we’ll need to demand better of our leaders, whose failings look particularly stark right now. If you’re not helping to turn the tide toward a more resilient future, you’re not a leader for our times.

In the meantime, our focus remains on our brothers and sisters in the path of the storm.  If you can donate, UCS recommends donations to local groups working on hurricane response and on longer-term climate education and preparedness, such as the Miami Climate Alliance and the CLEO Institute, in addition to large groups like the Red Cross or Direct Relief.

 

Strategic missile defense failures: who’s to blame?

UCS Blog - All Things Nuclear (text only) -

In Wednesday’s Washington Post, columnist Marc Thiessen blames Democrats’ historic skepticism about missile defense for the poor state of these systems today, but that’s a misrepresentation of the program’s history.

What is the poor state of the Ground-based Midcourse Defense (GMD) system due to?

In our 2016 report, we looked back at the history of the development of the GMD system since its origins in 2002.

The Bush administration exempted the missile defense development program from the normal oversight and accountability processes required of other major military systems, with the goal of fielding the GMD system quickly. These exemptions allowed the Pentagon to cut engineering cycles short, and the haste with which the system was fielded ensured that poorly tested equipment would be deployed.

Today this poorly tested equipment makes up key parts of the fielded GMD system. Nearly all of the GMD interceptors—the core of the GMD system’s defensive capability today—were fielded before their design had been successfully intercept-tested even once.

This flawed approach, not a lack of money, is responsible for most of the problems with the system. The GMD system’s test record has been notably poor, with just nine successful intercepts in 18 tries, despite the fact that the tests are heavily scripted for success. Identifying the causes of these failures and fixing the already-fielded interceptors has cost considerable time and money, and the GMD system continues to have major schedule and cost overruns.

Yet, it is not just the execution of the program that has been problematic, it is the approach to the task of hitting a missile with a missile. A scathing 2012 National Academy of Sciences study called the GMD system “deficient” with respect to all of the study’s fundamental principles for a cost-effective missile defense, and recommended a complete overhaul of the interceptors, sensors, and concept of operations.

Insufficient oversight has not only exacerbated the GMD system’s problems, but has obscured their full extent. Obama administration attempts to improve oversight and accountability without bringing missile defense under the normal processes have led to ongoing problems. These include projects that have been started without sufficient vetting and later canceled, and components that are being fielded based on imposed deadlines rather than technical maturity—in some cases with known flaws.

Build more or fix the system?

Is following the Bush plan the right idea? The full complement of 44 interceptors envisioned by the Bush plan will be fielded by the end of this year. Yet Pentagon testing officials assess that the GMD system has not yet demonstrated an operationally useful capability.

The Missile Defense Agency’s (MDA) decision to build and field additional untested interceptors rather than systematically fix all known flaws also ignores specific advice on how best to balance a sense of urgency with the responsibility to build a cost-effective and high-quality system. A top-level recommendation of the 2008 “Welch report” (produced by a panel headed by retired Air Force Chief of Staff General Larry Welch) on missile defense concerned this balance:

For mid-course intercept systems, the balance between qualitative improvements and deploying more of existing capabilities should be strongly in favor of qualitative improvements. Without such a focus, the current system capabilities will become obsolete regardless of the numbers of interceptors deployed.

For the GMD system, however, the balance has been strongly in favor of building more of the existing capabilities, presumably to provide reassurance domestically and to allies. Rushing minimally tested hardware into the field may give the appearance of a defense, but it does not reliably protect US cities.

Did the US abandon promising programs prematurely?

Thiessen suggests that missile defense programs have been abandoned prematurely. In reality, this was the overdue discarding of wasteful, unworkable programs.

As for the three programs Thiessen mentions, the Airborne Laser, the Kinetic Energy Interceptor, and the Multiple Kill Vehicle: Secretary of Defense Robert Gates strongly criticized them (and their supporters) in the New York Times in 2009:

I have found since taking this post that when it comes to missile defense, some hold a view bordering on theology that regards any change of plans or any cancellation of a program as abandonment or even breaking faith. I encountered this in the debate over the Defense Department’s budget for the fiscal year 2010 when I ended three programs: the airborne laser, the multiple-kill vehicle, and the kinetic energy interceptor. All were plainly unworkable, prohibitively expensive and could never be practically deployed—but had nonetheless acquired a devoted following.

In fact, Congress contributes to the rabbit hole of wasteful programs in two ways. First, it does not provide strict enough oversight of Pentagon proposals: it is neither sufficiently skeptical, nor does it require robust analyses of alternatives up front, with in-depth assessment of feasibility, costs, and risks.

Second, the weakened oversight system and the politicized nature of missile defense leave strategic missile defense vulnerable to missile defense advocates in Congress adding their own unnecessary or unvetted projects to the missile defense budget. Indeed, several times Congress has generated new and unasked-for efforts, such as a proposal for a third continental interceptor site on the US East Coast. Despite having no validated requirement for such a site, and in spite of testimony from the MDA director that other priorities for improving strategic missile defense are more pressing, congressional advocates of an East Coast site have included mandates in budget legislation intended to fast-track the process for building a third site and have added unasked-for money to the budget for it each year since 2012.

Congress has also pressed for a return to discarded ideas, such as the Bush plan for land-based Ground Based Interceptors in Eastern Europe and space-based boost-phase interceptors. Congress added money to the fiscal year 2016 budget to study the feasibility of a space-based boost-phase missile defense layer, despite having received, several years earlier, the advice it solicited from the National Academy of Sciences on this very question. The NAS recommendation on space-based boost-phase missile defense, which it estimated would cost at least $300 billion for a limited capability, was unequivocal:

The total life-cycle cost of placing and sustaining the [space-based boost-phase] constellation in orbit is at least an order of magnitude greater than that of any other alternative and impractical for that reason alone.

Hurricane Season’s Impact at the Pump and Why Fuel Efficiency Matters

UCS Blog - The Equation (text only) -

Texas Army National Guardsmen assess damage to a gas station in Victoria, Texas, Aug. 26, 2017, caused by Hurricane Harvey. Army National Guard photo by Capt. Martha Nigrelle.

Gas prices are spiking. This week the EIA reported an increase in the average price of gasoline of 28 cents per gallon, with some states seeing increases of more than 40 cents. That’s the largest nationwide weekly gas price increase since Hurricane Katrina in 2005.

What’s 28 cents a gallon worth, you ask? More than $100 million a day, it turns out.

That’s bad, but it could be worse. Without vehicle fuel efficiency and emission standards that are currently in place, American drivers would be paying an average of $50 million more per day on fuel costs.

That’s right, vehicle standards that went into effect in 2011 are already saving Americans $50 million a day. By 2030, these same standards will deliver more than $300 million per day in fuel savings.
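For the curious, the arithmetic behind these figures is simple. Here is a minimal back-of-envelope sketch in Python, assuming roughly 9.3 million barrels of US gasoline consumption per day (an approximation of recent EIA figures) and an illustrative pump price of about $2.65 per gallon; both inputs are assumptions for illustration, not UCS’s formal methodology:

    # Rough daily cost of a 28-cent/gallon gasoline price spike.
    # Assumed inputs (approximate, for illustration only):
    BARRELS_PER_DAY = 9.3e6      # US gasoline consumption, EIA ballpark
    GALLONS_PER_BARREL = 42
    PRICE_SPIKE = 0.28           # dollars per gallon

    gallons_per_day = BARRELS_PER_DAY * GALLONS_PER_BARREL
    extra_cost = gallons_per_day * PRICE_SPIKE
    print(f"Extra spending: ${extra_cost / 1e6:.0f} million per day")  # ~$109M

    # The $50 million/day savings figure implies this much avoided gasoline use:
    PUMP_PRICE = 2.65            # dollars per gallon, assumed national average
    gallons_avoided = 50e6 / PUMP_PRICE
    print(f"Implied avoided use: {gallons_avoided / 1e6:.0f} million gallons/day")

At roughly 390 million gallons burned per day, a 28-cent spike costs drivers on the order of $109 million daily, consistent with the “more than $100 million” figure above.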

Source: U.S. Energy Information Administration, Gasoline and Diesel Fuel Update.

Fuel efficiency is insurance against volatile gas prices

Harvey. Irma. Each of these devastating storms reminds those in its path of the importance of having insurance to protect their families and their property. Buying flood insurance if you live in a flood-prone area is a prudent economic decision. The same goes for buying other types of insurance, like health insurance and car insurance, that protect you, and your household budget, from unforeseen events.

Making our cars and trucks more efficient provides insurance against volatile gas prices. More efficient vehicles mean less economic pain when oil prices spike.

But it’s even better than that. Normal insurance only pays off when disaster strikes; more efficient vehicles save on fuel costs no matter what the pump price. The fuel economy and emissions standards currently on the books through 2025 are expected to cut fuel costs by 40 percent when fully implemented. That means savings whether gas costs $2 per gallon or $5 per gallon.

The average household in the U.S. has already saved about $250 since 2011 because of more efficient new vehicles. And every state in the nation stands to benefit. See what the savings are in your state from federal efficiency and emissions standards.

By 2030, those total household savings are expected to rise to $2,800, provided the Trump administration allows the standards to be implemented as currently written.

Ironically, while the country absorbs the largest price spike at the pump in recent years, the EPA held its first public hearing on reconsidering the federal emissions and efficiency standards for vehicles that are on the books through 2025, the same standards that are saving consumers billions of dollars at the pump.

This is like calling your insurance agent to reduce your homeowner’s coverage while your house is on fire.

Despite the irony, and the fact that the agencies have shown the industry can achieve and even exceed the existing standards, Administrator Pruitt’s EPA is moving ahead with potentially weakening the vehicle standards.

My colleagues Dave Cooke and Richard Ezike testified at the hearing on Wednesday. They weren’t the only ones making the case for less polluting, more efficient cars and trucks: dozens of other supporters, from concerned moms to ministers, veterans, and unionized laborers, called for maintaining strong standards. Voices in support dominated the more than 100 testimonies.

UCS will continue to use the best available science to defend the standards and ensure consumers have more fuel efficient, lower polluting vehicles in every class to choose from.

If you agree that keeping our vehicle efficiency and emissions standards in place makes sense, whether to protect against future gas price spikes or for all the other health, climate, and economic benefits of reducing our oil use, now is the time to speak up.

Flooded by Hurricane Harvey: New Map Shows Energy, Industrial, and Superfund Sites

UCS Blog - The Equation (text only) -

A new UCS analysis shows that more than 650 energy and industrial facilities may have been exposed to Hurricane Harvey’s floodwaters.

Harvey’s unprecedented rainfall on the Texas and Louisiana coasts has exacted a huge toll on the region’s residents. In the weeks and months ahead, it is not only homes that will need to be assessed for flood damage and repaired, but also hundreds of facilities integral to the region’s economy and infrastructure.

To highlight these facilities, the Union of Concerned Scientists has developed an interactive tool showing affected sites. The tool relies on satellite data analyzed by the Dartmouth Flood Observatory to map the extent of Harvey’s floodwaters, and facility-level data from the US Energy Information Administration and the Environmental Protection Agency.

The tool includes several types of energy infrastructure (refineries, LNG import/export and petroleum product terminals, power plants, and natural gas processing plants), as well as wastewater treatment plants and three types of chemical facilities identified by the EPA (Toxic Release Inventory sites, Risk Management Plan sites, and Superfund sites).
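For readers curious about the mechanics, the core operation behind a map like this is a spatial overlay: intersecting facility locations with flood-extent polygons. Below is a minimal sketch of that kind of overlay using the open-source geopandas library; the file names and layers are hypothetical stand-ins, not the actual UCS data pipeline:

    import geopandas as gpd

    # Flood-extent polygons (e.g., vectorized from satellite-derived flood maps)
    flood = gpd.read_file("harvey_flood_extent.shp")        # hypothetical file

    # Facility locations as points (e.g., EPA-listed sites with coordinates)
    facilities = gpd.read_file("facilities.geojson")        # hypothetical file
    facilities = facilities.to_crs(flood.crs)               # match projections

    # Spatial join: keep facilities that fall within a flooded area
    exposed = gpd.sjoin(facilities, flood, how="inner", predicate="intersects")
    print(len(exposed), "facilities potentially exposed to flooding")

Because both the flood extent and the facility coordinates carry uncertainty, counts produced this way are estimates, a caveat discussed under “About the data” below.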

Chemical facilities potentially exposed to flooding

Hurricane Harvey may have exposed more than 160 of EPA’s Toxic Release Inventory sites, 7 Superfund sites, and 30 facilities registered with EPA’s Risk Management Program to flooding.

The Gulf Coast is home to a vast chemical industry. The EPA’s Toxic Release Inventory (TRI) program lists over 4,500 facilities in Texas and Louisiana alone that are required to report chemical releases to the environment.

Before the storm hit, many facilities shut down preemptively, releasing toxic chemicals in the process. In the wake of the storm, explosions at Arkema’s Crosby facility highlighted the risks that flooding and power failures pose to the region’s chemical facilities and, by extension, the health of the surrounding population.

In the Houston area, low-income communities and communities of color are disproportionately exposed to toxic chemicals. Our analysis shows that over 160 TRI facilities, 7 Superfund sites, and over 30 facilities registered with EPA’s Risk Management Program were potentially exposed to floodwaters. Though most of the impacts from this exposure remain unknown, the risks include compromised facilities and the release of toxins into the air and receding floodwaters.

Energy infrastructure

In the week since Hurricane Harvey reached the Texas coast, disruptions to the region’s energy infrastructure have caused gas prices to rise nationally by more than 20 percent.

Our analysis finds that more than 40 energy facilities may have been exposed to flooding, potentially contributing to the fluctuations in gas prices around the country. As of yesterday, the EIA reports that several refineries have resumed operations while others are operating at reduced capacity.

More than 40 energy facilities–including power plants and refineries–may have been exposed to Hurricane Harvey’s floodwaters.

Wastewater treatment

Wastewater treatment facilities comprise the bulk of the facilities (nearly 430) that we identify as potentially exposed to flooding. The EPA is monitoring the quality and functionality of water systems throughout the region and reported that more than half of the wastewater treatment plants in the area were fully operational as of September 3.

With floodwaters widely reported to be contaminated with toxic chemicals and potent bacteria, wastewater treatment facilities are likely contending with both facility-level flooding and a heightened need to ensure that the water they discharge is safely treated.

Nearly 430 wastewater treatment facilities may have been exposed to flooding during Hurricane Harvey.

About the data

It is important to note that the satellite data showing flood extent are still being updated by the Dartmouth Flood Observatory, and that we will continue to get a better handle on the extent and depth of flooding as additional data become available from sources such as USGS high-water marks.

As of Tuesday, DFO Director Robert Brakenridge stated in an email that they believe the data to be fairly complete, including for the Houston area, at a spatial resolution of 10 meters. Given uncertainties in the flood mapping as well as in the exact locations of each facility, it is possible that this map over- or underestimates the number of affected facilities. It is also possible that facilities, while in the flooded area, were protected from and unaffected by floodwaters.

Make Public Engagement a Professional Priority

UCS Blog - The Equation (text only) -

During graduate school, I believed my responsibility as a scientist during outreach events was to share my work with as many non-scientists as possible. I assumed that my extroverted personality, boundless enthusiasm, and booming voice guaranteed my success at public outreach. I never considered improving or diversifying my communication skills, nor did I value the unique perspective that I might bring to science.

Like so many others, I didn’t think until the November 2016 election about how I, the daughter of Indian immigrants from landlocked villages and modest means, came to study oceans and climate change. From this foundation, I gradually developed, and now pursue, two public engagement aims that often intersect:

1. Sharing how the observations I make in the lab and field percolate into the communities around me.

2. Elevating the concerns facing marginalized communities, especially within science.

These efforts do not always take the same form, nor are they easy to pursue—certain issues can be especially difficult to write about—but I have seen that sharing the painful stories of minority scientists increases the scientific community’s capacity for empathy, and that communicating stories of innovation and progress in the battle against climate change imbues optimism and facilitates action.

Outside of my current position as a technician at UC Davis’ Bodega Marine Laboratory, I work with a local organization dedicated to raising awareness about climate change and a national organization committed to talking about the issues confronting self-identifying women scientists. I also serve on the digital advisory board of a regional publication that is seeking to add diverse voices to conversations about natural science.

Public engagement is a scientist’s implicit responsibility and can be beneficial for the public and scientist alike

Public engagement is often seen as a low priority for academic scientists, and many do not feel compelled to take their research outside of academia. Common justifications include that developing resources for public engagement siphons time and energy from research, that misrepresentation in the media could damage reputations, or that institutions lack incentives for engagement. While these concerns are understandable, reserving our findings for our colleagues limits the impact of our work.

As scientists, we strive for intellectual products that improve and enhance our understanding of the world around us. Tools for effectively communicating to technical and lay audiences are not in opposition, nor are they as disparate as many may think; thoughtful, clear, and succinct communication tools are ubiquitously useful. By carefully considering audiences beyond our target journals and scientific societies, we create opportunities to develop unique collaborations that can result in the co-production of knowledge.

Effective public engagement is manifold, but requires experimentation

In this era of technology and social media, successful public engagement does not necessarily require face time (although you can use FaceTime or Skype A Scientist). Public outreach often encompasses classroom visits, laboratory open house events, and public talks/demonstrations. While personal interactions are inarguably priceless, these activities are generally eschewed in favor of research due to their high time commitment. This is where digital media can intervene.

During the era of MySpace, Friendster, and LiveJournal, the concept of ‘blogging’ emerged: an opportunity for anyone with an opinion and a keyboard to share it. While these ancestral social media sites have faded, blogging has been transformed into an opportunity to use our voices (and fingers!) to reach new audiences. Websites like Medium and WordPress make blogging accessible, and many website building and hosting services seamlessly integrate blogging into their schemata. The time commitment is dictated by the blogger and the topics they choose to communicate. Many academics will admit to initiating and then abandoning their blogs for this very reason, myself included.

Conversely, Facebook, Twitter, and Instagram—among many, many others—provide approachable, yet professional interfaces for casual and concise communication. While a short orientation may be required to acquaint yourself with these platforms, their rewards are bountiful. Through Twitter alone, my professional network has expanded geographically as well as across disciplines and industries (a Twitter interaction instigated this very blog post!). While I maintain a blog series with pie-in-the-sky long-term goals, I find that ephemeral, short-term social media interactions can sometimes be more professionally productive per unit of effort and therefore serve as an excellent gateway into public engagement.

Identify what motivates you to speak up and connect with your community

The November 2016 election was my catalyst for public engagement, but it has not been my sole motivator going forward. Blogging, in particular, has been an incredible learning experience, providing insight into the complexity of people and into the pressure that academia puts on those who don’t conform to its rigid framework.

Public engagement is not a part of my formal job description, but it is something that I make time for outside of my 40-hour work week. As scientists, we are driven by questions and certainly find our own work compelling. But we must unravel these complex questions and stories and find the thread that links us with our communities.

 

Priya Shukla is an ocean and climate scientist with the Bodega Ocean Acidification Research (BOAR) group based at UC Davis’ Bodega Marine Laboratory. She received her undergraduate degree in Environmental Science and Management at UC Davis and earned her Master’s in Ecology from San Diego State University. Priya uses science communication to bridge issues concerning social justice, rapid environmental change, and the scientific community. 

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.
