Combined UCS Blogs

40% growth? The Latest Electric Vehicle Sales Numbers Look Good

UCS Blog - The Equation (text only)

US electric vehicle (EV) sales are up 45% for the twelve-month period from July 2016 through June 2017, compared to the prior twelve-month period. What does that mean for the future?

As I’ve noted previously, the US EV market saw 32% annual growth over 2012-2016. This rate would, if continued, result in EVs being 10% of all new car sales in 2025.

For perspective on this target: according to UCS analysis, California’s Zero-Emission Vehicle (ZEV) program would result in about 8% of California’s vehicles being zero-emissions (mostly electric) by 2025. California leads the nation in EV market penetration by quite a bit. According to the International Council on Clean Transportation, nearly 4% of California’s light-duty vehicle sales in 2016 were EVs, compared to less than 1% for the country as a whole. And this was without major automakers Honda and Toyota offering a plug-in vehicle in that year. Sixteen cities in the state already see EVs exceeding 10% of vehicle sales.

California has achieved this through a mixture of policy, infrastructure, consumer awareness and interest (although the Northeast is not far behind on that count), and automaker efforts. Seen in that light, the entire country reaching 10% EV sales in 2025 would be pretty good.

But what if the market were actually hitting a “tipping point” such that this recent growth could continue? If a 40% growth rate could be sustained for the next six years, we would see EVs reach 10% of US vehicle sales in 2023, and possibly near 20% by 2025. Cost reductions from technology improvements and economies of scale, along with expanded charging infrastructure, would help sustain such growth rates.
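For readers who like to check the math, here's a minimal sketch of how those projections compound. It assumes EVs started at roughly 0.9% of US sales in 2016 and that total vehicle sales stay flat; both are simplifying assumptions for illustration, not figures reported above.

```python
# Compound-growth sketch of US EV market share.
# Assumptions (not from the post): EVs were ~0.9% of new US vehicle sales in
# 2016, total sales stay flat, and growth holds at a constant annual rate.

def projected_share(base_share, annual_growth, years):
    """Project a market share forward at a constant annual growth rate."""
    return base_share * (1 + annual_growth) ** years

BASE_SHARE_2016 = 0.009  # assumed share of US sales in 2016

for growth in (0.32, 0.40):
    share_2023 = projected_share(BASE_SHARE_2016, growth, 2023 - 2016)
    share_2025 = projected_share(BASE_SHARE_2016, growth, 2025 - 2016)
    print(f"{growth:.0%} annual growth: {share_2023:.1%} of sales in 2023, "
          f"{share_2025:.1%} in 2025")

# Prints roughly 6.3% / 10.9% for 32% growth and 9.5% / 18.6% for 40% growth,
# in line with the 10%-by-2025 and near-20%-by-2025 figures discussed above.
```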

What are people buying?

The Tesla Model S was the top seller both in June and year-to-date. This is an all-electric vehicle with a range of 249-335 miles, depending on the configuration (the 60 kWh versions, with ranges of 210-218 miles, were recently discontinued).

Figure 1: Tesla Model S. Source: tesla.com.

Plug-in hybrids are proving quite popular, as the #2 vehicle year-to-date is the Chevy Volt, and the #3 is the Prius Prime.

Figure 2: Chevy Volt. Source: chevrolet.com.

The Volt, with a 53-mile all-electric range in the 2017 model, is a well-established mainstay by the standards of this young market. It has been a consistent top seller since its introduction in December 2010.

Figure 3: Toyota Prius Prime. Source: toyota.com.

The Prius Prime is a new market entrant that was the May sales champion. It has a 25-mile electric-only range, so it could likely do most daily driving in all-electric mode if workplace charging were available (even a standard wall outlet would replenish the battery in 8 hours). Plug-in hybrids have a gasoline engine if needed for longer drives, but I’ve heard that drivers of these vehicles tend to keep their batteries topped off to do as much driving in electric mode as possible. If you don’t yet drive an EV, you might not realize the extent of the existing charging infrastructure, but it’s out there; Plugshare is a great resource.
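As a rough sanity check on that 8-hour figure, here's a back-of-the-envelope estimate. The battery size (~8.8 kWh, typical of a 25-mile plug-in hybrid), the 120 V / 12 A circuit, and the ~85% charging efficiency are all illustrative assumptions, not specifications from any particular model.

```python
# Back-of-the-envelope Level 1 (standard wall outlet) charging time.
# Assumptions (not from the post): ~8.8 kWh battery pack, a 120 V / 12 A
# household circuit, and ~85% charging efficiency.

BATTERY_KWH = 8.8
OUTLET_KW = 0.120 * 12      # 120 V * 12 A = ~1.44 kW
CHARGING_EFFICIENCY = 0.85  # rough allowance for charging losses

hours = BATTERY_KWH / (OUTLET_KW * CHARGING_EFFICIENCY)
print(f"Estimated full recharge from a standard outlet: {hours:.1f} hours")
# Prints ~7.2 hours, roughly consistent with the "about 8 hours" figure above.
```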

Tesla’s Model X crossover SUV is the #4 vehicle year-to-date, while Chevy’s new all-electric Bolt, with its 238-mile range, rounds out the top 5 (the Nissan LEAF is just behind the Bolt). The top five models make up just over half the market, with a long list of other products also selling in the United States.

What’s missing?

Given the market strength of the newcomer Prius Prime, what other new vehicles might take a turn at the top of the sales charts in the months ahead?

Well, there are a number of other new models from Kia, Chrysler, Cadillac, Volkswagen, and others. Certainly, the Tesla Model 3, with its first vehicles shipped in July, looks to be a contender. There are over 400,000 reservations for the vehicles worldwide, so it could easily become the sales champion if Tesla can ramp up production quickly enough. But in years to come, we might see something very different.

There is one category notably lacking among US EVs sales: the pickup truck. The best-selling light-duty vehicle in the US has for 35 years been the Ford F-series, with 820,799 units sold in 2016 (this is more than double the sales of the top-selling car in 2016, the Toyota Camry).

Figure 4: Ford F-150. Source: ford.com.

Some companies perform aftermarket conversions to turn trucks into plug-in hybrids, and others have announced plans to build brand-new electric pickup trucks (such as Tesla, Via, Havelaar, and Workhorse). Trucks have a wide range of needs and duty cycles, and not all applications would be suited to electrification at present. There are definitely engineering challenges to resolve.

Still, a plug-in version of the F-150 could serve the needs of many owners, and could propel Ford to the top of the EV sales charts. This is not in Ford’s plans at the moment (although a basic hybrid F-150 is), but what if the company experiences positive results from its other electric and plug-in products? Might we see an electric F-150? Or would the Chevy Silverado or Dodge Ram (the #2 and #3 selling vehicles in 2016) have plug-in versions first?

The pickup truck market is too big to ignore. As battery technology continues to improve, it should become easier to make electrification work for at least part of this segment.

What’s next?

Typically, the second half of the year sees higher sales volume, with December being the biggest month. It should be particularly interesting to watch the growth of Tesla’s Model 3 production over the next six months. News items such as the new study from Bloomberg, Volkswagen’s investments in charging infrastructure, and other developments may heighten public interest in EVs generally.

The most effective means of raising consumer awareness of and interest in EVs are ride-and-drive events. If you haven’t tried one out yet, look for an event near you during Drive Electric Week!

The Trump Administration’s Record on Science Six Months after Inauguration

UCS Blog - The Equation (text only)

To address unsolved questions, scientists develop experiments, collect data and then look for patterns. Our predictions of natural phenomena become more powerful over time as evidence builds within the scientific community that the same pattern appears over and over again. So, when the 2016 presidential candidates began speaking out about their positions on science policy, the scientific community was listening, collecting data, and looking for patterns.

In particular, candidate Donald Trump’s positions on space exploration, climate change science, and vaccines sent a chilling and frightening signal to the scientific community of what science policy might look like under a President Trump. We no longer have to wonder if candidate Trump’s positions on science policy would be indicative of President Trump’s positions, as we now have six months of data on the Trump administration’s science policy decisions.

Today, we release a report on President Trump’s six month record on science. In this report, we present evidence of patterns the President is using to systemically diminish the role of science in government decision making and in people’s lives. In its first six months, the Trump administration has sidelined independent science advice, placed profits over public protections, and reduced public access to government science and scientists.

 

Sidelining independent science advice

In the first six months of the Trump administration, senior level officials have misrepresented or disregarded scientific evidence even when such evidence has been pertinent to policy decisions. For example, EPA Administrator Scott Pruitt refused to ban the use of the pesticide chlorpyrifos even though the science provides evidence that this chemical affects neurological development in children. The administration also has circumvented advice from scientific experts outside the agency by dismissing experts from agency science advisory boards. For example, in April, Attorney General Jeff Sessions ended support for the Department of Justice’s National Commission on Forensic Science. The administration also has clearly dismissed years of research showing that climate change is primarily caused by humans and is affecting public health now. Additionally, President Trump has left many scientific leadership positions in the federal government vacant. Where President Trump has appointed someone to a science leadership position, those individuals have largely come from the industries they are now in charge of regulating.

Placing profits over public protections

When science is disregarded on decisions where scientific evidence is vital, one can logically question the basis of that decision. And that void can be filled by inappropriate influences. The Trump administration, aided and abetted by Congress, is displaying a clear pattern of disregarding science to benefit the priorities of powerful interests at the expense of the public’s health and safety. In part to accomplish this, the Trump administration quickly turned to a rarely used congressional tool, the Congressional Review Act (CRA). The CRA allows Congress to nullify recently issued regulations, those finalized within the last 60 legislative days of a House or Senate session, by passing a joint resolution of disapproval. Since its enactment in 1996, the tool had been used successfully only once; under the Trump administration, it has already been used 14 times!

One of the regulations nullified, the stream protection rule, was intended to keep communities’ drinking water clean where mountaintop coal mining occurs. The Department of the Interior had put this rule in place based on scientific evidence of a causal link between mountaintop coal mining and higher rates of birth defects, cancer, and cardiovascular and respiratory diseases in nearby communities. As my colleague and co-author of the report Genna Reed revealed, two representatives who sponsored this CRA legislation, Bill Johnson of Ohio and Evan Jenkins of West Virginia, received over $1 million in political contributions from the mining industry and echoed talking points from the National Mining Association and Murray Energy Company in their statements of support for the rule’s repeal. The CEO of Murray Energy Company also was invited to watch President Trump sign the CRA resolution into law.

Countless other examples like this exist under this administration regarding the rollback of policies related to climate change, vehicle fuel economy standards, ozone pollution, and chemical safety, to name a few. In fact, the White House is boasting about rolling back many of these regulations. Apparently, removing protections that safeguard children from harmful neurological effects and that protect disadvantaged communities from cancer is something our administration applauds nowadays.

Reducing public access to government science and scientists

While there are valid reasons why the government keeps some information sensitive or classified, usually there is no such valid reason why science cannot be communicated openly. Yet, the Trump administration has been actively working to reduce public access to scientists and their work. For example, many government webpages have now been altered or removed, particularly those that focus on climate change. The Trump administration also has retracted questions from surveys intended to support disadvantaged communities.

Additionally, scientists in federal agencies have been restricted from communicating their work to anyone outside of the agency, and also have been barred from attending and presenting at scientific conferences. Yesterday, Joel Clement, former Director of the Office of Policy Analysis at the Interior Department, blew the whistle on the Trump administration for its attempts to silence his work to help endangered communities in Alaska prepare for climate change by reassigning him to a position in accounting. As Clement rightfully points out, removing a scientist from their area of expertise and placing them in a position where their experience is not relevant is “a colossal waste of taxpayer money.” The public has the right to access government science and to hear from the scientists that produce it.

The attacks on science keep rolling in

The examples that I’ve highlighted in this blog entry are merely a smattering of the attacks on science discussed in our report. All of these attacks are happening at the same time that the President has proposed deep cuts to scientific agencies and funding for basic research, sending a signal to scientists that their work is not valued. Senator Bill Nelson of Florida recently took to the floor to call for an end to the “blatant, coordinated effort by some elected officials to muzzle the scientific community.” It is becoming difficult to argue that a war on science doesn’t exist when the evidence keeps piling up and suggests that the Trump administration intends to silence science and scientists wherever and whenever possible.

We cannot retreat from the progress that the use of science in decision making allows us to make: more children living healthy lives without asthma, lives spared thanks to vaccinations, the protection of America’s endangered wildlife. Scientists and science supporters are already speaking up, taking to the streets to march, and advocating for the use of science in decision making. We can resist the Trump administration’s attacks on science, and our democracy gives us the right to do so.

Environmental Injustice in the Early Days of the Trump Administration

UCS Blog - The Equation (text only)

When the EPA was established in 1970 by Richard Nixon, there was no mandate to examine why toxic landfills were more often placed near low-income, Black, Latino, immigrant, and Native American communities than in more affluent, white neighborhoods. Nor was there much recognition that communities closer to toxic landfills, refineries, and industrial plants often experienced higher rates of toxics-related illnesses, like cancer and asthma.

Yet these phenomena were palpable to those living in affected communities. In the 1970s and 80s, local anti-toxics campaigns joined forces with seasoned activists from the civil rights movement, labor unions, and with public health professionals and scientists, drawing attention to the unevenly distributed impacts of toxic pollution, and forming what we now recognize as the environmental justice movement.

The new administration has mounted a swift and concerted attack on the federal capacity and duty to research, monitor, and regulate harmful pollutants that disproportionately affect women, children, low-income communities, and communities of color.  Two examples demonstrate the potential consequences: overturning the ban on chlorpyrifos, and a variety of actions that reduce collection of and public access to the data on which environmental justice claims depend.

Overturning the ban on chlorpyrifos

EPA Administrator Scott Pruitt overturned the chlorpyrifos ban, despite the fact that EPA scientists recommended that the pesticide be banned because of the risks it posed to children’s developing brains. Photo: Zeynel Cebeci/CC BY-SA 4.0 (Wikimedia Commons)

Chlorpyrifos is a commonly used pesticide. EPA scientists found a link between dietary exposure to chlorpyrifos and neurological disorders, memory decline, and learning disabilities in children, and in 2015 they recommended that the pesticide be banned from agricultural use because of the risks it posed to children’s developing brains.

Over 62% of farmworkers in the U.S. work with vegetables, fruits and nuts, and other specialty crops on which chlorpyrifos is often used. These agricultural workers are predominantly immigrants from Mexico and Central America, living under the poverty line and in close proximity to the fields they tend. A series of studies in the 1990s and 2000s found that concentrations of chlorpyrifos were elevated in agricultural workers’ homes more than ¼ mile from farmland, and that chlorpyrifos residues were detected on the work boots and hands of many agricultural worker families but not on those of nearby non-agricultural families.

In March 2017, EPA Administrator Scott Pruitt publicly rejected the scientific findings from his agency’s own scientists and overturned the chlorpyrifos ban, demonstrating the Trump administration’s disregard for the wellbeing of immigrant and minority populations. Farmworker families could be impacted for generations through exposure to these and other harmful pesticides.

Limiting collection of and access to environmental data

Because inequitable risk to systematically disadvantaged communities must be empirically proven, publicly available data on toxic emissions and health issues are crucial to environmental justice work. The Trump administration has already taken a number of actions that limit the collection and accessibility of data necessary to make arguments about environmental injustices that persist through time in particular communities.

Houston has a number of chemical plants in close proximity to low-income neighborhoods. Photo: Roy Luck/CC BY 2.0 (Flickr)

Workers, especially those laboring in facilities that refine, store or manufacture with toxic chemicals, bear inequitable risk. The Trump administration has sought to curb requirements and publicity about workplace risks, injuries and deaths. For example, President Trump signed off on a congressional repeal of the Fair Pay and Safe Workplaces rule, which required applicants for governmental contracts to disclose violations of labor laws, including those protecting safety and health. Without the data provided by this rule, federal funds can now support companies with the worst worker rights and protection records. President Trump also approved the congressional repeal of a rule formalizing the Occupational Safety and Health Administration’s (OSHA) long-standing practice of requiring businesses to keep a minimum of five years of records on occupational injuries and accidents.  While five years of record-keeping had illuminated persistent patterns of danger and pointed to more effective solutions, now only six months of records are required. This change makes it nearly impossible for OSHA to effectively identify ongoing workplace conditions that are unsafe or even life-threatening.

Another example is the administration’s proposed elimination of the Integrated Risk Information System, or IRIS, a program that provides toxicological assessments of environmental contaminants. The IRIS database provides important information for communities located near plants and industrial sites that produce toxic waste, both to promote awareness of the issues and safety procedures and as a basis for advocacy. These communities, such as Hinkley, CA, where Erin Brockovich investigated Pacific Gas and Electric Company’s dumping of hexavalent chromium into the local water supply, are disproportionately low income.

Responding to Trump: Developing environmental data justice

Data is not inherently good.  It can be used to produce ignorance and doubt, as in the tactics employed by the tobacco industry and climate change deniers.  It can also be used to oppressive ends, as in the administration’s collection of information on voter fraud, a phenomenon that is widely dismissed as non-existent by experts across the political spectrum.  Further, even the data collection infrastructure in place under the Obama administration failed to address many environmental injustices, such as the lead pollution in Flint, MI.  Thus we would argue that promoting environmental data justice is not simply about better protecting existing data, but also about rethinking the questions we ask, the data we collect, and who gathers it in order to be sure environmental regulation protects all of us.

 

Britt Paris is an EDGI researcher focused on environmental data justice. She is also a doctoral student in the Department of Information Studies at UCLA, and has published work on internet infrastructure projects, search applications, digital labor and police officer involved homicide data evaluated through the theoretical lenses of critical informatics, critical data studies, philosophy of technology and information ethics.

Rebecca Lave is a co-founder of EDGI (the Environmental Data and Governance Initiative), an international network of academics and environmental professionals that advocates for evidence-based environmental policy and robust, democratic scientific data governance. She was the initial coordinator of EDGI’s website tracking work, and now leads their publication initiatives. Rebecca is also a professor in the Geography Department at Indiana University.

 Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.


The National Flood Insurance Program is up for Reauthorization: Here’s What Congress Should Do

UCS Blog - The Equation (text only)

The National Flood Insurance Program is up for reauthorization by the end of September and the clock is ticking for legislation to extend the program. With so many homeowners and small businesses depending on this vital program, will Congress take the necessary steps to reform and strengthen the program—especially in light of the growing risks of coastal and inland flooding?

Here’s a quick rundown of the latest bills and what they might mean for the future of the program.

The NFIP is in urgent need of reform

With about 5 million policyholders nationwide (see map), the NFIP is vital for homeowners—especially for those who live in flood-prone areas inland or along the coasts. You or someone you know is likely one of these homeowners; perhaps you’ve spent some time this summer in a beachfront community that participates in the NFIP. Reauthorizing the program is critical to protect such homeowners.

Source: FEMA

But the NFIP is also urgently in need of reforms to help put the program on a healthy financial footing and ensure that it encourages climate-smart choices. The program has been on the GAO’s High Risk list since 2006 and is over $24 billion in debt.

For a quick refresher on the NFIP, see my earlier blogpost highlighting five ways to improve the program to promote climate resilience:

  • Update flood risk maps using the latest technology and the latest science
  • Phase in risk-based pricing and broaden the insurance base
  • Address affordability considerations for low and moderate income households
  • Provide more resources for homeowners and communities to reduce their flood risks by making investments ahead of time
  • Ensure that a well-regulated private sector flood insurance market complements the NFIP without undermining it

Action in the House

The House Financial Services Committee passed a package of 7 bills last month to reauthorize the NFIP and make changes to the program. These included: H.R. 2874, the 21st Century Flood Reform Act; H.R. 2868, the National Flood Insurance Program Policyholder Protection Act of 2017; H.R. 2875, the National Flood Insurance Program Administrative Reform Act of 2017; H.R. 1558, the Repeatedly Flooded Communities Preparation Act; H.R. 1422, the Flood Insurance Market Parity and Modernization Act; H.R. 2246, the Taxpayer Exposure Mitigation Act of 2017; and H.R. 2565, a bill to require the use of replacement cost value in determining the premium rates for flood insurance coverage under the National Flood Insurance Act, and for other purposes.

However, last week 26 House Republicans indicated to Speaker Ryan that they could not support the package in its current form. This means that Chairman Hensarling will have to work on further changes before a bill is brought to the Floor for a vote.

A major priority for Chairman Hensarling is ensuring that taxpayer interests are protected as the NFIP is extended. In his opening remarks last month he said:

“There are so many important voices in our debate today on the reauthorization of the National Flood Insurance Program… But as far as I’m concerned, perhaps the single-most important voice is the voice that remains underrepresented in the debate and that is the voice of the American taxpayer.”

Meanwhile Rep. Maxine Waters, the ranking Democrat on the committee, has expressed concern that the package of bills might actually “make matters worse by restricting coverage, increasing costs, and opening the door to cherry-picking by the private sector.”

Action in the Senate

In the Senate, there are several bipartisan bills that show that agreement is possible in several key areas, although there remain some important differences between the bills on how best to balance competing priorities. Work remains to help reconcile these bills.

Senators Cassidy (R-LA) and Gillibrand (D-NY) introduced S.1313 – Flood Insurance Affordability and Sustainability Act of 2017 last month. Senators Capito (R-WV) and Kennedy (R-LA) have also joined as co-sponsors of this bill. The bill seeks to extend the program for 10 years, increase funding for pre-disaster and flood mitigation assistance programs, take steps to address affordability concerns, preserve funding for flood mapping, and enhance the role of the private sector in the flood insurance market.

Separately, eight senators—Bob Menendez (D-NJ), John Kennedy (R-LA), Chris Van Hollen (D-MD), Marco Rubio (R-FL), Elizabeth Warren (D-MA), Thad Cochran (R-MS), Cory Booker (D-NJ), and Bill Nelson (D-FL)—have cosponsored the Sustainable, Affordable, Fair and Efficient National Flood Insurance Program Reauthorization Act (SAFE NFIP) of 2017. This bill extends the program for 6 years, includes means-tested affordability provisions, enhances funding for flood mitigation assistance and the pre-disaster hazard mitigation grant program, authorizes $800 million per year for 6 years for LiDAR mapping, eliminates interest payments on the NFIP’s debt, and caps commissions on Write-Your-Own policies.

Last month Senators Scott (R-SC) and Schatz (D-HI) introduced S.1445 – Repeatedly Flooded Communities Preparation Act, which directs communities with properties that have repeatedly flooded to develop, submit, and implement a community-specific plan for mitigating continuing flood risks. This is similar to the companion House bill mentioned above, H.R. 1558, which was introduced by Rep. Royce (R-CA).

Earlier this week, Senators Crapo and Brown introduced the National Flood Insurance Program Reauthorization Act of 2017. The bill extends the NFIP for six years, directs communities with significant numbers of repetitive loss properties to develop mitigation plans, provides funding for pre-disaster mitigation, preserves funding for updated flood mapping and has provisions to encourage flood risk disclosure.

(Interestingly, the bill also has an extensive section on funding for wildfires, long a priority for Senator Crapo. It adds “wildfires on federal lands” to the definition of “major disasters” under the Stafford Act, which would allow funding from the Disaster Relief Fund to be made available to the Department of the Interior or the USDA for fire suppression operations.)

The Nation needs a robust NFIP

Flood risks are growing along our coasts because of sea level rise and inland because of an increase in heavy rainfall. At the same time, growing development in floodplains is putting more people and property in harm’s way. Importantly, climate change will increase flood risks in many parts of the country, regardless of whether homeowners purchase flood insurance through the private market or the taxpayer-backed NFIP.

Last week UCS released a study showing that many coastal communities experience chronic inundation already, with hundreds more at risk by mid-century.

A recent study from Zillow highlights the long term risks our nation faces from sea level rise. It finds that:

Nationwide, almost 1.9 million homes (or roughly 2 percent of all US homes)—worth a combined $882 billion—are at risk of being underwater by 2100. And in some states, the fraction of properties at risk of being underwater is alarmingly high. More than 1 in 8 properties in Florida are in an area expected to be underwater if sea levels rise by six feet, representing more than $400 billion in current housing value.

Flood risk maps in many parts of the country are outdated, inadequate or non-existent, and even the latest maps do not include sea level rise projections. This means that communities and local planners often do not have a clear understanding of their true flood risks, now and into the future. Updating these maps using the latest technology is costly and Congress will need to authorize adequate funding for this. FEMA’s Risk Mapping, Assessment and Planning (Risk MAP) program is an important cornerstone of these efforts.

Investing in flood mitigation measures ahead of time is a smart way to keep risks and costs down. That’s why Congress must beef up funding for FEMA’s Flood Mitigation Assistance and Pre-Disaster Mitigation grant programs, alongside extending the NFIP. Prioritizing investments in nature-based protective measures, such as preserving wetlands, is also a very important way to help safeguard communities. In flood-prone areas with properties that get repeatedly flooded, expanding funding for voluntary home buyouts is vital so that homeowners have real options to move to safer ground. Coordinated federal, state, and local actions are the best way to reduce flood risks to communities.

Finally, NFIP reforms must include affordability provisions to help low and moderate income homeowners. This could include means-tested vouchers or rebates on premiums and low interest loans or grants for flood mitigation measures and other provisions outlined in recent NAS reports.

Know your flood risk

If you’re a homeowner, it’s a smart idea to know your flood risk and how it might change over time. Here are some resources to get you started:

Time for Congress to Act

There’s no shortage of bills in Congress to reauthorize and reform the NFIP. Now Congress needs to work toward reaching bipartisan agreement on robust legislation by September 30. There’s no excuse for delay or inaction—homeowners around the country are counting on a strong, fair and effective flood insurance program to keep them safe.

The Wall Street Journal Gets it Wrong on EPA Scientific Integrity…Again

UCS Blog - The Equation (text only)

The Wall Street Journal ran an opinion piece yesterday titled “A Step Toward Scientific Integrity at the EPA,” written by Steven Milloy, a long-time critic of the EPA and purveyor of anti-science nonsense. His piece commends Administrator Pruitt on his recent dismissals of EPA advisory committee members and questions the independence of advisory committees like the EPA’s Science Advisory Board (SAB) and Clean Air Scientific Advisory Committee (CASAC), claiming that they contain biased government grantees and have made recommendations on ozone and particulate matter that aren’t supported by science. His arguments are twisted and unfounded, but they are not surprising given his history of working for industry front groups that spread disinformation to promote an agenda benefiting powerful interests.

I want to set the record straight on the independence of EPA’s scientific advisory committees. Here’s what Steven Milloy gets very wrong:

  1. The EPA’s advisory committees have not been stacked with “activists.” In fact, industry representation is on par with representation from non-profit organizations.

I agree with Milloy on just one point: federal advisory committees must be balanced and unbiased. The Federal Advisory Committee Act mandates that all federal advisory committees are “fairly balanced in terms of the points of view represented and the functions to be performed.” This is an important piece of the act to ensure that the recommendations flowing from these advisory committees reflect a diversity of viewpoints and a range of expertise. There are also required conflict of interest disclosures made by each and every advisory committee member to ensure that any conflicts will not interfere with their ability to provide independent advice. Milloy claims that “only rarely do members have backgrounds in industry,” which is simply not true. An analysis of the EPA’s Science Advisory Board membership since 1996 reveals that 64 percent of the 459 members were affiliated with an academic institution, 9 percent with industry, 9 percent with non-governmental organizations (including industry-funded organizations like the Chemical Industry Institute of Toxicology), 8 percent with government, and 7 percent with consulting firms.  I found a similar breakdown in an analysis of the EPA’s Board of Scientific Counselors and for all seven of EPA’s scientific advisory committees.

In conversations I’ve had with former members of the EPA’s Board of Scientific Counselors, it has been clear that industry scientists have always had a voice on these committees, which is why it’s especially suspect that the current administration has decided not to renew the terms of many advisory committee members in the name of better representing industry.

  2. Government grants are a major source of funding for academic scientists, and these funds contribute to research projects rather than private gain.

Milloy’s claim that academic scientists who have received grant money from the EPA are making biased recommendations to the agency is completely unfounded. Receiving EPA funding for unrelated research projects is fundamentally different from serving on a committee to make policy recommendations. EPA awards grants to academic scientists to learn more about scientific topics without a policy agenda, and grantees are free to conduct the science and produce results any way they want. There is no predetermined or desired outcome, and the process is completely separate from EPA policy decisions. No incentives exist for committee members to come to a particular policy answer in order to get grant money on an entirely separate scientific research question from a separate office of an agency. To conflate these misunderstands how science and policy work. By Milloy’s logic, receiving a grant from the government to work on science in the public interest would be just as biasing as, if not more biasing than, receiving funds from a corporation to promote a product or otherwise support a private interest. For the work of advisory committee members, ensuring that federal science best supports public protections is key.

Congress’s attempt to correct this supposed problem, the EPA Science Advisory Board (SAB) Reform Act, which Milloy champions in his piece, includes a provision barring board members from holding EPA contracts during their service and for three years afterward, which would only deter academic scientists from pursuing SAB positions. The Act is supported by the likes of Milloy because it would likely provide more opportunities for industry interests, which have no need of government funding, to join the SAB.

  3. The advisory committee selection process is and should be based on expertise and experience related to the charge of the committee, not how many times an individual is nominated.

In his piece, Milloy calls the EPA’s advisory committee selection process “opaque” because a certain nominee wasn’t selected despite having the most duplicate nominations. But the EPA’s process for selecting SAB and CASAC members is actually one of the most open and transparent processes across agencies and advisory committees. Members of the public have the opportunity to submit nominations, view nominees, and comment on the EPA’s roster of appointees before final selections are made. Ultimately, it’s up to the EPA administrator to decide on the strongest and most balanced roster of committee members based on the needs of the agency. It’s not meant to be a process whereby any entity can win a seat for a nominee based on the number of comments received. Despite receiving 60 out of 83 nomination entries, Michael Honeycutt was likely not chosen to be a CASAC member because he has questionable scientific opinions and documented conflicts of interest, which are completely reasonable justifications.

  4. Particulate matter from power plants and vehicle emissions does indeed have demonstrated health impacts, supported by the scientific literature.

Milloy’s article asserts that the claim that particulate matter has negative health impacts is not scientifically justified. This is demonstrably false. Not only is there a wealth of peer-reviewed literature backing up the claim, there is an entire field devoted to studying it. Milloy claims that there was no evidence of these impacts in 1996, but that’s because scientists weren’t collecting that data back then. While Milloy lives in the past, two decades’ worth of research since 1996 (over 2,000 studies) has shown that fine particulate matter (PM2.5) is linked to strokes, heart disease, respiratory ailments, and premature death.

Wall Street Journal’s second strike on EPA integrity

The Wall Street Journal’s readers deserve better than to read this junk-science drivel without full disclosure about the peddler of the disinformation. In 1993, Philip Morris funded Milloy to lead an industry front group, the Advancement of Sound Science Coalition, that cast doubt on the scientific evidence linking secondhand smoke to disease. In 1998, Milloy found a new benefactor in ExxonMobil: he served on a task force that mapped out ExxonMobil’s strategy to deceive the public about climate science, and the company funded him for many years to sow doubt under the guise of a slightly renamed front group, the Advancement of Sound Science Center, run out of his Maryland home. Milloy’s current employer, the Energy and Environment Legal Institute, formerly the American Tradition Institute, is funded by the fossil fuel industry and has repeatedly filed inappropriate open records requests for the communications of climate scientists working at public universities. His most recent book, published in 2016, is endorsed by none other than long-time junk science purveyor and climate change denier Senator James M. Inhofe.

This is just a reminder that as we try to make sense of our government’s operations and the state of science for issues that affect our health and our planet’s health, we must consider the sources of our information very carefully. Facts matter, and here at UCS we’ll continue to draw attention to the silencing, sidelining, or distortion of scientific facts.

 

 

What is the Cost of One Meter of Sea Level Rise?

UCS Blog - The Equation (text only)

The opening line of our recent Scientific Reports article reads “Global climate change drives sea level rise, increasing the frequency of coastal flooding.” Some may read this as plain fact. Others may not.

Undeniable and accelerating

100 years of data from tide gauges and more recently from satellites has demonstrated an unequivocal rise in global sea level (~8-10 inches in the past century). Although regional sea level varies on a multitude of time scales due to oceanographic processes like El Niño and vertical land motion (e.g., land subsidence or uplift), the overall trend of rising sea levels is both undeniable and accelerating. Nevertheless, variability breeds doubt. Saying that global warming is a hoax because it’s cold outside is like saying sea level rise doesn’t exist because it’s low tide.

Global sea level is currently rising at about 3.4 mm per year, making it a relatively slow process. For instance, tides typically change sea level by 0.5-1.0 m every 12 hours, a rate that is ~100,000 times faster than global mean sea level rise.
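Here's a quick back-of-the-envelope check of that comparison, assuming a mid-range tide of 0.75 m every 12 hours:

```python
# Quick check of the tide-versus-sea-level-rise comparison above.
# Assumptions: a mid-range 0.75 m tidal change every 12 hours, and global mean
# sea level rise of ~3.4 mm (0.0034 m) per year.

HOURS_PER_YEAR = 365.25 * 24

tide_m_per_hour = 0.75 / 12               # ~0.063 m per hour
slr_m_per_hour = 0.0034 / HOURS_PER_YEAR  # ~3.9e-7 m per hour

ratio = tide_m_per_hour / slr_m_per_hour
print(f"Tides move sea level ~{ratio:,.0f} times faster than the long-term rise")
# Prints a ratio on the order of 100,000, matching the figure above.
```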

It’s almost as if sea-level rise were slow enough for us to do something about it…

The civil engineering challenge of the 21st century

At the end of a recent news article by New Scientist, Anders Levermann, a climate scientist for the Potsdam Institute for Climate Impact Research, said “No one has to be afraid of sea level rise, if you’re not stupid. It’s low enough that we can respond. It’s nothing to be surprised about, unless you have an administration that says it’s not happening. Then you have to be afraid, because it’s a serious danger.”

Levermann’s quote captures the challenge of sounding the whistle on the dangers of climate change. We know that sea level rise is a problem; we know what’s causing it (increased concentrations of heat-trapping gasses like CO2 leading to the thermal expansion of sea water and the melting of land-based ice); we know how to solve the problem (reduce carbon emissions and cap global temperatures); yet, in spite of the warnings, the current administration recently chose to back out of a global initiative to address the problem.

Arguing that the Paris agreement is “unfair” to the American economy to the exclusive benefit of other countries is extremely shortsighted. This perspective serves to kick the climate-change can down the road for the next generation to pick up. This perspective, if it dominates US decision making moving forward, sets us up for the worst-case  scenarios of sea-level rise (more than two meters by 2100). Worse yet, this perspective may take us beyond the time horizon in which a straightforward solution may be found, leaving geoengineering solutions as our last-and-only resort.

If the Paris agreement is unfair to the American economy, imagine how unfair 2.0+ m of sea-level rise would be. We should seriously question the administration’s focus on improving national infrastructure without considering arguably the greatest threat to it. Sea-level rise will be one of the greatest civil engineering challenges of the 21st century, if not THE greatest.

Sea level rise will:

An astronomically high dollar figure

As a thought experiment, try to quantify the economic value of one meter of sea level rise. Low-lying coastal regions support 30% of the global population and, most likely, a comparable percentage of the global economy. Even if each meter of sea level rise only affected a small percentage of this wealth and economic productivity, it would still represent an astronomically high dollar figure.
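Here's a crude version of that thought experiment. Every input is an illustrative assumption: global GDP of roughly $80 trillion, the ~30% coastal share noted above, and a range of guesses for how much coastal economic activity one meter of sea level rise would disrupt each year.

```python
# A crude version of the thought experiment above. All inputs are illustrative.

GLOBAL_GDP_TRILLION = 80.0  # rough 2016-era global GDP, in trillions of dollars (assumption)
COASTAL_SHARE = 0.30        # share supported by low-lying coastal regions (from the post)

for disrupted_fraction in (0.01, 0.05, 0.10):
    exposed_trillion = GLOBAL_GDP_TRILLION * COASTAL_SHARE * disrupted_fraction
    print(f"{disrupted_fraction:.0%} of coastal activity disrupted: "
          f"~${exposed_trillion:.2f} trillion per year")

# Even the smallest guess works out to roughly a quarter of a trillion dollars
# per year, before counting the property losses cited in the reports below.
```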

Although managed retreat from the coastline is considered a viable option for climate change adaptation, I don’t see a realistic option where we relocate major coastal cities such as New York City, Boston, New Orleans, Miami, Seattle, San Francisco, or Los Angeles.

What will convince the powers-that-be that unabated sea level rise is an unacceptable outcome of climate change? Historically, the answer to this question is disasters of epic proportions.

Hurricane Sandy precipitated large-scale adaptation planning efforts in New York City. Nuisance flooding in Miami has led to a number of ongoing infrastructure improvements. The Dutch coast is being engineered to withstand once-in-10,000-year storms. Fortunately, most nations and US states, particularly coastal states like Hawaii and California, will abide by the Paris agreement.

This administration doesn’t seem to care about the science of climate change, but it does seem to care about economic winners and losers. Would quantifying the impacts of climate change in terms of American jobs and taxpayer dollars convince the administration to change its view of the Paris agreement?

Impossible to ignore

In the executive summary of the 2014 Risky Business report, Michael Bloomberg writes, “With the oceans rising and the climate changing, the Risky Business report details the costs of inaction in ways that are easy to understand in dollars and cents—and impossible to ignore.” This report finds that the clearest and most economically significant risks of climate change include:

  • Climate-driven changes in agricultural production and energy demand
  • The impact of higher temperatures on labor productivity and public health
  • Damage to coastal property and infrastructure from rising sea levels and increased storm surge

For example, the report finds that in the US by 2050 more than $106 billion worth of existing coastal property could be below sea level. Furthermore, a study in Nature Climate Change found that future flood losses in major coastal cities around the world may exceed $1 trillion per year by 2050 as a consequence of sea level rise.

The science and economics of climate change are clear.

So why do politicians keep telling us that it’s not happening and that doing something about it would be bad for the economy?

New Interactive Map Highlights Effects of Sea Level Rise, Shows Areas of Chronic Flooding by Community

UCS Blog - The Equation (text only)

Last week, the Union of Concerned Scientists released a report showing sea level rise could bring disruptive levels of flooding to nearly 670 coastal communities in the United States by the end of the century. Along with the report, UCS published an interactive map tool that lets you explore when and where chronic flooding–defined as 26 floods per year or more–will force communities to make hard choices. It also highlights the importance of acting quickly to curtail our carbon emissions and using the coming years wisely.

Here are a few ways to use this tool:
  1. Explore the expansion of chronically inundated areas

    Sea level rise will expand the zone that floods 26 times per year or more. Within the “Chronic Inundation Area” tab, you can see how that zone expands over time for any coastal area in the lower 48 and for two different sea level rise scenarios (moderate and fast).

    Explore the spread of chronically inundated areas nationwide as sea level rises.

     

  2. Explore which communities join the ranks of the chronically inundated

    We define a chronically inundated community as one where 10% or more of the usable land is flooding 26 times per year or more. With a fast sea level rise scenario, about half of all oceanfront communities in the lower 48 would qualify as chronically inundated. Check out the “Communities at Risk” tab to see if your community is one of them.

    Explore communities where chronic flooding encompasses 10% or more of usable land area.

     

  3. Visualize the power of our emissions choices

    Drastically reducing global carbon emissions with the aim of limiting future warming to less than 2 degrees Celsius above pre-industrial levels–the primary goal of the Paris Agreement–could prevent chronic inundation in hundreds of U.S. communities. Explore the “Our Climate Choices” tab to see the communities that would benefit from swift emissions reductions.

    Explore how slowing the pace of sea level rise could prevent chronic inundation in hundreds of US communities.

     

  4. Learn how to use this time wisely

    Our country must use the limited window of time before chronic inundation sets in for hundreds of communities to plan and prepare, with a science-based approach that prioritizes equitable outcomes. Explore our “Preparing for Impacts” tab and consider the federal and state-level policies and resources that can help communities understand their risks, assess their choices, and implement adaptation plans. This tab captures how we can use the diminishing response time wisely.

    Explore federal and state-level resources for communities coping with sea level rise.

Improving the map based on data and feedback

We hope that communities are able to use this tool to better understand the risks they face as sea level rises. We welcome your feedback and will be periodically updating the map as new data and new information come to light.

Climate Change Just Got a Bipartisan Vote in the House of Representatives

UCS Blog - The Equation (text only)

On rare occasions, transformative political change emerges with a dramatic flourish, sometimes through elections (Reagan in 1980, Obama in 2008) or key mass mobilizations (the March on Washington in 1963), or even court cases (the Massachusetts Supreme Judicial Court decision declaring marriage inequality unconstitutional.)

But most of the time, transformations happen slowly, step by arduous step, along a path that may be hard to follow and can only be discerned clearly in hindsight.

I believe that we are on such a path when it comes to Republican members of Congress acknowledging climate science and ultimately the need to act. I see some encouraging indications that rank and file Republican members of Congress are heading in the right direction.

In February 2016, Democratic Congressman Ted Deutch and Republican Congressman Carlos Curbelo launched the Climate Solutions Caucus, whose mission is “to educate members on economically-viable options to reduce climate risk and to explore bipartisan policy options that address the impacts, causes, and challenges of our changing climate.” Its ranks have now swelled to 48 members, 24 Republicans and 24 Democrats.

Last week, this group flexed its muscle. At issue was UCS-backed language in the National Defense Authorization Act (NDAA). The provision, authored by Democratic Congressman Jim Langevin, would require the Pentagon to do a report on the vulnerabilities to military installations and combatant commander requirements resulting from climate change over the next 20 years. The provision also states as follows:

Climate change is a direct threat to the national security of the United States and is impacting stability in areas of the world where the United States armed forces are operating today, and where strategic implications for future conflicts exist.

Republican leadership led an aggressive effort to strip the language from the NDAA on the House floor through an amendment offered by Representative Perry (R-PA). But in the end, 46 Republican members (including all but one of the Republicans in the Climate Solutions Caucus) voted against the amendment, and fortunately it was not adopted.

We are hopeful this important provision will be included in the final NDAA bill that passes the senate, and then on to President Trump for his signature. He probably won’t like this language, but it seems doubtful that he will veto a military spending bill.

Implications

One shouldn’t read too much into this. The amendment is largely symbolic, and the only thing it requires is that the defense department conduct a study on climate change and national security. There is a long way to go from a vote such as this one to the enactment of actual policies to cut the greenhouse gas emissions that are the primary cause of climate change.

But, it is an important stepping stone. If this bill becomes law, a bipartisan congressional finding that climate change endangers national security becomes the law of the land. Among other things, this should offer a strong rebuttal to those who sow doubt about climate science.

It is also a validation of a strategy that UCS has employed for many years—to highlight the impacts of climate change in fresh new ways that resonate with conservative values. This was the thinking behind our National Landmarks at Risk report, which shows how iconic American landmarks are threatened by climate change.

This was also the strategy behind our recent report, which highlights the vulnerability of coastal military bases to sea level rise. That report was cited and relied upon by Congressman Langevin in his advocacy for the amendment.

UCS will work to make sure that this language is included in the final bill, and we will continue to find other ways to cultivate bipartisan support for addressing climate change. There will be much more difficult votes ahead than this one. But for now, I want to thank the Republican members of Congress for this important vote, and to make sure our members and supporters know that our efforts, and those of so many others, to work with Republicans and Democrats and to bring the best science to their attention are paying off.

Build the Wall and Blame the Poor: Checking Rep. King’s Statements on Food Stamps

UCS Blog - The Equation (text only)

If you read “Steve King” and think of novelist Stephen King, don’t worry too much about it.

Iowa Representative Steve King dabbled in fear and fiction himself in an interview with CNN last Wednesday, suggesting that a US-Mexico border wall be funded with dollars from Planned Parenthood and the food stamp program.

Photo: CC BY SA/Gage Skidmore

This particular idea was new, but the sentiments King expressed about the Supplemental Nutrition Assistance Program (SNAP) and the people who use it, less so. With 2018 farm bill talks underway, misconceptions about the program and who it serves have manifested with increasing frequency, and setting the record straight about these misconceptions is more important than ever. Policymakers like King, who is a member of both the House Committee on Agriculture and its Nutrition Subcommittee, hold the fate of 21 million SNAP households in their hands, and it’s critical that they’re relying on complete and correct information to make decisions about this program.

Here’s a quick deconstruction of what was said—and what needs to be said—about Americans who use food stamps.

“And the rest of [the funding beyond Planned Parenthood cuts] could come out of food stamps and the entitlements that are being spread out for people that haven’t worked in three generations.”

The idea that food stamp users are “freeloaders” is perhaps one of the most common and least accurate. The truth is, most SNAP participants who can work, do work. USDA data shows that about two-thirds of SNAP participants are children, elderly, or disabled; 22 percent work full time, are caretakers, or participate in a training program; and only 14 percent are working less than 30 hours per week, are unemployed, or are registered for work. Moreover, among households with adults who are able to work, over three-quarters of adults held a job in the year before or after receiving SNAP—meaning the program is effectively helping families fill temporary gaps in employment. King’s constituents are no exception: in his own congressional district, over half of all households receiving SNAP included a person who worked within the past 12 months, and over a third included two or more people who worked within the past 12 months.

“I would just say let’s limit it to that — anybody who wants to have food stamps, it’s up to the school lunch program, that’s fine.”

The national school lunch program provides one meal per day to eligible children. Kids who receive free or reduced price lunch are also eligible to receive breakfast at school through the program, but only about half do. Even fewer kids receive free meals in the summer: less than ten percent of kids who receive free or reduced price lunch at school get free lunches when they’re out of school. This means that, for millions of families, SNAP benefits are critical to filling in the gaps so kids can eat. In fact, USDA data shows that more than 4 in 10 SNAP users are kids. Again, these patterns hold true in King’s district: over half the households that rely on SNAP benefits include children.

“We have seen this go from 19 million people on, now, the SNAP program, up to 47 million people on the SNAP program.”

True. In 1987, average participation in SNAP was around 19 million. In 2013, it peaked at 47 million, and dropped to around 44 million by 2016. The increase over this time period is attributable, at least in part, to changes in program enrollment and benefit rules between 2007 and 2011 and greater participation among eligible populations. However, participation data also demonstrates SNAP’s effective response to economic recession and growth. For example, there was an increase in 2008 as the recession caused more families to fall below the poverty line, and in 2014, for the first time since 2007, participation and total costs began to steadily decrease in the wake of economic recovery. Congressional Budget Office estimates predict that by 2027, the percentage of the population receiving SNAP will return close to the levels seen in 2007.

“We built the program because to solve the problem of malnutrition in America, and now we have a problem of obesity.”

It is undeniable that rising rates of obesity are a significant public health threat. But obesity is an incredibly complex phenomenon, the pathophysiology of which involves myriad social, cultural and biological factors. It is a different type of malnutrition, and we will not solve it simply by taking food away from those who can’t afford it. If we want to focus on increasing the nutritional quality of foods eaten by SNAP recipients, we can look to programs that have been successful in shifting dietary patterns to promote greater fruit and vegetable intake, using strategies such as behavioral economics or incentive programs. Truth be told, most of us—SNAP users or not—would benefit from consuming more nutrient-dense foods like fruits and vegetables.

“I’m sure that all of them didn’t need it.”

Without a doubt. And this could be said of nearly any federal assistance program. But the goal of the federal safety net is not to tailor programs to the specific needs of each person or family—this would be nearly impossible, and the more precise a system gets, the more regulation is required and the greater the administrative burden and financial strain become. The goal of federal assistance programs like SNAP is to do the most good for the greatest number of people, within a system that most effectively allocates a limited amount of resources. And I’d venture to say that a program designed to lift millions of Americans out of poverty—with one of the lowest fraud rates of any federal program, an economic multiplier effect of $1.80 for every $1 spent in benefits, and an ability to reduce food insecurity rates by a full 30 percent—comes closer to hitting its mark than a wall.

Once Deemed Too Small to Be Counted, Rooftop Solar Is Now Racing Up the Charts

UCS Blog - The Equation (text only) -

Sometimes, the littlest of things can point to the biggest of leaps.

In December 2015, the US Energy Information Administration (EIA) announced a major milestone in the life and times of small-scale solar: the agency would start acknowledging the resource by state in its regular monthly generation and capacity report.

Just imagine that, though. Across the country, enough rooftops had started wearing enough solar hats as to potentially shift the profile of states’ electricity use and needs. A day for the clean energy technology scrapbooks, indeed.

And now, a year and a half later, let last week mark another: EIA stated that it will no longer simply be tallying the resource in its rear-view mirror—the agency will also begin looking out into the future and forecasting just how much small-scale solar it thinks will soon be added to the mix.

From ignored to counted to accounted for, all within a few quick spins around the Sun. They sure do grow up fast.

Getting a handle on small-scale solar

To get at why these milestones are so meaningful, we first need to be clear on what we’re talking about. Here, we’re looking at small-scale solar photovoltaics (PV), also known as rooftop, distributed, behind-the-meter, or customer-sited PV. These resources are typically located on the distribution system at or near a customer’s site of electricity consumption, and can be on rooftops, but aren’t always.

Small-scale solar is also, well, small (at least relative to large-scale solar). EIA uses a ceiling of 1 megawatt (MW) for its tracking, but these types of installations are often much smaller, including residential systems, which are commonly on the order of about 5 kilowatts (kW), or 0.005 MW.

And then there’s this: all that electricity being generated by behind-the-meter resources? It’s usually either partially or entirely “invisible” to the utility.

Enter the EIA.

When what happens behind the meter stays behind the meter

At the outset, the invisibility of these resources doesn’t matter much. By itself, one rooftop system isn’t going to generate all that much electricity, and one rooftop system isn’t going to change how much electricity the utility needs to provide. But as more and more of these small systems are installed, together they can actually start to make a real dent in major system loads.

The result? These little rooftop panels can start to move the planning dial.

EIA’s first action above—estimating the presence and contributions of small-scale solar—helps to shed light on just how much these resources are starting to contribute to the system. Now in a lot of places, it isn’t that much…yet. But thanks to EIA’s second action, there will also now be information to help ensure that policymakers and electricity providers sufficiently account for small-scale solar’s future contributions, too.

Let’s take a look:

Credit: EIA.

See the light yellow section? That’s small-scale solar. Sure, it might look a bit like a pat of butter compared to wind in the early years, but it is certainly growing, and it is certainly not negligible. Because that pat—well, EIA estimates that it totaled 19,467 gigawatthours (GWh) of generation in 2016.

To put that number in context, small-scale solar’s generation totaled more electricity than was consumed in 2015 by the residential sectors of half the states in this country. Well worth taking into account, indeed.

And when we look ahead? Well, the future looks bright and is getting brighter for this young solar star. Opportunities for these installations abound, and in this week’s Short-Term Energy Outlook, EIA forecasts clear skies and solar on the rise:

Credit: EIA.

Celebrating the little things for the milestones that they are

So here: a few small announcements from EIA, signaling a few giant leaps for rooftop solar. This PV resource is an incredibly important driver of momentum in the clean energy space, but without information on just how much it’s growing, its benefits and contributions can be undervalued. By shining a light on the progress that has taken place to date—and the progress that is to come—EIA is able to provide vital insights on the significance of the transition underway.

Wayne National Forest/Creative Commons (Flickr)

How the Oregon Rebate for Electric Cars Works

UCS Blog - The Equation (text only) -

If you’re an Oregonian and thinking about an electric car, you may want to wait a bit as a bill is about to be signed into law that will establish a rebate of up to $2,500 for electric vehicles sold in the state. This rebate can be had in addition to the $7,500 federal tax credit for EVs, which means Oregonians can get up to $10,000 off an electric vehicle!

The bill also establishes an additional rebate of up to $2,500 for low- to moderate-income Oregon residents, who can thereby save up to $12,500 in combined incentives on a qualifying electric vehicle. The rebate program will go into effect in early October 2017.

Which electric vehicles qualify for the rebate

A qualifying vehicle for the new Oregon rebate must:

  • Have a base manufacturer’s suggested retail price of less than $50,000
  • Be covered by a manufacturer’s express warranty on the vehicle drive train, including the battery pack, for at least 24 months from the date of purchase
  • Be either a battery electric vehicle OR a plug-in hybrid vehicle that has at least 10 miles of EPA-rated all-electric range and warranty of at least 15 years and 150,000 miles on emission control components.
    1. $2,500 goes to vehicles with battery capacities above 10 kWh.
    2. $1,500 goes to vehicles with a battery capacity of 10 kWh or less (a simple sketch of this tiering appears after this list).
  • Be a new vehicle, or used only as a dealership floor model or test-drive vehicle
  • The rebate will apply to new electric vehicles that are purchased or leased, with a minimum 24-month lease term.
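
To make the battery-capacity tiering above concrete, here is a minimal sketch, in Python, of how the standard rebate amount could be determined from the criteria summarized in this post. The function and parameter names are mine, the checks are simplified, and the actual program rules will be spelled out by the Oregon Department of Environmental Quality.

    # Minimal sketch of the standard Oregon EV rebate tiers described above.
    # Names and structure are illustrative only, not official DEQ rules.

    def standard_rebate(msrp, battery_kwh, electric_range_miles, is_new_or_demo=True):
        """Return an estimated standard rebate in dollars, or 0 if the vehicle doesn't qualify."""
        if not is_new_or_demo:            # must be new, or a dealership floor model / test-drive vehicle
            return 0
        if msrp >= 50_000:                # base MSRP must be under $50,000
            return 0
        if electric_range_miles < 10:     # plug-in hybrids need at least 10 miles of EPA-rated range
            return 0
        return 2_500 if battery_kwh > 10 else 1_500

    # Example: a new plug-in hybrid with a 9 kWh battery and 25 miles of electric range
    print(standard_rebate(msrp=28_000, battery_kwh=9, electric_range_miles=25))  # -> 1500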

How the electric vehicle rebate will be given

  • Send in your rebate application within 6 months of buying the vehicle or starting the vehicle lease.
  • You may need to send it to the Oregon Department of Environmental Quality, or a third party non-profit. The application details have not yet been released.
  • The rebate is supposed to be issued within 60 days of the application being received (the bill only commits to an “attempt”).

Additional rebates for low-income Oregonians (aka charge ahead rebate)

Ideally, EV rebate programs should provide additional financial assistance to low-income drivers. Low-income households typically spend a larger share of their income on transportation than higher earners; transportation can comprise up to 30 percent of a low-income household budget. So the ability to save on transportation fuel and vehicle maintenance by choosing an electric vehicle can mean even more to low-income households in Oregon and beyond.

Fueling an electric vehicle in Oregon is like paying the equivalent of $0.97 for a gallon of gasoline. In addition, battery electric vehicles have fewer moving parts and don’t require oil changes, so electric vehicle maintenance costs have been estimated to be 35 percent lower than those of comparable gasoline vehicles. (The eGallon price is calculated from the most recent state-by-state residential electricity prices and the average retail gasoline prices reported by EIA; find out more at www.energy.gov/eGallon.)
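
For readers curious how an eGallon-style number comes together, the idea is simply the cost of the electricity needed to drive an electric vehicle as far as a comparable gasoline car travels on one gallon. A rough sketch of that arithmetic is below; all three inputs are illustrative assumptions on my part, not official DOE or EIA figures, though they land close to the $0.97 quoted above.

    # Back-of-the-envelope eGallon-style calculation.
    # All three inputs are illustrative assumptions, not official DOE or EIA values.

    gasoline_car_mpg = 28.5      # assumed fuel economy of a comparable gasoline car (miles per gallon)
    ev_kwh_per_mile = 0.32       # assumed electric vehicle energy use (kWh per mile)
    electricity_price = 0.107    # assumed Oregon residential electricity price ($ per kWh)

    # Cost of the electricity needed to go as far as one gallon takes the gasoline car
    egallon = gasoline_car_mpg * ev_kwh_per_mile * electricity_price
    print(f"eGallon-style price: ${egallon:.2f}")  # about $0.98 with these assumptions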

How the Oregon charge ahead rebate works

To qualify for the charge ahead rebate, applicants must:

  • Have a household income less than or equal to 80 percent of the area median income (low income) or between 80 and 120 percent of area median income (moderate income). A simple sketch of this income test appears after this list.
    1. Area median income is defined by the Oregon Housing and Community Services Department and is tied to the closest metropolitan area in Oregon.
  • Live in an area of Oregon that has elevated concentrations of air contaminants commonly attributed to motor vehicle emissions.
  • Retire or scrap a gas-powered vehicle with an engine that is at least 20 years old AND replace it with an electric vehicle.
  • The electric vehicle can be used or new.
  • Send in an application to the Oregon Department of Environmental Quality or a third-party non-profit. Details are still being worked out.
  • Get up to an additional $2,500 in rebate off the electric vehicle.
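
As with the standard rebate, a short sketch can make the income test concrete. The 80 percent and 120 percent thresholds follow the bullets above; the area median income used in the example is a purely hypothetical number, since the real figures come from the Oregon Housing and Community Services Department.

    # Minimal sketch of the charge ahead income test described above.
    # The area median income value below is hypothetical; actual figures come from
    # the Oregon Housing and Community Services Department for the nearest metro area.

    def charge_ahead_income_tier(household_income, area_median_income):
        """Classify a household as 'low income', 'moderate income', or 'ineligible'."""
        ratio = household_income / area_median_income
        if ratio <= 0.80:
            return "low income"
        if ratio <= 1.20:
            return "moderate income"
        return "ineligible"

    print(charge_ahead_income_tier(household_income=45_000, area_median_income=70_000))  # -> low income
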
How the Oregon electric vehicle rebate is funded

These rebates are being established as part of a broader transportation package, so the funding mechanisms in the bill are being levied not only for electric vehicles but also for maintaining Oregon’s roads, bridges, and tunnels and other transportation projects.

Beginning in 2020, electric vehicles will be subject to higher title and registration fees in Oregon, expected to be about $110.

Oregon will also pay for road work with a 4-cent gas tax increase, rising incrementally to 10 cents by 2024. The bill also imposes a $16 vehicle registration fee, a 0.1 percent payroll tax, and a 0.5 percent sales tax on new vehicles.

The bill additionally allows Oregon to introduce rush-hour congestion roadway tolls. Cyclists aren’t off the hook, either. Adult bicycles (defined as bikes with wheels at least 26 inches in diameter) over $200 will be subject to a $15 excise tax. These funds will go toward grants for bicycle and pedestrian transportation projects.

Overall, the electric vehicle rebate fund will be at least $12 million annually, though other monies, like donations, can be deposited into the fund too. $12 million is enough cash for 4,800 full $2,500 rebates each year.

Oregon residents bought 1,969 new pure EVs and 1,506 new PHEVs in 2016, so there’s still a good amount of room for this rebate to help grow the Oregon electric vehicle market. Overall, this is a wonderful program that will both help increase electric vehicle sales in Oregon and help expand the benefits of driving on electricity to those who need it the most.

A Quick Guide to the Energy Debates

UCS Blog - The Equation (text only) -

There’s an energy transition happening with major implications for how we use and produce electricity. But not everyone agrees on which direction the transition should take us. The ensuing debate reveals deeply-held views about markets, the role of government, and the place for state policies in a federal system.

UCS has regularly profiled the transition to clean energy, which is led by state choices and rapid growth in renewable energy, energy efficiency, and vehicle electrification. Wind and solar innovations have made these sources very competitive; as coal plants have grown older and new cleaner plants continue to be built, the mix of energy has changed.

With gas now exceeding coal, and monthly renewable generation passing nuclear, the debate has heated up. Here’s a quick rundown on the views of the actors involved.

Consumer interested in clean energy. Photo: Toxics Action Center.

What the markets say

In the electricity markets run by PJM, NYISO, and ISO-New England (covering roughly the region from Chicago to Virginia, and up to Maine), there is wide understanding that “cheap gas is coal’s fiercest enemy.” There, the debate is over how to deal with state policies that provide revenues to nuclear plants, first and foremost, but also to renewables.

These grid operators—and the stakeholders with billions of dollars of revenues from their markets—have long-running efforts to refine price signals and participation rules to ensure competition. The Federal Energy Regulatory Commission (FERC) supervises these markets, and has a similar long-term commitment to seeing these markets succeed.

When it comes to environmental policies, and the notion of environmental externalities (e.g. the costs of pollution), the economists speak for these grid organizations. These markets have accepted the Regional Greenhouse Gas Initiative (RGGI), which adds a modest price on carbon allowances, and would be ready and able to include a more impactful carbon price. But the current circumstances, where states have selected various means to correct for externalities, make the market purists upset.

Some subsidies are more equal than others?

Renewable support is the law in more than 29 states—and fossil fuels receive tens of billions of dollars in subsidies. UCS argued at FERC and in the PJM stakeholder process that market advocates lack any consistent justification for discriminating between subsidies. At best, they have said “we can live with some, but not too much, subsidy.” No comparison has been offered showing the impacts on market prices of one set of subsidies compared with another.

EIA chart showing the changing energy mix for electricity production. Credit: EIA.

Others in the debate, from opposite ends of the commercial spectrum, warn that the grid operators should seek alignment with the environmental and diversity goals expressed by consumers and policy makers. Representing consumers and local government, American Municipal Power, based in Columbus, Ohio with members in 9 states from Delaware to Michigan, and electric co-operatives in NRECA, call for respect and recognition for decisions made outside the federally-supervised markets. At the same time, Exelon, owner of nuclear plants across the eastern US, aligns itself with the state support of renewable energy now that similar state policies have surfaced for existing nuclear plants.

FERC, the arbiter of this debate, expressed sincere hope that the parties will settle this themselves, so that the agency will not have to, as Exelon put it, “require states to forgo their sovereign power to make their own environmental policy as the price of admission to the federal wholesale markets.”

Review so far

Let’s try to summarize: the market folks see gas beating coal and nuclear on economics. The nuclear folks want state policies to support existing nuclear plants. States and consumer-owned utilities seek to keep federally-supervised markets from overriding democratically-decided choices.

Enter the DOE

Secretary of Energy Rick Perry, who as governor of Texas oversaw the greatest expansion of wind energy in the US, seeks to support coal with a forthcoming Department of Energy “baseload” study. From all indications, this initiative is meant to:

1) defeat the market where gas has out-competed coal;
2) trample the consumer and voter choices for renewables; and
3) reverse the trend of lower energy costs from innovation by requiring more payments to the oldest and most expensive generators.

Unfortunately, the April 14 memo from Perry ordering this study mixes flawed assertions about reliability with assumptions about economics. Organizations across the political spectrum have labored to explain that maintaining coal plants, and even the very label of “baseload” generation, reflect economic thinking from another time.

When the debate continues, keep these facts in mind:

  • Coal provides less than 1% of electricity in New York and the 6-state New England grid.
  • The same is true in Washington, Oregon, and California.
  • At times, wind and solar have generated 50 to 60 percent or more of total electricity demand in some parts of the country, including Texas, while maintaining and even improving reliability.
  • In May, wind, solar, geothermal and biopower supplied a record 67 percent of electricity needs in California’s power pool, and more than 80 percent when you include hydropower.
  • In 2016, wind power provided more than 30 percent of Iowa’s and South Dakota’s annual electricity generation, and more than 15 percent in nine states.

With an energy transition clearly underway, some strange debates are breaking out. Like in so many things, perhaps the only consistent way to sort out the positions and policies is to follow the money.

Photo: Chris Hunkeler/CC BY-SA (Flickr)

Cooper: Nuclear Plant Operated 89 Days with Key Safety System Impaired

UCS Blog - All Things Nuclear (text only) -

The Nebraska Public Power District’s Cooper Nuclear Station, about 23 miles south of Nebraska City, has one boiling water reactor that began operating in the mid-1970s, adding about 800 megawatts of electricity to the power grid. Workers shut down the reactor on September 24, 2016, to enter a scheduled refueling outage. A valve error made during that outage eventually led to an NRC special inspection.

Following the outage, workers reconnected the plant to the electrical grid on November 8, 2016, to begin its 30th operating cycle. During the outage, workers closed two valves that are normally open while the reactor operates. Later during the outage, workers were directed to re-open the valves, and they completed paperwork indicating the valves had been opened. But a quarterly check on February 5, 2017, revealed that both of the valves remained closed. The closed valves impaired a key safety system for 89 days until the mis-positioned valves were discovered and opened. The NRC dispatched a special inspection team to the site on March 1, 2017, to look into the causes and consequences of the improperly closed valves.

The Event

Workers shut down the reactor on September 24, 2016. The drywell head and reactor vessel head were removed to allow access to the fuel in the reactor core. By September 28, the water level had been increased to more than 21 feet above the flange where the reactor vessel head is bolted to the lower portion of the vessel. Flooding this volume—called the reactor cavity or refueling well—permits spent fuel bundles to be removed while still underwater, protecting workers from the radiation.

With the reactor shut down and so much water inventory available, the full array of emergency core cooling systems required when the reactor operates was reduced to a minimal amount. The reduction of systems required to remain in service facilitates maintenance and testing of out-of-service components.

In the late afternoon of September 29, workers removed Loop A of the Residual Heat Removal (RHR) system from service for maintenance. The RHR system is like a nuclear Swiss Army knife—it can supply cooling water for the reactor core, containment building, and suppression pool and it can provide makeup water to the reactor vessel and suppression pool. Cross-connections enable the RHR system to perform so many diverse functions. Workers open and close valves to transition from one RHR mode of operation to another.

As indicated in Figure 1, the RHR system at Cooper consisted of two subsystems called Loop A and Loop B. The two subsystems provide redundancy—only one loop need function for the necessary cooling or makeup job to be accomplished successfully.

Fig. 1 (Source: Nebraska Public Power District, Individual Plant Examination (1993))

RHR Loop A features two motor-driven pumps (labeled P-A and P-C in the figure) that can draw water from the Condensate Storage Tank (CST), suppression chamber, or reactor vessel. The pump(s) send the water through, or around, a heat exchanger (labeled HX-A). When passing through the heat exchanger, heat is conducted through the metal tube walls to be carried away by the Service Water (SW) system. The water can be sent to the reactor vessel, sprayed inside the containment building, or sent to the suppression chamber. RHR Loop B is essentially identical.

Work packages for maintenance activities include steps when applicable to open electrical breakers to de-energize components and protect workers from electrical shocks and close valves to allow isolated sections of piping to be drained of water so valves or pumps can be removed or replaced. The instructions for the RHR Loop A maintenance begun on September 29 included closing valves V-58 and V-60. These are valves that can only be opened and closed manually using handwheels. Valve V-58 is in the minimum flow line for RHR Pump A while V-60 is in the minimum flow line for RHR Pump C. These two minimum flow lines connect downstream of these manual valves and then this common line connects to a larger pipe going to the suppression chamber.

Motor-operated valve MOV-M016A in the common line automatically opens when either RHR Pump A or C is running and the pump’s flow rate is less than 2,731 gallons per minute. The large RHR pumps generate considerable heat when they are running. The minimum flow line arrangement ensures that there’s sufficient water flow through the pumps to prevent them from being damaged by overheating. MOV-M016A automatically closes when pump flow rises above 2,731 gallons per minute to prevent cooling flow or makeup flow from being diverted.
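
To make the interlock described above easier to follow, here is a simplified sketch of the minimum-flow valve logic in Python. It is my own illustration of the behavior described in this post, not the plant’s actual control logic.

    # Simplified sketch of the MOV-M016A minimum-flow interlock described above.
    # This illustrates the described behavior only; it is not Cooper's actual control logic.

    MIN_FLOW_GPM = 2731  # flow threshold cited in the post

    def min_flow_valve_should_be_open(pump_a_running, pump_c_running, flow_gpm):
        """The valve opens when a pump runs below the threshold, and closes above it
        so that cooling or makeup flow is not diverted to the suppression chamber."""
        return (pump_a_running or pump_c_running) and flow_gpm < MIN_FLOW_GPM

    # With manual valves V-58 and V-60 mistakenly shut, the interlock could still open
    # MOV-M016A, but no water could actually recirculate, so a pump running below the
    # threshold received no protective minimum flow.
    print(min_flow_valve_should_be_open(pump_a_running=True, pump_c_running=False, flow_gpm=1200))  # -> True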

The maintenance on RHR Loop A was completed by October 7. The work instructions directed operators to reopen valves V-58 and V-60 and then seal the valves in the opened position. For these valves, sealing involved installing a chain and padlock around the handwheel so the valve could not be repositioned. The valves were sealed, but mistakenly in the closed rather than opened position. Another operator independently verified that this step in the work instruction had been completed, but failed to notice that the valves were sealed in the wrong position.

At that time during the refueling outage, RHR Loop A was not required to be operable. All of the fuel had been offloaded from the reactor core into the spent fuel pool. On October 19, workers began transferring fuel bundles back into the reactor core.

On October 20, operators declared RHR Loop A operable. Due to the closed valves in the minimum flow lines, RHR Loop A was actually inoperable, but that misalignment was not known at the time.

The plant was connected to the electrical grid on November 8 to end the refueling outage and begin the next operating cycle.

Between November 23 and 29, workers audited all sealed valves in the plant per a procedure required to be performed every quarter. Workers confirmed that valves V-58 and V-60 were sealed, but failed to notice that the valves were sealed closed instead of opened.

On February 5, 2017, workers were once again performing the quarterly audit of all sealed valves. This time, they noticed that valves V-58 and V-60 were not opened as required. They corrected the error and notified the NRC of the discovery.

The Consequences

Valves V-58 and V-60 had been improperly closed for 89 days, 12 hours, and 49 minutes. During that period, the pumps in RHR Loop A had been operated 15 times for various tests. The longest time that any pump was operated without its minimum flow line available was determined to be 2 minutes and 18 seconds. Collectively, the pumps in RHR Loop A operated for a total of 21 minutes and 28 seconds with flow less than 2,731 gallons per minute.

Running the pumps at less than “minimum” flow introduced the potential for damage from overheating. Workers undertook several steps to determine whether any damage had occurred. Considerable data is collected during periodic testing of the RHR pumps (which is how it was known that the longest a pump ran without its minimum flow line was 2 minutes and 18 seconds). Workers reviewed data such as differential pressures and vibration levels from tests over the prior two years and found that current pump performance was unchanged from performance before the fall 2016 refueling outage.

Workers also calculated how long an RHR pump could operate without minimum flow before becoming damaged. They estimated that time to be 32 minutes. To double-check their work, a consulting firm was hired to independently answer the same question. The consultant concluded that it would take an hour for an RHR pump to become damaged. (The 28-minute difference between the two calculations was likely due to the workers onsite making conservative assumptions that the more detailed analysis was able to relax. But it’s a distinction without a difference—both calculations leave ample margin above the total time the RHR pumps actually ran.)
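
Since the post quotes several durations, a quick bit of arithmetic shows how much margin remained between the pumps’ actual low-flow operation and the estimated damage thresholds; this simply restates the numbers above.

    # Comparing actual low-flow run times against the estimated damage thresholds quoted above.

    longest_single_run_s = 2 * 60 + 18    # 2 minutes 18 seconds
    total_low_flow_run_s = 21 * 60 + 28   # 21 minutes 28 seconds
    onsite_estimate_s = 32 * 60           # 32 minutes (onsite calculation)
    consultant_estimate_s = 60 * 60       # 60 minutes (independent consultant)

    print(onsite_estimate_s - total_low_flow_run_s)      # 632 seconds, roughly 10.5 minutes of margin
    print(consultant_estimate_s - total_low_flow_run_s)  # 2312 seconds, roughly 38.5 minutes of margin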

The testing and analysis clearly indicate that the RHR pumps were not damaged by operating during the 89-plus days their minimum flow lines were unavailable.

The Potential Consequences  

The RHR system can perform a variety of safety functions. If the largest pipe connected to the reactor vessel were to rupture, the two pumps in either RHR loop are designed to provide more than enough makeup flow to refill the reactor vessel before the reactor core overheats.

The RHR system has high-capacity, low-head pumps. This means the pumps supply a lot of water (many thousands of gallons each minute) but at low pressure. The RHR pumps deliver water at roughly one-third of the normal operating pressure inside the reactor vessel. When a small or medium-sized pipe ruptures, cooling water drains out, but the reactor vessel pressure takes longer to drop below the point where the RHR pumps can supply makeup flow. During such an accident, the RHR pumps automatically start but send water through the minimum flow lines until the reactor vessel pressure drops low enough. The closure of valves V-58 and V-60 could therefore have resulted in RHR Pumps A and C being disabled by overheating about an hour into an accident.

Had RHR Pumps B and D remained available, the loss of Pumps A and C would have been inconsequential. Had RHR Pumps B and D also been unavailable (such as due to failure of the emergency diesel generator that supplies them electricity), the outcome could have been far worse.

NRC Sanctions

The NRC’s special inspection team identified the following two apparent violations of regulatory requirements, both classified as Green in the agency’s Green, White, Yellow and Red classification system:

  • Exceeding the allowed outage time in the operating license for RHR Loop A being inoperable. The operating license permitted Cooper to run for up to 7 days with one RHR loop unavailable, but the reactor operated far longer than that period with the mis-positioned valves.
  • Failure to implement an adequate procedure to control equipment. Workers used a procedure every quarter to check sealed valves. But the guidance in that procedure was not clear enough to ensure workers verified both that a valve was sealed and that it was in the correct position.

UCS Perspective

This near-miss illustrates the virtues, and limitations, of the defense-in-depth approach to nuclear safety.

The maintenance procedure directed operators to re-open valves V-58 and V-60 when the work on RHR Loop A was completed.

While quite explicit, that procedure step alone was not deemed reliable enough. So, the maintenance procedure required a second operator to independently verify that the valves had been re-opened.

While the backup measure was also explicit, it was not considered an absolute check. So, another procedure required each sealed valve to be verified every quarter.

It would have been good had the first quarterly check identified the mis-positioned valves.

It would have been better had the independent verifier found the mis-positioned valves.

It would have been best had the operator re-opened the valves as instructed.

But because no single barrier is 100% reliable, multiple barriers are employed. In this case, the third barrier detected and corrected a problem before it could contribute to a really bad day at the nuclear plant.

Defense-in-depth also accounts for the NRC’s levying two Green findings instead of imposing harsher sanctions. The RHR system performs many safety roles in mitigating accidents. The mis-positioned valves impaired, but did not incapacitate, one of two RHR loops. That impairment could have prevented one RHR loop from successfully performing its necessary safety function during some, but not all, credible accident scenarios. Even had the impairment taken RHR Loop A out of the game, other players on the Emergency Core Cooling System team at Cooper could have stepped in.

Had the mis-positioned valves left Cooper with a shorter list of “what ifs” that needed to line up to cause disaster or with significantly fewer options available to mitigate an accident, the NRC’s sanctions would have been more severe. The Green findings are sufficient in this case to remind Cooper’s owner, and other nuclear plant owners, of the importance of complying with safety regulations.

Accidents certainly reveal lessons that can be learned to lessen the chances of another accident. Near-misses like this one also reveal lessons of equal value, but at a cheaper price.

How President Trump’s Proposed Budget Cuts Would Harm Early Career Scientists

UCS Blog - The Equation (text only) -

Kaila Colyott is coming close to graduation as a Ph.D. candidate at the University of Kansas, but she’s not finishing with the same enthusiasm for her career prospects that she began graduate school with.  At the beginning, she wasn’t particularly worried about getting a job after graduation. “I was a first generation student coming from a largely uneducated background. I was pretty stoked about doing science, and I was told that more education would help me land a job in the future.”

She was never told that, in the current market, academics graduate with significant debt and without a promise of a job. Ms. Colyott said that she grew more worried as she learned more about the job market in academia: “I became more concerned over time as I witnessed academics hustling for money all the time.”

President Trump’s proposed across-the-board cuts to scientific research and training would make this problem worse. While these cuts are out of step with what Congress wants and have yet to be enacted, they would have tremendous impacts on our nation’s scientific capacity and its ability to enact science-based policy.

Incentives for scientific careers are dwindling

According to a National Science Foundation (NSF) report, American universities awarded 54,070 PhDs in 2014, yet 40% of those newly minted doctorates had no job lined up after graduation. There is a direct connection between funding for scientific research and job opportunities for early career scientists: cutting the former can slash the hopes and dreams of those counting on the latter.

I am fortunate to work in science policy because of the funding and training opportunities afforded to me as an early career scientist. Having received two fellowships as a graduate student from NSF, one that allowed me to take on an internship in the White House’s Office of Science and Technology Policy, and a post-doctoral fellowship through the Department of Energy’s Oak Ridge Institute for Science and Education, I received robust training in both science and policy analysis.

Such training also offered me a career path outside of academia, made necessary by a limited number of tenure-track positions available at universities. Thus, I view this type of funding and training as essential for early career scientists as many will need to seek career paths outside of academia given the job market. Yet, President Trump’s proposed budget cuts signal to me that these same opportunities may not be afforded to a younger generation of scientists—such a signal is concerning.

Government funding of science is essential to early career scientists

There was a time when most PhD-level scientists would enter into a tenure-track position at a university after graduation. Today, even the most accomplished students pursuing a PhD have a particularly difficult time landing a tenure-track position because there are very few jobs and competition is stiff.

This creates the need for other options. Among those graduating with a PhD in 2014 who had indicated they had a post-graduate commitment, 39% were entering into a post-doc position (a temporary research position most common in the sciences). While many of these exist in research labs at universities across the country, there are also many post-doc positions available in the federal government.

Such opportunities can expose early career scientists to the process by which science informs the policy-making process in government while still allowing them to conduct research. This allows early career scientists the chance to increase both their interest and efficacy in science policy. Additionally, agencies such as the NSF and the National Institutes for Health (NIH) offer graduate students and post-docs fellowships and grants that allow them to build skills in forming their own research ideas and writing grant proposals.

Opportunities for early career scientists to obtain government fellowships or grants in the sciences may decrease under the Trump administration, if the administration’s budget cuts are actualized. For example, President Trump has proposed to cut NSF’s budget by 11 percent. As NSF struggled with its 2018 budget request to meet the 11% cut, the agency decided it would need to cut half of the annual number of prestigious graduate research fellowships offered to PhD students.

Such a cut would significantly reduce the availability of fellowships for biologists and environmental scientists, especially since the NSF biology directorate announced in June that it would cease its funding of Doctoral Dissertation Improvement Grants (DDIGs) due to the time needed to manage the program. Additionally, many other fellowships and grants have been proposed to be cut or re-structured such as the STAR grant program at EPA, the National Space Grant and Fellowship Program at NASA, and the Sea Grant program at NOAA.

These programs have led to many scientific advances that have reduced the costs of regulations, protected public health and the environment, and saved lives. For example, the STAR grant program at EPA implemented several major pollution focused initiatives such as the Particulate Matter Centers, Clean Air Research Centers, and Air, Climate and Energy Centers, which have together produced substantial research showing air pollution can decrease human life expectancy. A recent report by the National Academies of Sciences noted that this research likely saved lives and reduced healthcare costs, having helped to inform policies that improved air quality nationwide.

While many of the extreme proposed cuts to science funding will likely not come to fruition, given bipartisan support of scientific funding in Congress, even small cuts to government programs that offer funding for science could really impact job prospects for early career scientists. And the uncertainty created by these suggested cuts will discourage young people, especially those from disadvantaged backgrounds, from pursuing scientific careers at all.

Neil Ganem, a School of Medicine assistant professor of pharmacology and experimental therapeutics at Boston University, described how the realization of these cuts would have a negative economic effect. “It would mean that hospitals, medical schools, and universities would immediately institute hiring freezes. Consequently, postdocs, unable to find academic jobs, would start accumulating and be forced into other career paths, many, I’m sure, outside of science. Jobs would be lost. The reality is that if principal investigators don’t get grants, then they don’t hire technicians, laboratory staff, and students; they also don’t buy [lab supplies] or services from local companies. There is a real trickle-down economic effect.” Indeed, such cuts could be devastating to post-docs and their families, especially in the case a post-doc was offered a position only to see that funding pulled at the last minute.

Early career scientists are paying attention

When asked if Trump’s proposed budget cuts to basic research made her more concerned about her job prospects, Colyott said that they were one of many factors, but she expressed greater concern for the generation of young scientists below her. “These cuts make me concerned about younger scientists who won’t have the same resources that I had at my disposal—like NSF’s Graduate Research Fellowships or the DDIGs. Having the ability to propose my own ideas and receive funding for them built a lot of confidence in me such that I felt I could continue to do science.”

Colyott has been very active in science outreach as a graduate student, is passionate about the field, and intends to seek a job in outreach after graduation to get first-generation students like her interested in science. However, she is now wary of encouraging young students into academia: “Why would I want to encourage others to enter science when I already am nervous myself about my own job prospects?”

Even if President Trump’s egregious cuts to scientific funding do not come true, they most certainly send a signal to scientists, especially young scientists, that their skills are not valued. This message can be particularly disheartening to students attempting to gain a career in science, which may dissuade them from entering the field.

So, I have my own message for these younger scientists. I see you, I hear you, and I completely understand your fears about your job prospects. You deserve a chance to advance our understanding on scientific topics that are vital to better humanity. Your scientific research is valued and it is important, and there is a huge community of others who believe the very same thing. Science is collaborative by nature—I assure you, we all will work together to lift you up and make sure your voice is heard.

One of The Largest Icebergs on Record Just Broke off Antarctica. Now What?

UCS Blog - The Equation (text only) -

An iceberg, among the largest on record (since satellites started tracking in 1978), broke off the Larsen C ice shelf along the Antarctic Peninsula. The iceberg covers an area greater than that of Delaware and has a volume twice that of Lake Erie. What were the origins of this event, and now what?

Origins of the gigantic iceberg

Terms for the cold regions of Earth and their contributions to global sea level rise, as of the 2013 publication of the Intergovernmental Panel on Climate Change Fifth Assessment Report (Working Group 1). Source: IPCC AR5 WG1 Figure 4-25.

In order to understand the present and future implications, we can quickly run through some facts regarding the origins of this gargantuan iceberg. As we do, it’s helpful to get a refresher on terms and recent trends for sea level rise contributions from cold regions (see IPCC AR5 WG1 Figure 4-25, above).

Glaciers outside of Greenland and Antarctica have been the largest ice source contribution to global sea level rise between 1993 and 2009.  Antarctica and Greenland have increased their contribution over the recent part of this period.

Now, a quick look at the iceberg, and how it formed:

What?: The iceberg, likely to be named A68, weighs more than a trillion tons.

Where?: This iceberg used to be part of the floating Larsen C Ice Shelf located along that part of Antarctica that looks like a skinny finger pointing toward South America.

When?:  The iceberg broke away sometime between July 10 and July 12, 2017 (uncertainty due to the gap between repeat passes by satellites).  Despite the current predominance of polar darkness in the region, several satellites detected this event with special instruments: NASA’s Aqua MODIS, NASA and NOAA’s Suomi VIIRS, European Space Agency Sentinel-1 satellites.

Why?:  It is natural for floating ice shelves to break off – or to “calve” – icebergs, as was captured in this unforgettable time lapse video clip from the film Chasing Ice.  The Larsen C ice shelf is a type that is fed by land-based ice – called glaciers – on the Antarctic Peninsula. The shelf size depends on the supply of ice from the glaciers and snow minus the loss of ice from calving and melting.

While calving is entirely natural, scientists are investigating other factors that could have played a role in the size and the timing of this event.  An ice shelf can melt and thin if the surface air temperature or ocean waters beneath an ice shelf warm above the freezing point.  The Antarctic Peninsula has experienced surface temperature warming over recent decades that is unprecedented over the last two millennia in the region.

Now what?

Larsen B ice shelf demise (NASA MODIS image by Ted Scambos, National Snow and Ice Data Center, University of Colorado, Boulder). Source: https://nsidc.org/news/newsroom/larsen_B/index.html

Immediate Risks: Not much in terms of global sea level rise, since the ice shelf was already floating. It’s similar to the familiar demonstration of ice cubes melting in a cup of water: the water level stays the same. If iceberg A68 had instead suddenly calved from land-based ice, according to Gavin Schmidt (NASA), it would have contributed to global sea level rise.
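
For readers who want the physics behind that ice-cube demonstration, here is the standard Archimedes' principle argument; it is a textbook result, not an analysis specific to A68. A floating iceberg of mass $m$ displaces a volume of water whose weight equals its own:

$$V_{\text{displaced}} = \frac{m}{\rho_{\text{water}}}$$

When the ice melts, the same mass $m$ becomes liquid water occupying

$$V_{\text{melt}} = \frac{m}{\rho_{\text{water}}} = V_{\text{displaced}},$$

so the meltwater exactly fills the volume the ice was already displacing and the water level does not change. (Strictly speaking, fresh meltwater is slightly less dense than the seawater it displaces, so there is a very small positive contribution, but it is negligible compared with ice that slides off land into the ocean.)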

The iceberg could pose a navigation hazard for ships. Iceberg A68 could drift for years and, based on typical iceberg tracks for this region, would likely move to lower latitudes where more ships would have to avoid getting too close. For now, few ships head that far south during the Antarctic winter, and those that do are generally more concerned about large waves in the seas surrounding Antarctica. Those waters have nearly unlimited fetch, so strong winds can generate some of the largest waves in the world, a hazard with a well-earned reputation among seafarers that is embedded in the nautical terms for these southern latitudes: the “roaring forties” and “furious fifties.”

Near-term risks:  Scientists will closely track developments to see if the Larsen C ice shelf rebounds or follows the fate of nearby and lower latitude ice shelves that have disintegrated (Larsen A and Larsen B) over the past two decades.

The data that will be tracked include processes observed during Larsen B disintegration such as meltwater ponding, changes to snow accumulation or loss, and meltwater penetrating deep into the ice shelf through cracks that can increase ice loss.

To better understand the risks, we also need critical information, currently difficult to obtain, regarding ocean temperatures underneath the Larsen C ice shelf.   Warmer ocean waters lapping at the new fresh edge of the Larsen C ice shelf and penetrating deeper underneath could increase the risks for Larsen C shelf thinning and potential disintegration.

Long-term risks: Ice shelves buttress glaciers. If ice shelves are no longer there to buttress the glaciers and “put the brakes on” the flow of ice from land, the glaciers could accelerate and directly contribute to sea level rise. Multiple studies documented flow rates several times greater in the glaciers behind the Larsen B ice shelf after its complete disintegration.

“Glacier-ice shelf interactions: In a stable glacier-ice shelf system, the glacier’s downhill movement is offset by the buoyant force of the water on the front of the shelf. Warmer temperatures destabilize this system by lubricating the glacier’s base and creating melt ponds that eventually carve through the shelf. Once the ice shelf retreats to the grounding line, the buoyant force that used to offset glacier flow becomes negligible, and the glacier picks up speed on its way to the sea.”  Image by Ted Scambos and Michon Scott, National Snow and Ice Data Center, University of Colorado, Boulder. Source: NSIDC

If a similar sequence of events were to occur with the Larsen C ice shelf, coastal planners would need to know the scale of the potential risk and how quickly it could play out. The Larsen C ice shelf is fed by glaciers on the skinny Antarctic Peninsula, which together hold enough ice to contribute an estimated 1 cm to future global sea level.

The pace and timing are big questions for scientists to monitor and to project with models that incorporate the processes observed. Such models could improve sea level rise projections both for a world in which the Paris Agreement is fully implemented (i.e., with global temperature rise limited to no more than 2 degrees Celsius above pre-industrial levels) and for higher emissions scenarios. A good resource on current estimates for when chronic inundation could set in for many U.S. coastal communities is the new UCS report released yesterday and the accompanying peer-reviewed publication, Dahl et al., 2017.

IPCC 2013 WG1 Figure 4-25; Ted Scambos/NSIDC; NASA

Northern Plains Drought Shows (Again) that Failing to Plan for Disasters = Planning to Fail

UCS Blog - The Equation (text only) -

As the dog days of summer wear on, the northern plains are really feeling the heat. Hot, dry weather has quickly turned into the nation’s worst current drought in Montana and the Dakotas, and drought conditions are slowly creeping south and east into the heart of the Corn Belt. Another year and another drought presents yet another opportunity to consider how smart public policies could make farmers and rural communities more resilient to these recurring events.

Let’s start with what’s happening on the ground: Throughout the spring and early summer, much of the western United States has been dry, receiving less than half of normal rainfall levels. And the hardest hit is North Dakota. As of last week, 94 percent of the state was experiencing some level of abnormally dry conditions or drought, with over a quarter of the state in severe or extreme drought (a situation that only occurs 3 to 5 percent of the time, or once every 20 to 30 years).

Throughout the spring and early summer, drought conditions have worsened across the Dakotas and Montana, stressing crops and livestock.
Image: http://droughtmonitor.unl.edu/

But this drought is not just about a dry spring. Experts believe the problem started last fall, when first freeze dates were several weeks later than usual, creating a “bonus” growing period for crops like winter wheat and pasture grasses, which drew more water from the soil. This is an important pattern for agriculture to watch, as recent temperature trends point to warmer winters.

Bad news for wheat farmers (and bread eaters)

The timing of the drought is particularly damaging to this region’s farm landscape, which centers around grasslands for grazing livestock, along with a mix of crops including wheat, corn, soy, and alfalfa.

Spring wheat has been especially hard hit—experts believe this is the worst crop in several decades in a region that produces more than 80 percent of the country’s spring wheat. (Here’s a great map of the wheat varieties grown across the country, which makes it easy to see that the bread and pasta products we count on come from Montana and the Dakotas).

As grasses wither, cattle ranchers have only bad options

More than 60 percent of the region’s pasture grasses are also in poor or very poor condition, leaving cattle without enough to eat. Given the forecast of continued high temperatures, and dry conditions creeping into parts of the Corn Belt (at a time of year when corn is particularly sensitive to heat and drought), it is shaping up to be a difficult season for farmers and ranchers all around the region.

So it’s appropriate that the Secretary of Agriculture released a disaster proclamation in late June, allowing affected regions to apply for emergency loans. But another of the Secretary’s solutions for ranchers with hungry livestock—authorizing “emergency grazing” and, just this week, “emergency haying” on grasslands and wetlands designated off-limits to agriculture—could exacerbate another problem.

Short-term emergencies can hurt our ability to plan for the long-term

The Conservation Reserve Program (CRP), created by the 1985 Farm Bill, pays landowners a rental fee to keep environmentally sensitive lands out of agricultural production, generally for 10-15 years. It also serves to protect well-managed grazing lands as well as to provide additional acres for grazing during emergencies such as drought.

Instead of planting crops on these acres, farmers plant a variety of native grasses and tree species well suited to provide flood protection, wildlife and pollinator habitat, and erosion prevention. In 2016, almost 24 million acres across the United States (an area roughly the size of Indiana) were enrolled in CRP. This included 1.5 million acres in North Dakota, which represents approximately 4 percent of the state’s agricultural land.

While this might sound like a lot, CRP numbers across the country are down, and in fact North Dakota has lost half of its CRP acreage since 2007. This is due in part to Congress  imposing caps on the overall acreage allowed in the program, but in large part due to the historically high commodity prices over the same time period, as well as increased demand for corn-based ethanol.

The loss of CRP acreage over the last decade demonstrates high concentrations of land conversion in the Northern Plains, nearly overlapping with the current drought. Image: USDA Farm Service Agency

Research on crop trends tells a complicated story about how effective this program is at protecting these sensitive lands over the long term. The data show that grasslands, notably CRP acreage, are being lost rapidly across the United States. CRP acreage often comes back into crop production when leases expire (see examples of this excellent research here, here and finally here, the last of which notes that CRP lands often turn into corn or soy fields), potentially erasing the environmental benefits these set-aside lands provided.

At the same time, with negotiations toward a new Farm Bill underway, some ranchers and lawmakers are looking for even more “flexibility” in the CRP program. Some have expressed concerns about the amount of land capped for CRP. Some feel that CRP rental rates are too high, tying up the limited suitable land that young farmers need to get started, while others believe there are not enough new contracts accepted (for things like wildlife habitat) because of caps.

The bottom line is that it is critical to have emergency plans in place to protect producers in cases of drought. Emergency livestock grazing on CRP acreage is one solution to help prevent ranchers from selling off their herds (such sell-offs are already being reported). But, if CRP acreage continues to decline, what will happen when the next drought occurs, or if this drought turns into a multi-year disaster? And what will happen if floods hit the region next year, and the grasslands that could help protect against that emergency aren’t there?

Unfortunately, short-term emergencies can hurt our ability to plan for the long term, and the trend toward losing CRP acreage and grasslands is one example of this. It is no simple task for policy to support short-term needs while also encouraging long-term risk reduction.

Agroecology helps farmers protect land while still farming it

But there’s another way to achieve conservation goals that doesn’t depend upon setting land aside. A number of existing farm bill programs encourage farmers to use practices on working lands that build healthier soils to retain more water, buffering fields from both drought and flood events. Increasing investment and strengthening elements of these programs is an effective way to help farmers and ranchers build long-term resilience.

Recent research from USDA scientists in the Northern Plains highlights climate change impacts and adaptation options for the region, and their proposed solutions sound much like the agroecological practices UCS advocates for: increased cropping intensity and cover crops to protect the soil, more perennial forages, integrated crop and livestock systems, as well as economic approaches that support such diversification and the extension and education services needed to bring research to farmers.

As I wrote last year, drought experts recognize that proactive planning is critical: thinking ahead about how disasters can best be managed through activities such as rainfall monitoring, grazing plans, and water management. Here we are again with another drought, and climate projections tell us that things are likely to get worse. This year, as a new Farm Bill is being negotiated, we have an opportunity to think long-term and make investments that will help us better manage future droughts.

 

As Coal Stumbles, Wind Power Takes Off in Wyoming

UCS Blog - The Equation (text only) -

After several years of mostly sitting on the sidelines, Wyoming is re-entering the wind power race in a big way. Rocky Mountain Power recently announced plans to invest $3.5 billion in new wind and transmission over the next three years. This development—combined with the long-awaited start of construction on what could be the nation’s largest wind project—will put Wyoming among the wind power leaders in the region. That’s welcome news for a state economy looking to rebound from the effects of the declining coal industry.

Capitalizing on untapped potential

Wyoming has some of the best wind resources in the country. The state ranks fifth nationally in total technical potential, but no other state has stronger Class 6 and 7 wind resources (considered the best of the best). And yet, wind development has remained largely stagnant in Wyoming since 2010.

In the last seven years, just one 80-megawatt wind project came online in Wyoming as the wind industry boomed elsewhere—more than doubling the installed US wind capacity to 84,000 megawatts.

Fortunately, it appears that Wyoming is ready to once again join the wind power bonanza, bringing a much-needed economic boost along with it. On June 29th, Rocky Mountain Power—Wyoming’s largest power provider—filed a request with regulators for approval to make major new investments in wind power and transmission. The plan includes upgrading the company’s existing wind turbines and adding up to 1,100 MW of new wind projects by 2020, nearly doubling the state’s current wind capacity.

In addition to the $3.5 billion in new investments, Rocky Mountain Power estimates that the plan will support up to 1,600 construction jobs and generate as much as $15 million annually in wind and property tax revenues (on top of the $120 million in construction-related tax revenue) to help support vital public services. What’s more—thanks to the economic competitiveness of wind power—these investments will save consumers money, according to the utility.

Rocky Mountain Power isn’t the only company making a big investment in Wyoming’s rich wind resources. After more than a decade in development, the Power Company of Wyoming (PCW) has begun initial construction on the first phase of the two-phase Chokecherry and Sierra Madre wind project, which will ultimately add 3,000 MW of wind capacity in Carbon County. The $5 billion project is expected to support 114 permanent jobs when completed, and hundreds more during the 3-year construction period. PCW also projects that over the first 20 years of operation, the massive project will spur about $780 million in total tax revenues for local and state coffers.

Diversifying Wyoming’s economy with wind

When completed, these two new wind investments will catapult Wyoming to the upper tier of leaders in wind development in the west and nationally. And combined with Wyoming’s existing wind capacity, the total annual output from all wind projects could supply nearly all of Wyoming’s electricity needs, if all the generation was consumed in state. That’s not likely to happen though, as much of the generation from the Chokecherry and Sierra Madre project is expected to be exported to other western states with much greater energy demands.

Still, the wind industry is now riding a major new wave of clean energy momentum in a state better known for its coal production.

Coal mining is a major contributor to Wyoming’s economy, as more than 40 percent of all coal produced in the US comes from the state’s Powder River Basin. But coal production has fallen in recent years as more and more coal plants retire and the nation transitions to cleaner, more affordable sources of power. In 2016, Wyoming coal production dropped by 20 percent compared with the previous year, hitting a nearly 20-year low. That resulted in hundreds of layoffs and confounded the state’s efforts to climb out of a long-term economic slump.  And while production has rebounded some this year, many analysts project the slide to continue over the long-term.

Of course, Wyoming's recent wind power investments and their substantial benefits can't, by themselves, replace all the losses from the coal industry's decline. But a growing wind industry can offset some of the damage and play an important role in diversifying Wyoming's fossil-fuel-dependent economy. In fact, Goldwind Americas, the US affiliate of a large Chinese wind turbine manufacturer, recently launched a free training program for unemployed coal miners in Wyoming who want to become wind turbine technicians.

A growing wind industry can also provide a whole new export market for the state as more and more utilities, corporations, institutions and individual consumers throughout the west want access to a clean, affordable, reliable and carbon-free power supply.

Sustaining the momentum

As the wind industry tries to build on its gains in Wyoming, what's not clear today is whether the state legislature will help foster more growth or stand in its way. In the past year, clean energy opponents in the Wyoming legislature have made several attempts to stymie development, including proposals to significantly increase an existing modest tax on wind production (Wyoming is the only state in the country that taxes wind production) and to penalize utilities that supply wind and solar power to Wyoming consumers. Ultimately, wiser minds prevailed and these efforts were soundly defeated.

That’s good news for all residents of Wyoming. Wind power has the potential to boost the economy and provide consumers with clean and affordable power. Now that the wind industry has returned to Wyoming, the state should do everything it can to keep it there.

Photo: Flickr, Wyoming_Jackrabbit

The San Francisco Bay Area Faces Sea Level Rise and Chronic Inundation

UCS Blog - The Equation (text only) -

Looking across San Francisco Bay at the city's rapidly rising skyscrapers, it's easy to see why Ingrid Ballman and her husband chose to move from San Francisco to the town of Alameda after their son was born. With streets lined with single-family bungalows painted in a rainbow of pastel colors and restaurant patios full of senior citizens watching pelicans hunt offshore, Alameda is a world away from the gigabits-per-second pace of life across the bay.

Children playing at Alameda's Crown Memorial State Beach along San Francisco Bay. An idyllic place to play, the beach is described by the California Department of Parks and Recreation as “a great achievement of landscaping and engineering,” a description that applies to much of the Bay Area's waterfront.

“I had a little boy and it's a very nice place to raise a child–very family-oriented, the schools are great. And we didn't think much about any other location than Alameda,” Ballman says. Alameda has been, by Bay Area standards, relatively affordable, though with median home prices there more than doubling in the last 15 years, this is becoming less the case.

After Ballman and her husband bought their home she began to think more about the island’s future. “At some point,” she says carefully, “it really became clear that we had picked one of the worst locations” in the Bay Area.

A hotspot of environmental risk

The City of Alameda is located on two islands…sort of. Alameda Island, the larger of the two, is truly an island, but it only became so in 1902, when swamps along its southeastern tip were dredged and the Oakland Estuary was created. Bay Farm Island, the smaller of the two, used to be an island, but reclamation of the surrounding marshes has turned it into a peninsula connected to the mainland. In the 1950s, Americans flocked to the suburbs in search of the American Dream of a house with a white picket fence and 2.5 children. Alameda Island, home to a naval base and with little space for new housing, responded by filling in marshes, creating 350 additional acres. Bay Farm Island was also expanded with fill, extending it farther out into the bay.

The filling of areas of San Francisco Bay was common until the late 1960s, when the Bay Conservation and Development Commission was founded.

Many Bay Area communities are built on what used to be marshland. These low-lying areas are particularly susceptible to sea level rise and coastal flooding. Alameda Island is circled in red; Bay Farm Island is just to the south.

While many former wetland areas are slated for restoration, many others now house neighborhoods, businesses, and schools, and are among the Bay Area’s more affordable places to live. The median rent for an apartment in parts of San Mateo and Alameda Counties where fill has been extensive can be half what it is in San Francisco’s bedrock-rooted neighborhoods.

When Bay Area residents think about natural hazards, many of us think first of earthquakes. In Alameda, Ballman notes, the underlying geology makes the parts of the island that are built on fill highly susceptible to liquefaction during earthquakes. It is precisely this same geology that places communities built on former wetlands in the crosshairs of a growing environmental problem: chronic flooding due to sea level rise.

Chronic inundation in the Bay Area

Ballman studies a map I brought showing the extent of chronic inundation in Alameda under an intermediate sea level rise scenario that projects about 4 feet of sea level rise by the end of the century. The map is a snapshot from UCS's latest national-scale analysis of community-level exposure to sea level rise.

“Right here is my son's school,” she says, pointing to a 12-acre parcel of land that's almost completely inundated on my map. Under this intermediate scenario, the school buildings are safe and it's mostly athletic fields that are frequently flooded.

I haven’t brought along a map of chronic inundation with a high sea level rise scenario–about 6.5 feet of sea level rise by 2100–for Ballman to react to, but with a faster rate of sea level rise, her son’s school buildings would flood, on average, every other week by the end of the century. While this scenario seems far off, it’s within the lifetime of Ingrid’s son. And problems may well start sooner.

Seas are rising more slowly on the West Coast than on much of the East and Gulf Coasts, which means that most California communities will have more time to plan their response to sea level rise than many communities along the Atlantic coast. Indeed, by 2060, when the East and Gulf Coasts will have a combined 270 to 360 communities where 10% or more of the usable land is chronically inundated, the West Coast will have only 2 or 3. Given how densely populated the Bay Area is, however, even small changes in the reach of the tides can affect many people.

As early as 2035, under an intermediate sea level rise scenario, neighborhoods all around the Bay Area (on Bay Farm Island and in Alameda, Redwood Shores, Sunnyvale, Alviso, Corte Madera, and Larkspur) would experience flooding 26 times per year or more, UCS's threshold for chronic flooding. By 2060, the number of affected neighborhoods grows to include Oakland, Milpitas, Palo Alto, East Palo Alto, and others along the corridor between San Francisco and Silicon Valley.
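
For readers who want the thresholds in plain terms, here is a minimal sketch of the two definitions used above: an area counts as chronically inundated if it floods 26 or more times per year, and a community is counted in the tallies above when 10% or more of its usable land is chronically inundated. Only the thresholds come from the analysis described here; the example numbers below are invented for illustration.

```python
# Minimal illustration of the two thresholds described above. The example
# inputs are invented; only the thresholds come from the analysis.

FLOODS_PER_YEAR = 26        # chronic inundation threshold (about every other week)
USABLE_LAND_SHARE = 0.10    # share of usable land for a community to be counted

def area_is_chronically_inundated(floods_per_year: float) -> bool:
    return floods_per_year >= FLOODS_PER_YEAR

def community_is_counted(chronic_acres: float, usable_acres: float) -> bool:
    return chronic_acres / usable_acres >= USABLE_LAND_SHARE

print(365 / FLOODS_PER_YEAR)                 # ~14 days between floods, on average
print(area_is_chronically_inundated(30))     # True
print(community_is_counted(120, 1_000))      # True: 12% of usable land
```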

By 2100, the map of chronically inundated areas around the Bay nearly mirrors the map of the areas that were historically wetlands.

By 2100, with an intermediate sea level rise scenario, many Bay Area neighborhoods would experience flooding 26 times or more per year. Many of these chronically inundated areas were originally tidal wetlands.

Affordable housing in Alameda

Like many Bay Area communities, Alameda has struggled to keep up with the demand for housing–particularly housing that is affordable to low- and middle-income families–as the population of the region has grown. In the past 10-15 years, large stretches of the northwestern shore of the island have been developed with apartment and condo complexes.

Driving by the latest developments and glancing down at my map of future chronic inundation zones, I was struck by the overlap. With a high scenario, neighborhoods only 10-15 years old would be flooding regularly by 2060. The main thoroughfares surrounding some of the latest developments would flood by the end of the century.

While the addition of housing units in the Bay Area is needed to alleviate the region’s growing housing crisis, one has to wonder how long the homes being built today will be viable places to live. None of this is lost on Ballman who states, simply, “There are hundreds of people moving to places that are going to be underwater.”

Many of Alameda’s newly developed neighborhoods would face frequent flooding in the second half of the century with intermediate or high rates of sea level rise.

“Some of the more affordable places to live,” says Andy Gunther of the Bay Area Ecosystems Climate Consortium, “are the places that are most vulnerable to sea level rise, including Pinole, East Palo Alto, and West Oakland.” Many of these communities that are highly exposed to sea level rise are low-income communities of color that are already suffering from a lack of investment. These communities have fewer resources at their disposal to cope with issues like chronic flooding.

Bay Area action on sea level rise

How neighborhoods–from the most affordable to the most expensive–throughout the Bay Area fare in the face of rising seas will depend, in part, on local, state, and federal policies designed to address climate resilience. A good first step would be to halt development in places that are projected to be chronically inundated within our lifetimes.

For Bay Area and other Pacific Coast communities that will experience chronic inundation in the coming decades, there is a silver lining: For many, there is time to plan for a threat that is several decades away, compared to communities on the Atlantic Coast that have only 20 or 30 years. And California is known for its environmental leadership, which has led to what Gunther calls an “incredible patchwork” of sea level rise adaptation measures.

Here are some of the many pieces of this growing patchwork quilt of adaptation measures:

  • Last year, with the passage of Assembly Bill 2800, California Governor Jerry Brown created the Climate-Safe Infrastructure Working Group, which seeks to integrate a range of possible climate scenarios into infrastructure planning and design.
  • The City of San Francisco has developed guidance for planning the city with sea level rise in mind.
  • With a grant from the Environmental Protection Agency (EPA), the Novato Watershed Program is harnessing natural processes to reduce flood risks along Novato Creek.
  • The San Francisco Estuary Institute (SFEI) is working to understand the natural history of San Francisquito Creek, near Palo Alto and East Palo Alto, in order to develop flood-control structures and restoration goals that are both functional and sustainable.
  • The Santa Clara Valley Water District is scheduled to begin work this summer to improve drainage in Sunnyvale's flood-prone East and West channels and to reduce flood risk for 1,600 homes. The district is also addressing tidal flooding in cooperation with the US Army Corps of Engineers.
  • As part of its efforts to confront sea level rise, San Mateo County installed virtual reality viewers along its shoreline to engage the public in a discussion of how sea level rise would affect their communities.
  • At the regional level, the Bay Conservation and Development Commission is collaborating with the National Oceanic and Atmospheric Administration (NOAA) and other local, state, and federal agencies on the Adapting to Rising Tides project, which provides information, tools, and guidance for organizations seeking solutions to the challenges that climate change brings.
  • The Resilient by Design Bay Area Challenge brings together engineers, community members, and designers to jointly develop solutions to the impacts of sea level rise.

In South San Francisco Bay, a number of shoreline protection projects have been proposed or are underway.

A regional response to sea level rise

Gunther notes that “We're still struggling with what to do, but the state, cities, counties, and special districts are all engaged” on the issue of sea level rise. With hundreds of coastal communities nationwide facing chronic flooding that, in the coming decades, will necessitate transformative changes to the way we live along the coast, regional coordination, while challenging, will be critical. Otherwise, communities with fewer resources to adapt to rising seas risk getting left behind.

“There’s a regional response to sea level rise that’s emerging,” says Gunther, and the recently passed ballot measure AA may be among the first indicators of that regional response.

In 2016, voters from the nine counties surrounding San Francisco Bay approved Measure AA, which focuses on restoring the bay's wetlands. Gunther says that this $500+ million effort could prove to be “one of the most visionary flood protection efforts of our time.” The passage of Measure AA was particularly notable in that it constituted a mandate from not one community or one county, but all nine counties in the Bay Area.

Toward a sustainable Bay Area

Waves of people have rushed in and out of the Bay Area for over 150 years, seeking fortunes here, then moving on as industries change. The stunning landscape leaves an indelible mark on all of us, just as we have left a mark on it, forever altering the shoreline and ecosystems of the bay.

For those of us, like Ingrid Ballman and like me, who have made our homes and are watching our children grow here, the reality that we cannot feasibly protect every home, every stretch of the bay's vast coastline, is sobering. All around the bay, incredible efforts are underway to make Bay Area communities safer, more flood-resilient places to live. Harnessing that energy at the regional and state levels, and continuing to advocate for strong federal resilience-building frameworks, has the potential to make the Bay Area a place we can continue to live in for a long time, and a leader in the century of sea level rise adaptation that our nation is entering.

Spanish Translation (En español)

Pengrin/Flickr San Francisco Bay Joint Venture Union of Concerned Scientists Kristy Dahl San Francisco Estuary Institute and the Bay Area Ecosystems Climate Change Consortium.

The San Francisco Bay Area Faces Sea Level Rise and Chronic Inundation

UCS Blog - The Equation (text only) -

Looking across the bay at San Francisco's rapidly multiplying skyscrapers, it's easy to understand why Ingrid Ballman and her husband chose to move from the city to Alameda after their son was born. With single-family bungalows painted in a rainbow of pastel colors and restaurant patios where senior citizens pass the time watching pelicans fish, Alameda is a world away from San Francisco's gigabits-per-second pace of life across the bay.

Children playing at Alameda's Crown Memorial State Beach, on San Francisco Bay. An idyllic place to play, the beach is described by the California Department of Parks and Recreation as “a great achievement of landscaping and engineering,” a description that applies to much of the Bay Area's waterfront.

“I had a little boy and it's a nice place to raise him, very family-oriented, and the schools are good. We didn't think much about anywhere other than Alameda,” Ballman says. Alameda has been, by Bay Area standards, relatively affordable, although with median home prices there more than doubling in 15 years, that is less and less true.

After Ballman and her husband bought their home, she began to think more about the island's future. “At some point,” she says carefully, “it really became clear that we had picked one of the worst locations” in the San Francisco Bay Area.

A hotspot of risk

The City of Alameda sits on two islands…sort of. Alameda Island, the larger of the two, is truly an island, and has been since 1902, when swamps along its southeastern tip were dredged to create the Oakland Estuary.

Bay Farm Island, the smaller of the two, used to be an island, but reclamation of the surrounding wetlands turned it into a peninsula connected to the mainland. In the 1950s, when whole families migrated to the suburbs in search of the American Dream (a house with a white picket fence and 2.5 children), Alameda Island, with its naval base, had little room for new housing.

The solution to the influx of people was to fill in the wetlands, creating 350 additional acres. Bay Farm Island also used fill to extend farther out into the bay.

Filling in areas of San Francisco Bay was common until the late 1960s, when the Bay Conservation and Development Commission was founded.

Many Bay Area communities are built on former wetlands. These low-lying areas are particularly susceptible to sea level rise and coastal flooding.

While there are programs to restore wetlands in many areas, many other filled areas are now established neighborhoods, with businesses and schools, that are more affordable places to live than other parts of the region. The median rent for an apartment in parts of San Mateo and Alameda Counties where fill has been extensive can be half of what it is in San Francisco's bedrock neighborhoods.

When Bay Area residents think about environmental hazards, many of us think first of earthquakes. In Alameda, Ballman notes, the underlying geology makes the filled parts of the island highly susceptible to liquefaction (a loss of soil strength) during earthquakes. It is precisely this same geology that puts these communities, built on former wetlands, in the crosshairs of a growing environmental problem: chronic flooding caused by sea level rise.

Chronic inundation in the Bay Area

Ballman studies the map I brought showing the extent of chronic flooding in Alameda in 2100 under an intermediate sea level rise scenario, which projects an increase of about 4 feet over today's level. The map is a snapshot from UCS's latest national-scale analysis of the risks that communities across the country face from sea level rise.

“Right here is my son's school,” she says, pointing to a 12-acre parcel of land that appears almost completely inundated on my map. Under this intermediate scenario the school buildings are safe, and it is mostly the athletic fields that flood frequently.

This time I did not bring, for Ballman to see, a map of chronic inundation under a high scenario, which projects a 6.5-foot rise in sea level by the end of the century. That map shows that if we fail to reduce emissions and sea level keeps rising at that pace, her son's school buildings will flood, on average, every other week by the end of the century. Although that scenario seems far off, Ingrid's son will live to see it. What's more, these impacts could arrive sooner.

Sea level is rising more slowly on the West Coast than along most of the East and Gulf Coasts, which means that most California communities will have more time to plan their response to sea level rise.

Indeed, by 2060, when the East and Gulf Coasts will have a combined 270 to 360 communities where 10% or more of the usable land floods chronically, the West Coast will have only 2 or 3. Given how densely populated the Bay Area is, however, even small changes in the reach of the tides could affect many people.

As early as 2035, under an intermediate sea level rise scenario, neighborhoods on Bay Farm Island and in Alameda, Redwood Shores, Sunnyvale, Alviso, Corte Madera, and Larkspur would experience flooding 26 times a year or more (the threshold UCS has set for classifying areas as chronically inundated).

By 2060, the number of affected neighborhoods would grow to include Palo Alto, East Palo Alto, and other areas along the corridor between San Francisco and Silicon Valley.

By 2100, the map of chronically inundated areas around the bay looks very much like the map of areas that were once wetlands.

By 2100, under an intermediate sea level rise scenario, many Bay Area neighborhoods would experience flooding 26 or more times a year. Many of these chronically inundated areas were originally tidal wetlands.

Affordable housing in Alameda

Like many other Bay Area communities, Alameda has struggled to keep up with the demand for housing, particularly housing affordable to low- and middle-income families, as the region's population has grown.

In the last 10 to 15 years, large stretches of the island's northwestern shore have been developed with apartment and condominium complexes. Driving past the newest developments and glancing at my map of future chronic inundation zones, I was struck by the overlap.

Under a high scenario, neighborhoods built only 10 or 15 years ago will flood regularly by 2060. Under that same scenario, the main roads surrounding some of the newest developments will flood by the end of the century.

Although more housing is needed in the Bay Area to ease the region's growing housing crisis, one has to ask: for how long will the homes being built today remain viable places to live? Ballman grasps the scale of the problem and says, simply, “There are hundreds of people moving to places that are going to be underwater.”

Many of Alameda's newly built neighborhoods would face frequent flooding in the second half of the century under intermediate or high rates of sea level rise.

“Some of the more affordable places to live,” says Andy Gunther of the Bay Area Ecosystems Climate Consortium, “are the places that are most vulnerable to sea level rise, including Pinole, East Palo Alto, and West Oakland.” Many of these are low-income communities of color that already contend with a lack of investment in their neighborhoods and that will therefore have fewer resources to cope with sea level rise.

Bay Area measures to confront sea level rise

How Bay Area neighborhoods fare, from the most affordable to the most expensive, in the face of sea level rise will depend in part on local, state, and federal policies designed to confront climate change. A good first step would be to halt construction in places that are projected to be chronically inundated within our lifetimes.

But for Bay Area communities there is good news amid the adversity: many have decades to plan how they will confront the coming changes, while communities on the Gulf and Atlantic coasts have only 20 or 30 years to make those decisions. California is also known for its environmental leadership, which has produced what Gunther calls an “incredible patchwork” of sea level rise adaptation measures.

Here are some of the many pieces of this growing patchwork of adaptation measures:

  • Last year, with the passage of Assembly Bill 2800, California Governor Jerry Brown created the Climate-Safe Infrastructure Working Group, which seeks to integrate a range of possible climate scenarios into infrastructure design and planning.
  • The City of San Francisco has developed guidance for planning the city with sea level rise in mind.
  • With a grant from the Environmental Protection Agency (EPA), the Novato Watershed Program is harnessing natural processes to reduce flood risks along Novato Creek.
  • The San Francisco Estuary Institute (SFEI) is working to understand the natural history of San Francisquito Creek, near Palo Alto and East Palo Alto, in order to develop flood-control structures and restoration goals that are both functional and sustainable.
  • The Santa Clara Valley Water District is scheduled to begin work this summer to improve drainage in Sunnyvale's flood-prone East and West channels and to reduce flood risk for 1,600 homes. The district is also addressing tidal flooding in cooperation with the US Army Corps of Engineers.
  • As part of its efforts to confront sea level rise, San Mateo County installed virtual reality viewers along its shoreline to engage the public in a discussion of how sea level rise would affect their communities.
  • At the regional level, the Bay Conservation and Development Commission is collaborating with the National Oceanic and Atmospheric Administration (NOAA) and other local, state, and federal agencies on the Adapting to Rising Tides project, which provides information, tools, and guidance for organizations seeking solutions to the challenges that climate change brings.
  • The Resilient by Design Bay Area Challenge brings together engineers, community members, and designers to jointly develop solutions to the consequences of sea level rise.

In South San Francisco Bay, a number of shoreline protection projects have been proposed or are underway. Sources: San Francisco Estuary Institute and the Bay Area Ecosystems Climate Change Consortium.

A regional response to sea level rise

Gunther notes that on the issue of sea level rise, “We're still struggling with what to do, but the state, cities, counties, and special districts are all engaged.” With hundreds of coastal communities across the country facing chronic flooding that will, in the coming decades, require transformative changes in the way we live along the coast, regional coordination, together with state and federal commitment, will be critical to meeting the difficult challenges ahead. Otherwise, communities with fewer resources to adapt to rising seas risk being left behind.

“There's a regional response to sea level rise that's emerging,” says Gunther, and the recently approved Measure AA may be among the first indicators of that regional response. In 2016, voters in the nine counties surrounding San Francisco Bay approved Measure AA, which focuses on restoring the bay's wetlands.

Gunther says this effort of more than $500 million could prove to be “one of the most visionary flood protection efforts of our time.” The passage of Measure AA was particularly notable because it constituted a mandate not from one community or one county, but from all nine Bay Area counties.

Toward a sustainable Bay Area

For more than 150 years, waves of people have come to the bay seeking their fortunes, then moved on as industries changed. The stunning landscape leaves an indelible mark on all of us, just as we have left our mark on it, forever altering the bay's shoreline and ecosystems.

For those of us, like Ingrid Ballman and like me, who have put down roots and are watching our children grow up here, the reality that we cannot feasibly protect every home or every stretch of the bay's vast shoreline is sobering.

All around the bay, remarkable efforts are underway to make communities safer, more flood-resilient places to live. Harnessing that energy at the regional and state levels, and continuing to press for strong federal resilience-building frameworks, has the potential to make the Bay Area a sustainable place and a leader in the new century of sea level rise adaptation that our nation is entering.

 

Pengrin/Flickr San Francisco Bay Joint Venture Union of Concerned Scientists Kristy Dahl San Francisco Estuary Institute and the Bay Area Ecosystems Climate Change Consortium.

Turkey Point: Fire and Explosion at the Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

The Florida Power & Light Company’s Turkey Point Nuclear Generating Station about 20 miles south of Miami has two Westinghouse pressurized water reactors that began operating in the early 1970s. Built next to two fossil-fired generating units, Units 3 and 4 each add about 875 megawatts of nuclear-generated electricity to the power grid.

Both reactors hummed along at full power on the morning of Saturday, March 18, 2017, when problems arose.

The Event

At 11:07 am, a high energy arc flash (HEAF) in Cubicle 3AA06 of safety-related Bus 3A ignited a fire and caused an explosion. The explosion inside the small concrete-walled room (called Switchgear Room 3A) injured a worker and blew open Fire Door D070-3 into the adjacent room housing the safety-related Bus 3B (called Switchgear Room 3B).

A second later, the Unit 3 reactor automatically tripped when Reactor Coolant Pump 3A stopped running. This motor-driven pump received its electrical power from Bus 3A. The HEAF event damaged Bus 3A, causing the reactor coolant pump to trip on under-voltage (i.e., less than the desired voltage of 4,160 volts). The pump's trip triggered the insertion of all control rods into the reactor core, terminating the nuclear chain reaction.

A second after that, Reactor Coolant Pumps 3B and 3C also stopped running. These motor-driven pumps received electricity from Bus 3B. The HEAF event should have been isolated to Switchgear Room 3A, but the force of the explosion blew open the connecting fire door, allowing Bus 3B to be affected as well. Reactor Coolant Pumps 3B and 3C tripped on under-frequency (i.e., alternating current at well below the desired 60 cycles per second). Each Turkey Point unit has three Reactor Coolant Pumps that force water through the reactor core, out of the reactor vessel to the steam generators (where heat is transferred to a secondary loop of water), and then back to the reactor vessel. With all three pumps turned off, the reactor core would be cooled by natural circulation. Natural circulation can remove small amounts of heat, but not larger amounts; hence, the reactor automatically shuts down when even one of its three Reactor Coolant Pumps is not running.
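
As a simplified sketch of the protection logic just described (not the plant's actual reactor protection system), the snippet below shows how an under-voltage or under-frequency condition trips a reactor coolant pump, and how losing even one of the three pumps at power trips the reactor. The 4,160-volt and 60-cycle values are the nominal figures from the text; real setpoints include margins and intentional time delays.

```python
# Simplified sketch of the trip logic described above -- illustrative only,
# not the actual reactor protection system.

NOMINAL_VOLTS = 4_160   # nominal bus voltage cited in the text
NOMINAL_HZ = 60.0       # nominal alternating-current frequency

def pump_trips(bus_volts: float, bus_hz: float) -> bool:
    """A reactor coolant pump trips on under-voltage or under-frequency."""
    # Real relays use setpoints with margins and time delays; this simply
    # compares against the nominal values for illustration.
    return bus_volts < NOMINAL_VOLTS or bus_hz < NOMINAL_HZ

def reactor_trips(pumps_running: int) -> bool:
    """At power, losing even one of the three reactor coolant pumps trips the reactor."""
    return pumps_running < 3

# March 18, schematically: the HEAF de-energizes Bus 3A (tripping RCP 3A), and
# the blown-open fire door lets the disturbance reach Bus 3B (tripping 3B and 3C).
pumps = 3
if pump_trips(bus_volts=0, bus_hz=NOMINAL_HZ):   # Bus 3A lost
    pumps -= 1
print(reactor_trips(pumps))                      # True: the reactor trips
```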

Shortly before 11:09 am, the operators in the control room received word about a fire in Switchgear Room 3A and the injured worker. The operators dispatched the plant's fire brigade to the area. At 11:19 am, the operators declared an emergency due to a “Fire or Explosion Affecting the Operability of Plant Systems Required to Establish or Maintain Safe Shutdown.”

At 11:30 am, the fire brigade reported to the control room operators that there was no fire in either Switchgear Room 3A or 3B.

Complication #1

The Switchgear Building is shown at the right end of the Unit 3 turbine building in Figure 1. Switchgear Rooms 3A and 3B are located adjacent to each other within the Switchgear Building. The safety-related buses inside these rooms take 4,160-volt electricity from the main generator, the offsite power grid, or an emergency diesel generator (EDG) and supply it to safety equipment needed to protect workers and the public from transients and accidents. Buses 3A and 3B are fully redundant; either can power enough safety equipment to mitigate accidents.

Fig. 1 (Source: Nuclear Regulatory Commission)

To guard against a single fire disabling both Bus 3A and Bus 3B despite their proximity, each switchgear room is designed as a 3-hour fire barrier. The floor, walls, and ceiling of each room are made from reinforced concrete. The opening between the rooms has a normally closed door with a 3-hour fire resistance rating.

Current regulatory requirements do not require the rooms to have blast-resistant fire doors unless the doors are within 3 feet of a potential explosive hazard. (I could give you three guesses why all the values are 3's, but a correct guess would divulge one-third of nuclear power's secrets.) Cubicle 3AA06, which experienced the HEAF event, was 14.5 feet from the door.

Fire Door D070-3, presumably unaware that it was well outside the 3-foot danger zone, was blown open by the HEAF event. The opened door created the potential for one fire to disable Buses 3A and 3B, plunging the site into a station blackout. Fukushima reminded the world why it is best to stay out of the station blackout pool.

Complication #2

The HEAF event activated all eleven fire detectors in Switchgear Room 3A and activated both of the very early warning fire detectors in Switchgear Room 3B. Activation of these detectors sounded alarms at Fire Alarm Control Panel 3C286, which the operators acknowledged. These detectors comprise part of the plant’s fire detection and suppression systems intended to extinguish fires before they cause enough damage to undermine nuclear safety margins.

But workers failed to reset the detectors and restore them to service until 62 hours later. Bus 3B provided the only source of electricity to safety equipment after Bus 3A was damaged by the HEAF event. The plant's fire protection program required that Switchgear Room 3B be protected either by the full array of fire detectors or by a continuous fire watch (i.e., workers assigned to the area to immediately report signs of smoke or fire to the control room). The fire detectors were out of service for 62 hours after the HEAF event, and the continuous fire watches were put in place late.

Workers were in Switchgear Room 3B for nearly four hours after the HEAF event performing tasks like smoke removal. But after they left the area, a continuous fire watch was not posted until 1:15 pm on March 19, the day following the HEAF event. And the fire watch workers were placed in Switchgear Room 3A, not in Switchgear Room 3B, which housed the bus that needed to be protected.

Had a fire started in Switchgear Room 3B, neither the installed fire detectors nor the human fire detectors would have alerted control room operators. The lights going out on Broadway, or whatever they call the main avenue at Turkey Point, might have been their first indication.

Complication #3

At 12:30 pm on March 18, workers informed the control room operators that the HEAF event damaged Bus 3A such that it could not be re-energized until repairs were completed. Bus 3A provided power to Reactor Coolant Pump 3A and to other safety equipment like the ventilation fan for the room containing Emergency Diesel Generator (EDG) 3A. Due to the loss of power to the room’s ventilation fan, the operators immediately declared EDG 3A inoperable.

EDGs 3A and 3B are the onsite backup sources of electrical power for safety equipment. When the reactor is operating, the equipment is powered by electricity produced by the main generator as shown by the green line in Figure 2. When the reactor is not operating, electricity from the offsite power grid flows in through transformers and Bus 3A to the equipment as indicated by the blue line in Figure 2. When under-voltage or under-frequency is detected on their respective bus, EDG 3A and 3B will automatically start and connect to the bus to supply electricity for the equipment as shown by the red line in Figure 2.

Fig. 2 (Source: Nuclear Regulatory Commission with colors added by UCS)

Very shortly after the HEAF event, EDG 3A automatically started due to under-voltage on Bus 3A. But protective relays detected a fault on Bus 3A and prevented electrical breakers from closing to connect EDG 3A to Bus 3A. EDG 3A was operating, but disconnected from Bus 3A, when the operators declared it inoperable at 12:30 pm due to loss of the ventilation fan for its room.

But the operators allowed the “inoperable” EDG 3A to continue operating until 1:32 pm. Given that (a) its ventilation fan was not functioning, and (b) it was not even connected to Bus 3A, they should not have permitted this inoperable EDG to keep running for over an hour.
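
Putting the pieces of this complication together, here is a schematic sketch of the EDG behavior described above and in Figure 2: the diesel auto-starts on under-voltage or under-frequency on its bus, its output breaker closes only if the protective relays see no fault on that bus, and the machine is considered operable only if its room ventilation is available. This is an illustration of the logic as described in this post, not Turkey Point's actual control or relay circuitry.

```python
# Schematic illustration of the EDG behavior described above -- not the
# plant's actual control or protective-relay circuitry.

from dataclasses import dataclass

@dataclass
class Bus:
    under_voltage: bool = False
    under_frequency: bool = False
    faulted: bool = False          # a fault detected by protective relays

@dataclass
class EmergencyDieselGenerator:
    running: bool = False
    breaker_closed: bool = False
    room_ventilation_available: bool = True

    def respond_to(self, bus: Bus) -> None:
        # The EDG auto-starts whenever its bus loses voltage or frequency...
        if bus.under_voltage or bus.under_frequency:
            self.running = True
        # ...but the output breaker closes only if the bus itself is not faulted.
        self.breaker_closed = self.running and not bus.faulted

    @property
    def operable(self) -> bool:
        # Without room ventilation the machine is declared inoperable,
        # even if it happens to be running.
        return self.room_ventilation_available

# March 18, schematically: Bus 3A is de-energized and faulted by the HEAF,
# and the room ventilation fan (also powered from Bus 3A) is lost.
bus_3a = Bus(under_voltage=True, faulted=True)
edg_3a = EmergencyDieselGenerator(room_ventilation_available=False)
edg_3a.respond_to(bus_3a)

print(edg_3a.running, edg_3a.breaker_closed, edg_3a.operable)  # True False False
```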

Complication #4

A few hours before the HEAF event on Unit 3, workers removed High Head Safety Injection (HHSI) pumps 4A and 4B from service for maintenance. The HHSI pumps are designed to transfer makeup water from the Refueling Water Storage Tank (RWST) to the reactor vessel during accidents that drain cooling water from the vessel. Each unit has two HHSI pumps; only one HHSI pump needs to function in order to provide adequate reactor cooling until the pressure inside the reactor vessel drops low enough to permit the Low Head Safety Injection pumps to take over.

The day before, workers had found a small leak from a test line downstream of the common pipe for the recirculation lines of HHSI Pumps 4A and 4B (circled in orange in Figure 3). The repair work was estimated to take 18 hours. Both pumps had to be isolated in order for workers to repair the leaking section.

Pipes cross-connect the HHSI systems for Units 3 and 4 such that HHSI Pumps 3A and 3B (circled in purple in Figure 3) could supply makeup cooling water to the Unit 4 reactor vessel when HHSI Pumps 4A and 4B were removed from service. The operating license allowed Unit 4 to continue running for up to 72 hours in this configuration.

Fig. 3 (Source: Nuclear Regulatory Commission with colors added by UCS)

Before removing HHSI Pumps 4A and 4B from service, operators took steps to protect HHSI Pumps 3A and 3B by further restricting access to the rooms housing them and posting caution signs at the electrical breakers supplying electricity to these motor-driven pumps.

But operators did not protect Buses 3A and 3B that provide power to HHSI Pumps 3A and 3B respectively. Instead, they authorized work to be performed in Switchgear Room 3A that caused the HEAF event.

The owner uses a computer program to characterize the risk of actual and proposed plant operating configurations. Workers can enter components that are broken and/or out of service for maintenance, and the program bins the associated risk into one of three color bands: green, yellow, and red, in order of increasing risk. With only HHSI Pumps 4A and 4B out of service, the program determined the risk for Units 3 and 4 to be in the green range. After the HEAF event disabled HHSI Pump 3A, the program determined that the risk for Unit 4 increased to nearly the green/yellow threshold, while the risk for Unit 3 moved solidly into the red band.
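
The post describes the owner's configuration risk tool only in broad strokes, so the sketch below is a generic illustration of how such a tool works: components out of service contribute to a risk score, and the score is binned into green, yellow, or red. The component weights and thresholds here are invented for illustration; the utility's actual model and numbers are not public.

```python
# Generic illustration of a configuration risk-binning tool like the one
# described above. Weights and thresholds are invented for illustration.

RISK_WEIGHTS = {            # hypothetical per-component risk contributions
    "HHSI Pump 4A": 1.0,
    "HHSI Pump 4B": 1.0,
    "HHSI Pump 3A": 2.0,
    "Bus 3A": 4.0,
}
YELLOW_THRESHOLD = 3.0
RED_THRESHOLD = 5.0

def risk_color(out_of_service):
    """Bin the summed risk of out-of-service components into green/yellow/red."""
    score = sum(RISK_WEIGHTS.get(component, 0.5) for component in out_of_service)
    if score >= RED_THRESHOLD:
        return "red"
    if score >= YELLOW_THRESHOLD:
        return "yellow"
    return "green"

# Before the HEAF event: only the Unit 4 HHSI pumps were out for maintenance.
print(risk_color(["HHSI Pump 4A", "HHSI Pump 4B"]))                            # green
# After the HEAF event: Bus 3A and HHSI Pump 3A were also unavailable.
print(risk_color(["HHSI Pump 4A", "HHSI Pump 4B", "HHSI Pump 3A", "Bus 3A"]))  # red
```

A real tool would track each unit separately and derive its weights from the plant's probabilistic risk model rather than hand-picked numbers.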

The Cause(s)

On the morning of Saturday, March 18, 2017, workers were wrapping a fire-retardant material called Thermo-Lag around electrical cabling in the room housing Bus 3A. Meshing made from carbon fibers was installed to connect sections of Thermo-Lag around the cabling for a tight fit. To minimize the amount of debris created in the room, workers cut the Thermo-Lag material to the desired lengths at a location about 15 feet outside the room. But they cut and trimmed the carbon fiber mesh to size inside the room.

Bus 3A is essentially the nuclear-sized equivalent of a home’s breaker panel. Open the panel and one can open a breaker to stop the flow of electricity through that electrical circuit within the house. Bus 3A is a large metal cabinet. The cabinet is made up of many cubicles housing the electrical breakers controlling the supply of electricity to the bus and the flow of electricity to components powered by the bus. Because energized electrical cables and components emit heat, the metal doors of the cubicles often have louvers to let hot air escape.

The louvers also allow dust and small airborne debris (like pieces of carbon fiber) to enter the cubicles. The violence of the HEAF event (a.k.a. the explosion) destroyed some of the evidence at the scene, but carbon fiber pieces were found inside the cubicle where the HEAF occurred. The carbon fiber was conductive, meaning that it could transport electrical current. Carbon fiber pieces inside the cubicle, according to the NRC, “may have played a significant factor in the resulting bus failure.”

Further evidence inside the cubicle revealed that the bolts for the connection of the “C” phase to the bottom of the panel had been installed backwards. These backwards bolts were the spot where high-energy electrical current flashed over, or arced, to the metal cabinet.

As odd as it seems, installing fire retardant materials intended to lessen the chances that a single fire compromises both electrical safety systems started a fire that compromised both electrical safety systems.

The Precursor Events (and LEAF)

On February 2, 2017, three electrical breakers unexpectedly tripped open while workers were cleaning up after removing and replacing thermal insulation in the new electrical equipment room.

On February 8, 2017, “A loud bang and possible flash were reported to have occurred” in the new electrical equipment room as workers were cutting and installing Thermo-Lag. Two electrical breakers unexpectedly tripped open. The equipment involved used 480 volts or less, making this a low energy arc fault (LEAF) event.

NRC Sanctions

The NRC dispatched a special inspection team to investigate the causes and corrective actions of this HEAF event. The NRC team identified the following apparent violations of regulatory requirements that the agency is processing to determine the associated severity levels of any applicable sanctions:

  • Failure to establish proper fire detection capability in the area following the HEAF event.
  • Failure to properly manage risk by allowing HHSI Pumps 4A and 4B to be removed from service and then allowing work inside the room housing Bus 3A.
  • Failure to implement effective Foreign Material Exclusion measures inside the room housing Bus 3A that enabled conductive particles to enter energized cubicles.
  • Failure to provide adequate design control in that equipment installed inside Cubicle 3AA06 did not conform to vendor drawings or engineering calculations.

UCS Perspective

This event illustrates both the lessons learned and the lessons unlearned from the fire at the Browns Ferry Nuclear Plant in Alabama that happened almost exactly 42 years earlier. The lesson learned was that a single fire could disable primary safety systems and their backups.

The NRC adopted regulations in 1980 intended to lessen the chances that one fire could wreak so much damage. The NRC found in the late 1990s that most of the nation’s nuclear power reactors, including those at Browns Ferry, did not comply with these fire protection regulations. The NRC amended its regulations in 2004 giving plant owners an alternative means for managing the fire hazard risk. Workers were installing fire protection devices at Turkey Point in March 2017 seeking to achieve compliance with the 2004 regulations because the plant never complied with the 1980 regulations.

The unlearned lesson involved sheer and utter failures to take steps after small miscues to prevent a bigger miscue from happening. The fire at Browns Ferry was started by a worker using a lit candle to check for air leaking around sealed wall penetrations. The candle's flame ignited the highly flammable sealant material. The fire ultimately damaged cables for all of the emergency core cooling systems on Unit 1 and most of those systems on Unit 2. Candles had routinely been used at Browns Ferry and other nuclear power plants to check for air leaks. Small fires had been started, but had always been extinguished before causing much damage. So the unsafe and unsound practice continued until it very nearly caused two reactors to melt down. Then and only then did the nuclear industry switch to a method that did not involve holding open flames next to highly flammable materials to see if air flow caused the flames to flicker.

Workers at Turkey Point were installing fire retardant materials around cabling. They cut some material in the vicinity of its application. On two occasions in February 2017, small debris caused electrical breakers to trip open unexpectedly. But they continued the unsafe and unsound practice until it caused a fire and explosion the following month that injured a worker and risked putting the reactor into a station blackout event. Then and only then did the plant owner find a better way to cut and install the material. That must have been one of the easiest searches in nuclear history.

The NRC – Ahead of this HEAF Curveball

The NRC and its international regulatory counterparts have been concerned about HEAF events in recent years. During the past two annual Regulatory Information Conferences (RICs), the NRC conducted sessions about fire protection research that covered HEAF. For example, the 2016 RIC included presentations from the Japanese and American regulators about HEAF. These presentations included videos of HEAF events conducted under lab conditions. The 2017 RIC included presentations about HEAF by the German and American regulators. Ironically, the HEAF event at Turkey Point occurred just a few days after the 2017 RIC session.

HEAF events were not fully appreciated when regulations were developed and plants were designed and built. The cooperative international research efforts are characterizing HEAF events faster than any single country could alone. The research is defining factors that affect the likelihood and consequences of HEAF events. For example, it indicates that aluminum, such as in the cable trays holding energized electrical cables, can be ignited during a HEAF event, significantly adding to the magnitude and duration of the event.

As HEAF research has defined these risk factors, the NRC has been working with nuclear industry representatives to better understand the role the factors may play across the US fleet of reactors. For example, the NRC recently obtained a list of where aluminum is used around high-voltage electrical equipment.

The NRC needs to understand HEAF factors as fully as practical before it can determine if additional measures are needed to manage the risk. The NRC is also collecting information about potential HEAF vulnerabilities. Collectively, these efforts should enable the NRC to identify any nuclear safety problems posed by HEAF events and to implement a triaged plan that resolves the biggest vulnerabilities sooner rather than later.
