Combined UCS Blogs

A Power Plan No More: Trump Team Slaps Down Progress, Clears Way for Dirty Air

UCS Blog - The Equation (text only)

Photo: Steve Tatum/Flickr

Today, with the legal system pinning its back to the ropes, the Trump Administration’s Environmental Protection Agency (EPA) released a new proposed rule for power plant carbon pollution standards.

Or perhaps more accurately, EPA proposed a new version of an old rule, as the content retreads that which the agency already finalized three years ago, previewed four years ago, was directed to pursue five years ago, and solidified the obligation to create nearly ten years ago.

But history, learning, progress—what’s that? For this administration, the opportunity to stand on the shoulders of those who came before; to reach higher, stretch further; to learn and evolve and grow—to all of that, the Trump Administration spat out a surly no.

And so today what we get instead are our public health protectors turning their backs on the future, on science, on us. Craven in the face of the real and true climate crisis at hand, these leaders took an opportunity to deliver forward progress and delivered a postcard from the past instead, full of old assumptions, shameless exemptions, and out-and-out deceit.

Be incredulous.

Get upset.

Then, demand more.

Because these are our purported leaders yanking us right in reverse, exactly at a time when the real path forward could not be clearer—we’ve gathered all the tools around us needed to succeed, and heard all the devastating calls to action we can bear.

We had a plan. It wasn’t enough, but it was a start, and we had a plan. Today, we have a power plan no more.

From landmark to off the mark in half an administration flat

In August 2015, following years of robust rulemaking on top of years of painstakingly constructed motivating frameworks, the Obama Administration’s EPA finalized the Clean Power Plan.

This was a landmark rule by all accounts. It carefully navigated the particularities of the nation’s integrated power system, giving states wide flexibility to achieve gains where best they could, while balancing that latitude against the guiding hand of public health protections. The rule underwent record levels of stakeholder engagement, evolving and improving over time as it incorporated diverse input, and its underlying framework allowed room to be improved further still in the face of future change.

What’s more, contrary to polluter cries of “unachievable” and “a threat to grid reliability,” the Clean Power Plan was in fact conservative in the face of historic power sector change, rendered nearly out of date before it ever went live.

Because by the close of 2017—years before the first compliance deadline ever came to pass; before, even, the rule ever actually got underway—the nation’s power sector had already reduced its carbon dioxide emissions 28 percent compared to 2005 levels—well along the way to the 32 percent by 2030 requirement the Clean Power Plan had originally had in store.

Of course, that’s at the national level. Across the country, different states are at different points in the transition, and that’s where the Clean Power Plan’s true value came in: charting a course forward for laggards, too, to ensure cleaner air would be available to all.

But the Clean Power Plan is not our nation’s power plan anymore.

Even though it was eminently achievable.

Even though people, real people, will be hurt because of this public health rollback.

Even though the only way to justify this change was for the agency’s new analysis to farcically rule out large swaths of what can be counted as a benefit, and amp up what gets considered a cost. And even then, the numbers only worked because the agency recognized that coal plants would need a further boost, and so gave them permission to pollute far more by amending requirements for New Source Review.

And still beyond that, beyond all of that, what’s most jarring is the fact that in issuing this proposed rule, the Trump Administration is tacitly acknowledging that climate change is real and human-caused, as it’s still abiding by the agency’s 2009 Endangerment and Cause or Contribute Findings, the foundational framework that ultimately resulted in the Clean Power Plan. And that means that EPA acknowledges it has a responsibility to act.

Which brings us to this: If you acknowledge a responsibility to act, and you deliver a proposal that could increase emissions instead, then just who is it you’re working to protect?

Because stunningly few will be better off under this rule, and heaven help the rest.

We’ll win—but that’s not good enough

The Union of Concerned Scientists will be actively engaged in this rulemaking process, joining with others to issue a deep and forceful rejection of the cynical and capricious proposal at hand. And when the inevitable day in court arrives, the law will be on our side.

But that doesn’t help people struggling today:

  • Coal plants are closing today. New coal plants are not being built. What coal workers and coal communities need is real and true support; a committed path to the future, not dangling distractions in the face of wholesale change.
  • Coal-fired power plant pollution is devastating public health today. Public health and environmental wreckage do not improve simply because you’ve stopped counting or considering all that they are forced to bear.
  • Climate change impacts are being felt today, and growing worse every day. Heat illness, wildfires, flooding—it’s happening, and communities are paying the price. Acknowledging the responsibility to act and then doing anything but is as gutless as it gets.

What we need is a plan. We need leaders willing to lead, and we need a plan.

But with this new proposal, we did not get leaders, and we lost our power plan.


Will California Continue its Progress on Clean Electricity?


Source: geniusksy/Adobe Stock

With two weeks left in the California legislative session, the fate of several proposals that would make big changes to California electricity policy is still up in the air.

There’s Senate Bill 100 (De León), which would raise the Renewables Portfolio Standard to 60% by 2030 and create a longer-term goal to reach 100% carbon-free electricity by 2045. Assembly Bill 813 (Holden) would lay the groundwork for the California Independent System Operator (the grid manager for most of the state) to transition to a regional electricity market. Senate Bill 64 (Wieckowski) would improve energy agencies’ and air regulators’ understanding of how natural gas power plant operations are changing over time and how those changes may impact air quality.

Swirling around all these issues is whether and how the Legislature is going to weigh in this year on utility wildfire liability.

No matter what happens in Sacramento this August, it seems clear to me that California will need to make some big decisions in the coming years. Will we continue our clean energy progress? Will we seek more ambitious solutions as climate change impacts worsen?

Creating a robust, resilient, and low-cost supply of carbon-free electricity is critical to reducing the global warming and air pollution that result from consuming fossil-based sources of energy across many sectors of our economy. Here are seven issues (in no particular order) at the top of my mind that I think need to be addressed in the near future:

  1. Set a long-term clean electricity goal, but don’t take our eyes off 2030: it takes time to make the necessary investments in carbon-free generation and other supporting infrastructure like transmission lines and the distribution grid. We’ll need long-term signals—like SB 100—to help guide the research and investment that will be needed to make this transition a reality. At the same time, we need to make sure our nearer-term (2020 and 2030) clean energy goals are met in ways that allow Californians to experience the environmental and economic benefits of these early actions.
  2. Plan to transition away from natural gas: coal is used less and less in California and by 2020 all direct imports of coal power will be phased out. But we still depend on natural gas generation to meet about a third of our electricity needs and that number will not decline enough without a concerted effort. If Californians truly want to take the carbon out of our electricity sector, we need a plan for how to wean ourselves off this fossil fuel. UCS just released an analysis that we hope begins a longer conversation about how to transition away from gas and how to make sure we go about reducing natural gas generation in the most cost-effective and socially equitable way possible.
  3. Make the grid more flexible with clean technology: wind and solar generation vary with weather patterns, which means the clean grid of the future must be flexible enough to adapt to greater variability in electricity supplies. This flexibility needs to come from clean technologies, like energy storage, that can control their power output. We also need more strategies, like time-varying electricity rates, to shift our electricity use towards times of the day when renewables are most abundant. The debate over AB 813 may be fierce, but regardless of whether California chooses to launch a Western regional grid this year, grid operators in the future need to be able to share resources and access renewables throughout a wider geographic footprint. It’s just more efficient, and the grid will be more flexible and able to accommodate more carbon-free electricity if California can sell its excess solar power to other states during the day, and buy excess wind power from its neighbors at night.
  4. Unlock the value of distributed energy resources: there are unique and valuable localized benefits to clean energy investments like rooftop solar and small-scale storage that, when installed in the right locations, save us money by postponing or avoiding upgrades to the distribution system. Smaller, more local clean energy resources can make the grid more resilient when a big power plant or transmission line goes down because of extreme weather or some other type of grid emergency. We need a better way to quantify the value of these resources to make paths to market clearer for technology innovators.
  5. Do more to reduce carbon in the building sector: heating water and space in California’s homes and buildings with natural gas emits as much global warming pollution as all in-state power plants. And, this doesn’t count methane that leaks from gas pipelines. California’s policies and programs to reduce natural gas usage in buildings lag behind other clean energy efforts. In the next few years, decision makers need to identify ways to lower the cost of technology that can reduce energy use in buildings, and transition away from fossil fuels for the energy we need.
  6. Use renewables to charge electric cars: millions of electric vehicles on the road are a key part of the state’s vision for clean energy in the next decade. We need to make sure we charge all these electric cars when renewables are most abundant. This means building new charging infrastructure, creating consumer habits that maximize daytime charging, and staggering when cars draw power from the grid to minimize surges in electricity demand.
  7. Make the clean energy transition equitable: Talented and skilled workers will be needed to create California’s clean energy future – in infrastructure, manufacturing, software, construction, maintenance, and more. The public, private, and non-profit sectors, including educational institutions, should collaborate to train and develop the workforce needed to fuel this growth. As new business models for the clean energy grid are developed and tested, workers should benefit from the industry’s growth and be paid fairly.

Advancing all this good stuff will require robust and cross-sectoral communication, information sharing, investment planning, and risk-management processes that engage all stakeholders. This is especially important as California’s electricity and transportation sectors have grown and become more diverse, and as California strives to make deeper cuts to global warming emissions throughout all sectors of its economy, including the goods and services we use.

Legislators and advocates are busy working on the future laws and regulations that will make a clean energy future a reality. But all of us have a part to play in this transition if we want California to be a global leader. All eyes are on California to show the world how to wean millions of people and an enormous economy off fossil fuels. It’s imperative we get this right.

The EPA’s Proposed Chemical Disaster Rule is a Disaster in the Making


Photo: LadyDragonflyCC/Flickr

It isn’t new news or another hot take: communities of color are disproportionately exposed to and impacted by toxic chemical releases. The impacts of these incidents are severe, including death and serious injury, and often the children and families affected also have the fewest resources to protect themselves. They are also more immediately impacted by the decisions the Environmental Protection Agency (EPA) makes on chemical risk management.

UCS released a white paper today – The Impact of Chemical Facilities on Environmental Justice Communities: Review of Selected Communities Affected by Chemical Facility Incidents – that addresses the EPA’s proposed rule to reverse improvements to the Risk Management Program (RMP), a regulatory mechanism intended to ensure the safety and security of over 12,000 facilities that use or store hazardous chemicals nationwide. Those improvements were made under the Obama administration and finalized in January 2017. Specifically, the paper highlights the potential health impacts of past catastrophic incidents at chemical facilities on nearby communities.

Initially, the EPA delayed enforcement of the 2017 RMP changes, but just last week the delay was ruled as “arbitrary and capricious” (read: an abuse of power; illegal) by the DC Circuit Court. The court win means EPA must implement the changes at once – but the proposed rule is still on the table. The white paper highlights examples of the impacts of industrial chemical facility incidents on the people in surrounding areas, who are most often workers and environmental justice communities (majority people of color, often low-income or living at or below the poverty line), thereby demonstrating the need for the 2017 rule.

Key provisions of the 2017 rule that would increase information for first responders, fenceline communities, and the broader public are eliminated in the proposed rule, along with the move toward safer technologies and practices. Despite the EPA’s acknowledgment over the past several years that chemical facility incidents occur frequently and can affect nearby communities as well as facility employees themselves, the EPA’s 2018 proposed rule and supporting documentation ignore these findings – and the communities and workers who fought hard for common-sense protections. As the white paper details, the 2018 proposed rule not only increases the likelihood that incidents will occur by removing essentially all preventive measures, but also increases the impact on communities when these incidents happen. The findings underscore an important need to honor the safeguards from the 2017 rule and to investigate the long-term and cumulative health impacts of chemical facility incidents, including the regular release of toxics into surrounding communities.

Without a doubt, the EPA’s proposed chemical disaster rule would leave the public without greater protection from life-threatening industrial incidents. There is still time to let them know that removing safeguards for communities and workers is unconscionable and we will not stand for it. We have fought hard for improvements in the past and won, and we can do it again. Public comments can still be submitted to the record, but the comment period for the proposed rule ends in two days. UCS has created an RMP public comment guide with tips on writing a strong comment, which can be found here.


Why Would Illinois Want More Pollution from Coal Power?


Old coal-burning power plants have the greatest emissions per energy delivered. Photo: snowpeak/Wikimedia Commons

Changes to an important state air pollution standard are being considered by the Illinois Pollution Control Board this summer. To assess the potential effects of changing the rule, my colleagues and I collaborated with the Clean Air Task Force to analyze the public health impacts of coal-fired power plants in Illinois. We found striking differences among the Dynegy plants that would be affected by the proposed rule change, which could be decided as soon as Thursday, August 23.

Under the current Illinois Multi-Pollutant Standard (MPS), the Dynegy coal plants that cause the most harm to Illinois residents are the ones more likely to be closed or be upgraded with air pollution control technology. But if the Pollution Control Board adopts Dynegy’s proposal to change how state air pollution limits are calculated, it could result in the company closing its cleaner plants and keeping its dirtiest plants open because it would no longer need the cleanest plants in its fleet to comply with the state requirements.

My colleague James Gignac, lead analyst in the Midwest Climate and Energy Program at the Union of Concerned Scientists (UCS), further reflects on the impacts of the proposed change to the MPS, below.


Recently acquired by Vistra, Dynegy is a Texas-based company that owns 11 power stations in Illinois with a total generating capacity of 8,200 megawatts. Eight of these stations are coal-fired power plants connected to the grid operator MISO. Dynegy is not a regulated utility (it is an independent power producer or merchant generator), yet has sought to force ratepayers to subsidize its power plants. In addition to its legislative efforts, Dynegy has been working with the Illinois EPA to change the Illinois MPS, a 2006 clean air standard that applies to the eight coal plants in MISO territory.

The proposed changes to the standard would create annual caps on tons of sulfur dioxide and nitrogen oxides emitted by the company’s entire coal fleet, rather than maintaining the existing standards that require small groups of plants to meet stringent pollution rates (in pounds of pollution per amount of coal burned). If approved, the new limit on sulfur dioxide would be nearly double what Dynegy emitted last year, and the cap on nitrogen oxide emissions would be 79 percent higher than 2016 emissions. Sulfur dioxide leads to the formation of particulate matter, and nitrogen oxides create ozone; both lead to many serious respiratory and cardiovascular health effects. Illinois EPA and Dynegy argue that emissions theoretically could have been higher under the existing standard, and therefore the new caps should be considered an improvement that also provides operational flexibility to the industry.

What’s at stake?

As part of an upcoming report, UCS partnered with the Clean Air Task Force, which has developed a methodology and software application to estimate the health impacts of individual coal plants. Below is a table showing key annual public health impacts caused by the eight Dynegy plants subject to the MPS, based on their 2016 operations:

Estimated 2016 Health Effect Incidents of Eight Dynegy Coal Plants

Note: Because Newton Unit 2 was permanently retired in September 2016, we have adjusted the plant-level data provided to us by CATF in proportion to the megawatt-hours generated by Newton Unit 1 in 2016.

Coal Plant     Location         Megawatt-hours  Premature  Heart    Asthma   Asthma ER  Acute       Hospital
                                (2016)          Deaths     Attacks  Attacks  Visits     Bronchitis  Admissions
Baldwin        Baldwin, IL           9,793,431       26.1     15.7    171.8       11.0        14.9         7.3
Coffeen        Coffeen, IL           4,606,098        4.4      2.6     29.1        1.9         2.5         1.2
Duck Creek     Canton, IL            2,108,062        1.5      0.9     10.3        0.7         0.9         0.4
E.D. Edwards   Bartonville, IL       2,811,862       36.0     21.9    237.6       15.0        20.7        10.2
Joppa          Joppa, IL             3,162,666       38.6     22.8    248.1       15.5        21.6        10.8
Havana         Havana, IL            2,353,449        9.2      5.6     61.0        3.9         5.3         2.6
Hennepin       Hennepin, IL          1,436,468       24.5     14.9    161.6       10.2        14.1         7.0
Newton Unit 1  Newton, IL            2,157,885       25.5     15.3    165.7       10.4        14.4         7.1


Coal plants emit various types of air pollutants but can reduce the harmful impacts by installing pollution controls such as scrubbers. This technology can be a major long-term investment and many plants do not have the full suite of equipment. In addition to pollution controls and emission levels, the health effects of coal plants are influenced by their downwind population levels.

We can see from the data above that Coffeen and Joppa both produced over 3 million megawatt-hours in 2016, yet the health impacts from Joppa were dramatically higher even though it generated less power. Similarly, Duck Creek and Newton Unit 1 produced roughly equal megawatt-hours in 2016, but the harm caused by Newton Unit 1 was far greater.

In other words, the dirtiest plants in Dynegy’s Illinois fleet cause approximately 9 to 17 times more premature deaths compared to Coffeen and Duck Creek, respectively.
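Those ratios follow directly from the premature-death column in the table. A quick sketch in Python, using figures copied from the 2016 estimates above, shows the arithmetic behind the comparison:

```python
# 2016 estimated premature deaths per plant, from the table above.
deaths = {
    "Coffeen": 4.4,        # cleaner plant, ~4.6M MWh
    "Duck Creek": 1.5,     # cleaner plant, ~2.1M MWh
    "Joppa": 38.6,         # dirtier plant, ~3.2M MWh
    "Newton Unit 1": 25.5, # dirtier plant, ~2.2M MWh
}

# Joppa vs. Coffeen: less power generated, roughly 9x the harm.
print(round(deaths["Joppa"] / deaths["Coffeen"], 1))  # -> 8.8

# Newton Unit 1 vs. Duck Creek: roughly equal output, 17x the harm.
print(round(deaths["Newton Unit 1"] / deaths["Duck Creek"], 1))  # -> 17.0
```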

The concern of environmental and public health advocates is that Dynegy’s proposed change to the MPS would allow it to close cleaner plants like Coffeen and Duck Creek, which are more expensive to operate, because the company would no longer need them to offset pollution from the dirtier units. Dynegy could then run plants like Joppa and Newton Unit 1 to generate the same amount of electricity, but with greater health impacts like those listed above. Closing plants reduces the amount of available electrical generating capacity in the region, which tends to increase the power prices paid to companies like Dynegy. Closing cleaner plants that are more expensive to operate thus results in greater profits for Dynegy.

If the MPS is not changed, Dynegy would more likely retire the dirtier and more harmful plants instead. Less air pollution is a good thing for the health of Illinois residents, and continued progress toward cleaner air was the intent of the rule when it was originally adopted.

The Illinois Pollution Control Board held hearings on the proposed rule change this past winter and spring.  A decision is expected soon.


James’ reflections highlight the need for the Illinois Pollution Control Board to reject the proposed rule change, because it would not benefit the environment or the public health of Illinois residents. The operational flexibility that Dynegy and Vistra desire should not outweigh the public health benefits of the existing rule.

Profitable companies that knowingly purchase aging, polluting coal plants should expect to comply with existing law and responsibly install modern pollution controls or invest in cleaner, more competitive sources of generation. The Dynegy plants can be reliably replaced with other resources, and doing so with renewable energy and energy efficiency can deliver significant economic benefits and bill savings to electric customers in central and southern Illinois. Vistra and Dynegy’s efforts to keep their coal plants open while attempting to roll back air quality standards are contrary to the clean energy transition underway in Illinois and should be rejected.

Our upcoming report, available in October, will further explore the many benefits of replacing Illinois coal plants with clean energy technologies.


The Senate Will Accelerate Kelvin Droegemeier’s White House Science Advisor Nomination. That’s a Good Thing.


Try not to breathe too easily, but the Senate is on a fast track to consider the nomination of Kelvin Droegemeier to lead the White House Office of Science and Technology Policy. And well it should. These days, this is one nomination we should all be excited about, as this Superman of science policy is sorely needed in the White House.

Many scientists cheered Dr. Droegemeier’s nomination after the White House went 19 months without a science advisor. I believe he would be a great pick for any administration, in any country.

The Office of Science and Technology Policy provides the president with advice on everything from energy to health care to pandemics. It needs a confirmed leader.

Feedback on his nomination has been almost universally positive.

A Senate committee will hold a confirmation hearing next Thursday, August 23, at 10:15 a.m. I hope that he will not only talk about his passion for scientific research, but also take a stand for the role of a robust federal scientific workforce in informing public health and environmental policy. Historically, OSTP has helped ensure that federal agencies have both the resources and the independence to use the best available science to make policy. It can do so again.

It remains to be seen whether Dr. Droegemeier will be appointed to serve as science advisor to the president as well as OSTP director; the former doesn’t require Senate confirmation. And while some suspect that the president will simply provide his science advisor with a sword to fall on, methinks that it isn’t that simple. A lack of science advice is a disadvantage for any world leader. Pretend that you’re trying to negotiate a nuclear or climate agreement: you can’t get there from here without understanding the science.

It’s important for Dr. Droegemeier to make it out okay and help end the longest drought of science advice the White House has seen in modern times.

For Washington Voters, I-1631 is a Chance to Tackle Climate Change Head On


Photo: Troye Owens/Flickr

The magnitude of the climate challenge is daunting; a constellation of causes and impacts, promising no simple fix.

But a new proposal in Washington state has identified a powerful place to start.

I-1631, on the ballot this November, is grounded in the reality that to truly address climate change today, it’s simply no longer enough to drive down carbon emissions—communities must now also be readied for climate impacts, including those already at hand, and all those still to come.

As a result, this community-oriented, solutions-driven carbon pricing proposal is generating enthusiastic support from a broad and growing coalition across the state.

No single policy can solve all climate challenges, but I-1631 presents a critically important start. And, because it was specifically designed to prioritize those most vulnerable to climate change and the inevitable transitions to come—through intersections with jobs, health, geography, and historical social and economic inequities—the policy stands to be a powerful change for good, and that is the very best metric we’ve got.

Here, a summary of what it’s all about.

Overarching framework

I-1631 is organized around a commonsense framework: charge a fee for carbon pollution to encourage the shift toward a cleaner economy, then accelerate that transition by investing the revenues in clean energy and climate resilience.

The Clean Air, Clean Energy Initiative states:

Investments in clean air, clean energy, clean water, healthy forests, and healthy communities will facilitate the transition away from fossil fuels, reduce pollution, and create an environment that protects our children, families, and neighbors from the adverse impacts of pollution.

Funding these investments through a fee on large emitters of pollution based on the amount of pollution they contribute is fair and makes sense.

I-1631 emerged as the result of a years-long collaboration between diverse stakeholders—including labor, tribal, faith, health, environmental justice, and conservation groups—leading to a proposal that’s deeply considerate of the many and varied needs of the peoples and communities caught in the climate crossfire. The Union of Concerned Scientists is proud to have been a part of this alliance and to now support I-1631.

How it works

There are two main components to I-1631—the investments and the fee. Let’s take them in turn.

Investing in a cleaner, healthier, and more climate-resilient world.

I-1631 prioritizes climate solutions by investing in the communities, workforces, and technologies that the state will need to thrive moving forward. This means identifying and overcoming the vulnerabilities these groups face, and re-positioning the state’s economic, health, and environmental priorities to achieve a resilient and robust future.

The policy proactively approaches this by assigning collected fees to one of three investment areas, guided by a public oversight board and content-specific panels:

  • Clean Air and Clean Energy (70 percent): Projects that can deliver tens of millions of tons of emissions reductions over time, including through renewables, energy efficiency, and transportation support. Within four years, this category would also create a $50 million fund to support workers affected by the transition away from fossil fuels, to be replenished as needed thereafter.
  • Clean Water and Healthy Forests (25 percent): Projects that can increase the resiliency of the state’s waters and forests to climate change, like reducing flood and wildfire risks and boosting forest health.
  • Healthy Communities (5 percent): Projects that can prepare communities for the challenges caused by climate change—including by developing their capacity to directly participate in the process—and to ensure that none are disproportionately affected.

Of these investments, the initiative further specifies a need to target well over a third of all funds to projects that benefit areas facing particularly high environmental burdens and population vulnerabilities, as well as projects supported by Indian tribes. This works to ensure that those who are most vulnerable are not left behind, but instead positioned to thrive in a changing world.

Another vital part of the proposal is that at least 15 percent of Clean Air and Clean Energy funds must be dedicated to alleviating increases in energy costs for low-income customers that result from pollution reduction initiatives. Without such a stipulation, the policy could lead lower-income households to feel its effects more. But instead, I-1631 directs funds to eliminate such cost increases. This could be through energy-saving investments, such as weatherizing a home, or by directly limiting costs, such as through bill assistance programs.

Qualifying light and power businesses and gas distribution businesses can, instead of paying a fee, claim an equivalent amount of credits and then directly invest in projects according to an approved clean energy investment plan.

Charging a fee for carbon pollution.

To pay for these investments, I-1631 would charge large emitters for the carbon emissions they release. In turn, the policy would send a signal to the market to spur innovation and investments in lower-carbon, less polluting alternatives.

The proposed fee begins at $15 per metric ton of carbon content in 2020 and increases by $2 per metric ton each year thereafter, plus any necessary adjustments for inflation. It is estimated to generate hundreds of millions of dollars annually.

Notably, the price does not go up indefinitely. Because the intent of the fee is to achieve a climate-relevant reduction in carbon emissions, it is to be fixed once the state’s 2035 greenhouse gas reduction goal of 25 percent below 1990 levels is met and the state’s emissions are on a trajectory consistent with the state’s 2050 goal of 50 percent below 1990 levels.
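The nominal schedule is simple to compute. Here is a minimal sketch in Python; the function name is hypothetical, and it deliberately ignores the inflation adjustment and the potential post-2035 freeze, which depend on future conditions:

```python
def i1631_fee(year, start_year=2020, base=15.0, step=2.0):
    """Nominal I-1631 carbon fee in dollars per metric ton of carbon
    content: $15 in 2020, rising $2 each year thereafter. Ignores
    inflation adjustments and the freeze once the 2035 goal is met."""
    if year < start_year:
        raise ValueError("fee takes effect in 2020")
    return base + step * (year - start_year)

print(i1631_fee(2020))  # -> 15.0
print(i1631_fee(2025))  # -> 25.0
print(i1631_fee(2035))  # -> 45.0
```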

And just who is it that pays? Generally, the largest emitters in the state—fossil-fuel fired power plants, oil companies, and large industrial facilities.

However, the proposal also recognizes that Washington has some industries in direct competition with others in places without a comparable carbon fee, and thus a price on carbon could make them less competitive. As a result, the policy specifically provides select exemptions to these entities, including agriculture, pulp and paper mills, and others. The proposal also excludes coal-fired power plants that have committed to shutting down by 2025, in recognition of existing legal settlements and constraints.

Ultimately, the policy seeks to spur the state’s economy towards a forward-looking, carbon-considerate model, but to do it in such a way that workers and vulnerable communities do not end up bearing a disproportionate share of the costs.

Where it stands

Following months of organizing and signature gathering, on top of years of stakeholder engagement and collaboration, I-1631 will officially be put to vote in Washington this November.

This is not the first time carbon pricing has come up in the state; I-1631 builds from previous measures attempted in the legislature and on the ballot.

And this policy has the advantage of being designed from the ground up. It unites diverse stakeholders in common cause, and proactively addresses the fact that vulnerable communities are at risk of being hit first and worst.

What’s more, I-1631’s method of tackling the problem from both sides—charging a fee for pollution and investing the proceeds in the transition it aims to drive—is an effective policy design, and a popular one at that. It wouldn’t be the first carbon pricing policy in the US, with cap-and-trade programs running in California and the Northeast, though it would be the first to employ an explicit price.

In the face of all this positive momentum, fossil fuel interests have been mounting an aggressive opposition campaign. But their desperate attempts at finding an objection that will stick—calling out threats to jobs and undue burdens on the poor—are undercut by the policy’s careful exemptions, sustained support for worker transitions, and significant direct attention paid to those who need it most.

The fact is, climate change is here, now, and communities are suffering the costs. I-1631 points a way forward, for all.

With this proposal, Washington is demonstrating that climate and community leadership can still be found if you let the people speak—heartening at a time when evidence of such leadership from the nation’s capital is itself sorely missed.

Anticipated Transient Without Scram

UCS Blog - All Things Nuclear (text only) -

Role of Regulation in Nuclear Plant Safety #8

In the mid-1960s, the nuclear safety regulator raised concerns about the reliability of the system relied upon to protect the public in the event of a reactor transient. If that system failed—or failed again, since it had already failed once—the reactor core could be severely damaged (as it had been during that prior failure). The nuclear industry resisted the regulator’s efforts to manage this risk. Throughout the 1970s, the regulator and industry pursued a non-productive exchange of study and counter-study. Then the system failed again—three times in June 1980 and twice more in February 1983. The regulator adopted the Anticipated Transient Without Scram rule in June 1984. But it was too little, too late—the hazard it purported to manage had already been alleviated by other means.

Anticipated Transients

Nuclear power reactors are designed to protect workers and members of the public should anticipated transients and credible accidents occur. Nuclear Energy Activist Toolkit #17 explained the difference between transients and accidents. Anticipated transients include the failure of a pump while running and the inadvertent closure of a valve that interrupts the flow of makeup water to the reactor vessel.

The design responses to some anticipated transients involve automatic reductions of the reactor power level. Anticipated transients upset the balance achieved during steady state reactor operation—the automatic power reductions make it easier to restore balance and end the transient.


For other transients and for transients where power reductions do not successfully restore balance, the reactor protection system is designed to automatically insert control rods that stop the nuclear chain reaction. This rapid insertion of control rods is called “scram” or “reactor trip” in the industry. Nuclear Energy Activist Toolkit #11 described the role of the reactor protection system.

Scram was considered to be the ultimate solution to any transient problems. Automatic power reductions and other automatic actions might mitigate a transient such that scram is not necessary. But if invoked, scram ended any transient and placed the reactor in a safe condition—or so it was believed.

Anticipated Transient Without Scram (ATWS)

Dr. Stephen H. Hanauer was appointed to the Advisory Committee on Reactor Safeguards (ACRS) in 1965. (At the time, the ACRS was part of the Atomic Energy Commission (AEC). The Nuclear Regulatory Commission (NRC) did not exist until 1975, when the Energy Reorganization Act split the AEC into the NRC and what is today the Department of Energy.) During reviews of applications for reactor operating licenses in 1966 and 1967, Hanauer advocated separating the instrumentation systems used to control the reactor from the instrumentation systems used to protect it (i.e., trigger automatic scrams). Failure of such a combined system had caused an accident on November 18, 1958, at the High Temperature Reactor Experiment No. 3 in Idaho.

The nuclear industry and its proponents downplayed the concerns on grounds that the chances of an accident were so small and the reliability of the mitigation systems so high that safety was good enough. Dr. Alvin Weinberg, Director of the Oak Ridge National Laboratory, and Dr. Chauncey Starr, Dean of Engineering at UCLA, publicly contended that the chances of a serious reactor accident were similar to that of a jet airliner plunging into Yankee Stadium during a World Series game.

In February 1969, E. P. Epler, a consultant to the ACRS, pointed out that common cause failure could impair the reactor protection system and prevent the scram from occurring. The AEC undertook two efforts in response to the observation: (1) examine mechanisms and associated likelihoods that a scram would not happen when needed, and (2) evaluate the consequences of anticipated transients without scrams (ATWS).

The AEC published WASH-1270, “Technical Report on Anticipated Transients Without Scram,” in September 1973. Among other things, this report established the objective that the chances of an ATWS event leading to serious offsite consequences should be less than 1×10⁻⁷ per reactor-year. For a fleet of 100 reactors, meeting that objective translates into one ATWS accident every 100,000 years—fairly low risk.

The AEC had the equivalent of a speed limit sign but lacked speedometers or radar guns. Some argued that existing designs had failure rates as high as 1×10⁻³ per reactor-year—10,000 times higher than the safety objective. Others argued that the existing designs had failure rates considerably lower than 1×10⁻⁷ per reactor-year. The lack of riskometers and risk guns fostered a debate that pre-dated the “tastes great, less filling” debate fabricated years later to sell Miller Lite beer.
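The numbers framing this debate are easy to verify. A quick sketch of the fleet-wide arithmetic (variable names are illustrative):

```python
# WASH-1270 objective: chance of an ATWS with serious offsite
# consequences below 1e-7 per reactor-year.
objective = 1e-7   # per reactor-year
fleet = 100        # reactors in the fleet

events_per_year = objective * fleet     # fleet-wide: about 1e-5 per year
years_per_event = 1 / events_per_year   # about 100,000 years between events

# Gap between the claimed worst-case failure rate and the objective
claimed_rate = 1e-3
gap = claimed_rate / objective          # the "10,000 times higher" figure

print(round(years_per_event))  # 100000
print(round(gap))              # 10000
```
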

An article titled “ATWS—Impact of a Nonproblem,” that appeared in the March 1977 issue of the EPRI Journal summarized the industry’s perspective (beyond the clue in the title):

ATWS is an initialism for anticipated transient without scram. In Nuclear Regulatory Commissionese it refers to a scenario in which an anticipated incident causes the reactor to undergo a transient. Such a transient would require the reactor protection system (RPS) to initiate a scram (rapid insertion) of the control rods to shut down the reactor, but for some reason the scram does not occur. … Scenarios are useful tools. They are used effectively by writers of fiction, the media, and others to guide the thinking process.

Two failures to scram had already occurred (in addition to the HTRE-3 failure). The boiling water reactor at the Kahl nuclear plant in Germany experienced a failure in 1963, and the N-reactor at Hanford in Washington had one in 1970. The article suggested that these scram failures should be excluded from the scram reliability statistical analysis, observing that “One need not rely on data alone to make an estimate of the statistical properties of the RPS.” As long as scenarios exist, one doesn’t need statistics getting in the way.

The NRC formed an ATWS task force in March 1977 to end, or at least focus, the non-productive debate that had been going on since WASH-1270 was published. The task force’s work was documented in NUREG-0460, “Anticipated Transients Without Scram for Light Water Reactors,” issued in April 1978. The objective was revised from 1×10⁻⁷ per reactor-year to 1×10⁻⁶ per reactor-year.

Believe it or not, changing the safety objective without developing the means to objectively gauge performance against it did not end the debate or even appreciably change it. Now, some argued that existing designs had failure rates as high as 1×10⁻³ per reactor-year—1,000 times higher than the safety objective. Others argued that the existing designs had failure rates considerably lower than 1×10⁻⁶ per reactor-year. The 1970s ended without resolution of the safety problem that had arisen more than a decade earlier.

The Browns Ferry ATWS, ATWS, and ATWS

On June 28, 1980, operators reduced the power level on the Unit 3 boiling water reactor (BWR) at the Browns Ferry Nuclear Plant in Alabama to 35 percent and depressed the two pushbuttons to initiate a manual scram. All 185 control rods should have fully inserted into the reactor core within seconds to terminate the nuclear chain reaction. But 76 control rods remained partially withdrawn and the reactor continued operating, albeit at an even lower power level. Six minutes later, an operator depressed the two pushbuttons again. But 59 control rods remained partially withdrawn after the second ATWS. Two minutes later, the operator depressed the pushbuttons again. But 47 control rods remained partially withdrawn after the third ATWS. Six minutes later, an automatic scram occurred that resulted in all 185 control rods being fully inserted into the reactor core. It took four tries and nearly 15 minutes, but the reactor core was shut down. Fission Stories #107 described the ATWSs in more detail.

In BWRs, control rods are moved using hydraulic pistons. Water is supplied to one side of the piston and vented from the other side with the differential pressure causing the control rod to move. During a scram, the water vents to a large metal pipe and tank called the scram discharge volume. While never proven conclusively, it is generally accepted that something blocked the flow of vented water into the scram discharge volume. Flow blockage would have reduced the differential pressure across the hydraulic pistons and impeded control rod insertions. The scram discharge volume itself drains into the reactor building sump. The sump was found to contain considerable debris. But because it collects water from many places, none of the debris could be specifically identified as having once blocked flow into the scram discharge volume.

Although each control rod had its own hydraulic piston, the hydraulic pistons for half the control rods vented to the same scram discharge volume. The common mode failure of flow blockage impaired the scram function for half the control rods.

The NRC issued Bulletin 80-17, “Failure of 76 of 185 Controls Rods to Fully Insert During a Scram at a BWR,” on July 3, 1980, with Supplement 1 on July 18, 1980, Supplement 2 on July 22, 1980, Supplement 3 on August 22, 1980, Supplement 4 on December 18, 1980, and Supplement 5 on February 2, 1981, compelling plant owners to take interim and long-term measures to prevent what didn’t happen at Browns Ferry Unit 3—a successful scram on the first try—from not happening at their facilities.

ATWS – Actual Tack Without Stalling

On November 19, 1981, the NRC published a proposed ATWS rule in the Federal Register for public comment. One could argue that the debates that filled the 1970s laid the foundation for this proposed rule and the June 1980 ATWSs at Browns Ferry played no role in this step or its timing. That’d be one scenario.

The Salem ATWS and ATWS

During startup on February 25, 1983, following a refueling outage, low water level in one of the steam generators on the Unit 1 pressurized water reactor at the Salem nuclear plant triggered an automatic scram signal to the two reactor trip breakers. Had either breaker functioned, all the control rods would have rapidly inserted into the reactor core. But both breakers failed. The operators manually tripped the reactor 25 seconds later. The following day, NRC inspectors discovered that an automatic scram signal had also happened during an attempted startup on February 22, 1983. The reactor trip breakers failed to function. The operators had manually tripped the reactor. The reactor was restarted two days later without noticing, and correcting, the reactor trip breaker failures. Fission Stories #106 described the ATWSs in more detail.

In PWRs, control rods move via gravity during a scram. They are withdrawn upward from the reactor core and held fully or partially withdrawn by electro-magnets. The reactor trip breakers stop the flow of electricity to the electro-magnets, which releases the control rods to allow gravity to drop them into the reactor core. Investigators determined that the proper signal went to the reactor trip breakers on February 22 and 25, but the reactor trip breakers failed to open to stop the electrical supply to the electro-magnets. Improper maintenance of the breakers essentially transformed oil used to lubricate moving parts into glue binding those parts in place—in the wrong places on February 22 and 25, 1983.

The Salem Unit 1 reactor had two reactor trip breakers. Opening of either reactor trip breaker would have scrammed the reactor. The common mode failure of the same improper maintenance practices on both breakers prevented them both from functioning when needed, twice.

The NRC issued Bulletin 83-01, “Failure of Reactor Trip Breakers (Westinghouse DB-50) to Open on Automatic Trip Signal,” on February 25, 1983, Bulletin 83-04, “Failure of Undervoltage Trip Function of Reactor Trip Breakers,” on March 11, 1983, and Bulletin 83-08, “Electrical Circuit Breakers with Undervoltage Trip in Safety-Related Applications Other Than the Reactor Trip System,” on December 28, 1983, compelling plant owners to take interim and long-term measures to prevent failures like those experienced on Salem Unit 1.

ATWS Scoreboard: Browns Ferry 3, Salem 2

ATWS – Actual Text Without Semantics

The NRC adopted the final ATWS rule on June 26, 1984, slightly over 15 years after the ACRS consultant wrote that scrams might not happen when needed due to common mode failures. The final rule was issued less than four years after a common mode failure caused multiple ATWS events at Browns Ferry and about 18 months after a common mode failure caused multiple ATWS events at Salem. The semantics of the non-productive debates of the Seventies gave way to actual action in the Eighties.

UCS Perspective

The NRC issued NUREG-1780, “Regulatory Effectiveness of the Anticipated Transient Without Scram Rule,” in September 2003. The NRC “concluded that the ATWS rule was effective in reducing ATWS risk and that the cost of implementing the rule was reasonable.” But that report relied on bona-fide performance gains that were achieved apart from the ATWS rule and would have been realized without it. For example, the average reactor scrammed 8 times in 1980. That scram frequency dropped to an average of fewer than two scrams per reactor per year by 1992.

Fig. 1 (Source: Nuclear Regulatory Commission)

The ATWS rule did not trigger this reduction or accelerate the rate of reduction. The reduction resulted from the normal physical process, often called the bathtub curve due to its shape. As procedure glitches, training deficiencies, and equipment malfunctions were weeded out, their fixes lessened the recurrence rate of problems resulting in scrams. I bought a Datsun 210 in 1980. That acquisition had about as much to do with the declining reactor scram rate since then as the NRC’s ATWS rule had.

There has been an improvement in the reliability of the scram function since 1980. But again, that improvement was achieved independently of the ATWS rule. The Browns Ferry and Salem ATWS events prompted the NRC to mandate, via a series of bulletins, that owners take steps to reduce the potential for common mode failures. Actions taken in response to those non-rule-related mandates improved the reliability of the scram function more than the ATWS rule’s measures did.

If the ATWS rule had indeed made nuclear plants appreciably safer, then it would represent under-regulation by the NRC. After all, the question of the need for additional safety arose in the 1960s. If the ATWS rule truly made reactors safer, then the “lost decade” of the 1970s is inexcusable. The ATWS rule should have been enacted in 1974 instead of 1984 if it was really needed for adequate protection of public health and safety.

But the ATWS rule enacted in 1984 did little to improve safety beyond what had already been achieved via other means. The 1980 and 1983 ATWS near-miss events at Browns Ferry and Salem might have been averted by an ATWS rule enacted a decade earlier. Once they happened, the fixes they triggered fleet-wide precluded the need for an ATWS rule. So, the ATWS rule was too little, too late.

The AEC/NRC and nuclear industry expended considerable effort during the 1970s not resolving the ATWS issue—effort that could have been better applied to resolving other safety issues more rapidly.

ATWS becomes the first Role of Regulation commentary to fall into the “over-regulation” bin. UCS has no established plan for how this series will play out. ATWS initially appeared to be an “under-regulation” case, but research steered it elsewhere.

* * *

UCS’s Role of Regulation in Nuclear Plant Safety series of blog posts is intended to help readers understand when regulation played too little a role, too much of an undue role, and just the right role in nuclear plant safety.

Strong Leadership Makes for Satisfied Federal Scientists: A Case Study at the FDA

UCS Blog - The Equation (text only) -

As our research team was analyzing the results of our newest federal scientist survey that was released earlier this week, it was heartening to see that at some agencies, like at the U.S. Food & Drug Administration (FDA), the job satisfaction and ability to work appear to be even better than in years past. One of the best characterizations of the sentiments expressed by FDA scientists is this quote from a respondent: “The current administration has overall enforced certain science policies which harm the public in general. However, the current commissioner is fantastic and committed to the FDA’s mission. He is consistently involved in policy development which allows the protection and promotion of public health.”

We sent a survey to 9,378 FDA scientists and scientific experts; 354 responded, an overall response rate of 3.8 percent. Our findings suggest that scientists at the FDA are faring better than their colleagues at the other 16 federal agencies surveyed. FDA scientists overall appeared to have faith in FDA leadership, including FDA Commissioner Dr. Scott Gottlieb.

So what is FDA doing right?

Commissioner Gottlieb visits the agency’s Center for Devices and Radiological Health (CDRH) in Silver Spring, MD in November 2017 (Photo credit: Flickr/US FDA)

A genuine interest in getting the science right

Encouragingly, and as in previous UCS surveys, FDA scientists called attention to efforts by the agency to protect scientific integrity, with some responses indicating a strong sense of trust in supervisors and leadership. Most FDA scientists reported no change in personal job satisfaction or perception of office effectiveness; some respondents noted increased job satisfaction during the past year. 25 percent (87 respondents) said that the effectiveness of their division or office has increased compared with one year ago. Part of the reason for the agency’s effectiveness is its ability to collect the scientific and monitoring information needed to meet its mission, a metric that has significantly improved between 2015 and 2018. (See figure below). Further, 65 percent (222 respondents) felt that their direct supervisors consistently stand behind scientists who put forth scientifically defensible positions that may be politically contentious.

In 2018, the majority of FDA respondents felt that the agency frequently collected the information needed to meet its mission. When compared with previous results, the most significant differences were found between the 2015 and 2018 surveys (p<0.0001).

Perhaps it is because Gottlieb is a medical doctor who seems genuinely interested in evidence-based policies that we have not been bombarded with policy proposals that sideline science from the FDA since he began leading the agency in July 2017. FDA scientists who took the survey have corroborated this. One respondent wrote that “the Commissioner’s office is tirelessly upholding best practices in various scientific fields such as smoking cessation, opioid/addiction crisis, generic drug manufacturing, sustainable farming practices.” Another respondent wrote, “FDA has a proactive Commissioner who —so far—has consistently followed science-based information and promoted science-based initiatives in the interest of public health.”

He has encouraged the work of FDA’s advisory committees like the Drug Safety and Risk Management Advisory Committee and the Anesthetic and Analgesic Drug Products Advisory Committee that recently met to make recommendations to the FDA on its regulation of transmucosal immediate-release fentanyl (TIRF) products, which were being prescribed for off-label uses for years. Gottlieb does not get defensive about weak spots in FDA’s portfolio. According to one respondent, “I’ve been pleasantly surprised by Commissioner Gottlieb’s knowledge and focus on FDA science. I was at a brief with him and he was interested in the science and less focused on the legal and political effects than I would have guessed. He was open-minded and curious, and asked questions when he didn’t understand the issue fully. It improved my outlook on my Agency’s future.” The ability for Gottlieb to ask questions and listen to agency scientists as well as outside experts is an important quality for a Commissioner making decisions that impact our health and safety.

I took this photo of an advertisement on a DC metro train this summer, revealing that Gottlieb is serious about recruitment to the agency.

Taking action to improve hiring practices and retention of staff

Soon after Commissioner Gottlieb was confirmed, the FDA took steps to examine its own hiring practices to identify improvements that could be made to build and keep a stronger workforce. The agency wrote a report, held a public meeting, and received feedback from FDA staff throughout the process because according to Gottlieb,  “The soul of FDA and our public health mission is our people. Retaining the people who help us achieve our successes is as important as recruiting new colleagues to help us meet our future challenges.” For scientific staff, the agency plans to do more outreach to scientific societies and academic institutions for recruitment and to reach out to early career scientists and make them aware that public service at the FDA is a viable career option.

A commitment to transparency

Not only have there been some encouraging policies put in place by the FDA, but Gottlieb seems committed to informing the public about these decisions. He is very active on Twitter and issues so many public statements that reporters feel almost overwhelmed by his updates. This is in contrast, of course, to leaders like former EPA Administrator Scott Pruitt, who seldom announced his whereabouts in advance and was openly hostile to reporters.

Still room for improvement at the FDA and across the government

To be sure, there have been some bumps along the road. Last year, Gottlieb disbanded the FDA’s Food Advisory Committee, which was the only federal advisory committee focused entirely on science-based recommendations on food safety, and he delayed implementation of changes to the nutrition facts label that would have included a line for added sugars by this summer.

Further, survey respondents noted that inappropriate outside influence, such as from regulated industries, is apparent and stymies science-based decisionmaking at the agency. 22 percent (70 respondents) felt that the presence of senior decisionmakers from regulated industries or with financial interest in regulatory outcomes inappropriately influences FDA decisionmaking. Nearly a third (101 respondents) cited the consideration of political interests as a barrier to science-based decisionmaking and 36 percent (114 respondents) felt that the influence of business interests hinders the ability of the agency to make science-based decisions. In addition, respondents reported workforce reductions at the agency and said these lessened their ability to fulfill FDA’s science-based mission.

One thing became very clear as we reviewed the results of UCS’ seventh federal scientist survey that closed this spring: scientists across many federal agencies have been unable to do their jobs to the best of their ability under the Trump administration. Since the start of 2017, agencies have been hollowed out and there has been a sharp decline in expertise and capacity. Reduced staff capacity combined with political interference and the absence of leadership in some cases has made it harder for scientists to carry out important work. As the threat of political influence looms large over the government, much of federal scientists’ ability to do their work to advance the mission of the agencies has to do with the quality of leadership and the administrator or commissioner’s commitment to evidence over politics as a basis for decisionmaking.

Is Scientific Integrity Safe at the USDA?

UCS Blog - The Equation (text only) -

U.S. Department of Agriculture (USDA) Agricultural Research Service (ARS) plant physiologist Franck Dayan observes wild-type and herbicide-resistant biotypes of Palmer Amaranth (pigweed) as Mississippi State University graduate student, Daniela Ribeiro collects samples for DNA analysis at the ARS Natural Products Utilization Research Unit in Oxford, MS on July 20, 2011. USDA photo by Stephen Ausmus. Photo: Stephen Ausmus, USDA/CC BY 2.0 (Flickr)

Science is critical to everything the US Department of Agriculture does—helping farmers produce a safe, abundant food supply, protecting our soil and water for the future, and advising all of us about good nutrition to stay healthy. I recently wrote about the Trump administration’s new USDA chief scientist nominee, Scott Hutchins, and the conflicts he would bring from a career narrowly focused on developing pesticides for Dow.

But meanwhile, Secretary of Agriculture Sonny Perdue last week abruptly announced a proposed reorganization of the USDA’s research agencies. This move has implications for whoever takes up the post of chief scientist—as do new survey findings released yesterday, which suggest that the Trump administration is already having detrimental effects on science and scientists at the USDA.

An attack on science, and a shrinking portfolio for the next chief scientist

The job for which Scott Hutchins (and this guy before him) has been nominated is actually a multi-pronged position. The under secretary is responsible for overseeing the four agencies that currently make up the USDA’s Research, Education, and Economics (REE) mission area: the Agricultural Research Service, the Economic Research Service (ERS), the National Agricultural Statistics Service, and the National Institute for Food and Agriculture (NIFA). Collectively, these agencies carry out or facilitate nearly $3 billion worth of research on food and agriculture topics every year. In addition, the REE under secretary is the USDA’s designated chief scientist, overseeing the Office of the Chief Scientist, established by Congress in 2008 to “provide strategic coordination of the science that informs the Department’s and the Federal government’s decisions, policies and regulations that impact all aspects of U.S. food and agriculture and related landscapes and communities.” OCS and the chief scientist are also responsible for ensuring scientific integrity across the department.

Altogether, it’s no small job, but it may soon get smaller. Secretary Perdue’s unexpected reorganization proposal last week would pluck ERS figuratively from within REE and place it in the Secretary’s office. Perdue’s announcement also included a plan to literally move ERS, along with NIFA, to as-yet-undetermined locations outside the DC area.

Perdue’s proposal cited lower rents and better opportunities to recruit agricultural specialists. But that rationale sounds fishy to UCS and other observers, as well as former USDA staff (the most recent NIFA administrator had this unvarnished reaction) and current staff who were caught by surprise. The move looks suspiciously like subordinating science to politics, likely giving big agribusiness and its boosters in farm-state universities ever more influence over the direction of USDA research that really should be driven by the public interest. Moreover, on the heels of a White House proposal earlier this year to cut the ERS budget in half—which Congress has thus far ignored—Perdue’s “relocate or leave” plan for ERS staff sure seems like a back-door way to gut the agency’s capacity.

New USDA scientist survey findings give more cause for concern

Even before announcements of a conflicted chief scientist nominee and ill-conceived reorganization, things weren’t exactly rosy for those working within REE agencies. In a survey conducted in February and March and released by UCS yesterday, scientists and economists in ARS, ERS, NASS, and NIFA raised concerns about the effects of political interference, budget cuts, and staff reductions. In partnership with Iowa State University’s Center for Survey Statistics and Methodology, we asked more than 63,000 federal scientists across 16 government agencies about scientific integrity, agency effectiveness, and the working environment for scientists in the first year of the Trump administration. At the USDA, we sent the survey to more than 3,600 scientists, economists, and statisticians we identified in the four REE agencies; about 7 percent (n=258) responded.

Among the findings summarized in our USDA-specific fact sheet are that scientists:

  • Face restrictions on communicating their work—78 percent said they must obtain agency preapproval to communicate with journalists; and
  • Report workforce reductions are a problem—90 percent say they’ve noticed such reductions in their agencies. And of those, 92 percent say short-staffing is making it harder for the USDA to fulfill its science-based mission.

To sum up: the next USDA chief scientist will lead a shrinking, under-resourced, and somewhat demoralized cadre of scientists facing political interference and possibly increased influence from industry (a trend we are already seeing in the Trump/Perdue USDA). All this at a time when the department really needs to advance research that can help farmers meet the myriad challenges they face and safeguard the future of our food system.

Soon, I’ll follow up with questions the Senate might want to ask Scott Hutchins—in light of all this and his own chemical industry baggage—when they hold his confirmation hearing.

We Surveyed Thousands of Federal Scientists. Here are Some Potential Reasons Why the Response Rate Was Lower than Usual

UCS Blog - The Equation (text only) -

In February and March of this year, the Union of Concerned Scientists, in partnership with Iowa State University’s Center for Survey Statistics and Methodology, sent a survey to over 63,000 federal career staff across 16 federal agencies, offices, and bureaus. Our goal was to give scientists a voice on the state of science under the Trump administration as we had during previous administrations.

We worked diligently to maintain the anonymity of the federal scientists taking our survey, providing three different methods for participants to take the survey (online, phone, and a mail-in option). Scientists took advantage of all three methods.

We followed up with reminders nearly weekly. Some scientists who were invited to take the survey did reach out to confirm that UCS and Iowa State University were conducting a legitimate survey, and the link that we sent them was safe to click on. In addition, some agencies communicated to their staff that the survey was legitimate and that experts were free to take it on their own time.

And while we received enough responses for the results to be valid, the final overall response rate on this year's federal scientists survey sits at 6.9%. Compared to response rates on prior surveys conducted by UCS over the past 13 years, which have typically ranged from 15-20%, this year's rate is low. Let's unpack some potential reasons why, and what the impact may be on interpreting results.

Reasons Why the Response Rate Was Low
  1. Fear

It is possible that federal scientists and scientific experts were fearful or reluctant to comment on the state of science under the Trump administration. This fear may stem from some political appointees reprimanding career staff for speaking publicly about their work.

Additionally, it is possible that given the heightened threat of cyber-attacks in the modern era, scientists were afraid their information might be monitored or leaked. Survey respondents were given a unique identifier to ensure the integrity of the survey, and while these identifiers were deleted before the survey results were prepared for release, we heard reports that simply being associated with that unique identifier was too much of a barrier.

  2. Discouragement from Senior Leadership

At some offices within the Environmental Protection Agency (EPA) as well as at the Fish and Wildlife Service (FWS), senior leadership sent emails to employees that discouraged them from taking the 2018 UCS survey. FWS emails stated “Requests for service employees to participate in surveys, from both internal and external sources, must be approved in advance of the issuance of the survey.” But this is only true of surveys issued through the agency. Federal employees are not required to receive an ethics clearance to take an outside survey if they take it on their own time and with their own equipment. On the other hand, other offices within the EPA, as well as the National Oceanic and Atmospheric Administration (NOAA) and the US Department of Agriculture (USDA), sent emails reminding employees that they were welcome to take the survey provided they took it on their own time and with their own equipment.

  3. Larger Survey Sample

This is the largest survey that UCS has ever conducted. Our prior surveys have been administered to up to 4 agencies, whereas we surveyed 16 agencies, offices, and bureaus this year. It may be easier to achieve higher response rates with smaller survey samples because it is possible for researchers to devote more time to working with the survey sample and building trust.

  4. Lack of Public Directory and/or Job Descriptions

UCS can survey federal scientists because their names, email addresses, and job titles are publicly available, or at least they should be. For some agencies that we surveyed, like the National Highway Traffic Safety Administration (NHTSA) and the Department of Energy (DOE), which do not have public directories available, we submitted Freedom of Information Act (FOIA) requests for this information (it’s been a year and a half, and we still don’t have the directory from DOE). For other agencies, such as the EPA, a public directory was available but didn’t have complete information (e.g., job titles). Having the job title of the career staffer is important as it allows us to narrow down our survey sample to those who are likely to be a scientist or scientific expert. In the case of the EPA, Census Bureau, and DOE’s Office of Energy Efficiency and Renewable Energy (EERE), we did not have this information, so we had to administer the survey to the entire agency, or only to offices that we assumed would do scientific work. This greatly increases the number of individuals in an agency sample, such that response rates are likely skewed lower relative to other agencies.

Does this low response rate matter in the interpretation of survey results?

A low response rate can give rise to nonresponse bias, a form of sampling bias in which some individuals in the survey sample are less likely to respond than others (some suggest that only the most disgruntled employees would respond). However, there is a growing body of literature suggesting that this may not be the case. Counterintuitively, surveys with lower response rates can yield results as accurate as, or more accurate than, those with higher response rates. Another study showed that administering the same survey for only 5 days (achieving a 25% response rate) versus weeks (achieving a 50% response rate) largely did not produce statistically different results; the results that did differ significantly varied by only 4-8 percentage points.
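One back-of-envelope way to see why a few hundred responses can still be informative is the standard margin of error for a sample proportion. This sketch is illustrative only, not part of the UCS/Iowa State methodology, and it rests on the strong assumption that respondents behave like a random sample, which is exactly what nonresponse bias can break; the response counts used are rough figures implied by the rates in this post:

```python
import math

def moe_95(p: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion
    (normal approximation: 1.96 * standard error)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# If 78% of 258 respondents report a restriction, pure sampling
# error is roughly +/- 5 percentage points...
print(round(moe_95(0.78, 258), 3))

# ...and with a few thousand respondents (6.9% of 63,000 is about
# 4,350) it shrinks to roughly +/- 1 point.
print(round(moe_95(0.78, 4350), 3))
```

The formula quantifies only random sampling error; it says nothing about systematic bias, which is why the next paragraph's caveat about who chooses to respond still matters.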

Further, we have never suggested that the responses received at an agency represent the agency as a whole. Rather, the responses represent the experiences of those who chose to respond. And when hundreds or thousands of federal scientists report censorship, political influence on their work, or funding being distributed away from work just because the issue is viewed as politically contentious…well, we have a problem.

I’m very happy that we gave these scientists a voice, because they had a lot to say and it’s time that they’re heard.

Trump Administration Takes Aim at Public Health Protections

UCS Blog - The Equation (text only) -

Photo: Daniels, Gene/ The U.S. National Archives

In a new regulatory effort, the Trump Administration’s Environmental Protection Agency (EPA) claims to be working to increase consistency and transparency in how it considers costs and benefits in the rulemaking process.

Don’t be fooled.

Under the cover of these anodyne goals, the agency is in fact trying to pursue something far more nefarious. Indeed, what the EPA is actually working to do is formalize a process whereby the decision of whether or not to go ahead with a rule is permanently tilted in industry’s favor. How? By slashing away at what the agency can count as “benefits,” resulting in a full-on broadside to public health.

EPA handcuffs itself to let industry roam free

Though it may seem obscure, the implications of this fiddling are anything but.

That’s because EPA regularly engages in what’s known as “cost-benefit analysis,” or a comparison of the costs of implementing a rule to the benefits that are expected to result. This doesn’t always shape how a standard gets set—for some air pollutants, for example, Congress actually requires the agency to specifically not develop standards based on cost, but rather based on health, to ensure that the public stays sufficiently protected. Other regulations weigh costs at varying levels of import, related to the specifics of the issue at hand.

Still, cost-benefit analysis is widely used, even when it describes rather than informs. The process lends context to rulemaking efforts, though it certainly isn’t perfect: cost-benefit analysis struggles especially with impacts that don’t lend themselves to quantification. But serious practitioners on either side agree: this new effort by EPA is ill-conceived.

And the consequence of EPA’s proposed manipulations? Well, when the agency next goes to tally up the impacts of a rule, the traditionally towering benefits of its regulations could suddenly be cut way down in size. Not because public health is suddenly fixed, but just because it’s the only way to get the equation to solve in favor of industry time after time.

What’s more, alongside this effort EPA is simultaneously endeavoring to place untenable restrictions on the data and research the agency can consider in its rulemaking process, effectively hamstringing its own ability to fully and adequately evaluate impacts to public health.

Together, the net result would be a regulatory framework aggressively biased in industry’s favor, and a Trump Administration suddenly able to claim that public health protections are just not worth the cost.

To industry, with love

The good news is that this nascent proposal is incredibly hard to defend—on morals, and on merits.

The bad news is that the Trump Administration is highly motivated to do everything it can to find in favor of industry, so it’s still sure to be a fight.

Here, three key points to note:

  1. Ignoring co-benefits would permanently tilt the scales—and just does not make sense. One of the primary ways EPA is looking to shirk its regulatory responsibilities is by attempting to exclude from its cost-benefit evaluations the consideration of “co-benefits,” or benefits that arise as a result of a rule but not from the target pollutant itself. Absurd. These indirect benefits—the avoided ER visits, the precluded asthma attacks, the workdays still in play—are just as real as indirect costs, yet under this proposal only the latter would stay in the ledger.


  2. Requiring consistency across agency actions goes against EPA’s statutory requirements. The EPA is suggesting that cost-benefit methodologies should be applied uniformly across rulemaking efforts. This fails to recognize not only that not all protections should be evaluated in the same way, but also that Congress itself outlined differences in how the agency should evaluate proposals depending on specific circumstances. As a result, the agency isn’t even allowed to do what it’s trying to do. And even worse than this nonsense standardization? The fact that the agency is trying to implement the requirement at the level least protective of public health.


  3. EPA already tried this out, and those efforts were roundly denounced. Prior to this proposal, EPA actually made a preliminary attempt at using a co-benefits-limited approach in its proposed repeal of the Clean Power Plan. There, it attempted to separate out and consider only the benefits that accrued from carbon dioxide emissions reductions, despite the billions of dollars of additional health benefits anticipated to come from indirect benefits of the rule. This action was taken alongside a slew of other discriminatory accounting maneuvers, revealing an agency desperately doing anything it could to deliver for industry, including by tipping the scales.

This regulatory effort was carefully constructed to conceal intentions and motivations, but it’s clear from the agency’s surrounding narrative and parallel policy initiatives that it is being advanced in strict pursuit of an industry-favored finding.

Where to next?

Let’s not forget the mission of the EPA: to protect human health and the environment.

From that frame, it’s hard to see what good this effort would do. It doesn’t bring EPA closer to an objective analytical truth, it doesn’t elevate and further that which is in the public’s interest, and it certainly doesn’t suggest an agency doing everything it can to advance its one core mission.

Instead, what we see is EPA displaying shockingly overt deference to industry over the public, and in the process, failing to defend the very thing the agency was created to protect.

We’ve filed comments with EPA to call this rigged process out, and we’ll continue to stand up for the mission of the agency even when EPA lets it slide.

Because this demands a fight.

A fight for an agency that fights for the public, and a fight for a ledger that pulls people and places out of the red, not permanently cements them in it.


UCS Survey Shows Interior Department is Worse Than We Thought—And That’s Saying Something

UCS Blog - The Equation (text only) -

Photo: US Geological Survey

Can scientific staff at the US Department of the Interior rest easy knowing that their colleagues at other agencies have it worse when it comes to political interference?

Survey says: Nope.

Today the Union of Concerned Scientists (UCS) released the results from its periodic survey of scientific professionals at federal agencies, and the results from the Department of the Interior (DOI) are damning. Not only do the responses indicate plummeting morale, job satisfaction, and agency effectiveness, but politics is now being felt significantly at the US Geological Survey, a non-regulatory scientific bureau at DOI that has historically operated without substantial political interference. In all, concerns about political interference, censorship of politically contentious issues, and workforce reductions at DOI are higher than at most other agencies.

The comments from the survey read like an organizational leadership seminar’s list of fatal flaws: Hostile workplace, check; fear of retaliation and discrimination, check; self-censorship, check; poor leadership, check; chronic understaffing, check. To make matters worse, the political leadership at Interior, led by Secretary Ryan Zinke, has a deserved reputation for barring career staff from decision-making processes.

In addition to the undue influence of political staff, the top concern from DOI scientific staff was lack of capacity. One respondent commented: “Many key positions remain unfulfilled, divisions are understaffed, and process has slowed to a crawl.”

As a former career civil servant at Interior, I can attest to the plummeting morale at the agency. Even before I resigned in October 2017, there was a pall over every office and bureau, and career staff were feeling completely ignored by Trump administration officials. That disregard led to some very bad decisions from Zinke, but it has not prompted greater inclusion—in fact, team Zinke has continued to alienate career staff and seems to be betting that they will remain silent.

Some good investigative journalism and a lot of Freedom of Information Act disclosures have shown that only industry representatives get meetings with the top brass, decisions are made without input from career staff, censorship (especially of climate change related science) is on the upswing, science is routinely ignored or questioned, and expert advisory boards are being ignored, suspended, or disbanded.

All of this adds up to an agency that is being intentionally hollowed out, with consequences for American health and safety and for our nation’s treasured lands and wildlife. Americans are clamoring for more information on how their businesses, lands, and communities can address the climate impacts they see all year round—but DOI scientists responding to the survey pointed to how Zinke is slowly shutting down the Landscape Conservation Cooperatives (LCCs) that deliver that information. Congress provided Zinke with the money to keep growing the LCCs, but he continues to let them wither on the vine just as they are providing important and timely support for communities in need.

As the Federal Trustee for American Indians and Alaska Natives, Interior should be expected to support tribes and villages in need of resources and capacity for relocating or addressing dramatic climate change impacts, but Zinke is leaving them to fend for themselves despite a bipartisan call to get them out of harm’s way.

As the land manager for America’s most treasured landscapes, Interior is expected to be an effective steward of our National Parks and other areas dedicated to conservation, recreation, and the protection of wildlife habitat. Instead, Zinke ordered the largest reduction in conservation lands in our nation’s history when he shrank Bears Ears National Monument by 85% and Grand Staircase-Escalante National Monument by nearly half. Scientists responding to the survey referred to these decisions as lacking scientific justification. Thanks to recently disclosed documents and emails, we now know that science was pushed aside and the real reason for shrinking the Monuments was to encourage oil and gas extraction in those locations, despite Zinke’s emphatic statements to the contrary. The most damning evidence? The new maps for these shrunken Monuments match the maps that industry lobbyists provided for him. This is yet another insult to the American Indians for whom this area is sacred.

While this is consistent with the Administration’s goal of hobbling federal agencies and opening the door for industry donors, it is not consistent with the use of taxpayer dollars to protect national assets and address health and safety needs, and it is not consistent with the role of public servant. The UCS survey results are a damning indication of the depth of dysfunction that Ryan Zinke has fostered at Interior, and it is essential that Congress implement its important oversight role to prevent the rot from spreading still further.

Happy 10th Birthday to the Consumer Product Safety Improvement Act!

UCS Blog - The Equation (text only) -

Photo: Valentina Powers/CC BY 2.0 (Flickr)

Since the Consumer Product Safety Improvement Act (CPSIA) became law, it has protected children from exposure to lead in toys and other items, improved the safety standards for cribs and other infant and toddler products, and created a database so that consumers have a place to go to research certain products or report safety hazards and negative experiences. Today, along with a group of other consumer and public health advocacy organizations, we celebrate the 10th anniversary of the passage of this law. I am especially grateful that this act was passed a decade ago, as both a consumer advocate and an expectant mom.

Most of us might not realize it, but being a consumer now is a lot better than it would have been ten years ago.

When I sat down to begin the process of making a baby registry several months back, I didn’t know quite what to expect. With so many decisions to make about products that were going to be used by the person I already hold most dear in this world, I felt the anxiety begin to build. Perhaps I knew a little bit too much about how chemicals can slip through the regulatory cracks and end up on the market or how some companies deliberately manipulate the science in order to keep us in the dark about the safety of their products. But as I began to do research on children’s products, I ran into some pretty neat bits of information and have the Consumer Product Safety Improvement Act to thank.

First, cribs all have to meet conformity standards that the CPSC developed in 2011. The rule bars manufacturers from selling drop-side cribs and requires strengthened crib slats and mattress supports, improved hardware quality, and more rigorous testing of cribs before sale. This means if a crib is for sale anywhere in the US, it has been accredited by a CPSC-approved body and meets distinct safety requirements, so that not only can your baby sleep safely but parents can sleep soundly (insert joke about parents and lack of sleep here). Between 2006-2008 and 2012-2014, the share of crib-related deaths attributed to crib integrity, as opposed to hazardous crib surroundings, decreased from 32 percent to 10 percent.

This isn’t the only product type for which CPSC has created standards in the past 10 years. So far, CPSC has written rules for play yards, baby walkers, baby bath seats, children’s portable bed rails, strollers, toddler beds, infant swings, handheld infant carriers, soft infant carriers, framed infant carriers, bassinets, cradles, portable hook-on chairs, infant sling carriers, infant bouncer seats, high chairs, and most recently it approved standards for baby changing tables this summer.

Next, I can rest assured that no baby products contain dangerous levels of phthalates, a class of reproductive toxins, because of a provision in CPSIA that permanently restricted a total of eight types of phthalates in children’s toys and child care articles to a very strict limit of 0.1%. It also established a Chronic Hazard Advisory Panel of experts to review the science on phthalates that would eventually inform a CPSC final rule. This rule was issued in October 2017 and became effective in April 2018.

I can also be sure that the toys purchased for my child will not contain unsafe levels of the developmental toxin lead, as long as they were tested and accredited by a CPSC-approved entity. As of 2011, the CPSIA limited the amount of lead that can be in children’s products to 100 ppm. And once we found that perfect paint color for the walls after hours of staring at violet swatches, I didn’t need to worry about its lead content, since the CPSIA set the limit for paint and some furniture that contains paint at 0.009 percent (90 ppm).

Finally, when in doubt, I discovered I can query the database to check whether there have been reports of a product’s hazards, or head over to double check that a product I’m planning on buying doesn’t have any recall notices on it.

There’s clearly been a lot of progress since the CPSIA was passed a decade ago, and I have to say, I feel fortunate that I’m beginning the parenting stage of my life as many of its provisions are being fully implemented. In all my reading on pregnancy and parenting, I’ve learned that there are only so many things you can control before your child arrives. The safety of my home is one of those things, so I’m thankful that the CPSIA has given me the ability to make informed decisions about the products with which I’m furnishing my child’s room.

And as I wear my Union of Concerned Scientists hat, I’m also encouraged that the CPSIA gave the agency the space to ensure that its scientists were able to do their work without fear of interference, including whistleblower protections. As the CPSC embarks upon its next ten years of ensuring the goals of the CPSIA are fully realized, we urge the agency to continue to enforce its safety standards, ensure that manufacturers of recalled products are held accountable, and educate the public about its product hazard database and other tools for reporting and researching harmful products. Unrelatedly, the agency should also continue to stay weird on Twitter, because its memes bring joy to all. Case in point below.

Photo credit: twitter/US CPSC

The Good, the Bad, and the Ugly: The Results of Our 2018 Federal Scientists Survey

UCS Blog - The Equation (text only) -

Photo: Virginia State Parks/CC BY 2.0 (Flickr)

In February and March of this year, the Union of Concerned Scientists (UCS) conducted a survey of federal scientists to ask about the state of science over the past year, and the results are in. Scientists and their work are being hampered by political interference, workforce reductions, censorship, and other issues, but the federal scientific workforce is resilient and continuing to stand up for the use of science in policy decisions.

This survey was conducted in partnership with Iowa State University’s Center for Survey Statistics and Methodology, building upon prior surveys conducted by UCS since 2005. However, this year’s survey is unique in that it is the largest UCS has conducted to date (sent to over 63,000 federal employees across 16 federal agencies, offices, and bureaus), and it is the first survey to our knowledge to gauge employees’ perceptions of the Trump administration’s use of science in decisionmaking processes.

The Trump administration’s record on science on a number of issues in multiple agencies is abysmal. Anyone who has paid attention to the news even slightly will know this. Therefore, my expectation was that the surveyed scientists and scientific experts would report that they were working in a hostile environment, that they were encountering numerous barriers to doing and communicating science, and that too many scientists were leaving the federal workforce. And while many respondents did report these negative issues, many also reported a lot of good work that is happening.

To be certain, some agencies seem to be faring better than others. Respondents from the National Oceanic and Atmospheric Administration (NOAA), the Centers for Disease Control and Prevention (CDC), and the Food and Drug Administration (FDA) reported better working environments and leadership conducive to continuing the science-based work that informs decisionmaking at their agencies. However, respondents from bureaus at the Department of the Interior (DOI) as well as the Environmental Protection Agency (EPA) seem to be having a difficult time with political interference, maintaining professional development, and censorship, to name a few issues illustrated by this survey. This agency-level variation, as well as variation in response rates across surveyed agencies, should be considered when interpreting results across all agencies.

Below, I highlight some results of this year’s survey, but you can also find all of the results, methodology, quotes from surveyed scientists, and more at

The Ugly: Political interference in science-based decisionmaking

The Trump administration has been no stranger to interfering with science-based processes at federal agencies. For example, both Ryan Zinke and Scott Pruitt changed the review processes of science-based grants such that they are critiqued based on how well they fit the administration’s political agenda instead of their intellectual merit. UCS also discovered through a Freedom of Information Act (FOIA) request that the White House interfered in the publication of a study about the health effects of a group of hazardous chemicals found in drinking water and household products throughout the United States.

Surveyed scientists and scientific experts in our 2018 survey noted that political interference is one of the greatest barriers to science-based decisionmaking at their agencies. In a multiple-response survey question in which respondents chose up to three barriers to decisionmaking, those ranked at the top were: influence of political appointees in your agency or department, influence of the White House, limited staff capacity, delay in leadership making a decision, and absence of leadership with needed scientific expertise. This result differed from our 2015 survey, in which respondents reported that limited staff capacity and complexity of the scientific issue were the top barriers—influence of other agencies or the administration, as it was phrased in our 2015 survey, was not identified as a top barrier. One respondent from the EPA noted that political interference is undoing scientific processes: “…efforts are being made at the highest levels to unwind the good work that has been done, using scientifically questionable approaches to get answers that will support the outcomes desired by top agency leadership.”

Many respondents also reported issues of censorship, especially in regard to climate change science. In total, 631 respondents reported that they have been asked or told to omit the phrase “climate change” from their work. A total of 703 respondents reported that they had avoided working on climate change or using the phrase “climate change” without explicit orders to do so. But it is not only climate change—over 1,000 responding scientists and scientific experts reported that they have been asked or told to omit certain words in their scientific work because they are viewed as politically contentious. One respondent from the US Department of Agriculture (USDA) noted that scientists studying pollinator health are being scrutinized: “We have scientists at my location that deal with insect pollinator issues, and there appears to be some suppression of work on that topic, in that supervisors question the contents of manuscripts, involvement in certain types of research, and participation in public presentation of the research. It has not eliminated the work of those scientists, but their involvement in those areas is highly scrutinized.”

The Bad: The scientific workforce is likely dwindling

Nearly 80% of respondents (3,266 in total) noticed workforce reductions due to staff departures, hiring freezes, and/or retirement buyouts. Of those who noticed workforce reductions, nearly 90% (2,852 in total) reported that these reductions make it difficult for them to fulfill their agency’s science-based mission. A respondent from the Fish and Wildlife Service summed up the issue: “Many key positions remain unfulfilled, divisions are understaffed, and process has slowed to a crawl.”

As of June 2018, the 18th month of his administration, President Trump had filled 25 of the 83 government posts that the National Academy of Sciences designates as “scientist appointees.” Maybe now that President Trump has nominated meteorologist Kelvin Droegemeier to lead the White House’s Office of Science and Technology Policy, we will see other scientific appointments as well. For now, agencies that are understaffed and that do not have leadership with needed scientific expertise will likely continue to have a difficult time getting their scientific work completed.

The Good: The scientific workforce is resilient

While 38% of those surveyed (1,628 respondents in total) reported that the effectiveness of their division or office has decreased over the past year, 15% (643 respondents in total) reported an increase in effectiveness and 38% (1,567 respondents in total) reported no change over the past year. It is still not a good sign that over 1,000 scientists and scientific experts report that the effectiveness of their office or division has decreased under the Trump administration, but it is also good to see that a number of scientists and scientific experts are still able to continue their important work.

Further, a majority of respondents (64%; 2,452 respondents in total) reported that their agencies are adhering to their scientific integrity policies and that they are receiving adequate training on them. While those surveyed reported on barriers to science-based decisionmaking such as those described above and more that fall outside of the scope of these policies, it is still a step forward to see that the federal scientific workforce knows about the policies and perceives them to be followed. Many responding scientists reported that they are doing the best work they can under this administration. As one respondent from the US Geological Survey (USGS) said, “USGS scientific integrity guidelines are among the best in the federal service. They are robust and followed by the agency. What happens at the political level is another story.”

There is still work to do

Some scientists are continuing to get their work done and others are having a difficult time. Many scientists see their leadership as a barrier to their science-based work, whereas some scientists think their leadership recognizes the importance of science to their agency’s mission.

However, when hundreds to thousands of scientists report that there is political interference in their work, that they fear using certain terms like “climate change,” or that they are seeing funds directed away from work viewed as politically contentious – this is an ugly side of this administration’s treatment of science. Those numbers should be as close to zero as possible, because when science takes a back seat to political whims, the health and safety of the American people suffer.

What’s New with NextGrid?

UCS Blog - The Equation (text only) -

Photo: UniEnergy Technologies/Wikimedia

Last year, the Illinois Commerce Commission (ICC) launched NextGrid, a collaboration among key stakeholders to create a shared base of information on electric utility industry issues and opportunities around grid modernization. NextGrid, formally the Illinois Utility of the Future Study, is being managed by the University of Illinois and consists of seven working groups composed of subject matter experts, utilities, business interests, and environmental organizations. The Union of Concerned Scientists is a member of two of these working groups.

The working groups have been tasked with identifying solutions to address challenges facing Illinois as it moves into the next stage of electric grid modernization, including the use of new technologies and policies to improve the state’s electric grid. The groups’ work will culminate in a draft report to be released in late 2018.

So, what is grid modernization? And what’s at stake with the NextGrid process in Illinois?

Illinois’ energy challenges

Our current grid was built decades ago and designed primarily for transmitting electricity from large, centralized power plants, such as coal- and natural gas-fired plants. New technologies, like wind and solar, are making this approach to electricity transmission and its related infrastructure outdated. If we don’t modernize the grid now, we risk over-relying on natural gas when we should be taking advantage of renewable energy sources that are cleaner and more affordable.

As a result, utilities and states around the country are embarking on grid modernization processes. There are two main components of a modern grid: the first is data communication, which Illinois has addressed with the rapid deployment of smart meters over the last several years. Smart meters give customers access to more information about their energy use, and allow utilities to offer different programs and options to customers as well as to more efficiently address outages.

A second key component of a modern grid is the incorporation of higher levels of renewables and energy efficiency. The Future Energy Jobs Act (FEJA), which became law in 2016, fixed flaws in the state’s Renewable Portfolio Standard (RPS) by ensuring stable and predictable funding for renewable development, and that new solar and wind power will be built in Illinois. FEJA also greatly increased the state’s energy efficiency targets. With respect to solar in particular, FEJA directed the state to create a community solar program, as well as the Illinois Solar for All program, which will enable many more people who may not be in a position to install panels on their own rooftops to participate in solar power. Overall, FEJA is moving Illinois towards a more modern grid.

NextGrid builds on these Illinois clean energy efforts. The NextGrid study will examine trends in electricity production, usage, and emerging technologies on the customer and utility sides of the meter that drive the need to consider changes in policy and grid technology.

What has been discussed so far?

To ensure clean energy is a prominent part of the solutions being discussed in NextGrid, UCS is participating in two of the seven working groups: Regulatory and Environmental Policy Issues and Ratemaking.

Some of the key topics discussed so far include:

  • The increased adoption of distributed energy resources (DER), which include solar, storage and demand management. DER adoption is increasing due to the Future Energy Jobs Act (FEJA). There will be a significant change in the electricity load from DER, and as a result, utilities need to engage in planning and investment that incorporates them.
  • Energy storage has been highlighted for its ability to increase grid reliability and resilience. As the costs of energy storage technology such as batteries continue to fall, they are becoming a viable answer to many grid modernization challenges.
  • Time-of-use pricing programs that have fewer daily price fluctuations allow users more consistency in making consumption decisions. Our Flipping the Switch Report outlines the benefits of time-varying rates.

The NextGrid process has the potential to shape Illinois’s energy future and serve as a roadmap to different options. We need to ensure that clean energy plays a central role in this roadmap.

How can you get involved?

On June 14, the ICC held a public comment session in Chicago to provide stakeholders and the public with information on the progress of the study.  UCS Science Network Member Warren Lavey provided public comment at the session noting that Illinois should explore additional time-varying pricing options and energy storage. Pursuing these policies would save money for customers and providers, enhance grid reliability and flexibility, and protect human health and the environment. More time-varying pricing options and cost-effective energy storage would build on Illinois’ investments in and policies supporting renewable energy systems and smart meters. These reforms would also strengthen the state’s infrastructure for electric vehicles and other developments.

Two more public comment sessions will be held this week, August 15 in Urbana and August 16 in Carbondale. Participants will have the opportunity to ask questions and offer written and verbal comments to be considered by the commission as they develop the NextGrid report. The draft report is set for release this fall, and the public has the opportunity to weigh in again by commenting on draft working group chapters as they are posted on the NextGrid website.

What UCS wants to see in the final report

The final report should include an actionable roadmap of clean energy options that builds on Illinois’ successes to date.

The final report should also elevate the need for an equitable transition away from fossil fuels, and the benefits of expanding equitable access to solar and energy storage technologies. Ideally, the report will identify ways to increase the deployment of energy storage across Illinois, with the goal of integrating higher levels of renewable energy onto the grid.

Finally, the report should include a discussion of additional opportunities for user-friendly time-varying rates that could be considered, which will benefit the grid operator, consumers, and the environment. We want the NextGrid process to provide a clear pathway for Illinois to continue being a leader in clean energy and modern grid development.


Obstruction of Injustice: Making Mountains out of Molehills at the Cooper Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

The initial commentary in this series of posts described how a three-person panel formed by the Nuclear Regulatory Commission (NRC) to evaluate concerns raised by an NRC worker concluded that the agency violated its procedures, policies, and practices by closing out a safety issue and returning the Columbia Generating Station to normal regulatory oversight without proper justification.

I had received the non-public report by the panel in the mail. That envelope actually contained multiple panel reports. This commentary addresses a second report from another three-person panel. None of the members of this panel served on the Columbia Generating Station panel. Whereas that panel investigated contentions that NRC improperly dismissed safety concerns, this panel investigated contentions that the NRC improperly sanctioned Cooper for issues that did not violate any federal regulations or requirements. This panel also substantiated the contentions and concluded that the NRC lacked justification for its actions. When will the injustices end?

Mountains at Cooper

The NRC conducted its Problem Identification and Resolution inspection at the Cooper nuclear plant in Brownville, Nebraska, from June 12 through June 29, 2017. The resulting inspection report, dated August 7, 2017, identified five violations of regulatory requirements.

An NRC staffer subsequently submitted a Differing Professional Opinion (DPO) contending that the violations were inappropriate. The basis for this contention was that there were no regulatory requirements applicable to the issues; thus, an owner could not possibly violate a non-existent requirement.

Molehills at Cooper

Per procedure, the NRC formed a three-person panel to evaluate the contentions raised in the DPO. The DPO Panel evaluated the five violations cited in the August 7, 2017, inspection report.


  • Molehill #1: The inspection report included a GREEN finding for a violation of Criterion XVI in Appendix B to 10 CFR Part 50. Appendix B contains 18 quality assurance requirements. Criterion XVI requires owners to identify conditions adverse to quality (e.g., component failures, procedure deficiencies, equipment malfunctions, material defects, etc.) and fix them in a timely and effective manner. The DPO Panel “…determined that this issue does not represent a violation of 10 CFR 50 Appendix B, Criterion XVI, inasmuch as the licensee identified the cause and implemented corrective actions to preclude repetition.” In other words, one cannot violate a regulation when doing precisely what the regulation says to do.
  • Molehill #2: The inspection report included a GREEN finding for a violation of a technical specification requirement to provide evaluations of degraded components in a timely manner. The DPO Panel “…concluded that this issue does not represent a violation of regulatory requirements.” This is a slightly different molehill. Molehill #1 involved not violating a requirement when one does exactly what the requirement says. Molehill #2 involved not violating a requirement that simply does not exist. A different kind of molehill, but a molehill nonetheless.
  • Molehill #3: The inspection report included another GREEN finding for another violation of Criterion XVI in Appendix B to 10 CFR Part 50. This time, the report contended that the plant owner failed to promptly identify adverse quality trends. The DPO Panel “concluded that monitoring for trends is not a requirement of Criterion XVI,” reprising Molehill #2.
  • Mountain #1: The inspection report included another GREEN finding for failure to monitor emergency diesel generator performance shortcomings as required by the Maintenance Rule. The DPO Panel “…determined that the violation was correct as written and should not be retracted.” As my grandfather often said, even a blind squirrel finds an acorn every now and then.
  • Molehill #4: The inspection report included a Severity Level IV violation for violating 10 CFR Part 21 by not reporting a substantial safety hazard. The DPO Panel discovered that the substantial safety hazard was indeed reported to the NRC by the owner within specified time frames. The owner submitted a Licensee Event Report per 10 CFR 50.72. 10 CFR Part 21 and NRC’s internal procedures explicitly allow owners to forego submitting a duplicate report when they have reported the substantial safety hazard via 10 CFR 50.72. The DPO Panel recommended that “…consideration be given to retracting the violation … because it had no impact on the ability of the NRC to provide regulatory oversight.”

The DPO Panel wrote in the cover letter transmitting their report to the NRC Region IV Regional Administrator:

After considerable review effort, the Panel disagreed, at least in part, with the conclusions documented in the Cooper Nuclear Station Problem Identification and Resolution Inspection Report for four of the five findings.

The DPO Panel report was dated April 13, 2018. As of August 8, 2018, I could find no evidence that NRC Region IV has either remedied the miscues identified by the DPO originator and confirmed by the DPO Panel, or explained why sanctioning plant owners for following regulations is justified.

UCS Perspective

At Columbia Generating Station, NRC Region IV made a molehill out of a mountain by finding, and then overlooking, that the plant owner’s efforts were “grossly inadequate” (quoting that DPO Panel’s conclusion).

At Cooper Nuclear Station, NRC Region IV made mountains out of molehills by sanctioning the owner for violating non-existent requirements or for doing precisely what the regulations required.

Two half-hearted (substitute any other body part desired, although “elbow” doesn’t work so well) efforts don’t make one whole-hearted outcome. These two wrongs do not average out to average just right regulation.

NRC Region IV must be fixed. It must be made to see mountains as mountains and molehills as molehills. Confusing the two is unacceptable.

Mountains and molehills (M&Ms). M&Ms should be a candy treat and not a regulatory trick.

NOTE: NRC Region IV’s deplorable performance at Columbia and Cooper might have remained undetected and uncorrected but for the courage and conviction of NRC staffer(s) who put career(s) on the line by formally contesting the agency’s actions. When submitting DPOs, the originators have the option of making the final DPO package publicly available or not. In these two cases, I received the DPO Panel reports before the DPOs were closed. I do not know the identity of the DPO originator(s) and do not know whether the person(s) opted to make the final DPO packages (which consist of the original DPO, the DPO Panel report, and the agency’s final decision on the DPO issues) public or not. If the DPO originator(s) wanted to keep the DPO packages non-public, I betrayed that choice by posting the DPO Panel reports. If that’s the case, I apologize to the DPO originator(s). While my intentions were good, I would have abided by personal choice had I had any way to discern what it was.

Either way, it is hoped that putting a spotlight on the issues has positive outcomes in these two DPOs as well as in lessening the need for future DPOs and posts about obstruction of injustice.

In the Final Stretch of the Farm Bill, Keep an Eye on Crop Insurance. (Crop Insurance?)

UCS Blog - The Equation (text only) -

A drought-stricken soybean field in Texas Photo: Bob Nichols, USDA/CC BY 2.0 (Flickr)

You’re not a farmer, but you’re invested in crop insurance.

The chances that you are a farmer are slim. After all, there are only 2.1 million farms in a nation of 323.1 million people. Yet, you are deeply invested in the nation’s farming enterprise. As a taxpayer, you back U.S. agriculture by financing a range of government programs that hover around $20 billion annually. Those tax dollars fund such things as price supports, research, marketing and crop insurance.

The case for crop insurance

It is in the interest of the 99% of us who don’t farm to help protect family farmers against two major hazards that are outside their control: market downturns and weather disasters. We do this through a “farm safety net” that consists of coupling price supports for agricultural commodities with crop and livestock insurance. Over 300 million acres are covered for $100 billion of insured liability annually. The legislative vehicle that authorizes these federal programs is a “farm bill” that is renewed every five years. The current iteration is due to be renewed by September 30 of this year.

It is a game of “Who is going to get your money?”

If—amid the current swirl of political news—you’ve not been following the scintillating path of the Farm Bill through Congress, the current status is that each chamber has passed dramatically different drafts of the bill. If Congress is to meet its deadline for reauthorization, it needs to reconcile the differing versions of the farm bill within the next few weeks. The two versions differ on whether to make the bill more equitable for family farmers and those seeking to get into farming, as the Senate version proposes, or to make it easier to abuse and defraud taxpayers to further enrich a very few already wealthy farmers, which the House version would enable. Specifically, the Senate version would set limits on the total subsidy payments that farms would be eligible to receive—at $250,000 per year per farm. Coupled with this is a measure to prevent the wealthiest of farmers from drawing on public support that they do not actually need. The cut-off for eligibility would be reduced from the present $900,000 annual Adjusted Gross Income (AGI) per farmer to $700,000. Additionally, the Senate version proposes tying eligibility for insurance benefits to the effectiveness of conservation practices.

These are welcome adjustments, even though they still fall short of the comprehensive reform needed to prevent open abuse of the farm safety net. For example, an earlier effort to reduce the insurance premium subsidy drawn by farmers with an AGI greater than $700,000 was defeated. Yes, the federal government doles out insurance payouts to farmers, plus the majority of the cost of their insurance premiums! More on the rationale for this in a bit, but the point here is that payment limit measures would level the playing field for small and medium family farms. This is just one of the issues that pits the interests of these farmers—and of taxpayers and fiscal conservatives—against the political power of large farmers and their agribusiness backers. As for the House version of the Farm Bill? Not only does it not include these sensible—if mild—reforms, it brazenly creates loopholes that would have non-farming relatives become eligible for “per farmer” benefits.

We’ve done that. It doesn’t work. Shall we try something different?

If we keep doing more of the same, the cost of insurance will balloon and make some wealthy people even richer—but it doesn’t have to. While the rationale for public support of family farmers is self-evident, in practice our crop insurance policies could be better. Over the past five years, federal crop insurance cost American taxpayers an average of $9 billion annually, according to analysis from the Congressional Budget Office (CBO). Drought and flood damage accounted for 72% of insurance payouts between 2001 and 2015, per accounting from the USDA Risk Management Agency (RMA). Climate change will only make this worse, as more frequent and extreme weather episodes drive up costs for the program. The CBO estimates—using scenarios developed by the Intergovernmental Panel on Climate Change—that crop insurance costs will increase by $1 billion annually through 2080.

This upward spiral is compounded by the fact that our current policy incentivizes waste—because it focuses on production regardless of environmental and other costs—instead of adoption of well-known, scientifically sound production practices that can minimize crop losses even under climate extremes. Adoption of the latter practices would result in a more resilient agricultural system that would reduce farm losses and the need for, and expense of, insurance to the public. We therefore should incentivize these kinds of scientifically informed and fiscally responsible systems. While the 2014 farm bill intended to do just this by requiring “conservation compliance,” the Office of the Inspector General has found that such compliance is weakly enforced.

What would make more sense?

It is reasonable for the public to expect the best farming practices in return for the farm safety net that their tax dollars provide. In fact, this could be done by connecting the different parts (“titles”) of the farm bill so they work together. For example, the Research Title generates information about the most sustainable farming practices, which are supported in large measure by the Conservation Title. Better coordination of the Crop Insurance Title with these two would make the entire farm bill more coherent and should reduce total costs to farmers, taxpayers and the environment.

To understand how we might do this, consider the nation’s “corn belt,” an expanse of 180 million acres dominated by a lawn of corn and soybeans, each grown in extensive “monocrops” (swaths of homogeneous stands of a single crop). As currently managed, these systems promote soil degradation and soil loss, water pollution and runaway pest crises. In turn, this exposes farmers (and all of us, as their underwriters) to the risk inherent in betting on a single system to be successful under all circumstances all the time. Every one of the environmental crises listed above can be mitigated, if not eliminated, by adoption of well-researched “agroecological” methods. We can drive this shift in farm management by tying eligibility for government programs, including crop insurance, to verified implementation of practices that conserve soil, build soil health, sequester carbon and increase biodiversity. These practices make farming systems more resilient to weather extremes and are more profitable to farmers because they reduce farmer reliance on purchased inputs. Further, more resilient farms would rely less on government supports like the federal crop and livestock insurance programs.

Perverse loopholes instead further enrich the largest farmers

At present, however, loopholes in our policies permit the largest and most profitable farms to receive both windfall payments and a disproportionate amount of farm bill subsidies. Because the current system rewards production, and not resilience, the result is that it is “large, very large and corporate farms,” just 4% of farms, that are the greatest beneficiaries of the public’s support. These farms account for 55% of US agricultural output and earn $1 million or more in gross farm cash income each year. Such farms face large risks, of course, but they don’t need public support to afford their insurance costs. It isn’t just that the public provides farmers insurance, but that we make it cheap insurance. The reason is that taxpayers subsidize 60% of crop insurance premiums. This is intended to incentivize farmers to buy insurance rather than force the government to come up with unbudgeted emergency payments every time major disasters strike. In practice, however, this has served to concentrate wealth. Those 4% of farms receiving the lion’s share of farm bill benefits have an operating profit margin greater than 10%. In contrast, the majority of small and midsize family farms—those which could readily adopt more diverse crop and livestock production methods, and which account for 45% of the nation’s farm assets—operate with a profit margin less than 10%. Those are the farmers who actually need the public’s support. It is a situation that clearly calls for payment limits to cap the amount of farm bill benefits that any one farm can receive.
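To make the premium-subsidy arithmetic concrete, here is a minimal sketch in Python. It assumes only the 60% average subsidy rate cited above; the premium amount itself is hypothetical:

```python
# Who pays for a crop insurance premium, assuming the 60% subsidy rate above.
SUBSIDY_RATE = 0.60
premium = 10_000              # hypothetical annual premium, in dollars

taxpayer_share = premium * SUBSIDY_RATE      # covered by the public
farmer_share = premium - taxpayer_share      # paid by the farmer

print(f"taxpayers pay ${taxpayer_share:,.0f}, farmer pays ${farmer_share:,.0f}")
# taxpayers pay $6,000, farmer pays $4,000
```

In other words, for every dollar of premium, the public quietly picks up sixty cents, regardless of whether the farm in question needs the help.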

Farmers can adopt and manage more resilient systems, and we should reward them for that

The 2014 Farm Bill—the most recent—introduced “Whole Farm Revenue” insurance for farmers wishing to diversify their farms (produce a variety of crops and livestock in integrated fashion). Diversified farming systems protect farmers from catastrophic losses the same way diversified stock portfolios protect investors. Such systems tend to protect soil, filter and better store water, recycle and make better use of fertilizer nutrients, have fewer pest problems (and thereby require fewer pesticides), and result in lower costs and higher profits. Further, because fewer external inputs (such as chemical fertilizers and pesticides) are purchased, farmers earn more, and more of those earnings are recirculated in the local rural economy. However, under our existing risk management approach, these systems have proven more difficult to insure than large monocrops. The latter have long actuarial records, permitting insurers to set premiums with greater certainty, and are familiar to and therefore preferred by bankers and Farm Service Agency personnel. But this is counterproductive, as it discourages the best farming practices and encourages the worst. Barriers such as these, and those encountered by new and beginning farmers (who must establish a credit and cropping history to gain access to loans and insurance premium discounts), must be lowered through more informed farm bill criteria. The Whole Farm Revenue insurance program is one step towards incentivizing resilient diversified systems. Programs to support beginning and younger farmers, who are also more likely to use diversified systems, are another way to build more resilient farms. The Senate version of the current Farm Bill attempts to address these issues.

What you can do:

Demand That Members of Congress Who Will Reconcile the House and Senate Farm Bills Make Your Financial Backing of Farm Programs More Effective, Responsible and Equitable

Sign On: Even though the Farm Bill programs described above are directed to farmers, we all have a stake. As taxpayers, we back these programs and—as we’ve seen—it is important that the programs be equitable and balance production with environmental responsibility and resilience. You can help make it clear to Congress that you strongly support these goals by signing our statement urging farm bill conferees to adopt the Senate version of the bill. The “conferees” are the 47 members of Congress who will work with the currently disparate versions of the Farm Bill and decide the form of the final legislation. We will deliver this letter and your signatures to the chairs of the Senate and House Agriculture Committees as they begin deliberations.

Tell Conferees About the Farm Safety Net You Want: Members of Congress are visiting their districts right now! During the congressional recess that will last the remainder of this month, you can visit their offices, attend their town hall meetings, or call and write the offices of the Senate conferees, as well as of the Republican and Democrat House Farm Bill conferees. Remember that as a citizen and taxpayer your representatives are bound to take your calls and letters and consider your input. This is all the more important for direct constituents of Farm Bill conferees. When you call and write, be sure to make these particular points:

  • Adopt the Senate version of the Crop Insurance title (Title XI) because it improves and streamlines the Whole Farm Revenue Insurance program. Importantly, the Senate version recognizes the need to eliminate obstacles for new farmers and the “underserved” (in the Farm Bill this—tellingly—means farmers of color.) To this end, support the House measure that defines “Beginning Farmers” as those who have farmed less than 10 years.
  • Adopt the Senate recommendation to link crop insurance eligibility with the performance of adopted conservation practices.
  • Make the farm safety net more equitable by closing loopholes in the Commodity Title (Title I) that permit abuse. Specifically, restrict payment eligibility to individuals actually farming; establish an AGI limit of $700,000 for eligibility for commodity payments; and set maximum commodity payments per farmer to $250,000 per year.

Hitting 1 Trillion. Think Clean Electrons, Not Stylish Electronics

UCS Blog - The Equation (text only) -

Photo: Johanna Montoya/Unsplash

You may have heard that Apple just passed the $1 trillion mark in terms of its market capitalization, the first company ever to reach those lofty heights. Less ink has been spilled on a different 1 trillion figure, but it’s one that’s well worth noting, too. According to Bloomberg NEF (BNEF), we just shot past the headline-worthy figure of 1 trillion watts (that is, 1 million megawatts, or 1,000 gigawatts) of installed wind and solar capacity worldwide. And you can bet there’ll be another trillion watts right behind.

1 trillion watts

According to BNEF, the tally by the end of second quarter of 2018 for wind and solar combined was 1,013 gigawatts (GW), or 1.013 million MW.

The path to 1 trillion (Source: Bloomberg NEF)

A few bonus noteworthy things about those data:

  • The new total is double what we had as of 2013, and more than quadruple 2010’s tally.
  • Wind has dominated the wind-solar pair for all of history (or at least since the data started in 2000), and accounts for 54% of the total, to solar’s 46%. But solar has come on strong and looks poised to take the majority very soon.
  • Offshore wind is showing up! Particularly for those of us who have been tracking that technology for a long time, that light blue stripe on the graph is a beautiful thing to see.

Meanwhile, back at the ranch

For those of us in this country trying to understand the new trillion figure, one useful piece of context might be the total installed US power plant capacity, which, as it happens, is right around that 1-trillion-watt mark. According to the US Energy Information Administration, it’s about 1.1 million MW.
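To keep the units straight, here’s a quick back-of-the-envelope check in Python. The 1,013 GW total and the 54/46 wind-solar split come from the BNEF figures cited above; the rounding to whole gigawatts is mine:

```python
# Unit conversions and implied wind/solar split for the BNEF tally.
TOTAL_GW = 1013                        # wind + solar worldwide, end of Q2 2018
WIND_SHARE, SOLAR_SHARE = 0.54, 0.46   # shares reported by BNEF

# 1 terawatt = 1,000 gigawatts = 1,000,000 megawatts
total_mw = TOTAL_GW * 1_000
wind_gw = TOTAL_GW * WIND_SHARE
solar_gw = TOTAL_GW * SOLAR_SHARE

print(f"{total_mw:,} MW total")                             # 1,013,000 MW total
print(f"wind ~{wind_gw:.0f} GW, solar ~{solar_gw:.0f} GW")  # wind ~547 GW, solar ~466 GW
```

At that scale, the comparison in the text holds up: the entire US power plant fleet, at about 1.1 million MW, is itself roughly one terawatt.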

And, in terms of the wind and solar pieces of our own power mix:

Photo: PublicSource

The next trillion

Given how fortunes wax and wane, it’s tough to guess when Apple might be hitting the $2 trillion mark.

But for solar and wind it’s hard to imagine the number doing anything but growing. And, according to BNEF’s head of analysis, Albert Cheung, relative to the first trillion, the next trillion watts (1 terawatt) of wind and solar are going to come quick and cheap:

The first terawatt cost $2.3 trillion to build and took 40 years. The second terawatt, we reckon, will take five years and will cost half as much as the first one. So that’s how quickly this industry is evolving.

Imagine that: They’re projecting $1.23 trillion for the next trillion watts of solar and wind—barely more than an Apple’s worth.


24 Space-Based Missile Defense Satellites Cannot Defend Against ICBMs

UCS Blog - All Things Nuclear (text only) -

Articles citing a classified 2011 report by the Institute for Defense Analyses (IDA) have mistakenly suggested the report finds that a constellation of only 24 satellites can be used for space-based boost-phase missile defense.

This finding would be in contrast to many other studies that have shown that a space-based boost-phase missile defense system would require hundreds of interceptors in orbit to provide thin coverage of a small country like North Korea, and a thousand or more to provide thin coverage over larger regions of the Earth.

A 2011 letter from Missile Defense Agency (MDA) Director Patrick O’Reilly providing answers to questions by then-Senator Jon Kyl clarifies that the 24-satellite constellation discussed in the IDA study is not a boost-phase missile defense system, but is instead a midcourse system designed to engage anti-ship missiles:

The system discussed by IDA appears to be a response to concerns about anti-ship ballistic missiles that China is reported to be developing. It would have far too few satellites for boost-phase defense against missiles even from North Korea, and certainly from a more sophisticated adversary.

The MDA letter says the 24 satellites might carry four interceptors each. Adding interceptors to the satellites does not fix the coverage problem, however: If one of the four interceptors is out of range, all the interceptors are out of range, since they move through orbit together. As described below, the coverage of a space-based system depends on the number of satellites and how they are arranged in orbit, as well as the ability of the interceptors they carry to reach the threat in time.

While this configuration would place four interceptors over some parts of the Earth, it would leave very large gaps in the coverage between the satellites. An attacker could easily track the satellites to know when none were overhead, and then launch missiles through the gaps. As a result, a defense constellation with gaps would realistically provide no defense.

(The IDA report is “Space Base Interceptor (SBI) Element of Ballistic Missile Defense: Review of 2011 SBI Report,” Institute for Defense Analyses, Dr. James D. Thorne, February 29, 2016.)

Why boost phase?

The advantage of intercepting during a ballistic missile’s boost phase—the first three to five minutes of flight, while its engines are burning—is that the missile is destroyed before it can release decoys and other countermeasures that greatly complicate interception during the subsequent midcourse phase, when the missile’s warhead is coasting through the vacuum of space. Because the boost phase is short, interceptors must be close enough to the launch site of target missiles to reach them during that time. This is the motivation for putting interceptors in low Earth orbits—at altitudes of a few hundred kilometers—that periodically pass over the missile’s launch site.

The fact that the interceptors must reach a boosting missile within a few minutes limits how far an interceptor can be from the launching missile and still be effective. This short time therefore limits the size of the region a given interceptor can cover to several hundred kilometers.
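The arithmetic behind that limit can be sketched with a few assumed numbers (the boost duration and detection delay below are illustrative assumptions, not figures from the IDA report):

```python
# Rough estimate of the ground radius a space-based interceptor can cover
# during boost phase. Boost time and detection delay are assumed values;
# the 4 km/s interceptor speed matches the delta-V used later in the post.
BOOST_TIME_S = 240           # ~4-minute boost for a long-range missile (assumed)
DETECT_DELAY_S = 40          # time to detect the launch and commit (assumed)
INTERCEPTOR_SPEED_KM_S = 4.0

usable_time_s = BOOST_TIME_S - DETECT_DELAY_S
reach_km = INTERCEPTOR_SPEED_KM_S * usable_time_s
print(f"coverage radius ~ {reach_km:.0f} km")  # ~800 km
```

With these inputs the reach works out to about 800 km, consistent with the "several hundred kilometers" figure and the boost-phase circle in Figure 1 below.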

An interceptor satellite in low Earth orbit cannot sit over one point on the Earth, but instead circles the Earth on its orbit. This means an interceptor that is within range of a missile launch site at one moment will quickly move out of range. As a result, having even one interceptor in the right place at the right time requires a large constellation of satellites so that as one interceptor moves out of range another one moves into range.

Multiple technical studies have shown that a space-based boost phase defense would require hundreds or thousands of orbiting satellites carrying interceptors, even to defend against a few missiles. A 2012 study by the National Academies of Science and Engineering found that space-based boost phase missile defense would cost 10 times as much as any ground-based alternative, with a price tag of $300 billion for an “austere” capability to counter a few North Korean missiles.

Designing the system instead to attack during the longer midcourse phase significantly increases the time available for the interceptor to reach its target and therefore increases the distance the interceptor can be from a launch and still get there in time. This increases the size of the region an interceptor can cover—up to several thousand kilometers (see below). Doing so reduces the number of interceptors required in the constellation from hundreds to dozens.

However, intercepting in midcourse negates the rationale for putting interceptors in space in the first place, which is being close enough to the launch site to attempt boost phase intercepts. Defending ships against anti-ship missiles would be done much better and more cheaply from the surface.

Calculation of Constellation Size

Figure 1 shows how to visualize a system intended to defend against anti-ship missiles during their midcourse phases. Consider an interceptor designed for midcourse defense on an orbit (white curve) that carries it over China (the red curve is the equator). If the interceptor is fired out of its orbit shortly after detection of the launch of an anti-ship missile with a range of about 2,000 km, it would have about 13 minutes to intercept before the missile re-entered the atmosphere. In those 13 minutes, the interceptor could travel a distance of about 3,000 km, which is the radius of the yellow circle. (This assumes ΔV = 4 km/s for the interceptor, in line with the assumptions in the National Academies of Science and Engineering study.)

The yellow circle therefore shows the size of the area this space-based midcourse interceptor could in principle defend against such an anti-ship missile.
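As a rough check on those numbers (treating the interceptor as moving at its full delta-V for the whole flight, which slightly overstates the reach):

```python
# Simple upper bound on the midcourse coverage radius in Figure 1.
flight_time_s = 13 * 60  # ~13 minutes of midcourse flight time
dv_km_s = 4.0            # interceptor delta-V, per the Academies study
reach_km = dv_km_s * flight_time_s
print(reach_km)          # 3120.0 km, close to the 3,000 km yellow circle
```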

Fig. 1.  The yellow circle shows the coverage area of a midcourse interceptor, as described in the post; it has a radius of 3,000 km. The dotted black circle shows the coverage area of a boost-phase interceptor; it has a radius of 800 km.

However, the interceptor satellite must be moving rapidly to stay in orbit. Orbital velocity is 7.6 km/s at an altitude of 500 km. In less than 15 minutes the interceptor and the region it can defend will have moved more than 6,000 km along its orbit (the white line), and will no longer be able to protect against missiles in the yellow circle in Figure 1.

To ensure an interceptor is always in the right place to defend that region, there must be multiple satellites in the same orbit so that one satellite moves into position to defend the region when the one in front of it moves out of position. For the situation described above and shown in Figure 1, that requires seven or eight satellites in the orbit.

At the same time, the Earth is rotating under the orbits. After a few hours, China will no longer lie under this orbit, so to give constant interceptor coverage of this region, there must be interceptors in additional orbits that will pass over China after the Earth has rotated. Each of these orbits must also contain seven or eight interceptor satellites. For the case shown here, only two additional orbits are required (the other two white curves in Figure 1).

Eight satellites in each of these three orbits gives a total of 24 satellites in the constellation, maintaining one or perhaps two satellites in view of the sea east of China at all times. This constellation could therefore only defend against a small number of anti-ship missiles fired essentially simultaneously. Defending against more missiles would require a larger constellation.
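A back-of-the-envelope version of this sizing, under the simplification that the coverage circles must tile the orbit end to end:

```python
import math

# Constellation sizing for the midcourse case in Figure 1.
R_EARTH_KM = 6371.0
ALT_KM = 500.0
COVER_RADIUS_KM = 3000.0  # midcourse reach from the text

r = R_EARTH_KM + ALT_KM
v_orbit = math.sqrt(398600.4 / r)       # ~7.6 km/s at 500 km altitude
circumference = 2 * math.pi * r         # ~43,000 km around the orbit
sats_per_orbit = math.ceil(circumference / (2 * COVER_RADIUS_KM))
planes = 3                              # orbits needed as Earth rotates (from text)
print(sats_per_orbit, sats_per_orbit * planes)  # 8 per orbit, 24 total
```

This simple tiling reproduces the seven-to-eight satellites per orbit and the 24-satellite total; the real geometry (overlap requirements, latitude coverage) is more involved, but the scaling is the same.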

If the interceptors are instead designed for boost-phase rather than midcourse defense, the area each interceptor could defend is much smaller. An interceptor with the same speed as the one described above could only reach out about 800 km during the boost time of a long-range missile; this is shown by the dashed black circle in Figure 1.

In this case, the interceptor covering a particular launch site will move out of range of that site very quickly—in about three and a half minutes. Maintaining one or two satellites over a launch site at these latitudes will therefore require 40 to 50 satellites in each of seven or eight orbits, for a total of 300 to 400 satellites.

The system described—40 to 50 satellites in each of seven or eight orbits—would only provide continuous coverage against launches in a narrow band of latitude, for example, over North Korea if the inclination of the orbits was 45 degrees (Fig. 2). For parts of the Earth between about 30 degrees north and south latitude there would be significant holes in the coverage. For areas above about 55 degrees north latitude, there would be no coverage. Broader coverage to include continuous coverage at other latitudes would require two to three times that many satellites—1,000 or more.

As discussed above, defending against more than one or two nearly simultaneous launches would require a much larger constellation.

Fig. 2. The figure shows the ground coverage (gray areas) of interceptor satellites in seven equally spaced orbital planes with inclination of 45°, assuming the satellites can reach laterally 800 km as they de-orbit. The two dark lines are the ground tracks of two of the satellites in neighboring planes. This constellation can provide complete ground coverage for areas between about 30° and 50° latitude (both north and south), less coverage below 30°, and no coverage above about 55°.

For additional comments on the IDA study, see Part 2 of this post.

More Comments on the IDA Boost-Phase Missile Defense Study

UCS Blog - All Things Nuclear (text only) -

Part 1 of this post discusses one aspect of the 2011 letter from Missile Defense Agency (MDA) to then-Senator Kyl about the IDA study of space-based missile defense. The letter raises several additional issues, which I comment on here.

  1. Vulnerability of missile defense satellites to anti-satellite (ASAT) attack

To be able to reach missiles shortly after launch, space-based interceptors (SBI) must be in low-altitude orbits; typical altitudes discussed are 300 to 500 km. At the low end of this range atmospheric drag is high enough to give very short orbital lifetimes for the SBI unless they carry fuel to actively compensate for the drag. That may not be needed for orbits near 500 km.

Interceptors at these low altitudes can be easily tracked using ground-based radars and optical telescopes. They can also be reached with relatively cheap short-range and medium-range missiles; if these missiles carry homing kill vehicles, such as those used for ground-based midcourse missile defenses, they could be used to destroy the space-based interceptors. Just before a long-range missile attack, an adversary could launch an anti-satellite attack on the space-based interceptors to punch a hole in the defense constellation through which the adversary could then launch a long-range missile.

Alternately, an adversary that did not want to allow the United States to deploy space-based missile defense could shoot space-based interceptors down shortly after they were deployed.

The IDA report says that the satellites could be designed to defend themselves against such attacks. How might that work?

Since the ASAT interceptor would be lighter and more maneuverable than the SBI, the satellite could not rely on maneuvering to avoid being destroyed.

A satellite carrying a single interceptor could not defend itself by attacking the ASAT, for two reasons. First, the boost phase of a short- or medium-range missile is much shorter than that of a long-range missile, and would be too short for an interceptor designed for boost-phase interception to engage. Second, even if the SBI was designed to have sensors to allow intercept in midcourse as well as boost phase, using the SBI to defend against the ASAT weapon would remove the interceptor from orbit and the ASAT weapon would have done its job by removing the working SBI from the constellation. A workable defensive strategy would require at least two interceptors in each position, one to defend against ASAT weapons and one to perform the missile defense mission.

The IDA report assumes the interceptor satellites it describes to defend ships would each carry four interceptors. If the system is meant to defend against ASAT attacks, some of the four interceptors must be designed for midcourse intercepts. The satellite could carry at most three such interceptors, since at least one interceptor must be designed for the boost-phase mission of the defense. If an adversary wanted to punch a hole in the constellation, it could launch four ASAT weapons at the satellite and overwhelm the defending interceptors (recall that the ASAT weapons are launched on relatively cheap short- or medium-range missiles).

In addition, an ASAT attack could well be successful even if the ASAT was hit by an interceptor. If an interceptor defending the SBI hit an approaching ASAT it would break the ASAT into a debris cloud that would follow the trajectory of the original center of mass of the ASAT. If this intercept happened after the ASAT weapon’s course was set to collide with the satellite, the debris cloud would continue in that direction. If debris from this cloud hit the satellite it would very likely destroy it.

  2. Multiple interceptors per satellite

It is important to keep in mind that adding multiple interceptors to a defense satellite greatly increases the satellite’s mass, which increases its launch cost and overall cost.

The vast majority of the mass of a space-based interceptor is the fuel needed to accelerate the interceptor out of its orbit and to maneuver to hit the missile (which is itself maneuvering, since during boost phase it is accelerating and steering). For example, the American Physical Society’s study assumes the empty kill vehicle of the interceptor (sensor, thrusters, valves, etc.) is only 60 kg, but the fueled interceptor would have a mass of more than 800 kg.

Adding a second interceptor to the defense satellite would add another 800 kg to the overall mass. A satellite with four interceptors and a “garage” that included the solar panels and communication equipment could have a total mass of three to four tons.
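The rocket equation shows why the fueled interceptor is so much heavier than its 60-kg kill vehicle. The specific impulse and total delta-V budget used here are assumptions for illustration, not figures from the APS study:

```python
import math

# Tsiolkovsky rocket equation: fueled mass needed for a 60-kg kill vehicle
# to achieve a given delta-V. Isp and delta-V budget are assumed values.
M_DRY_KG = 60.0      # empty kill vehicle mass, per the APS study
DELTA_V_KM_S = 8.0   # assumed total delta-V budget (deorbit plus divert)
ISP_S = 300.0        # assumed specific impulse of a storable propellant
G0_KM_S2 = 0.00981   # standard gravity, in km/s^2

v_exhaust = ISP_S * G0_KM_S2                      # ~2.9 km/s
m_fueled = M_DRY_KG * math.exp(DELTA_V_KM_S / v_exhaust)
print(f"fueled mass ~ {m_fueled:.0f} kg")         # on the order of 900 kg
```

Because the mass grows exponentially with the delta-V demanded, even modest increases in interceptor speed drive the fueled mass—and the launch cost—up sharply.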

  3. Space debris creation

Senator Kyl asked the MDA to comment on whether space-based missile defense would create “significant permanent orbital debris.” The MDA answer indicated that at least for one mechanism of debris creation (that of an intercept of a long-range missile), the system could be designed to not generate long-lived debris.

However, there are at least three different potential debris-creating mechanisms to consider:

  • Intercepting a missile with an SBI

When two compact objects collide at very high speed, the objects break into two expanding clouds of debris that follow the trajectories of the center of mass of the original objects. In this case the debris cloud from the interceptor will likely have a center of mass speed greater than Earth escape velocity (11.2 km/s) and most of the debris will therefore not go into orbit or fall back to Earth. Debris from the missile will be on a suborbital trajectory; it will fall back to Earth and not create persistent debris.

  • Using an SBI as an anti-satellite weapon

If equipped with an appropriate sensor, the space-based interceptor could home on and destroy satellites. Because of the high interceptor speed needed for boost phase defense, the SBI could reach satellites not only in low Earth orbits (LEO), but also those in semi-synchronous orbits (navigation satellites) and in geosynchronous orbits (communication and early warning satellites). Destroying a satellite on orbit could add huge amounts of persistent debris to these orbits.

At altitudes above about 800 km, where most LEO satellites orbit, the debris from a destroyed satellite would remain in orbit for decades or centuries. The lifetime of debris in geosynchronous and semi-synchronous orbits is essentially infinite.

China’s ASAT test in 2007 created more than 3,000 pieces of debris that have been tracked from the ground—these make up more than 20% of the total tracked debris in LEO. The test also created hundreds of thousands of additional pieces of debris that are too small to be tracked (smaller than about 5 cm) but that can still damage or destroy objects they hit because of their high speed.

Yet the satellite destroyed in the 2007 test had a mass of less than a ton. If a ten-ton satellite—for example, a spy satellite—were destroyed, it could create more than half a million pieces of debris larger than 1 cm in size. This one event could more than double the total amount of large debris in LEO, which would greatly increase the risk of damage to satellites.

  • Destroying an SBI with a ground-based ASAT weapon

As discussed above, an adversary might attack a space-based interceptor with a ground-based kinetic ASAT weapon. Assuming the non-fuel mass of the SBI (with garage) is 300 kg, the destruction of the satellite could create more than 50,000 orbiting objects larger than 5 mm in size.

If the SBI was orbiting at an altitude of between 400 and 500 km, the lifetime of most of these objects would be short, so this debris would not be considered persistent. However, the decay from orbit of this debris would result in an increase in the flux of debris passing through the orbit of the International Space Station (ISS), which circles the Earth at an altitude of about 400 km. Because the ISS orbits at a low altitude, it is in a region with little debris, since the residual atmospheric density causes debris to decay quickly. As a result, the additional debris from the SBI passing through this region can represent a significant increase.

In particular, if the SBI were in a 500-km orbit, the destruction of a single SBI could increase the flux of debris larger than 5 mm at the altitude of the ISS by more than 10% for three to four months (at low solar activity) or two to three months at high solar activity. An actual attack might, of course, involve destroying more than one SBI, which would increase this flux.
