Combined UCS Blogs

The Senate Will Accelerate Kelvin Droegemeier’s White House Science Advisor Nomination. That’s a Good Thing.

UCS Blog - The Equation (text only) -

Try not to breathe too easily, but the Senate is moving quickly to consider the nomination of Kelvin Droegemeier to lead the White House Office of Science and Technology Policy. And well it should. These days, this is one nomination we should all be excited about, as this Superman of science policy is sorely needed in the White House.

Many scientists cheered Dr. Droegemeier’s nomination after the White House went 19 months without a science advisor. I believe he would be a great pick for any administration, in any country.

The Office of Science and Technology Policy provides the president with advice on everything from energy to health care to pandemics. It needs a confirmed leader.

Feedback on his nomination has been almost universally positive.

A Senate committee will hold a confirmation hearing next Thursday, August 23, at 10:15 a.m. I hope that he will not only talk about his passion for scientific research, but also take a stand for the role of a robust federal scientific workforce in informing public health and environmental policy. Historically, OSTP has helped ensure that federal agencies have both the resources and the independence to use the best available science to make policy. It can do so again.

It remains to be seen whether Dr. Droegemeier will be appointed to serve as science advisor to the president as well as OSTP director; the former doesn’t require Senate confirmation. And while some suspect that the president will simply provide his science advisor with a sword to fall on, methinks that it isn’t that simple. A lack of science advice is a disadvantage for any world leader. Pretend that you’re trying to negotiate a nuclear or climate agreement: you can’t get there from here without understanding the science.

It’s important for Dr. Droegemeier to make it out okay and help end the longest drought of science advice the White House has seen in modern times.

For Washington Voters, I-1631 is a Chance to Tackle Climate Change Head On

UCS Blog - The Equation (text only) -

Photo: Troye Owens/Flickr

The magnitude of the climate challenge is daunting: a constellation of causes and impacts, promising no simple fix.

But a new proposal in Washington state has identified a powerful place to start.

I-1631, on the ballot this November, is grounded in the reality that to truly address climate change today, it’s simply no longer enough to drive down carbon emissions—communities must now also be readied for climate impacts, including those already at hand, and all those still to come.

As a result, this community-oriented, solutions-driven carbon pricing proposal is generating enthusiastic support from a broad and growing coalition across the state.

No single policy can solve all climate challenges, but I-1631 presents a critically important start. And, because it was specifically designed to prioritize those most vulnerable to climate change and the inevitable transitions to come—through intersections with jobs, health, geography, and historical social and economic inequities—the policy stands to be a powerful change for good, and that is the very best metric we’ve got.

Here, a summary of what it’s all about.

Overarching framework

I-1631 is organized around a commonsense framework: charge a fee for carbon pollution to encourage the shift toward a cleaner economy, then accelerate that transition by investing the revenues in clean energy and climate resilience.

The Clean Air, Clean Energy Initiative states:

Investments in clean air, clean energy, clean water, healthy forests, and healthy communities will facilitate the transition away from fossil fuels, reduce pollution, and create an environment that protects our children, families, and neighbors from the adverse impacts of pollution.

Funding these investments through a fee on large emitters of pollution based on the amount of pollution they contribute is fair and makes sense.

I-1631 emerged as the result of a years-long collaboration between diverse stakeholders—including labor, tribal, faith, health, environmental justice, and conservation groups—leading to a proposal that’s deeply considerate of the many and varied needs of the peoples and communities caught in the climate crossfire. The Union of Concerned Scientists is proud to have been a part of this alliance and to now support I-1631.

How it works

There are two main components to I-1631—the investments and the fee. Let’s take them in turn.

Investing in a cleaner, healthier, and more climate-resilient world.

I-1631 prioritizes climate solutions by investing in the communities, workforces, and technologies that the state will need to thrive moving forward. This means identifying and overcoming the vulnerabilities these groups face, and re-positioning the state’s economic, health, and environmental priorities to achieve a resilient and robust future.

The policy proactively approaches this by assigning collected fees to one of three investment areas, guided by a public oversight board and content-specific panels:

  • Clean Air and Clean Energy (70 percent): Projects that can deliver tens of millions of tons of emissions reductions over time, including through renewables, energy efficiency, and transportation support. Within four years, the initiative would also create a $50 million fund to support workers affected by the transition away from fossil fuels, to be replenished as needed thereafter.
  • Clean Water and Healthy Forests (25 percent): Projects that can increase the resiliency of the state’s waters and forests to climate change, like reducing flood and wildfire risks and boosting forest health.
  • Healthy Communities (5 percent): Projects that can prepare communities for the challenges caused by climate change—including by developing their capacity to directly participate in the process—and to ensure that none are disproportionately affected.
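The three-way split above can be sketched as a simple allocation. This is a minimal illustration, not language from the initiative; the $1 billion revenue figure is hypothetical.

```python
# Revenue shares specified by I-1631 for its three investment areas.
ALLOCATIONS = {
    "Clean Air and Clean Energy": 0.70,
    "Clean Water and Healthy Forests": 0.25,
    "Healthy Communities": 0.05,
}

def allocate(revenue):
    """Split a year's fee revenue across the three investment areas."""
    return {area: revenue * share for area, share in ALLOCATIONS.items()}

# Hypothetical $1 billion in annual fee revenue:
for area, amount in allocate(1_000_000_000).items():
    print(f"{area}: ${amount:,.0f}")
```

Finer-grained carve-outs (such as the 15 percent of Clean Air and Clean Energy funds for low-income energy costs, discussed below) would nest inside these top-level shares.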

Of these investments, the initiative further specifies a need to target well over a third of all funds to projects that benefit areas facing particularly high environmental burdens and population vulnerabilities, as well as projects supported by Indian tribes. This works to ensure that those who are most vulnerable are not left behind, but instead positioned to thrive in a changing world.

Another vital part of the proposal is that at least 15 percent of Clean Air and Clean Energy funds must be dedicated to alleviating increases in energy costs for low-income customers that result from pollution reduction initiatives. Without such a stipulation, the policy could lead lower-income households to feel its effects more. But instead, I-1631 directs funds to eliminate such cost increases. This could be through energy-saving investments, such as weatherizing a home, or by directly limiting costs, such as through bill assistance programs.

Qualifying light and power businesses and gas distribution businesses can, instead of paying a fee, claim an equivalent amount of credits and then directly invest in projects according to an approved clean energy investment plan.

Charging a fee for carbon pollution.

To pay for these investments, I-1631 would charge large emitters for the carbon emissions they release. In turn, the policy would send a signal to the market to spur innovation and investments in lower-carbon, less polluting alternatives.

The proposed fee begins at $15 per metric ton of carbon content in 2020 and increases by $2 per metric ton each year thereafter, plus adjustments for inflation. It is estimated to generate hundreds of millions of dollars annually.

Notably, the price does not rise indefinitely. Reflecting the intent of the fee—to achieve a climate-relevant reduction in carbon emissions—it becomes fixed once the state's 2035 greenhouse gas reduction goal of 25 percent below 1990 levels is met and the state's emissions are on a trajectory consistent with its 2050 goal of 50 percent below 1990 levels.
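The nominal fee schedule is simple arithmetic. Here is a minimal sketch under stated assumptions: it ignores the inflation adjustment and the provision that freezes the fee once the state's emissions goals are met.

```python
def carbon_fee(year, base=15.0, step=2.0, start_year=2020):
    """Nominal I-1631 fee, in dollars per metric ton of carbon content.

    Simplified sketch: ignores the inflation adjustment and the cap
    that fixes the fee once the state's emissions goals are met.
    """
    if year < start_year:
        raise ValueError("the fee begins in 2020")
    return base + step * (year - start_year)

print(carbon_fee(2020))  # 15.0
print(carbon_fee(2035))  # 45.0 (nominal, before inflation adjustments)
```

So by 2035, the target year for the first statewide goal, the nominal fee would stand at $45 per metric ton, before inflation adjustments.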

And just who is it that pays? Generally, the largest emitters in the state—fossil-fuel fired power plants, oil companies, and large industrial facilities.

However, the proposal also recognizes that Washington has some industries in direct competition with others in places without a comparable carbon fee, and thus a price on carbon could make them less competitive. As a result, the policy specifically provides select exemptions to these entities, including agriculture, pulp and paper mills, and others. The proposal also excludes coal-fired power plants that have committed to shutting down by 2025, in recognition of existing legal settlements and constraints.

Ultimately, the policy seeks to spur the state’s economy towards a forward-looking, carbon-considerate model, but to do it in such a way that workers and vulnerable communities do not end up bearing a disproportionate share of the costs.

Where it stands

Following months of organizing and signature gathering, on top of years of stakeholder engagement and collaboration, I-1631 will officially be put to vote in Washington this November.

This is not the first time carbon pricing has come up in the state; I-1631 builds from previous measures attempted in the legislature and on the ballot.

And this policy has the advantage of being designed from the ground up. It unites diverse stakeholders in common cause, and proactively addresses the fact that vulnerable communities are at risk of being hit first and worst.

What’s more, I-1631’s method of tackling the problem from both sides—charging a fee for pollution and investing funds in that which it aims to change—is an effective policy design, and a popular one at that. It wouldn’t be the first carbon pricing policy in the US, with cap-and-trade programs running in California and the Northeast, though it would be the first to employ an explicit price.

In the face of all this positive momentum, fossil fuel interests have been mounting an aggressive opposition campaign. But their desperate attempts at finding an objection that will stick—calling out threats to jobs and undue burdens on the poor—are undercut by the policy’s careful exemptions, sustained support for worker transitions, and significant direct attention paid to those who need it most.

The fact is, climate change is here, now, and communities are suffering the costs. I-1631 points a way forward, for all.

With this proposal, Washington is demonstrating that climate and community leadership can still be found if you let the people speak—heartening at a time when evidence of such leadership from the nation’s capital is itself sorely missed.

Anticipated Transient Without Scram

UCS Blog - All Things Nuclear (text only) -

Role of Regulation in Nuclear Plant Safety #8

In the mid-1960s, the nuclear safety regulator raised concerns about the reliability of the system relied upon to protect the public in the event of a reactor transient. If that system failed—or rather, failed again, since it had already failed once—the reactor core could be severely damaged (as it had been during that prior failure). The nuclear industry resisted the regulator’s efforts to manage this risk. Throughout the 1970s, the regulator and industry pursued a non-productive exchange of study and counter-study. Then the system failed again—three times in June 1980 and twice more in February 1983. The regulator adopted the Anticipated Transient Without Scram rule in June 1984. But it was too little, too late—the hazard it purported to manage had already been alleviated by other means.

Anticipated Transients

Nuclear power reactors are designed to protect workers and members of the public should anticipated transients and credible accidents occur. Nuclear Energy Activist Toolkit #17 explained the difference between transients and accidents. Anticipated transients include the failure of a pump while running and the inadvertent closure of a valve that interrupts the flow of makeup water to the reactor vessel.

The design responses to some anticipated transients involve automatic reductions of the reactor power level. Anticipated transients upset the balance achieved during steady state reactor operation—the automatic power reductions make it easier to restore balance and end the transient.


For other transients and for transients where power reductions do not successfully restore balance, the reactor protection system is designed to automatically insert control rods that stop the nuclear chain reaction. This rapid insertion of control rods is called “scram” or “reactor trip” in the industry. Nuclear Energy Activist Toolkit #11 described the role of the reactor protection system.

Scram was considered to be the ultimate solution to any transient problems. Automatic power reductions and other automatic actions might mitigate a transient such that scram is not necessary. But if invoked, scram ended any transient and placed the reactor in a safe condition—or so it was believed.

Anticipated Transient Without Scram (ATWS)

Dr. Stephen H. Hanauer was appointed to the NRC’s Advisory Committee on Reactor Safeguards (ACRS) in 1965. (Actually, the ACRS was part of the Atomic Energy Commission (AEC) in those days; the Nuclear Regulatory Commission (NRC) did not exist until January 1975, when the Energy Reorganization Act of 1974 split the AEC into the NRC and what is today the Department of Energy.) During reviews of applications for reactor operating licenses in 1966 and 1967, Hanauer advocated separating the instrumentation systems used to control the reactor from the instrumentation systems used to protect it (i.e., to trigger automatic scrams). Failure of such a shared system caused an accident on November 18, 1958, at the High Temperature Reactor Experiment No. 3 in Idaho.

The nuclear industry and its proponents downplayed the concerns on the grounds that the chances of an accident were so small and the reliability of the mitigation systems so high that safety was good enough. Dr. Alvin Weinberg, Director of the Oak Ridge National Laboratory, and Dr. Chauncey Starr, Dean of Engineering at UCLA, publicly contended that the chances of a serious reactor accident were similar to those of a jet airliner plunging into Yankee Stadium during a World Series game.

In February 1969, E. P. Epler, a consultant to the ACRS, pointed out that common cause failure could impair the reactor protection system and prevent the scram from occurring. The AEC undertook two efforts in response to the observation: (1) examine mechanisms and associated likelihoods that a scram would not happen when needed, and (2) evaluate the consequences of anticipated transients without scrams (ATWS).

The AEC published WASH-1270, “Technical Report on Anticipated Transients Without Scram,” in September 1973. Among other things, this report established the objective that the chances of an ATWS event leading to serious offsite consequences should be less than 1×10⁻⁷ per reactor-year. For a fleet of 100 reactors, meeting that objective translates into one ATWS accident every 100,000 years—fairly low risk.
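The fleet-level translation of that objective is straightforward arithmetic; a quick check, using the 100-reactor fleet size cited above:

```python
objective = 1e-7   # WASH-1270 objective: serious ATWS events per reactor-year
fleet_size = 100   # fleet size used in the comparison above

fleet_rate = objective * fleet_size   # expected serious ATWS events per year
years_between_events = 1 / fleet_rate
print(round(years_between_events))    # one expected accident per 100,000 years
```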

The AEC had the equivalent of a speed limit sign but lacked speedometers or radar guns. Some argued that existing designs had failure rates as high as 1×10⁻³ per reactor-year—10,000 times higher than the safety objective. Others argued that existing designs had failure rates considerably lower than 1×10⁻⁷ per reactor-year. The lack of riskometers and risk guns fostered a debate that pre-dated the “tastes great, less filling” debate fabricated years later to sell Miller Lite beer.

An article titled “ATWS—Impact of a Nonproblem,” which appeared in the March 1977 issue of the EPRI Journal, summarized the industry’s perspective (beyond the clue in the title):

ATWS is an initialism for anticipated transient without scram. In Nuclear Regulatory Commissionese it refers to a scenario in which an anticipated incident causes the reactor to undergo a transient. Such a transient would require the reactor protection system (RPS) to initiate a scram (rapid insertion) of the control rods to shut down the reactor, but for some reason the scram does not occur. … Scenarios are useful tools. They are used effectively by writers of fiction, the media, and others to guide the thinking process.

Two failures to scram had already occurred (in addition to the HTRE-3 failure). The boiling water reactor at the Kahl nuclear plant in Germany experienced a failure in 1963, and the N-reactor at Hanford in Washington had a failure in 1970. The article suggested that these scram failures should be excluded from the scram reliability statistical analysis, observing that “One need not rely on data alone to make an estimate of the statistical properties of the RPS.” As long as scenarios exist, one doesn’t need statistics getting in the way.

The NRC formed an ATWS task force in March 1977 to end, or at least focus, the non-productive debate that had been going on since WASH-1270 was published. The task force’s work was documented in NUREG-0460, “Anticipated Transients Without Scram for Light Water Reactors,” issued in April 1978. The safety objective was revised from 1×10⁻⁷ per reactor-year to 1×10⁻⁶ per reactor-year.

Believe it or not, changing the safety objective without developing the means to objectively gauge performance against it did not end, or even appreciably change, the debate. Now some argued that existing designs had failure rates as high as 1×10⁻³ per reactor-year—1,000 times higher than the safety objective. Others argued that existing designs had failure rates considerably lower than 1×10⁻⁶ per reactor-year. The 1970s ended without resolution of the safety problem that had arisen more than a decade earlier.

The Browns Ferry ATWS, ATWS, and ATWS

On June 28, 1980, operators reduced the power level on the Unit 3 boiling water reactor (BWR) at the Browns Ferry Nuclear Plant in Alabama to 35 percent and depressed the two pushbuttons to initiate a manual scram. All 185 control rods should have fully inserted into the reactor core within seconds to terminate the nuclear chain reaction. But 76 control rods remained partially withdrawn and the reactor continued operating, albeit at an even lower power level. Six minutes later, an operator depressed the two pushbuttons again. But 59 control rods remained partially withdrawn after the second ATWS. Two minutes later, the operator depressed the pushbuttons again. But 47 control rods remained partially withdrawn after the third ATWS. Six minutes later, an automatic scram occurred that resulted in all 185 control rods being fully inserted into the reactor core. It took four tries and nearly 15 minutes, but the reactor core was shut down. Fission Stories #107 described the ATWSs in more detail.

In BWRs, control rods are moved using hydraulic pistons. Water is supplied to one side of the piston and vented from the other side with the differential pressure causing the control rod to move. During a scram, the water vents to a large metal pipe and tank called the scram discharge volume. While never proven conclusively, it is generally accepted that something blocked the flow of vented water into the scram discharge volume. Flow blockage would have reduced the differential pressure across the hydraulic pistons and impeded control rod insertions. The scram discharge volume itself drains into the reactor building sump. The sump was found to contain considerable debris. But because it collects water from many places, none of the debris could be specifically identified as having once blocked flow into the scram discharge volume.

Although each control rod had its own hydraulic piston, the hydraulic pistons for half the control rods vented to the same scram discharge volume. The common mode failure of flow blockage impaired the scram function for half the control rods.

The NRC issued Bulletin 80-17, “Failure of 76 of 185 Controls Rods to Fully Insert During a Scram at a BWR,” on July 3, 1980, with Supplement 1 on July 18, 1980, Supplement 2 on July 22, 1980, Supplement 3 on August 22, 1980, Supplement 4 on December 18, 1980, and Supplement 5 on February 2, 1981, compelling plant owners to take interim and long-term measures to prevent what didn’t happen at Browns Ferry Unit 3—a successful scram on the first try—from not happening at their facilities.

ATWS – Actual Tack Without Stalling

On November 19, 1981, the NRC published a proposed ATWS rule in the Federal Register for public comment. One could argue that the debates that filled the 1970s laid the foundation for this proposed rule and the June 1980 ATWSs at Browns Ferry played no role in this step or its timing. That’d be one scenario.

The Salem ATWS and ATWS

During startup on February 25, 1983, following a refueling outage, low water level in one of the steam generators on the Unit 1 pressurized water reactor at the Salem nuclear plant triggered an automatic scram signal to the two reactor trip breakers. Had either breaker functioned, all the control rods would have rapidly inserted into the reactor core. But both breakers failed. The operators manually tripped the reactor 25 seconds later. The following day, NRC inspectors discovered that an automatic scram signal had also happened during an attempted startup on February 22, 1983. The reactor trip breakers failed to function. The operators had manually tripped the reactor. The reactor was restarted two days later without noticing, and correcting, the reactor trip breaker failures. Fission Stories #106 described the ATWSs in more detail.

In PWRs, control rods move via gravity during a scram. They are withdrawn upward from the reactor core and held fully or partially withdrawn by electro-magnets. The reactor trip breakers stop the flow of electricity to the electro-magnets, which releases the control rods and allows gravity to drop them into the reactor core. Investigators determined that the proper signal went to the reactor trip breakers on February 22 and 25, but the breakers failed to open to stop the electrical supply to the electro-magnets. Improper maintenance had essentially transformed the oil used to lubricate the breakers’ moving parts into glue binding those parts in place—in the wrong places—on February 22 and 25, 1983.

The Salem Unit 1 reactor had two reactor trip breakers. Opening of either reactor trip breaker would have scrammed the reactor. The common mode failure of the same improper maintenance practices on both breakers prevented them both from functioning when needed, twice.

The NRC issued Bulletin 83-01, “Failure of Reactor Trip Breakers (Westinghouse DB-50) to Open on Automatic Trip Signal,” on February 25, 1983, Bulletin 83-04, “Failure of Undervoltage Trip Function of Reactor Trip Breakers,” on March 11, 1983, and Bulletin 83-08, “Electrical Circuit Breakers with Undervoltage Trip in Safety-Related Applications Other Than the Reactor Trip System,” on December 28, 1983, compelling plant owners to take interim and long-term measures to prevent failures like those experienced on Salem Unit 1.

ATWS Scoreboard: Browns Ferry 3, Salem 2

ATWS – Actual Text Without Semantics

The NRC published the final ATWS rule on June 26, 1984—slightly over 15 years after the ACRS consultant wrote that scrams might not happen when needed due to common mode failures. The final rule was issued less than four years after a common mode failure caused multiple ATWS events at Browns Ferry and about 16 months after a common mode failure caused multiple ATWS events at Salem. The semantics of the non-productive debates of the Seventies gave way to actual action in the Eighties.

UCS Perspective

The NRC issued NUREG-1780, “Regulatory Effectiveness of the Anticipated Transient Without Scram Rule,” in September 2003. The NRC “concluded that the ATWS rule was effective in reducing ATWS risk and that the cost of implementing the rule was reasonable.” But that report relied on bona fide performance gains achieved apart from the ATWS rule—gains that would have been achieved without it. For example, the average reactor scrammed eight times in 1980. That scram frequency dropped to an average of fewer than two scrams per reactor per year by 1992.

Fig. 1 (Source: Nuclear Regulatory Commission)

The ATWS rule did not trigger this reduction or accelerate its pace. The reduction resulted from the normal reliability maturation process, often called the bathtub curve due to its shape. As procedure glitches, training deficiencies, and equipment malfunctions were weeded out, their fixes lessened the recurrence rate of problems resulting in scrams. I bought a Datsun 210 in 1980. That acquisition had about as much to do with the declining reactor scram rate as the NRC’s ATWS rule did.

There has been an improvement in the reliability of the scram function since 1980. But again, that improvement was achieved independently of the ATWS rule. The Browns Ferry and Salem ATWS events prompted the NRC to mandate, via a series of bulletins, that owners take steps to reduce the potential for common mode failures. Actions taken in response to those non-rule-related mandates improved the reliability of the scram function more than the ATWS rule’s measures did.

If the ATWS rule had indeed made nuclear plants appreciably safer, then its long delay would represent under-regulation by the NRC. After all, the question of the need for additional safety arose in the 1960s. If the ATWS rule truly made reactors safer, then the “lost decade” of the 1970s is inexcusable. The ATWS rule should have been enacted in 1974 instead of 1984 if it was really needed for adequate protection of public health and safety.

But the ATWS rule enacted in 1984 did little to improve safety beyond what had already been achieved via other means. The 1980 and 1983 ATWS near-miss events at Browns Ferry and Salem might have been averted by an ATWS rule enacted a decade earlier. Once they happened, the fleet-wide fixes they triggered precluded the need for an ATWS rule. So, the ATWS rule was too little, too late.

The AEC/NRC and nuclear industry expended considerable effort during the 1970s not resolving the ATWS issue—effort that could better have been applied to resolving other safety issues more rapidly.

ATWS becomes the first Role of Regulation commentary to fall into the “over-regulation” bin. UCS has no established plan for how this series will play out. ATWS initially appeared to be an “under-regulation” case, but research steered it elsewhere.

* * *

UCS’s Role of Regulation in Nuclear Plant Safety series of blog posts is intended to help readers understand when regulation played too little a role, too much of an undue role, and just the right role in nuclear plant safety.

Strong Leadership Makes for Satisfied Federal Scientists: A Case Study at the FDA

UCS Blog - The Equation (text only) -

As our research team was analyzing the results of our newest federal scientist survey that was released earlier this week, it was heartening to see that at some agencies, like at the U.S. Food & Drug Administration (FDA), the job satisfaction and ability to work appear to be even better than in years past. One of the best characterizations of the sentiments expressed by FDA scientists is this quote from a respondent: “The current administration has overall enforced certain science policies which harm the public in general. However, the current commissioner is fantastic and committed to the FDA’s mission. He is consistently involved in policy development which allows the protection and promotion of public health.”

We sent a survey to 9,378 FDA scientists and scientific experts; 354 responded, for an overall response rate of 3.8 percent. Our findings suggest that scientists at the FDA are faring better than their colleagues at the other 16 federal agencies surveyed. FDA scientists overall appeared to have faith in FDA leadership, including FDA commissioner Dr. Scott Gottlieb.
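The response-rate arithmetic, using the figures reported above:

```python
surveys_sent = 9378  # FDA scientists and scientific experts surveyed
responses = 354      # completed responses received

response_rate = 100 * responses / surveys_sent
print(f"{response_rate:.1f}%")  # 3.8%
```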

So what is FDA doing right?

Commissioner Gottlieb visits the agency’s Center for Devices and Radiological Health (CDRH) in Silver Spring, MD in November 2017 (Photo credit: Flickr/US FDA)

A genuine interest in getting the science right

Encouragingly, and as in previous UCS surveys, FDA scientists called attention to efforts by the agency to protect scientific integrity, with some responses indicating a strong sense of trust in supervisors and leadership. Most FDA scientists reported no change in personal job satisfaction or perception of office effectiveness; some respondents noted increased job satisfaction during the past year. 25 percent (87 respondents) said that the effectiveness of their division or office has increased compared with one year ago. Part of the reason for the agency’s effectiveness is its ability to collect the scientific and monitoring information needed to meet its mission, a metric that has significantly improved between 2015 and 2018. (See figure below). Further, 65 percent (222 respondents) felt that their direct supervisors consistently stand behind scientists who put forth scientifically defensible positions that may be politically contentious.

In 2018, the majority of FDA respondents felt that the agency frequently collected the information needed to meet its mission. When compared with previous results, the most significant differences were found between the 2015 and 2018 surveys (p<0.0001).

Perhaps it is because Gottlieb is a medical doctor who seems genuinely interested in evidence-based policies that we have not been bombarded with policy proposals that sideline science at the FDA since he began leading the agency in July 2017. FDA scientists who took the survey corroborated this. One respondent wrote that “the Commissioner’s office is tirelessly upholding best practices in various scientific fields such as smoking cessation, opioid/addiction crisis, generic drug manufacturing, sustainable farming practices.” Another respondent wrote, “FDA has a proactive Commissioner who—so far—has consistently followed science-based information and promoted science-based initiatives in the interest of public health.”

He has encouraged the work of FDA’s advisory committees like the Drug Safety and Risk Management Advisory Committee and the Anesthetic and Analgesic Drug Products Advisory Committee, which recently met to make recommendations to the FDA on its regulation of transmucosal immediate-release fentanyl (TIRF) products, which had been prescribed for off-label uses for years. Gottlieb does not get defensive about weak spots in FDA’s portfolio. According to one respondent, “I’ve been pleasantly surprised by Commissioner Gottlieb’s knowledge and focus on FDA science. I was at a brief with him and he was interested in the science and less focused on the legal and political effects than I would have guessed. He was open-minded and curious, and asked questions when he didn’t understand the issue fully. It improved my outlook on my Agency’s future.” Gottlieb’s ability to ask questions and listen to agency scientists as well as outside experts is an important quality for a Commissioner making decisions that impact our health and safety.

I took this photo of an advertisement on a DC metro train this summer, revealing that Gottlieb is serious about recruitment to the agency.

Taking action to improve hiring practices and retention of staff

Soon after Commissioner Gottlieb was confirmed, the FDA took steps to examine its own hiring practices to identify improvements that could be made to build and keep a stronger workforce. The agency wrote a report, held a public meeting, and received feedback from FDA staff throughout the process because, according to Gottlieb, “The soul of FDA and our public health mission is our people. Retaining the people who help us achieve our successes is as important as recruiting new colleagues to help us meet our future challenges.” For scientific staff, the agency plans to do more outreach to scientific societies and academic institutions for recruitment and to reach out to early career scientists and make them aware that public service at the FDA is a viable career option.

A commitment to transparency

Not only has the FDA put some encouraging policies in place, but Gottlieb seems committed to informing the public about these decisions. He is very active on Twitter and issues so many public statements that reporters feel almost overwhelmed by his updates. This contrasts, of course, with leaders like former EPA Administrator Scott Pruitt, who seldom announced his whereabouts in advance and was openly hostile to reporters.

Still room for improvement at the FDA and across the government

To be sure, there have been some bumps along the road. Last year, Gottlieb disbanded the FDA’s Food Advisory Committee, which was the only federal advisory committee focused entirely on science-based recommendations on food safety, and he delayed implementation of changes to the nutrition facts label that would have included a line for added sugars by this summer.

Further, survey respondents noted that inappropriate outside influence, such as from regulated industries, is apparent and stymies science-based decisionmaking at the agency. Twenty-two percent (70 respondents) felt that the presence of senior decisionmakers who come from regulated industries or have a financial interest in regulatory outcomes inappropriately influences FDA decisionmaking. Nearly a third (101 respondents) cited the consideration of political interests as a barrier to science-based decisionmaking, and 36 percent (114 respondents) felt that the influence of business interests hinders the agency’s ability to make science-based decisions. In addition, respondents reported workforce reductions at the agency and said these lessened their ability to fulfill FDA’s science-based mission.
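For readers checking the math, the counts and percentages above are mutually consistent if the three questions share roughly the same respondent base. A quick back-of-the-envelope sketch (the shared-base assumption is ours, not stated in the survey):

```python
# Rough consistency check of the figures quoted above.
# Assumption (ours): all three questions share about the same respondent base.
base = round(70 / 0.22)  # 22 percent of the base was 70 respondents

# "Nearly a third" (101 respondents) and "36 percent" (114 respondents)
# should both follow from that same base.
pct_political = round(100 * 101 / base)  # ~32 percent
pct_business = round(100 * 114 / base)   # 36 percent
print(base, pct_political, pct_business)
```

The implied base of roughly 318 respondents matches all three quoted figures.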

One thing became very clear as we reviewed the results of UCS’ seventh federal scientist survey that closed this spring: scientists across many federal agencies have been unable to do their jobs to the best of their ability under the Trump administration. Since the start of 2017, agencies have been hollowed out and there has been a sharp decline in expertise and capacity. Reduced staff capacity combined with political interference and the absence of leadership in some cases has made it harder for scientists to carry out important work. As the threat of political influence looms large over the government, much of federal scientists’ ability to do their work to advance the mission of the agencies has to do with the quality of leadership and the administrator or commissioner’s commitment to evidence over politics as a basis for decisionmaking.

Is Scientific Integrity Safe at the USDA?

UCS Blog - The Equation (text only) -

U.S. Department of Agriculture (USDA) Agricultural Research Service (ARS) plant physiologist Franck Dayan observes wild-type and herbicide-resistant biotypes of Palmer amaranth (pigweed) as Mississippi State University graduate student Daniela Ribeiro collects samples for DNA analysis at the ARS Natural Products Utilization Research Unit in Oxford, MS, on July 20, 2011. Photo: Stephen Ausmus, USDA/CC BY 2.0 (Flickr)

Science is critical to everything the US Department of Agriculture does—helping farmers produce a safe, abundant food supply, protecting our soil and water for the future, and advising all of us about good nutrition to stay healthy. I recently wrote about the Trump administration’s new USDA chief scientist nominee, Scott Hutchins, and the conflicts he would bring from a career narrowly focused on developing pesticides for Dow.

But meanwhile, Secretary of Agriculture Sonny Perdue last week abruptly announced a proposed reorganization of the USDA’s research agencies. This move has implications for whoever takes up the post of chief scientist—as do new survey findings released yesterday, which suggest that the Trump administration is already having detrimental effects on science and scientists at the USDA.

An attack on science, and a shrinking portfolio for the next chief scientist

The job for which Scott Hutchins (and this guy before him) has been nominated is actually a multi-pronged position. The under secretary is responsible for overseeing the four agencies that currently make up the USDA’s Research, Education, and Economics (REE) mission area: the Agricultural Research Service, the Economic Research Service (ERS), the National Agricultural Statistics Service, and the National Institute for Food and Agriculture (NIFA). Collectively, these agencies carry out or facilitate nearly $3 billion worth of research on food and agriculture topics every year. In addition, the REE under secretary is the USDA’s designated chief scientist, overseeing the Office of the Chief Scientist, established by Congress in 2008 to “provide strategic coordination of the science that informs the Department’s and the Federal government’s decisions, policies and regulations that impact all aspects of U.S. food and agriculture and related landscapes and communities.” OCS and the chief scientist are also responsible for ensuring scientific integrity across the department.

Altogether, it’s no small job, but it may soon get smaller. Secretary Perdue’s unexpected reorganization proposal last week would pluck ERS figuratively from within REE and place it in the Secretary’s office. Perdue’s announcement also included a plan to literally move ERS, along with NIFA, to as-yet-undetermined locations outside the DC area.

Perdue’s proposal cited lower rents and better opportunities to recruit agricultural specialists. But that rationale sounds fishy to UCS and other observers, as well as former USDA staff (the most recent NIFA administrator had this unvarnished reaction) and current staff who were caught by surprise. The move looks suspiciously like subordinating science to politics, likely giving big agribusiness and its boosters in farm-state universities ever more influence over the direction of USDA research that really should be driven by the public interest. Moreover, on the heels of a White House proposal earlier this year to cut the ERS budget in half—which Congress has thus far ignored—Perdue’s “relocate or leave” plan for ERS staff sure seems like a back-door way to gut the agency’s capacity.

New USDA scientist survey findings give more cause for concern

Even before announcements of a conflicted chief scientist nominee and ill-conceived reorganization, things weren’t exactly rosy for those working within REE agencies. In a survey conducted in February and March and released by UCS yesterday, scientists and economists in ARS, ERS, NASS, and NIFA raised concerns about the effects of political interference, budget cuts, and staff reductions. In partnership with Iowa State University’s Center for Survey Statistics and Methodology, we asked more than 63,000 federal scientists across 16 government agencies about scientific integrity, agency effectiveness, and the working environment for scientists in the first year of the Trump administration. At the USDA, we sent the survey to more than 3,600 scientists, economists, and statisticians we identified in the four REE agencies; about 7 percent (n=258) responded.

Among the findings summarized in our USDA-specific fact sheet are that scientists:

  • Face restrictions on communicating their work—78 percent said they must obtain agency preapproval to communicate with journalists; and
  • Report that workforce reductions are a problem—90 percent say they’ve noticed such reductions in their agencies. And of those, 92 percent say short-staffing is making it harder for the USDA to fulfill its science-based mission.

To sum up: the next USDA chief scientist will lead a shrinking, under-resourced, and somewhat demoralized cadre of scientists facing political interference and possibly increased influence from industry (a trend we are already seeing in the Trump/Perdue USDA). All this at a time when the department really needs to advance research that can help farmers meet the myriad challenges they face and safeguard the future of our food system.

Soon, I’ll follow up with questions the Senate might want to ask Scott Hutchins—in light of all this and his own chemical industry baggage—when they hold his confirmation hearing.

We Surveyed Thousands of Federal Scientists. Here are Some Potential Reasons Why the Response Rate Was Lower than Usual

UCS Blog - The Equation (text only) -

In February and March of this year, the Union of Concerned Scientists, in partnership with Iowa State University’s Center for Survey Statistics and Methodology, sent a survey to over 63,000 federal career staff across 16 federal agencies, offices, and bureaus. Our goal was to give scientists a voice on the state of science under the Trump administration as we had during previous administrations.

We worked diligently to maintain the anonymity of the federal scientists taking our survey, and we offered three different ways to take it: online, by phone, or by mail. Scientists took advantage of all three methods.

We followed up with reminders nearly weekly. Some scientists who were invited to take the survey did reach out to confirm that UCS and Iowa State University were conducting a legitimate survey, and the link that we sent them was safe to click on. In addition, some agencies communicated to their staff that the survey was legitimate and that experts were free to take it on their own time.

And while we received enough responses for the results to be valid, the final overall response rate on this year’s federal scientist survey sits at 6.9 percent. That is lower than the response rates on prior surveys conducted by UCS over the past 13 years, which have typically ranged from 15 to 20 percent. Let’s unpack some potential reasons why, and what the impact may be on interpreting the results.
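As a rough sketch of what these rates mean in absolute terms (our arithmetic, using the approximate figures quoted in this post and the USDA subsample described above):

```python
# Back-of-the-envelope response-rate arithmetic using the figures above.
invited = 63_000   # approximate number of survey invitations sent
rate = 0.069       # reported overall response rate (6.9 percent)
responses = round(invited * rate)
print(responses)   # roughly 4,300 completed surveys

def response_rate_pct(responses, invited):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responses / invited, 1)

# The USDA REE subsample mentioned earlier: 258 responses out of ~3,600 invited.
print(response_rate_pct(258, 3600))  # about 7.2 percent
```

At a historical 15 to 20 percent rate, the same 63,000-person sample would have yielded roughly 9,500 to 12,600 responses.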

Reasons Why the Response Rate was Low
  1. Fear

It is possible that federal scientists and scientific experts were fearful of or reluctant about commenting on the state of science under the Trump administration. This fear may be born of incidents in which political appointees reprimanded career staff for speaking publicly about their work.

Additionally, it is possible that given the heightened threat of cyber-attacks in the modern era, scientists were afraid their information might be monitored or leaked. Survey respondents were given a unique identifier to ensure the integrity of the survey, and while these identifiers were deleted before the survey results were prepared for release, we heard reports that simply being associated with that unique identifier was too much of a barrier.

  2. Discouragement from Senior Leadership

At some offices within the Environmental Protection Agency (EPA) as well as at the Fish and Wildlife Service (FWS), senior leadership sent emails to employees that discouraged them from taking the 2018 UCS survey. FWS emails stated “Requests for service employees to participate in surveys, from both internal and external sources, must be approved in advance of the issuance of the survey.” But this is only true of surveys issued through the agency. Federal employees are not required to receive an ethics clearance to take an outside survey if they take it on their own time and with their own equipment. On the other hand, other offices within the EPA as well as the National Oceanic and Atmospheric Administration (NOAA) and the US Department of Agriculture (USDA) sent emails reminding employees that they were welcome to take the survey given that they took it using their own time and equipment.

  3. Larger Survey Sample

This is the largest survey that UCS has ever conducted. Our prior surveys have been administered to up to 4 agencies, whereas we surveyed 16 agencies, offices, and bureaus this year. It may be easier to achieve higher response rates with smaller survey samples because it is possible for researchers to devote more time to working with the survey sample and building trust.

  4. Lack of Public Directory and/or Job Descriptions

UCS can survey federal scientists because their names, email addresses, and job titles are publicly available, or at least they should be. For some agencies that we surveyed, like the National Highway Traffic Safety Administration (NHTSA) and the Department of Energy (DOE), which do not have public directories available, we submitted Freedom of Information Act (FOIA) requests for this information (it’s been a year and a half, and we still don’t have the directory from DOE). For other agencies, such as the EPA, a public directory was available but didn’t have complete information (e.g., job titles). Having a career staffer’s job title is important because it allows us to narrow our survey sample to those who are likely to be scientists or scientific experts. In the case of the EPA, the Census Bureau, and DOE’s Office of Energy Efficiency and Renewable Energy (EERE), we did not have this information, so we had to administer the survey to the entire agency, or to only those offices that we assumed would do scientific work. This greatly increases the number of individuals in an agency sample, such that response rates are likely skewed lower relative to other agencies.

Does this low response rate matter in the interpretation of survey results?

A low response rate can give rise to sampling bias, meaning that some individuals in our survey sample are less likely to be represented than others (some suggest that only the most disgruntled employees would respond). However, a growing body of literature suggests that this may not be the case. Counterintuitively, surveys with lower response rates can yield results as accurate as, or more accurate than, those with higher response rates. One study showed that administering the same survey for only 5 days (achieving a 25% response rate) versus several weeks (achieving a 50% response rate) largely did not produce statistically different results; the results that did differ significantly varied by only 4 to 8 percentage points.

Further, we have never suggested that the responses received at an agency represent the agency as a whole. Rather, the responses represent the experiences of those who chose to respond. And when hundreds or thousands of federal scientists report censorship, political influence on their work, or funding being steered away from work simply because the issue is viewed as politically contentious…well, we have a problem.

I’m very happy that we gave these scientists a voice, because they had a lot to say and it’s time that they’re heard.

Trump Administration Takes Aim at Public Health Protections

UCS Blog - The Equation (text only) -

Photo: Daniels, Gene/ The U.S. National Archives

In a new regulatory effort, the Trump Administration’s Environmental Protection Agency (EPA) claims to be working to increase consistency and transparency in how it considers costs and benefits in the rulemaking process.

Don’t be fooled.

Under the cover of these anodyne goals, the agency is in fact trying to pursue something far more nefarious. Indeed, what the EPA is actually working to do is formalize a process whereby the decision of whether or not to go ahead with a rule is permanently tilted in industry’s favor. How? By slashing away at what the agency can count as “benefits,” resulting in a full-on broadside to public health.

EPA handcuffs itself to let industry roam free

Though it may seem obscure, the implications of this fiddling are anything but.

That’s because EPA regularly engages in what’s known as “cost-benefit analysis,” or a comparison of the costs of implementing a rule to the benefits that are expected to result. This doesn’t always shape how a standard gets set—for some air pollutants, for example, Congress actually requires the agency to specifically not develop standards based on cost, but rather based on health, to ensure that the public stays sufficiently protected. Other regulations weigh costs at varying levels of import, related to the specifics of the issue at hand.

Still, cost-benefit analysis is widely used, even when it describes rather than informs. The process lends context to rulemaking efforts, though it certainly isn’t perfect: cost-benefit analysis faces real challenges, especially in quantifying impacts that don’t lend themselves well to quantification. But serious practitioners on either side agree: this new effort by EPA is ill-conceived.

And the consequence of EPA’s proposed manipulations? Well, when the agency next goes to tally up the impacts of a rule, the traditionally towering benefits of its regulations could suddenly be cut way down in size. Not because public health is suddenly fixed, but just because it’s the only way to get the equation to solve in favor of industry time after time.

What’s more, alongside this effort EPA is simultaneously endeavoring to place untenable restrictions on the data and research the agency can consider in its rulemaking process, effectively hamstringing its own ability to fully and adequately evaluate impacts to public health.

Together, the net result would be a regulatory framework aggressively biased in industry’s favor, and a Trump Administration suddenly able to claim that public health protections are just not worth the cost.

To industry, with love

The good news is that this nascent proposal is incredibly hard to defend—on morals, and on merits.

The bad news is that the Trump Administration is highly motivated to do everything it can to find in favor of industry, so it’s still sure to be a fight.

Here, three key points to note:

  1. Ignoring co-benefits would permanently tilt the scales—and just does not make sense. One of the primary ways EPA is looking to shirk its regulatory responsibilities is by attempting to exclude the consideration of “co-benefits,” or those that arise as a result of a rule but not from the target pollutant itself, during its cost-benefit evaluations. Absurd. Although these indirect benefits—the avoided ER visits, the precluded asthma attacks, the workdays still in play—are just as real as indirect costs, under this proposal only the latter would continue to stay in the ledger.


  2. Requiring consistency across agency actions goes against EPA’s statutory requirements. The EPA is suggesting that cost-benefit methodologies should be applied uniformly across rulemaking efforts. This not only fails to recognize that not all protections should be evaluated in the same ways, but also that Congress itself outlined differences in how the agency should evaluate proposals depending on specific circumstances. As a result, the agency isn’t even allowed to do what it’s trying to do. And even worse than this nonsense standardization? The fact that the agency is trying to implement the requirement at the level least protective of public health.


  3. EPA already tried this out, and those efforts were roundly denounced. Prior to this proposal, EPA actually made a preliminary attempt at using a co-benefits-limited approach in its proposed repeal of the Clean Power Plan. There, it attempted to separate out and consider only the benefits that accrued from carbon dioxide emissions reductions, despite the billions of dollars of additional health benefits anticipated to come from indirect benefits of the rule. This action was taken alongside a slew of other selective accounting maneuvers, revealing an agency desperately doing anything it could to deliver for industry, including by tipping the scales.
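To see why dropping co-benefits while keeping indirect costs rigs the ledger, consider a toy calculation. These are made-up illustrative figures, not actual EPA estimates:

```python
# Illustrative only: hypothetical dollar figures, not actual EPA estimates.
direct_benefits = 3.0   # $B from reducing the target pollutant alone
co_benefits = 14.0      # $B from indirect reductions (e.g., particulate matter)
direct_costs = 7.0      # $B direct compliance costs
indirect_costs = 1.0    # $B secondary costs

# Full accounting: count both direct and indirect effects on each side.
net_full = (direct_benefits + co_benefits) - (direct_costs + indirect_costs)

# Proposed accounting: drop co-benefits but keep indirect costs in the ledger.
net_trimmed = direct_benefits - (direct_costs + indirect_costs)

print(net_full)     # positive: the rule passes a cost-benefit test
print(net_trimmed)  # negative: the very same rule now appears to fail
```

The asymmetry is the whole trick: indirect effects stay on the cost side but vanish from the benefit side, so an identical rule flips from net-positive to net-negative.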

This regulatory effort was carefully constructed to conceal intentions and motivations, but it’s clear from the agency’s surrounding narrative and parallel policy initiatives that it is being advanced in strict pursuit of an industry-favored finding.

Where to next?

Let’s not forget the mission of the EPA: to protect human health and the environment.

From that frame, it’s hard to see what good this effort would do. It doesn’t bring EPA closer to an objective analytical truth, it doesn’t elevate and further that which is in the public’s interest, and it certainly doesn’t suggest an agency doing everything it can to advance its one core mission.

Instead, what we see is EPA displaying shockingly overt piety to industry over public, and in the process, failing to defend the very thing the agency was created to protect.

We’ve filed comments with EPA to call this rigged process out, and we’ll continue to stand up for the mission of the agency even when EPA lets it slide.

Because this demands a fight.

A fight for an agency that fights for the public, and a fight for a ledger that pulls people and places out of the red, not permanently cements them in it.


UCS Survey Shows Interior Department is Worse Than We Thought—And That’s Saying Something

UCS Blog - The Equation (text only) -

Photo: US Geological Survey

Can scientific staff at the US Department of the Interior rest easy knowing that their colleagues at other agencies have it worse when it comes to political interference?

Survey says: Nope.

Today the Union of Concerned Scientists (UCS) released the results of its periodic survey of scientific professionals at federal agencies, and the results from the Department of the Interior (DOI) are damning. Not only do the responses indicate plummeting morale, job satisfaction, and agency effectiveness, but political pressure is now being felt significantly at the US Geological Survey, a non-regulatory scientific bureau at DOI that has historically operated without substantial political interference. In all, concerns about political interference, censorship of politically contentious issues, and workforce reductions are higher at DOI than at most other agencies.

The comments from the survey read like an organizational leadership seminar’s list of fatal flaws: Hostile workplace, check; fear of retaliation and discrimination, check; self-censorship, check; poor leadership, check; chronic understaffing, check. To make matters worse, the political leadership at Interior, led by Secretary Ryan Zinke, has a deserved reputation for barring career staff from decision-making processes.

In addition to the undue influence of political staff, the top concern from DOI scientific staff was lack of capacity. One respondent commented: “Many key positions remain unfulfilled, divisions are understaffed, and process has slowed to a crawl.”

As a former career civil servant at Interior, I can attest to the plummeting morale at the agency. Even before I resigned in October 2017, there was a pall over every office and bureau, and career staff felt completely ignored by Trump administration officials. That has led to some very bad decisions from Zinke, but not to greater inclusion—in fact, team Zinke has continued to alienate career staff and seems to be betting that they will remain silent.

Some good investigative journalism and a lot of Freedom of Information Act disclosures have shown that only industry representatives get meetings with the top brass, decisions are made without input from career staff, censorship (especially of climate change related science) is on the upswing, science is routinely ignored or questioned, and expert advisory boards are being ignored, suspended, or disbanded.

All of this adds up to an agency that is being intentionally hollowed out, with consequences for American health and safety and for our nation’s treasured lands and wildlife. Americans are clamoring for more information on how their businesses, lands, and communities can address the climate impacts they see all year round—but DOI scientists responding to the survey pointed to how Zinke is slowly shutting down the Landscape Conservation Cooperatives (LCCs) that deliver that information. Congress provided Zinke with the money to keep the LCCs growing, but he continues to let them wither on the vine just as they are providing important and timely support for communities in need.

As the Federal Trustee for American Indians and Alaska Natives, Interior should be expected to support tribes and villages in need of resources and capacity for relocating or addressing dramatic climate change impacts, but Zinke is leaving them to fend for themselves despite a bipartisan call to get them out of harm’s way.

As the land manager for America’s most treasured landscapes, Interior is expected to be an effective steward of our National Parks and other areas dedicated to conservation, recreation, and the protection of wildlife habitat. Instead, Zinke ordered the largest reduction in conservation lands in our nation’s history when he shrank Bears Ears National Monument by 85 percent and Grand Staircase-Escalante National Monument by nearly half. Scientists responding to the survey said these decisions lacked scientific justification. Thanks to recently disclosed documents and emails, we now know that science was pushed aside and the real reason for shrinking the Monuments was to encourage oil and gas extraction in those locations, despite Zinke’s emphatic statements to the contrary. The most damning evidence? The new maps for these shrunken Monuments match the maps that industry lobbyists provided to him. This is yet another insult to the American Indians for whom this area is sacred.

While this is consistent with the Administration’s goal of hobbling federal agencies and opening the door for industry donors, it is not consistent with the use of taxpayer dollars to protect national assets and address health and safety needs, and it is not consistent with the role of public servant. The UCS survey results are a damning indication of the depth of dysfunction that Ryan Zinke has fostered at Interior, and it is essential that Congress implement its important oversight role to prevent the rot from spreading still further.

Happy 10th Birthday to the Consumer Product Safety Improvement Act!

UCS Blog - The Equation (text only) -

Photo: Valentina Powers/CC BY 2.0 (Flickr)

Since the Consumer Product Safety Improvement Act (CPSIA) became law, it has protected children from exposure to lead in toys and other items, improved the safety standards for cribs and other infant and toddler products, and created a public database where consumers can research certain products or report safety hazards and negative experiences. Today, along with a group of other consumer and public health advocacy organizations, we celebrate the 10th anniversary of the passage of this law. I am especially grateful that this act was passed a decade ago, as both a consumer advocate and an expectant mom.

Most of us might not realize it, but being a consumer now is a lot better than it would have been ten years ago.

When I sat down to begin the process of making a baby registry several months back, I didn’t know quite what to expect. With so many decisions to make about products that were going to be used by the person I already hold most dear in this world, I felt the anxiety begin to build. Perhaps I knew a little bit too much about how chemicals can slip through the regulatory cracks and end up on the market or how some companies deliberately manipulate the science in order to keep us in the dark about the safety of their products. But as I began to do research on children’s products, I ran into some pretty neat bits of information and have the Consumer Product Safety Improvement Act to thank.

First, all cribs have to meet conformity standards that the CPSC developed in 2011. The rule prohibits manufacturers from selling drop-side cribs and requires them to strengthen crib slats and mattress supports, improve the quality of hardware, and subject cribs to more rigorous testing before sale. This means that if a crib is for sale anywhere in the US, it has been certified by a CPSC-approved body and meets distinct safety requirements, so that not only can your baby sleep safely but parents can sleep soundly (insert joke about parents and lack of sleep here). Between 2006–2008 and 2012–2014, the percentage of crib-related deaths attributed to crib integrity, as opposed to hazardous crib surroundings, decreased from 32 percent to 10 percent.

This isn’t the only product type for which CPSC has created standards in the past 10 years. So far, CPSC has written rules for play yards, baby walkers, baby bath seats, children’s portable bed rails, strollers, toddler beds, infant swings, handheld infant carriers, soft infant carriers, framed infant carriers, bassinets, cradles, portable hook-on chairs, infant sling carriers, infant bouncer seats, high chairs, and most recently it approved standards for baby changing tables this summer.

Next, I can rest assured that no baby products contain dangerous levels of the reproductive toxicants known as phthalates, because a provision in CPSIA permanently restricted a total of eight types of phthalates in children’s toys and child care articles to a very strict limit of 0.1 percent. The law also established a Chronic Hazard Advisory Panel of experts to review the science on phthalates and eventually inform a CPSC final rule. That rule was issued in October 2017 and took effect in April 2018.

I can also be sure that the toys purchased for my child will not contain unsafe levels of lead, a developmental toxicant, as long as they were tested and certified by a CPSC-approved entity. As of 2011, the CPSIA limited the amount of lead that can be in children’s products to 100 ppm. And once we found that perfect paint color for the walls after hours of staring at violet swatches, I didn’t need to worry about its lead content, because the CPSIA set the limit for paint (and some furniture that contains paint) at 0.009 percent, or 90 ppm.
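The unit conversions behind these limits are easy to verify: 1 ppm by mass equals 0.0001 percent, so the figures above line up (a quick sketch, our arithmetic):

```python
# Convert a mass percentage to parts per million (ppm).
# 1 percent = 10,000 ppm, so 0.0001 percent = 1 ppm.
def percent_to_ppm(pct):
    return pct * 10_000

print(percent_to_ppm(0.009))  # paint/furniture lead limit: 90 ppm
print(percent_to_ppm(0.01))   # children's-product lead limit: 100 ppm
print(percent_to_ppm(0.1))    # phthalate restriction: 1,000 ppm
```

So the 0.009 percent paint limit and the 90 ppm figure are the same number expressed two ways.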

Finally, when in doubt, I discovered I can query the database to check whether a product has any reported hazards, and double-check that a product I’m planning on buying doesn’t have any recall notices against it.

There’s clearly been a lot of progress since the CPSIA was passed a decade ago, and I have to say, I feel fortunate that I’m beginning the parenting stage of my life as many of its provisions are being fully implemented. In all my reading on pregnancy and parenting, I’ve learned that there are only so many things you can control before your child arrives. The safety of my home is one of those things, so I’m thankful that the CPSIA has given me the ability to make informed decisions about the products with which I’m furnishing my child’s room.

And as I wear my Union of Concerned Scientists hat, I’m also encouraged that the CPSIA gave the agency the space to ensure that its scientists can do their work without fear of interference, including whistleblower protections. As the CPSC embarks upon its next ten years of ensuring that the goals of the CPSIA are fully realized, we urge the agency to continue to enforce its safety standards, hold manufacturers of recalled products accountable, and educate the public about its product hazard database and other tools for reporting and researching harmful products. Unrelatedly, the agency should also continue to stay weird on Twitter, because its memes bring joy to all. Case in point below.

Photo credit: twitter/US CPSC

The Good, the Bad, and the Ugly: The Results of Our 2018 Federal Scientists Survey

UCS Blog - The Equation (text only) -

Photo: Virginia State Parks/CC BY 2.0 (Flickr)

In February and March of this year, the Union of Concerned Scientists (UCS) conducted a survey of federal scientists to ask about the state of science over the past year, and the results are in. Scientists and their work are being hampered by political interference, workforce reductions, censorship, and other issues, but the federal scientific workforce is resilient and continuing to stand up for the use of science in policy decisions.

This survey was conducted in partnership with Iowa State University’s Center for Survey Statistics and Methodology, building upon prior surveys conducted by UCS since 2005. This year’s survey is unique, however, in that it is the largest UCS has ever conducted (sent to over 63,000 federal employees across 16 federal agencies, offices, and bureaus), and it is the first survey, to our knowledge, to gauge employees’ perceptions of the Trump administration’s use of science in decisionmaking processes.

The Trump administration's record on science across a number of issues and multiple agencies is abysmal. Anyone who has paid even slight attention to the news knows this. My expectation, therefore, was that the surveyed scientists and scientific experts would report that they were working in a hostile environment, that they were encountering numerous barriers to doing and communicating science, and that too many scientists were leaving the federal workforce. And while many respondents did report these negative experiences, many also reported that a lot of good work is still happening.

To be certain, some agencies seem to be faring better than others. Respondents from the National Oceanic and Atmospheric Administration (NOAA), the Centers for Disease Control and Prevention (CDC), and the Food and Drug Administration (FDA) reported better working environments and leadership conducive to continuing the science-based work that informs decisionmaking at their agencies. However, respondents from bureaus at the Department of the Interior (DOI) as well as the Environmental Protection Agency (EPA) seem to be having a difficult time with political interference, maintaining professional development, and censorship, to name a few issues illustrated by this survey. This agency-level variation, as well as variation in response rates across surveyed agencies, should be considered when interpreting results across all agencies.

Below, I highlight some results of this year’s survey, but you can also find all of the results, methodology, quotes from surveyed scientists, and more at

The Ugly: Political interference in science-based decisionmaking

The Trump administration has been no stranger to interfering with science-based processes at federal agencies. For example, both Ryan Zinke and Scott Pruitt changed the review processes of science-based grants such that they are critiqued based on how well they fit the administration’s political agenda instead of their intellectual merit. UCS also discovered through a Freedom of Information Act (FOIA) request that the White House interfered in the publication of a study about the health effects of a group of hazardous chemicals found in drinking water and household products throughout the United States.

Surveyed scientists and scientific experts in our 2018 survey noted that political interference is one of the greatest barriers to science-based decisionmaking at their agencies. In a multiple-response survey question in which respondents chose up to three barriers to decisionmaking, those ranked at the top were: influence of political appointees in your agency or department, influence of the White House, limited staff capacity, delay in leadership making a decision, and absence of leadership with needed scientific expertise. This differs from our 2015 survey, in which respondents reported that limited staff capacity and complexity of the scientific issue were the top barriers—"influence of other agencies or the administration," as it was phrased in 2015, was not identified as a top barrier. One respondent from the EPA noted that political interference is undoing scientific processes: "…efforts are being made at the highest levels to unwind the good work that has been done, using scientifically questionable approaches to get answers that will support the outcomes desired by top agency leadership."

Many respondents also reported issues of censorship, especially in regard to climate change science. In total, 631 respondents reported that they have been asked or told to omit the phrase “climate change” from their work. A total of 703 respondents reported that they had avoided working on climate change or using the phrase “climate change” without explicit orders to do so. But it is not only climate change—over 1,000 responding scientists and scientific experts reported that they have been asked or told to omit certain words in their scientific work because they are viewed as politically contentious. One respondent from the US Department of Agriculture (USDA) noted that scientists studying pollinator health are being scrutinized: “We have scientists at my location that deal with insect pollinator issues, and there appears to be some suppression of work on that topic, in that supervisors question the contents of manuscripts, involvement in certain types of research, and participation in public presentation of the research. It has not eliminated the work of those scientists, but their involvement in those areas is highly scrutinized.”

The Bad: The scientific workforce is likely dwindling

Nearly 80% of respondents (3,266 respondents in total) noticed workforce reductions either due to staff departures, hiring freezes, and/or retirement buyouts. Of those respondents who noticed workforce reductions, nearly 90% (2,852 respondents in total) reported that these reductions make it difficult for them to fulfill their agency’s science-based missions. A respondent from the Fish and Wildlife Service summed up the issue: “Many key positions remain unfulfilled, divisions are understaffed, and process has slowed to a crawl.”

As of June 2018, the 18th month of his administration, President Trump had filled 25 of the 83 government posts that the National Academy of Sciences designates as “scientist appointees.” Maybe now that President Trump has nominated meteorologist Kelvin Droegemeier to lead the White House’s Office of Science and Technology Policy, we will see other scientific appointments as well. For now, agencies that are understaffed and that do not have leadership with needed scientific expertise will likely continue to have a difficult time getting their scientific work completed.

The Good: The scientific workforce is resilient

While 38% of those surveyed (1,628 respondents) reported that the effectiveness of their division or office has decreased over the past year, 15% (643 respondents) reported an increase in effectiveness and 38% (1,567 respondents) reported no change. It is still not a good sign that over 1,600 scientists and scientific experts report that the effectiveness of their office or division has decreased under the Trump administration, but it is encouraging that many scientists and scientific experts are still able to continue their important work.
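As a quick sanity check on those shares, the sketch below works backward from the reported figures. The respondent counts come from the survey results above; the implied question total is my own back-of-envelope inference, not a published number.

```python
# Respondent counts reported for the effectiveness question (from the post).
decreased, increased, no_change = 1628, 643, 1567

# The 38% / 15% / 38% shares imply a question total of roughly
# 1628 / 0.38 respondents; the shares sum to ~91%, so the remainder
# presumably answered "don't know" or skipped the question (an inference).
implied_total = round(decreased / 0.38)
accounted = decreased + increased + no_change

print(implied_total)              # 4284 respondents implied for this question
print(accounted)                  # 3838 in the three reported categories
print(implied_total - accounted)  # 446 not covered by the three shares
```

The three published percentages are internally consistent with a single question base of roughly 4,300 respondents, which is why the counts do not sum to the ~91% of shares exactly.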

Further, a majority of respondents (64%; 2,452 respondents) reported that their agencies are adhering to their scientific integrity policies and that they are receiving adequate training on them. While those surveyed reported barriers to science-based decisionmaking, including those described above and others that fall outside the scope of these policies, it is still a step forward that the federal scientific workforce knows about the policies and perceives them to be followed. Many responding scientists reported that they are doing the best work they can under this administration. As one respondent from the US Geological Survey (USGS) said, "USGS scientific integrity guidelines are among the best in the federal service. They are robust and followed by the agency. What happens at the political level is another story."

There is still work to do

Some scientists are continuing to get their work done and others are having a difficult time. Many scientists see their leadership as a barrier to their science-based work, whereas some scientists think their leadership recognizes the importance of science to their agency’s mission.

However, when hundreds to thousands of scientists report political interference in their work, fear of using terms like "climate change," or funds being directed away from work viewed as politically contentious, we are seeing an ugly side of this administration's treatment of science. Those numbers should be as close to zero as possible, because when science takes a back seat to political whims, the health and safety of the American people suffer.

What’s New with NextGrid?

UCS Blog - The Equation (text only) -

Photo: UniEnergy Technologies/Wikimedia

Last year, the Illinois Commerce Commission (ICC) launched NextGrid, a collaboration among key stakeholders to create a shared base of information on electric utility industry issues and opportunities around grid modernization. NextGrid is the Illinois Utility of the Future Study, which is being managed by the University of Illinois and consists of seven working groups composed of subject matter experts, utilities, business interests, and environmental organizations. The Union of Concerned Scientists is a member of two of these working groups.

The working groups have been tasked with identifying solutions to address challenges facing Illinois as it moves into the next stage of electric grid modernization, including the use of new technologies and policies to improve the state’s electric grid. The groups’ work will culminate in a draft report to be released in late 2018.

So, what is grid modernization? And what’s at stake with the NextGrid process in Illinois?

Illinois’ energy challenges

Our current grid was built decades ago and designed primarily for transmitting electricity from large, centralized power plants such as coal and natural gas facilities. New technologies like wind and solar are making this approach to electricity transmission, and its related infrastructure, outdated. If we don't modernize the grid now, we risk over-relying on natural gas when we should be taking advantage of renewable energy sources that are cleaner and more affordable.

As a result, utilities and states around the country are embarking on grid modernization processes. There are two main components of a modern grid: the first is data communication, which Illinois has addressed with the rapid deployment of smart meters over the last several years. Smart meters give customers access to more information about their energy use, and allow utilities to offer different programs and options to customers as well as to more efficiently address outages.

A second key component of a modern grid is the incorporation of higher levels of renewables and energy efficiency. The Future Energy Jobs Act (FEJA), which became law in 2016, fixed flaws in the state's Renewable Portfolio Standard (RPS) by ensuring stable and predictable funding for renewable development and that new solar and wind power will be built in Illinois. FEJA also greatly increased the state's energy efficiency targets. With respect to solar in particular, FEJA directed the state to create a community solar program, as well as the Illinois Solar for All program, which will enable many more people who may not be in a position to install panels on their own rooftops to participate in solar power. Overall, FEJA is moving Illinois toward a more modern grid.

NextGrid builds on these Illinois clean energy efforts. The NextGrid study will examine trends in electricity production, usage, and emerging technologies on the customer and utility sides of the meter that drive the need to consider changes in policy and grid technology.

What has been discussed so far?

To ensure clean energy is a prominent part of the solutions being discussed in NextGrid, UCS is participating in two of the seven working groups: Regulatory and Environmental Policy Issues and Ratemaking.

Some of the key topics discussed so far include:

  • The increased adoption of distributed energy resources (DER), which include solar, storage, and demand management. DER adoption is increasing due to FEJA, and the resulting change in electricity load means utilities need to engage in planning and investment that incorporates these resources.
  • Energy storage has been highlighted for its ability to increase grid reliability and resilience. As the costs of energy storage technology such as batteries continue to fall, they are becoming a viable answer to many grid modernization challenges.
  • Time-of-use pricing programs that have fewer daily price fluctuations allow users more consistency in making consumption decisions. Our Flipping the Switch Report outlines the benefits of time-varying rates.

The NextGrid process has the potential to shape Illinois' energy future and serve as a roadmap to different options. We need to ensure that clean energy plays a central role in this roadmap.

How can you get involved?

On June 14, the ICC held a public comment session in Chicago to provide stakeholders and the public with information on the progress of the study.  UCS Science Network Member Warren Lavey provided public comment at the session noting that Illinois should explore additional time-varying pricing options and energy storage. Pursuing these policies would save money for customers and providers, enhance grid reliability and flexibility, and protect human health and the environment. More time-varying pricing options and cost-effective energy storage would build on Illinois’ investments in and policies supporting renewable energy systems and smart meters. These reforms would also strengthen the state’s infrastructure for electric vehicles and other developments.

Two more public comment sessions will be held this week, August 15 in Urbana and August 16 in Carbondale. Participants will have the opportunity to ask questions and offer written and verbal comments to be considered by the commission as they develop the NextGrid report. The draft report is set for release this fall, and the public has the opportunity to weigh in again by commenting on draft working group chapters as they are posted on the NextGrid website.

What UCS wants to see in the final report

The final report should include an actionable roadmap of clean energy options that builds on Illinois’ successes to date.

The final report should also elevate the need for an equitable transition away from fossil fuels, and the benefits of expanding equitable access to solar and energy storage technologies. Ideally, the report will identify ways to increase the deployment of energy storage across Illinois, with the goal of integrating higher levels of renewable energy onto the grid.

Finally, the report should include a discussion of additional opportunities for user-friendly time-varying rates that could be considered, which will benefit the grid operator, consumers, and the environment. We want the NextGrid process to provide a clear pathway for Illinois to continue being a leader in clean energy and modern grid development.

Photo: UniEnergy Technologies/Wikimedia

Obstruction of Injustice: Making Mountains out of Molehills at the Cooper Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

The initial commentary in this series of posts described how a three-person panel formed by the Nuclear Regulatory Commission (NRC) to evaluate concerns raised by an NRC worker concluded that the agency violated its procedures, policies, and practices by closing out a safety issue and returning the Columbia Generating Station to normal regulatory oversight without proper justification.

I had received the non-public report by the panel in the mail. That envelope actually contained multiple panel reports. This commentary addresses a second report from another three-person panel. None of the members of this panel served on the Columbia Generating Station panel. Whereas that panel investigated contentions that the NRC improperly dismissed safety concerns, this panel investigated contentions that the NRC improperly sanctioned Cooper for issues that did not violate any federal regulations or requirements. This panel also substantiated the contentions and concluded that the NRC lacked justification for its actions. When will the injustices end?

Mountains at Cooper

The NRC conducted its Problem Identification and Resolution inspection at the Cooper nuclear plant in Brownville, Nebraska, from June 12 through June 29, 2017. The inspection report, dated August 7, 2017, identified five violations of regulatory requirements.

An NRC staffer subsequently submitted a Differing Professional Opinion (DPO) contending that the violations were inappropriate. The basis for this contention was that there were no regulatory requirements applicable to the issues; thus, an owner could not possibly violate a non-existent requirement.

Molehills at Cooper

Per procedure, the NRC formed a three-person panel to evaluate the contentions raised in the DPO. The DPO Panel evaluated the five violations cited in the August 7, 2017, inspection report.


  • Molehill #1: The inspection report included a GREEN finding for a violation of Criterion XVI in Appendix B to 10 CFR Part 50. Appendix B contains 18 quality assurance requirements. Criterion XVI requires owners to identify conditions adverse to quality (e.g., component failures, procedure deficiencies, equipment malfunctions, material defects, etc.) and fix them in a timely and effective manner. The DPO Panel “…determined that this issue does not represent a violation of 10 CFR 50 Appendix B, Criterion XVI, inasmuch as the licensee identified the cause and implemented corrective actions to preclude repetition.” In other words, one cannot violate a regulation when doing precisely what the regulation says to do.
  • Molehill #2: The inspection report included a GREEN finding for a violation of a technical specification requirement to provide evaluations of degraded components in a timely manner. The DPO Panel "…concluded that this issue does not represent a violation of regulatory requirements." This is a slightly different molehill. Molehill #1 involved not violating a requirement when one does exactly what the requirement says. Molehill #2 involved not violating a requirement that simply does not exist. A different kind of molehill, but a molehill nonetheless.
  • Molehill #3: The inspection report included another GREEN finding for another violation of Criterion XVI in Appendix B to 10 CFR Part 50. This time, the report contended that the plant owner failed to promptly identify adverse quality trends. The DPO Panel "concluded that monitoring for trends is not a requirement of Criterion XVI," reprising Molehill #2.
  • Mountain #1: The inspection report included another GREEN finding for failure to monitor emergency diesel generator performance shortcomings as required by the Maintenance Rule. The DPO Panel “…determined that the violation was correct as written and should not be retracted.” As my grandfather often said, even a blind squirrel finds an acorn every now and then.
  • Molehill #4: The inspection report included a Severity Level IV violation for violating 10 CFR Part 21 by not reporting a substantial safety hazard. The DPO Panel discovered that the substantial safety hazard was indeed reported to the NRC by the owner within specified time frames. The owner submitted a Licensee Event Report per 10 CFR 50.72. 10 CFR Part 21 and NRC's internal procedures explicitly allow owners to forego submitting a duplicate report when they have reported the substantial safety hazard via 10 CFR 50.72. The DPO Panel recommended that "…consideration be given to retracting the violation … because it had no impact on the ability of the NRC to provide regulatory oversight."

The DPO Panel wrote in the cover letter transmitting their report to the NRC Region IV Regional Administrator:

After considerable review effort, the Panel disagreed, at least in part, with the conclusions documented in the Cooper Nuclear Station Problem Identification and Resolution Inspection Report for four of the five findings.

The DPO Panel report was dated April 13, 2018. As of August 8, 2018, I could find no evidence that NRC Region IV has either remedied the miscues identified by the DPO originator and confirmed by the DPO Panel, or explained why sanctioning plant owners for following regulations is justified.

UCS Perspective

At Columbia Generating Station, NRC Region IV made a molehill out of a mountain by finding, and then overlooking, that the plant owner's efforts were "grossly inadequate" (quoting that DPO Panel's conclusion).

At Cooper Nuclear Station, NRC Region IV made mountains out of molehills by sanctioning the owner for violating non-existent requirements or for doing precisely what the regulations required.

Two half-hearted (substitute any other body part desired, although "elbow" doesn't work so well) efforts don't make one whole-hearted outcome. These two wrongs do not average out to just-right regulation.

NRC Region IV must be fixed. It must be made to see mountains as mountains and molehills as molehills. Confusing the two is unacceptable.

Mountains and molehills (M&Ms). M&Ms should be a candy treat and not a regulatory trick.

NOTE: NRC Region IV’s deplorable performance at Columbia and Cooper might have remained undetected and uncorrected but for the courage and conviction of NRC staffer(s) who put career(s) on the line by formally contesting the agency’s actions. When submitting DPOs, the originators have the option of making the final DPO package publicly available or not. In these two cases, I received the DPO Panel reports before the DPOs were closed. I do not know the identity of the DPO originator(s) and do not know whether the person(s) opted to make the final DPO packages (which consist of the original DPO, the DPO Panel report, and the agency’s final decision on the DPO issues) public or not. If the DPO originator(s) wanted to keep the DPO packages non-public, I betrayed that choice by posting the DPO Panel reports. If that’s the case, I apologize to the DPO originator(s). While my intentions were good, I would have abided by personal choice had I had any way to discern what it was.

Either way, I hope that putting a spotlight on these issues leads to positive outcomes in these two DPOs and lessens the need for future DPOs and posts about obstruction of injustice.

In the Final Stretch of the Farm Bill, Keep an Eye on Crop Insurance. (Crop Insurance?)

UCS Blog - The Equation (text only) -

A drought-stricken soybean field in Texas Photo: Bob Nichols, USDA/CC BY 2.0 (Flickr)

You’re not a farmer, but you’re invested in crop insurance.

The chances that you are a farmer are slim. After all, there are only 2.1 million farms in a nation of 323.1 million people. Yet you are deeply invested in the nation's farming enterprise. As a taxpayer, you back U.S. agriculture by financing a range of government programs that total around $20 billion annually. Those tax dollars fund such things as price supports, research, marketing, and crop insurance.

The case for crop insurance

It is in the interest of the 99% of us who don’t farm to help protect family farmers against two major hazards that are outside their control: market downturns and weather disasters. We do this through a “farm safety net” that consists of coupling price supports for agricultural commodities with crop and livestock insurance. Over 300 million acres are covered for $100 billion of insured liability annually. The legislative vehicle that authorizes these federal programs is a “farm bill” that is renewed every five years. The current iteration is due to be renewed by September 30 of this year.

It is a game of “Who is going to get your money?”

If—amid the current swirl of political news—you've not been following the scintillating path of the Farm Bill through Congress, the current status is that each chamber has passed a dramatically different draft of the bill. If Congress is to meet its deadline for reauthorization, it needs to reconcile the differing versions within the next few weeks. The two versions differ on whether to make the bill more equitable for family farmers and those seeking to get into farming, as the Senate version proposes, or to make it easier to abuse and defraud taxpayers to further enrich a very few already wealthy farmers, which the House version would enable. Specifically, the Senate version would set limits on the total subsidy payments that farms would be eligible to receive—at $250,000 per year per farm. Coupled with this is a measure to prevent the wealthiest of farmers from drawing on public support that they do not actually need: the cut-off for eligibility would be reduced from the present $900,000 annual Adjusted Gross Income (AGI) per farmer to $700,000. Additionally, the Senate version proposes tying eligibility for insurance benefits to the effectiveness of conservation practices.

These are welcome adjustments, even though they still fall short of the comprehensive reform needed to prevent open abuse of the farm safety net. For example, an earlier effort to reduce the insurance premium subsidy drawn by farmers with an AGI greater than $700,000 was defeated. Yes, the federal government doles out insurance payouts to farmers, plus the majority of the cost of their insurance premiums! More on the rationale for this in a bit, but the point here is that payment limit measures would level the playing field for small and medium family farms. This is just one of the issues that pits the interests of these farmers—and of taxpayers and fiscal conservatives—against the political power of large farmers and their agribusiness backers. As for the House version of the Farm Bill? Not only does it not include these sensible—if mild—reforms, it brazenly creates loopholes that would make non-farming relatives eligible for "per farmer" benefits.

We’ve done that. It doesn’t work. Shall we try something different?

If we keep doing more of the same, the cost of insurance will balloon and make some wealthy people even richer—but it doesn't have to. While the rationale for public support of family farmers is self-evident, in practice our crop insurance policies could be better. Over the past five years, federal crop insurance cost American taxpayers an average of $9 billion annually, according to analysis from the Congressional Budget Office (CBO). Drought and flood damage accounted for 72% of insurance payouts between 2001 and 2015, per accounting from USDA's Risk Management Agency (RMA). Climate change will only make this worse, as more frequent and extreme weather episodes drive up costs for the program. The CBO estimates—using scenarios developed by the Intergovernmental Panel on Climate Change—that crop insurance costs will increase by $1 billion annually through 2080.

This upward spiral is compounded by the fact that our current policy incentivizes waste—because it focuses on production regardless of environmental and other costs—instead of adoption of well-known, scientifically sound production practices that can minimize crop losses even under climate extremes. Adoption of the latter practices would result in a more resilient agricultural system that would reduce farm losses and the need for, and expense of, insurance to the public. We therefore should incentivize these kinds of scientifically informed and fiscally responsible systems. While the 2014 farm bill intended to do just this by requiring “conservation compliance,” the Office of the Inspector General has found that such compliance is weakly enforced.

What would make more sense?

It is reasonable for the public to expect the best farming practices in return for the farm safety net that their tax dollars provide. In fact, this could be done by connecting the different parts (“titles”) of the farm bill so they work together. For example, the Research Title generates information about the most sustainable farming practices, which are supported in large measure by the Conservation Title. Better coordination of the Crop Insurance Title with these two would make the entire farm bill more coherent and should reduce total costs to farmers, taxpayers and the environment.

To understand how we might do this, consider the nation's "corn belt," an expanse of 180 million acres dominated by a lawn of corn and soybeans, each grown in extensive "monocrops" (swaths of homogeneous stands of a single crop). As currently managed, these systems promote soil degradation and soil loss, water pollution, and runaway pest crises. In turn, this exposes farmers (and all of us, as their underwriters) to the risk inherent in betting on a single system to be successful under all circumstances all the time. Every one of the environmental crises listed above can be mitigated, if not eliminated, by adoption of well-researched "agroecological" methods. We can drive this shift in farm management by tying eligibility for government programs, including crop insurance, to verified implementation of practices that conserve soil, build soil health, sequester carbon, and increase biodiversity. These practices make farming systems more resilient to weather extremes and more profitable to farmers because they reduce reliance on purchased inputs. Further, more resilient farms would rely less on government supports like the federal crop and livestock insurance programs.

Perverse loopholes instead further enrich the largest farmers

At present, however, loopholes in our policies permit the largest and most profitable farms to receive both windfall payments and a disproportionate amount of farm bill subsidies. Because the current system rewards production, not resilience, it is "large, very large and corporate farms," just 4% of farms, that are the greatest beneficiaries of the public's support. These farms account for 55% of US agricultural output and earn $1 million or more in gross farm cash income each year. Such farms face large risks, of course, but they don't need public support to afford their insurance costs. It isn't just that the public provides farmers insurance, but that we make it cheap insurance: taxpayers subsidize 60% of crop insurance premiums. This is intended to incentivize farmers to buy insurance rather than force the government to come up with unbudgeted emergency payments every time major disasters strike. In practice, however, this has served to concentrate wealth. The 4% of farms receiving the lion's share of farm bill benefits have an operating profit margin greater than 10%. In contrast, the majority of small and midsize family farms—those which could readily adopt more diverse crop and livestock production methods, and which account for 45% of the nation's farm assets—operate with a profit margin of less than 10%. Those are the farmers who actually need the public's support. It is a situation that clearly calls for payment limits to cap the amount of farm bill benefits that any one farm can receive.
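To make the premium subsidy concrete, here is a minimal sketch of the cost split, assuming the roughly 60% federal subsidy rate cited above; the $10,000 premium is a made-up illustrative figure, not a number from any farm bill document.

```python
# Illustrative split of an annual crop insurance premium between the farmer
# and taxpayers, using the ~60% federal premium subsidy cited above.
premium = 10_000.00   # hypothetical annual premium (example figure only)
subsidy_rate = 0.60   # approximate federal subsidy share of the premium

taxpayer_share = premium * subsidy_rate
farmer_share = premium - taxpayer_share

print(f"Taxpayers pay ${taxpayer_share:,.0f}; the farmer pays ${farmer_share:,.0f}")
# Taxpayers pay $6,000; the farmer pays $4,000
```

On a hypothetical $10,000 premium, the public would cover $6,000 and the farmer only $4,000, which is why the subsidy matters far more, in dollar terms, to the largest operations with the largest premiums.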

Farmers can adopt and manage more resilient systems, and we should reward them for that

The 2014 Farm Bill—the most recent—introduced "Whole Farm Revenue" insurance for farmers wishing to diversify their farms (produce a variety of crops and livestock in integrated fashion). Diversified farming systems protect farmers from catastrophic losses the same way diversified stock portfolios protect investors. Such systems tend to protect soil, filter and better store water, recycle and make better use of fertilizer nutrients, have fewer pest problems (and thereby require fewer pesticides), and result in lower costs and higher profits. Further, because fewer external inputs (such as chemical fertilizers and pesticides) are purchased, farmers earn more, and more of those earnings are recirculated in the local rural economy. However, under our existing risk management approach, these systems have proven more difficult to insure than large monocrops. The latter have long actuarial records, permitting insurers to set premiums with greater certainty, and are familiar to and therefore preferred by bankers and Farm Service Agency personnel. But this is counterproductive, as it discourages the best farming practices and encourages the worst. Barriers such as these, and those encountered by new and beginning farmers (who must establish a credit and cropping history to gain access to loans and insurance premium discounts), must instead be streamlined with more informed farm bill criteria. The Whole Farm Revenue insurance program is one step toward incentivizing resilient diversified systems. Programs to support beginning and younger farmers, who are also more likely to use diversified systems, are another way to build more resilient farms. The Senate version of the current Farm Bill attempts to address these issues.

What you can do:

Demand That Members of Congress Who Will Reconcile the House and Senate Farm Bills Make Your Financial Backing of Farm Programs More Effective, Responsible and Equitable

Sign On: Even though the Farm Bill programs described above are directed to farmers, we all have a stake. As taxpayers, we back these programs and—as we’ve seen—it is important that the programs be equitable and balance production with environmental responsibility and resilience. You can help make it clear to Congress that you strongly support these goals by signing our statement urging farm bill conferees to adopt the Senate version of the bill. The “conferees” are the 47 members of Congress who will work with the currently disparate versions of the Farm Bill and decide the form of the final legislation. We will deliver this letter and your signatures to the chairs of the Senate and House Agriculture Committees as they begin deliberations.

Tell Conferees About the Farm Safety Net You Want: Members of Congress are visiting their districts right now! During the congressional recess that will last the remainder of this month, you can visit their offices, attend their town hall meetings, or call and write the offices of the Senate conferees, as well as those of the Republican and Democratic House Farm Bill conferees. Remember that as a citizen and taxpayer, your representatives are bound to take your calls and letters and consider your input. This is all the more important for direct constituents of Farm Bill conferees. When you call and write, be sure to make these particular points:

  • Adopt the Senate version of the Crop Insurance title (Title XI) because it improves and streamlines the Whole Farm Revenue Insurance program. Importantly, the Senate version recognizes the need to eliminate obstacles for new farmers and the "underserved" (in the Farm Bill this—tellingly—means farmers of color). To this end, support the House measure that defines "Beginning Farmers" as those who have farmed less than 10 years.
  • Adopt the Senate recommendation to link crop insurance eligibility with the performance of adopted conservation practices.
  • Make the farm safety net more equitable by closing loopholes in the Commodity Title (Title I) that permit abuse. Specifically, restrict payment eligibility to individuals actually farming; establish an AGI limit of $700,000 for eligibility for commodity payments; and set maximum commodity payments per farmer to $250,000 per year.
Photo: Bob Nichols, USDA/CC BY 2.0 (Flickr)

Hitting 1 Trillion. Think Clean Electrons, Not Stylish Electronics

UCS Blog - The Equation (text only) -

Photo: Johanna Montoya/Unsplash

You may have heard that Apple just passed the $1 trillion mark in market capitalization, the first company ever to reach those lofty heights. Less ink has been spilled on a different 1 trillion figure, but it's one well worth noting, too. According to Bloomberg NEF (BNEF), worldwide wind and solar capacity just shot past the headline-worthy figure of 1 trillion watts (that is, 1 million megawatts, or 1,000 gigawatts). And you can bet there'll be another trillion watts right behind.

1 trillion watts

According to BNEF, the tally by the end of second quarter of 2018 for wind and solar combined was 1,013 gigawatts (GW), or 1.013 million MW.

The path to 1 trillion (Source: Bloomberg NEF)

A few bonus noteworthy things about those data:

  • The new total is double what we had as of 2013, and more than quadruple 2010’s tally.
  • Wind has dominated the wind-solar pair for all of history (or at least since the data started in 2000) and accounts for 54% of the total, to solar's 46%. But solar has come on so strong that it looks poised to take the majority very soon.
  • Offshore wind is showing up! Particularly for those of us who have been tracking that technology for a long time, that light blue stripe on the graph is a beautiful thing to see.
Meanwhile, back at the ranch

For those of us in this country trying to understand the new trillion figure, one useful piece of context might be the total installed US power plant capacity, which, as it happens, is right around that 1-trillion-watt mark. According to the US Energy Information Administration, it’s about 1.1 million MW.

And, in terms of the wind and solar pieces of our own power mix:

[Chart: US wind and solar capacity. Photo: PublicSource]

The next trillion

Given how fortunes wax and wane, it’s tough to guess when Apple might be hitting the $2 trillion mark.

But for solar and wind it’s hard to imagine the number doing anything but growing. And, according to BNEF’s head of analysis, Albert Cheung, relative to the first trillion, the next trillion watts (1 terawatt) of wind and solar are going to come quick and cheap:

The first terawatt cost $2.3 trillion to build and took 40 years. The second terawatt, we reckon, will take five years and will cost half as much as the first one. So that’s how quickly this industry is evolving.

Imagine that: They’re projecting $1.23 trillion for the next trillion watts of solar and wind—barely more than an Apple’s worth.


24 Space-Based Missile Defense Satellites Cannot Defend Against ICBMs

UCS Blog - All Things Nuclear (text only) -

Articles citing a classified 2011 report by the Institute for Defense Analyses (IDA) have mistakenly suggested the report finds that a constellation of only 24 satellites could be used for space-based boost-phase missile defense.

This finding would be in contrast to many other studies that have shown that a space-based boost-phase missile defense system would require hundreds of interceptors in orbit to provide thin coverage of a small country like North Korea, and a thousand or more to provide thin coverage over larger regions of the Earth.

A 2011 letter from Missile Defense Agency (MDA) Director Patrick O'Reilly providing answers to questions by then-Senator Jon Kyl clarifies that the 24-satellite constellation discussed in the IDA study is not a boost-phase missile defense system, but is instead a midcourse system designed to engage anti-ship missiles:

The system discussed by IDA appears to be a response to concerns about anti-ship ballistic missiles that China is reported to be developing. It would have far too few satellites for boost-phase defense against missiles even from North Korea, and certainly from a more sophisticated adversary.

The MDA letter says the 24 satellites might carry four interceptors each. Adding interceptors to the satellites does not fix the coverage problem, however: If one of the four interceptors is out of range, all the interceptors are out of range, since they move through orbit together. As described below, the coverage of a space-based system depends on the number of satellites and how they are arranged in orbit, as well as the ability of the interceptors they carry to reach the threat in time.

While this configuration would place four interceptors over some parts of the Earth, it would leave very large gaps in the coverage between the satellites. An attacker could easily track the satellites to know when none were overhead, and then launch missiles through the gaps. As a result, a defense constellation with gaps would realistically provide no defense.

(The IDA report is “Space Base Interceptor (SBI) Element of Ballistic Missile Defense: Review of 2011 SBI Report,” Institute for Defense Analyses, Dr. James D. Thorne, February 29, 2016.)

Why boost phase?

The advantage of intercepting during a ballistic missile's boost phase—the first three to five minutes of flight, when its engines are burning—is that the missile is destroyed before it can release decoys and other countermeasures, which greatly complicate intercepting during the subsequent midcourse phase, when the missile's warhead is coasting through the vacuum of space. Because boost phase is short, interceptors must be close enough to the launch sites of target missiles to reach them during that time. This is the motivation for putting interceptors in low Earth orbits—with altitudes of a few hundred kilometers—that periodically pass over the missile's launch site.

The fact that the interceptor must reach a boosting missile within a few minutes limits how far the interceptor can be from the launching missile and still be effective. This short time therefore limits the size of the region a given interceptor can cover to a radius of several hundred kilometers.

An interceptor satellite in low Earth orbit cannot sit over one point on the Earth, but instead circles the Earth on its orbit. This means an interceptor that is within range of a missile launch site at one moment will quickly move out of range. As a result, having even one interceptor in the right place at the right time requires a large constellation of satellites so that as one interceptor moves out of range another one moves into range.

Multiple technical studies have shown that a space-based boost phase defense would require hundreds or thousands of orbiting satellites carrying interceptors, even to defend against a few missiles. A 2012 study by the National Academies of Science and Engineering found that space-based boost phase missile defense would cost 10 times as much as any ground-based alternative, with a price tag of $300 billion for an “austere” capability to counter a few North Korean missiles.

Designing the system instead to attack during the longer midcourse phase significantly increases the time available for the interceptor to reach its target and therefore increases the distance the interceptor can be from a launch and still get there in time. This increases the size of the region an interceptor can cover—up to several thousand kilometers (see below). Doing so reduces the number of interceptors required in the constellation from hundreds to dozens.

However, intercepting in midcourse negates the rationale for putting interceptors in space in the first place, which is being close enough to the launch site to attempt boost phase intercepts. Defending ships against anti-ship missiles would be done much better and more cheaply from the surface.

Calculation of Constellation Size

Figure 1 shows how to visualize a system intended to defend against anti-ship missiles during their midcourse phases. Consider an interceptor designed for midcourse defense on an orbit (white curve) that carries it over China (the red curve is the equator). If the interceptor is fired out of its orbit shortly after detection of the launch of an anti-ship missile with a range of about 2,000 km, it would have about 13 minutes to intercept before the missile re-entered the atmosphere. In those 13 minutes, the interceptor could travel a distance of about 3,000 km, which is the radius of the yellow circle. (This assumes ΔV = 4 km/s for the interceptor, in line with the assumptions in the National Academies of Science and Engineering study.)

The yellow circle therefore shows the size of the area this space-based midcourse interceptor could in principle defend against such an anti-ship missile.

Fig. 1.  The yellow circle shows the coverage area of a midcourse interceptor, as described in the post; it has a radius of 3,000 km. The dotted black circle shows the coverage area of a boost-phase interceptor; it has a radius of 800 km.
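The 3,000-km radius follows directly from the numbers stated above. As a rough sketch of the arithmetic (mine, not from the IDA study), using the 4 km/s ΔV and the 13-minute intercept window:

```python
# Rough check of the midcourse interceptor's reach radius, using only the
# assumptions stated in the post: a fly-out speed (delta-V) of 4 km/s and
# about 13 minutes before the anti-ship missile re-enters the atmosphere.
delta_v_km_s = 4.0   # interceptor fly-out speed (assumption from the post)
window_s = 13 * 60   # time available for a midcourse intercept, in seconds

reach_km = delta_v_km_s * window_s / 60 * 60 / 60  # simplify below instead
reach_km = delta_v_km_s * (window_s / 1.0) / 1.0   # distance = speed * time
reach_km = delta_v_km_s * window_s                  # km/s * s = km
print(f"reach radius ≈ {reach_km:,.0f} km")         # ≈ 3,120 km, i.e. ~3,000 km
```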

However, the interceptor satellite must be moving rapidly to stay in orbit; orbital velocity is 7.6 km/s at an altitude of 500 km. In less than 15 minutes the interceptor and the region it can defend will have moved more than 6,000 km along its orbit (the white line), and the interceptor will no longer be able to protect against missiles in the yellow circle in Figure 1.

To ensure an interceptor is always in the right place to defend that region, there must be multiple satellites in the same orbit so that one satellite moves into position to defend the region when the one in front of it moves out of position. For the situation described above and shown in Figure 1, that requires seven or eight satellites in the orbit.
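Both the 7.6 km/s orbital speed and the seven-or-eight-satellites-per-orbit figure can be reproduced with a back-of-the-envelope calculation. This is an illustrative sketch using the post's stated numbers (500 km altitude, 3,000 km reach), not the study's own method:

```python
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3   # mean Earth radius, m
alt = 500e3        # orbit altitude from the post, m
reach = 3000e3     # midcourse interceptor reach radius from the post, m

r = R_EARTH + alt
v = math.sqrt(MU / r)            # circular orbital speed at this altitude
circumference = 2 * math.pi * r  # length of one full orbit

# Each satellite can defend a stretch of orbit about 2 * reach long, so
# continuous coverage needs roughly circumference / (2 * reach) satellites.
sats_per_orbit = math.ceil(circumference / (2 * reach))

print(f"orbital speed ≈ {v / 1000:.1f} km/s")      # ≈ 7.6 km/s
print(f"satellites per orbit = {sats_per_orbit}")  # 8, i.e. "seven or eight"
```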

At the same time, the Earth is rotating under the orbits. After a few hours, China will no longer lie under this orbit, so to give constant interceptor coverage of this region, there must be interceptors in additional orbits that will pass over China after the Earth has rotated. Each of these orbits must also contain seven or eight interceptor satellites. For the case shown here, only two additional orbits are required (the other two white curves in Figure 1).

Eight satellites in each of these three orbits gives a total of 24 satellites in the constellation, maintaining coverage of one or perhaps two satellites in view of the sea east of China at all times. This constellation could therefore defend against only a small number of anti-ship missiles fired essentially simultaneously. Defending against more missiles would require a larger constellation.

If the interceptors are instead designed for boost-phase rather than midcourse defense, the area each interceptor could defend is much smaller. An interceptor with the same speed as the one described above could only reach out about 800 km during the boost time of a long-range missile; this is shown by the dashed black circle in Figure 1.

In this case, the interceptor covering a particular launch site will move out of range of that site very quickly—in about three and a half minutes. Maintaining one or two satellites over a launch site at these latitudes will therefore require 40 to 50 satellites in each of seven or eight orbits, for a total of 300 to 400 satellites.

The system described—40 to 50 satellites in each of seven or eight orbits—would only provide continuous coverage against launches in a narrow band of latitude, for example, over North Korea if the inclination of the orbits was 45 degrees (Fig. 2). For parts of the Earth between about 30 degrees north and south latitude there would be significant holes in the coverage. For areas above about 55 degrees north latitude, there would be no coverage. Broader coverage to include continuous coverage at other latitudes would require two to three times that many satellites—1,000 or more.
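A crude cross-check on the 300-to-400 figure: the area an interceptor can cover shrinks with the square of its reach, so cutting the reach from 3,000 km to 800 km inflates the 24-satellite constellation by roughly (3000/800)². This scaling argument is an approximation of mine, not the detailed orbit-by-orbit counting above:

```python
# Crude scaling check: constellation size grows with the inverse square of
# interceptor reach, since each satellite defends a circular area.
# The input numbers are those given in the post.
midcourse_sats = 24        # constellation size for the anti-ship mission
midcourse_reach_km = 3000  # reach of a midcourse interceptor
boost_reach_km = 800       # reach of a boost-phase interceptor

boost_sats = midcourse_sats * (midcourse_reach_km / boost_reach_km) ** 2
print(f"boost-phase constellation ≈ {boost_sats:.0f} satellites")
# Lands within the 300-to-400 range estimated in the post.
```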

As discussed above, defending against more than one or two nearly simultaneous launches would require a much larger constellation.

Fig. 2. The figure shows the ground coverage (gray areas) of interceptor satellites in seven equally spaced orbital planes with inclination of 45°, assuming the satellites can reach laterally 800 km as they de-orbit. The two dark lines are the ground tracks of two of the satellites in neighboring planes. This constellation can provide complete ground coverage for areas between about 30° and 50° latitude (both north and south), less coverage below 30°, and no coverage above about 55°.

For additional comments on the IDA study, see Part 2 of this post.

More Comments on the IDA Boost-Phase Missile Defense Study

UCS Blog - All Things Nuclear (text only) -

Part 1 of this post discusses one aspect of the 2011 letter from Missile Defense Agency (MDA) to then-Senator Kyl about the IDA study of space-based missile defense. The letter raises several additional issues, which I comment on here.

  1. Vulnerability of missile defense satellites to anti-satellite (ASAT) attack

To be able to reach missiles shortly after launch, space-based interceptors (SBI) must be in low-altitude orbits; typical altitudes discussed are 300 to 500 km. At the low end of this range atmospheric drag is high enough to give very short orbital lifetimes for the SBI unless they carry fuel to actively compensate for the drag. That may not be needed for orbits near 500 km.

Interceptors at these low altitudes can be easily tracked using ground-based radars and optical telescopes. They can also be reached with relatively cheap short-range and medium-range missiles; if these missiles carry homing kill vehicles, such as those used for ground-based midcourse missile defenses, they could be used to destroy the space-based interceptors. Just before a long-range missile attack, an adversary could launch an anti-satellite attack on the space-based interceptors to punch a hole in the defense constellation through which the adversary could then launch a long-range missile.

Alternately, an adversary that did not want to allow the United States to deploy space-based missile defense could shoot space-based interceptors down shortly after they were deployed.

The IDA report says that the satellites could be designed to defend themselves against such attacks. How might that work?

Since the ASAT interceptor would be lighter and more maneuverable than the SBI, the satellite could not rely on maneuvering to avoid being destroyed.

A satellite carrying a single interceptor could not defend itself by attacking the ASAT, for two reasons. First, the boost phase of a short- or medium-range missile is much shorter than that of a long-range missile, and would be too short for an interceptor designed for boost-phase interception to engage. Second, even if the SBI was designed to have sensors to allow intercept in midcourse as well as boost phase, using the SBI to defend against the ASAT weapon would remove the interceptor from orbit and the ASAT weapon would have done its job by removing the working SBI from the constellation. A workable defensive strategy would require at least two interceptors in each position, one to defend against ASAT weapons and one to perform the missile defense mission.

The IDA report assumes the interceptor satellites it describes to defend ships would each carry four interceptors. If the system is meant to defend against ASAT attacks, some of the four interceptors must be designed for midcourse intercepts. The satellite could carry at most three such interceptors, since at least one interceptor must be designed for the boost-phase mission of the defense. If an adversary wanted to punch a hole in the constellation, it could launch four ASAT weapons at the satellite and overwhelm the defending interceptors (recall that the ASAT weapons are launched on relatively cheap short- or medium-range missiles).

In addition, an ASAT attack could well be successful even if the ASAT was hit by an interceptor. If an interceptor defending the SBI hit an approaching ASAT it would break the ASAT into a debris cloud that would follow the trajectory of the original center of mass of the ASAT. If this intercept happened after the ASAT weapon’s course was set to collide with the satellite, the debris cloud would continue in that direction. If debris from this cloud hit the satellite it would very likely destroy it.

  2. Multiple interceptors per satellite

It is important to keep in mind that adding multiple interceptors to a defense satellite greatly increases the satellite’s mass, which increases its launch cost and overall cost.

The vast majority of the mass of a space-based interceptor is the fuel needed to accelerate the interceptor out of its orbit and to maneuver to hit the missile (which is itself maneuvering during its boost phase, accelerating and steering). For example, the American Physical Society's study assumes the empty kill vehicle of the interceptor (the sensor, thrusters, valves, etc.) is only 60 kg, but the fueled interceptor would have a mass of more than 800 kg.

Adding a second interceptor to the defense satellite would add another 800 kg to the overall mass. A satellite with four interceptors and a “garage” that included the solar panels and communication equipment could have a total mass of three to four tons.
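The jump from a 60 kg kill vehicle to a fueled interceptor of more than 800 kg is what the Tsiolkovsky rocket equation predicts. The exhaust velocity and total delta-V below are illustrative assumptions of mine, not figures from the APS study:

```python
import math

# Tsiolkovsky rocket-equation sketch of why the fueled interceptor is so
# much heavier than its empty kill vehicle: m_wet = m_dry * exp(dV / v_e).
m_dry = 60.0      # empty kill vehicle mass, kg (from the post)
v_exhaust = 3.0   # km/s, a typical solid-propellant exhaust velocity (assumed)
delta_v = 8.0     # km/s, total fly-out plus maneuvering budget (assumed)

m_wet = m_dry * math.exp(delta_v / v_exhaust)
print(f"fueled interceptor ≈ {m_wet:.0f} kg")
# ≈ 860 kg under these assumptions, consistent with "more than 800 kg".
```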

  3. Space debris creation

Senator Kyl asked the MDA to comment on whether space-based missile defense would create “significant permanent orbital debris.” The MDA answer indicated that at least for one mechanism of debris creation (that of an intercept of a long-range missile), the system could be designed to not generate long-lived debris.

However, there are at least three different potential debris-creating mechanisms to consider:

  • Intercepting a missile with an SBI

When two compact objects collide at very high speed, the objects break into two expanding clouds of debris that follow the trajectories of the center of mass of the original objects. In this case the debris cloud from the interceptor will likely have a center of mass speed greater than Earth escape velocity (11.2 km/s) and most of the debris will therefore not go into orbit or fall back to Earth. Debris from the missile will be on a suborbital trajectory; it will fall back to Earth and not create persistent debris.

  • Using an SBI as an anti-satellite weapon

If equipped with an appropriate sensor, the space-based interceptor could home on and destroy satellites. Because of the high interceptor speed needed for boost phase defense, the SBI could reach satellites not only in low Earth orbits (LEO), but also those in semi-synchronous orbits (navigation satellites) and in geosynchronous orbits (communication and early warning satellites). Destroying a satellite on orbit could add huge amounts of persistent debris to these orbits.

At altitudes above about 800 km, where most LEO satellites orbit, the debris from a destroyed satellite would remain in orbit for decades or centuries. The lifetime of debris in geosynchronous and semi-synchronous orbits is essentially infinite.

China’s ASAT test in 2007 created more than 3,000 pieces of debris that have been tracked from the ground—these make up more than 20% of the total tracked debris in LEO. The test also created hundreds of thousands of additional pieces of debris that are too small to be tracked (smaller than about 5 cm) but that can still damage or destroy objects they hit because of their high speed.

Yet the satellite destroyed in the 2007 test had a mass of less than a ton. If a ten-ton satellite—for example, a spy satellite—were destroyed, it could create more than half a million pieces of debris larger than 1 cm in size. This one event could more than double the total amount of large debris in LEO, which would greatly increase the risk of damage to satellites.

  • Destroying an SBI with a ground-based ASAT weapon

As discussed above, an adversary might attack a space-based interceptor with a ground-based kinetic ASAT weapon. Assuming the non-fuel mass of the SBI (with garage) is 300 kg, the destruction of the satellite could create more than 50,000 orbiting objects larger than 5 mm in size.
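The 50,000-object figure is in line with the NASA standard breakup model, an empirical relation for fragments from catastrophic collisions. Applying that particular model here is my assumption; the post does not say how its estimate was made:

```python
# NASA standard breakup model for a catastrophic collision:
#   N(>L) ≈ 0.1 * M**0.75 * L**(-1.71)
# where M is the mass involved (kg) and L is the characteristic
# fragment size (m). Using this model here is an assumption.
def fragments_larger_than(mass_kg, size_m):
    """Estimated number of fragments larger than size_m across the cloud."""
    return 0.1 * mass_kg ** 0.75 * size_m ** -1.71

n = fragments_larger_than(300.0, 0.005)  # 300 kg SBI, 5 mm size threshold
print(f"fragments > 5 mm ≈ {n:,.0f}")
# ≈ 62,000, consistent with the "more than 50,000" figure above.
```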

If the SBI was orbiting at an altitude of between 400 and 500 km, the lifetime of most of these objects would be short, so this debris would not be considered persistent. However, as this debris decayed from orbit it would increase the flux of debris passing through the orbit of the International Space Station (ISS), which circles the Earth at an altitude of about 400 km. Because the ISS orbits at a low altitude, it is in a region with little debris, since the residual atmospheric density causes debris there to decay quickly. As a result, the additional debris from the SBI passing through this region could represent a significant increase.

In particular, if the SBI were in a 500-km orbit, the destruction of a single SBI could increase the flux of debris larger than 5 mm at the altitude of the ISS by more than 10% for three to four months (at low solar activity) or two to three months at high solar activity. An actual attack might, of course, involve destroying more than one SBI, which would increase this flux.

Rick Perry Rejects Facts in Favor of Coal and Nuclear Bailouts

UCS Blog - The Equation (text only) -

Photo: Greg Goebel/Wikimedia Commons

Much has been written on coal and coal miners since the president began campaigning in earnest in 2016. Since taking office, he has continued that dishonest and dangerous rhetoric—and has directed his agencies to do something. Anything. Except, of course, anything that represents real solutions for coal miners and their communities, instead proposing (initially at least) to cut federal programs that invest in those communities.

The president continues to push for a misguided federal bailout of the coal industry—a blatant political payoff to campaign donors using taxpayer money with no long-term solutions for coal workers. The latest shiny object masquerading as reasoning? National security. But as we know, bailing out uneconomic coal plants only exacerbates the real national security issues brought on by climate change, while continuing to saddle our country with the public health impacts of coal-fired electricity—which hurt real people in real communities.

As is typical with this administration, substance and science and evidence are inconsequential compared to ideology, and their attempts to bail out money-losing coal and nuclear plants are no exception. Here’s a quick take on how we got here and what to expect next.

Let’s see what sticks…

The administration didn’t exactly hit the ground running after the 2016 election—no one bothered to show up at the Department of Energy until after Thanksgiving of 2016, even though career staff were readily available and prepared to brief the incoming administration on the important work of the agency. But by the spring, it had become clear that Energy Secretary Rick Perry would be the front-man in leading the charge for a federal bailout of coal and nuclear plants. His shifting rhetoric and poor justifications for using consumers’ money to prop up uneconomic coal plants suggest that he and his inner circle are desperate to find an argument that sticks and survives legal challenges.


So, the game of whack-a-mole continues.

False arguments

In short, the administration is proposing to use emergency authorities to force grid operators and consumers to buy electricity from uneconomic coal and nuclear plants. Let’s break down the arguments one by one.

Reliability. Despite claims to the contrary, there is no reliability crisis. Lost in the rhetoric around the need for baseload resources is the fact that grid operators already have systems in place to ensure there is an adequate supply of electricity when needed. The North American Electric Reliability Corporation (NERC) projects more-than-adequate reserve margins in almost every region of the country—a few areas of concern, but certainly no crisis. PJM Interconnection has repeatedly stated there is no threat to reliability from plant retirements (because it studies the system impacts of every proposed plant retirement before approving it):

“Our analysis of the recently announced planned deactivations of certain nuclear plants has determined that there is no immediate threat to system reliability. Markets have helped to establish a reliable grid with historically low prices. Any federal intervention in the market to order customers to buy electricity from specific power plants would be damaging to the markets and therefore costly to consumers.”

Resilience. Despite claims to the contrary, coal and nuclear plants do not offer additional system resilience because of onsite fuel stockpiles. It turns out, coal piles and equipment can freeze in cold weather, and flooding from extreme rain can affect power plant operations and prevent delivery of coal. A new report suggests that the administration is focusing on the wrong thing (fuel stockpiles) and should instead be focusing on performance-based metrics looking at the overall electricity system. For starters, between 2012 and 2016, a whopping 0.00007 percent of electricity disruptions were caused by fuel supply problems.

National Security. Arguments are now being couched in terms of national security and cyberattacks on the grid. The thing is, coal and nuclear facilities are vulnerable to cyberattacks just like other parts of the electricity grid, a fact completely absent from the leaked memo. Obviously, everyone cares about national security, but there is zero evidence to support the idea that keeping uneconomic power plants online will make us safer.

What we don’t know

Much uncertainty remains about Perry’s latest attempt to make something stick. UCS will keep an eye out for answers to important questions that remain, like:

  • Who will pay? Will DOE ask Congress to authorize funding for the bailout, meaning taxpayers get to foot the bill? Or will DOE ask FERC to use its power under the Federal Power Act, implying an additional cost to consumers? In either case, hold on to your wallet.
  • How much will it cost? In short, no one really knows, because DOE’s plan is light on details. Estimates range from $17 to $35 billion (with a “b”) per year according to recent studies.
  • Which power plants will qualify? Will every single coal and nuclear plant qualify for a handout? Only those that are “near” military installations and could somehow be tied to the administration’s national security rationale? Only the ones that are losing money? Only the ones that donated to the president’s campaign?
  • How will qualifying plants get paid? How would the bailout be structured and how exactly would owners of money-losing plants get compensated?
Perry charges ahead—and we must be relentless in our opposition

As we continue to wait for additional details about DOE’s bailout proposal, we are gearing up for a fight. Led by Secretary Perry, the administration continues to make false and misleading arguments about the purported need to keep uneconomic plants from retiring early—and this issue will be with us as long as the current president is in office. Perry has long since dropped any pretense of caring about market economics or of using actual information to inform his proposals. In response to congressional questioning last fall, Perry remarked:

“I think you take costs in to account, but what’s the cost of freedom? What does it cost to build a system to keep America free?” -Secretary Rick Perry, 12 October 2017

When the facts don’t support your argument, you’re forced to rely on empty bumper-sticker statements like this one to make your point.

And stack the deck by putting biased people who support your ideas in decision-making positions. We’ll be watching closely as the process unfolds for nominating someone to fill the vacant seat at FERC.

At UCS, we’re going to continue the fight to hold the administration accountable and stop this misguided and disastrous proposal from being implemented. The facts are on our side—there is no grid reliability crisis and no grid resiliency crisis, but there is a climate crisis, and bailing out coal plants will only add to the climate crisis with real adverse consequences to the economy and public health. Stand with us.



Science Prevails in the Courts as Chlorpyrifos Ban Becomes Likely

UCS Blog - The Equation (text only) -

Photo: Will Fuller/CC BY-NC-ND 2.0 (Flickr)

Today, children, farmworkers, and the rest of us won big in the Ninth Circuit Court of Appeals, as the court ordered the EPA to finalize its proposed ban on the insecticide chlorpyrifos. Ultimately, the court determined that the EPA’s 2017 refusal to ban the chemical was unlawful because the agency failed to justify keeping chlorpyrifos on the market even as the scientific evidence clearly pointed to a link between chlorpyrifos exposure and neurodevelopmental damage in children, as well as further risks to farmworkers and users of rural drinking water.

Under the Federal Food, Drug, and Cosmetic Act (FFDCA), the EPA is required to revoke a pesticide’s tolerances (effectively banning it) when it cannot find with “reasonable certainty” that the pesticide is safe. The court found that when former Administrator Pruitt refused to ban the chemical, he contradicted the work of the agency’s own scientists, who had found that it posed extensive health risks to children. His failure to act accordingly violated the agency’s mandate under the FFDCA.

This attack on science was fueled by close relationships that Scott Pruitt and President Trump have with Dow Chemical Company, which makes chlorpyrifos. Unfortunately, this was just one of many recent EPA actions that not only lack justification and supporting analysis, but actively undermine the agency’s ability to protect public health—and in this case specifically, the health of children. Acting Administrator Wheeler should learn from this particular case that EPA’s decisions must be grounded in evidence, and that the public will continue to watch and demand as much.

The petition was filed by a coalition of environmental, labor, and health organizations. The EPA now has 60 days to ban chlorpyrifos.


Massachusetts Clean Energy Bill 2018: Continuing the Journey

UCS Blog - The Equation (text only) -

Photo: J. Rogers

On Massachusetts’ journey toward a clean, fair, and affordable energy future, the energy bill that just passed is an important waystation. But it can’t be an endpoint—not by a long shot, and not even for the near term; we need to get right back on the trail. So here are the successes to celebrate, the shortcomings to acknowledge, and why we need to saddle up for next year.

The good

The Massachusetts legislature ended its two-year session last week with a flurry of activity. So, where’d we end up, in terms of clean energy? The best news is that we got a bill (even that was in doubt). And there’s certainly stuff to celebrate in An Act to advance clean energy:

The bill also includes various other pieces. Some clarifications about what kinds of charges Massachusetts electric utilities are (or aren’t) allowed to hit solar customers with (given bad decisions earlier this year). A new “clean peak standard” aimed at bringing clean energy to bear (and avoiding the use of dirty fossil fuel-fired peaking plants) to address the highest demand times of the year. A requirement that gas companies figure out how much natural gas is leaking out of their systems.

An RPS increase and another move on offshore wind—two pieces worth celebrating (Credit: Derrick Z. Jackson)

The… less good

Equally notable, though, is what’s not in the final bill. And particularly stuff that was in the bills passed by one chamber or the other:

  • A strong RPS – The leading states are a whole lot more ambitious, RPS-wise, than Massachusetts, even with the bump-up: California, New York, and New Jersey all have requirements of 50% renewables by 2030. In Massachusetts, the senate’s version had a strong target, increasing the requirement 3% per year, which would have surpassed even those states’. But the final ended up basically where the house was: 2% per year. (And, though 2030 is a long way away, having the “compromise” bump the annual increase back down to 1% after 10 years is particularly irksome.)
  • A strong storage requirement – It’s good to have something storage-related on the books, something stronger than “aspirational”. The top states for energy storage, though, are up at 1,500 megawatts (MW) by 2025 (New York) or 2,000 MW by 2030 (New Jersey), and the NJ level of ambition is what the Massachusetts senate’s bill would have gotten us. Note, too, the units in the final Massachusetts bill: megawatts vs. megawatt-hours. If we’re looking at storage being available for something like several hours at a pop, simply dropping the “-hours” piece—having the requirement be in MW, not MWh—would have made the 1,000 number a much stronger target.
  • Solar fixes – Rooftop solar systems larger than residential ones (think businesses, municipalities) are stuck in much of the state because of the caps placed on how much can “net meter”, set as a percentage of each electric utility’s peak customer demand. There are also issues and opportunities around expanding solar access to lower-income households. Here again, the senate included great language on both… and that’s where it ended: with various barriers to solar firmly in place.
  • Appliance efficiency standards – Helping our appliances do more with less was the subject of a good bill that passed the house, but also fell by the wayside en route to the negotiated final.
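The megawatts-versus-megawatt-hours point in the storage bullet above comes down to simple arithmetic. Here is a rough sketch; the 4-hour discharge duration is an assumption chosen for illustration, not a figure from the bill:

```python
# Hypothetical back-of-envelope arithmetic (not from the bill text):
# an energy target in MWh is weaker than the same number in MW
# whenever the storage discharges over multiple hours.

def mw_equivalent(target_mwh: float, duration_hours: float) -> float:
    """Power capacity (MW) implied by an energy target (MWh)
    for storage that discharges over the given duration."""
    return target_mwh / duration_hours

# A 1,000 MWh target, met with assumed 4-hour batteries, implies only 250 MW
# of capacity, whereas a 1,000 MW target at 4 hours would mean 4,000 MWh.
print(mw_equivalent(1000, 4))   # MW implied by the MWh-denominated target
print(1000 * 4)                 # MWh implied by a MW-denominated target
```

That roughly 4x gap is why dropping the “-hours” would have made the same headline number a much stronger requirement.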

Then there’s the fact that the clean peak standard is an untested concept, toyed with in a couple of other states but never actually implemented. Trailblazing isn’t always bad, and dirty peaks are an issue, but there are probably better/simpler ways to tackle the problem (see, e.g., “A strong storage requirement”, above).

And there’s all kinds of important stuff around carbon pollution in sectors other than electricity, and around climate change more broadly, that didn’t make it. The new clean energy bill missed the chance to tackle transportation, for example, which accounts for 40% of our state’s emissions.

Why put the brakes on solar (and solar jobs)? (Credit: Audrey Eyring/UCS)

When measures come up short

There are certainly things to celebrate in the “good” list above. But it’s also true, as the “less good” list presents, that Massachusetts could have done much better than that. UCS’s president Ken Kimmell said, “Massachusetts scored with the energy bill passed today, but this game is far from over.” Other reactions were less favorable (see here and here, for example).

Ben Downing, the former state senator who was an architect of Massachusetts’s impressive 2016 energy bill, had some choice words on the process itself, the end-of-session crunch that he points out gives “special interests defending the status quo… an outsized voice.” Those special interests’ interests were certainly reflected in the bill’s “measured approach” to energy progress.

It’s telling that the chair of the house’s climate change committee, Rep. Frank Smizik, who has been a solid voice for climate action for at least the dozen years I’ve known him (but is retiring), couldn’t bring himself to vote for the bill, cast the lone dissenting vote it received in either chamber, and was less than flattering in his characterization of it.

But the senate climate champions who worked out the compromise (and were clearly not pleased with what got left on the cutting room floor) had comments that were particularly on point in terms of next steps.

Sen. Michael Barrett, a champion of carbon pricing, made it clear that this bill isn’t the endgame—and that energy now needs to be an every-session kind of thing.

And Sen. Marc Pacheco, the lead author of the very strong senate bill that fed into the compromise, promised that “The day after the session ends, my office will be beginning again to pull together clean energy legislation for the next session.”

Next stop? (Don’t stop.)

And those points should be the main takeaway. We’ve got what we’re going to get from the Massachusetts legislature for the 2017-2018 session, and we’re glad for what did make it through the sausage-making. The successes are a testament to UCS supporters, who sent thousands of messages to their legislators, and to the work of our many allies in this push, in the State House and far beyond.

But we should all be hungry for a whole lot more. Every single time the legislature meets. Including next year.

The energy sector is evolving so quickly, and climate impacts are too (including in Massachusetts), that if we’re standing still, we’re losing ground. There’s no way we should accept any suggestion that, because the legislature dealt with energy in one term, they shouldn’t the next term.

At the state level, as elsewhere, progress on climate and clean energy is a journey, not a destination. There’ll be waypoints along the way, steps forward—like the new Massachusetts energy bill. But none of those should be invitations to take off our boots and kick back. This stuff is too important to leave for later.

We’re not done till we’re done, and there’s no sign of doneness here. Saddle up.

Pipe Rupture at Surry Nuclear Plant Kills Four Workers

UCS Blog - All Things Nuclear (text only) -

Role of Regulation in Nuclear Plant Safety #7

Both reactors at the Surry nuclear plant near Williamsburg, Virginia operated at full power on December 9, 1986. Around 2:20 pm, a valve in a pipe between a steam generator on Unit 2 and its turbine inadvertently closed due to a re-assembly error following recent maintenance. The valve’s closure resulted in a low water level inside the steam generator, which triggered the automatic shutdown of the Unit 2 reactor. The rapid change from steady state operation at full power to zero power caused a transient as systems adjusted to the significantly changed conditions. About 40 seconds after the reactor trip, a bend in the pipe going to one of the feedwater pumps ruptured. The pressurized water jetting from the broken pipe flashed to steam. Several workers in the vicinity were seriously burned by the hot vapor. Over the next week, four workers died from their injuries.

Fig. 1 (Source: Washington Times, February 3, 1987)

While such a tragic accident cannot yield good news, the headline for a front-page article in the Washington Times newspaper about the accident (Fig. 1) widened the bad news to include the Nuclear Regulatory Commission (NRC), too.

The Event

The Surry Power Station has two pressurized water reactors (PWRs) designed by Westinghouse. Each PWR had a reactor vessel, three steam generators, and three reactor coolant pumps located inside a large, dry containment structure. Unit 1 went into commercial operation in December 1972 and Unit 2 followed in June 1973.

Steam flowed through pipes from the steam generators to the main turbine shown in the upper right corner of Figure 2. Steam exited the main turbine into the condenser where it was cooled down and converted back into water. The pumps of the condensate and feedwater systems recycled the water back to the steam generators.

Fig. 2 (Source: Nuclear Regulatory Commission NUREG-1150)

Figure 2 also illustrates the many emergency systems that are in standby mode during reactor operation. On the left-hand side of Figure 2 are the safety systems that provide makeup water to the reactor vessel and cooling water to the containment during an accident. In the lower right-hand corner is the auxiliary feedwater (AFW) system that steps in should the condensate and feedwater systems need help.

The condensate and feedwater systems are non-safety systems. They are needed for the reactor to make electricity. But the AFW system and other emergency systems function during accidents to cool the reactor core. Consequently, these are safety systems.

Both reactors at Surry operated at full power on Tuesday December 9, 1986. At approximately 2:20 pm that afternoon, the main steam trip valve (within the red rectangle in Figure 2) in the pipe between steam generator 2C inside containment and the main turbine closed unexpectedly.

Subsequent investigation determined that the valve had been improperly re-assembled following recent maintenance, enabling it to close without either a control signal or any need to do so.

The valve’s closure led to a low water level inside steam generator 2C. By design, this condition triggered the automatic insertion of control rods into the reactor core. The stoppage of flow through one steam line and the rapid drop from full power to zero power upset the balance between the steam flows leaving the steam generators and the feedwater flows into them. The perturbations from that transient caused the pipe to feedwater pump 2A to rupture (location approximated by the red cross in Figure 2) about 40 seconds later.

Figure 3 is a closeup of the condensate and feedwater systems, showing where the pipe ruptured. The condensate and condensate booster pumps are off the upper right side of the figure. Water from the condensate system flowed through feedwater heaters where steam extracted from the main turbine pre-warmed it to about 370°F en route to the steam generators. This 24-inch diameter piping (called a header) supplied the 18-inch diameter pipes to feedwater pumps 2A and 2B. The supply pipe to feedwater pump 2A featured a T-connection to the header, while a reducer connected the header to the 18-inch supply line to feedwater pump 2B. Water exiting the feedwater pumps passed through feedwater heaters for additional pre-warming before going to the steam generators inside containment.

Fig 3 (Source: Nuclear Regulatory Commission NUREG/CR-5632)

Water spewing from the broken pipe had already passed through the condensate and condensate booster pumps and some of the feedwater heaters. Its 370°F temperature was well above 212°F, but the 450 pounds per square inch pressure inside the pipe kept it from boiling. As this hot pressurized water left the pipe, the lower pressure let it flash to steam. The steam vapor burned several workers in the area. Four workers died from their injuries over the next week.
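The flashing described above can be estimated with a simple energy balance: the water’s sensible heat above the atmospheric boiling point vaporizes part of it. Here is a rough sketch using round-number water properties (approximations for illustration, not plant data):

```python
# Rough energy-balance sketch of "flashing": when ~370 °F pressurized water
# is suddenly released to atmospheric pressure, its sensible heat above the
# 212 °F boiling point converts part of it to steam. Property values below
# are round-number approximations, not plant data.

CP_WATER = 1.0      # Btu/(lb·°F), specific heat of liquid water (approx.)
H_FG_ATM = 970.0    # Btu/lb, latent heat of vaporization at 212 °F (approx.)
T_SAT_ATM = 212.0   # °F, boiling point at atmospheric pressure

def flash_fraction(t_initial_f: float) -> float:
    """Fraction of hot liquid that flashes to steam on release to 1 atm."""
    if t_initial_f <= T_SAT_ATM:
        return 0.0  # subcooled at atmospheric pressure: no flashing
    return CP_WATER * (t_initial_f - T_SAT_ATM) / H_FG_ATM

# Water escaping the ruptured feedwater line at ~370 °F:
print(f"{flash_fraction(370.0):.0%} of the escaping water flashes to steam")
```

Even a modest flash fraction releases a large volume of scalding vapor, which is consistent with the severe burns described in the event.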

As the steam vapor cooled, it condensed back into water. Water entered a computer card reader controlling access through a door about 50 feet away, shorting out the card reader system for the entire plant. Security personnel were posted at key doors to facilitate workers responding to the event until the card reader system was restored about 20 minutes later.

Water also seeped into a fire protection control panel and caused short circuits. Water sprayed from 68 fire suppression sprinkler heads. Some of this water flowed under the door into the cable tray room and leaked through seals around floor penetrations to drip onto panels in the control room below.

Water also seeped into the control panel for the carbon dioxide fire suppression system in the cable tray rooms, actuating it. An operator was trapped in the stairwell behind the control room, unable to exit because the failed card reader system had locked the doors closed. Experiencing trouble breathing as carbon dioxide filled the space, he escaped when an operator inside the control room heard his pounding on the door and opened it.

Figure 4 shows the section of piping that ruptured. The rupture occurred at a 90-degree bend in the 18-inch diameter pipe. Evaluations concluded that years of turbulent water flow through the piping gradually wore away the pipe’s metal wall, thinning it via a process called erosion/corrosion to the point where it was no longer able to withstand the pressure pulsations caused by the reactor trip. The plant owner voluntarily shut down the Unit 1 reactor on December 10 to inspect its piping for erosion/corrosion wear.
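The link between gradual wall thinning and eventual rupture can be sketched with Barlow’s formula for hoop stress in a pipe. The wall thicknesses below are illustrative assumptions, not measurements from the NRC evaluation; only the 18-inch diameter and roughly 450 psi pressure come from the event description:

```python
# Illustrative sketch (assumed thicknesses, not NRC data): Barlow's formula
# relates internal pressure and wall thickness to hoop stress, showing why
# erosion/corrosion thinning eventually leaves a pipe unable to hold pressure.

def hoop_stress_psi(pressure_psi: float, outer_diameter_in: float,
                    wall_thickness_in: float) -> float:
    """Approximate hoop (circumferential) stress: sigma = P * D / (2 * t)."""
    return pressure_psi * outer_diameter_in / (2.0 * wall_thickness_in)

P, D = 450.0, 18.0                    # psi and inches, from the event account
print(hoop_stress_psi(P, D, 0.50))    # assumed nominal ~0.5 in wall
print(hoop_stress_psi(P, D, 0.05))    # eroded to ~0.05 in: 10x the stress
```

As the wall thins, stress rises in inverse proportion, so a pipe that was comfortably within its material’s strength when new can fail under the same pressure, especially with the added pressure pulsations of a reactor trip.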

Fig. 4 (Source: Nuclear Regulatory Commission 1987 Annual Report)

Pre-Event Actions (and Inactions?)

The article accompanying the damning headline above described how the NRC staff had produced a report in June 1984—more than two years before the fatal accident—warning about the pipe rupture hazard, and it criticized the agency for taking no steps to manage the known risk. The article further explained that the NRC’s 1984 report was in response to a 1982 event at the Oconee nuclear plant in South Carolina where an eroded steam pipe had ruptured.

Indeed, the NRC’s Office for Analysis and Evaluation of Operational Data (AEOD) issued a report (AEOD/EA 16) titled “Erosion in Nuclear Power Plants” on June 11, 1984. The last sentence on page two stated “Data suggest that pipe ruptures may pose personnel (worker) safety issues.”

Indeed, a 24-inch diameter pipe that supplied steam to a feedwater heater on the Unit 2 reactor at Oconee had ruptured on June 28, 1982. Two workers in the vicinity suffered steam burns that required overnight hospitalization. As at Surry, the pipe ruptured at a 90-degree bend (elbow) due to erosion of the metal wall over time. Oconee had a maintenance program that periodically examined the piping ultrasonically.

That monitoring program identified pipe wall thinning of two elbows on Unit 3 in 1980 that were replaced. Monitoring performed in March 1982 on Unit 2 identified substantial erosion in the piping elbow that ruptured three months later. But the thinning was accepted because it was less than the company’s criterion for replacement. It’s not been determined whether prolonged operation at reduced power between March and June 1982 caused more rapid wear than anticipated or whether the ultrasonic inspection in March 1982 may have missed the thinnest wall thickness.

Post-Event Actions

The NRC dispatched an Augmented Inspection Team (AIT) to the Surry site to investigate the causes, consequences, and corrective actions. The AIT included a metallurgist and a water-hammer expert. Seven days after the fatal accident, the NRC issued Information Notice 86-106, “Feedwater Line Break,” to plant owners. The NRC issued the AIT report on February 10, 1987, and issued Supplements 1 and 2 to Information Notice 86-106 on February 13, 1987, and March 18, 1987, respectively.

The NRC did more than warn owners about the safety hazard. On July 9, 1987, the NRC issued Bulletin 87-01, “Thinning of Pipe Walls in Nuclear Power Plants,” to plant owners. The NRC required owners to respond within 60 days, describing the codes and standards to which safety-related and non-safety-related piping in the condensate and feedwater systems had been designed and fabricated, as well as the programs in place to monitor this piping for wall thinning due to erosion/corrosion.

And on April 22, 1988, the NRC issued Information Notice 88-17 to plant owners, summarizing the responses the agency had received to Bulletin 87-01.

UCS Perspective

Eleven days after a non-safety-related pipe ruptured on Oconee Unit 2, the NRC issued Information Notice 82-22, “Failures in Turbine Exhaust Lines,” to all plant owners about that event.

The June 1984 AEOD report was released publicly. The NRC’s efforts did call the nuclear industry’s attention to the matter, as evidenced by a report titled “Erosion/Corrosion in Nuclear Plant Steam Piping: Causes and Inspection Program Guidelines” issued in April 1985 by the Electric Power Research Institute.

Days before the NRC issued the AEOD report, the agency issued Information Notice 84-41, “IGSCC [Intergranular Stress Corrosion Cracking] in BWR [Boiling Water Reactor] Plants,” to plant owners about cracks discovered in safety system piping at Pilgrim and Browns Ferry.

As the Washington Times accurately reported, the NRC knew in the early 1980s that piping in safety and non-safety systems was vulnerable to degradation. The NRC focused on degradation of safety system piping, but also warned owners about degradation of non-safety system piping. The fatal accident at Surry in December 1986 resulted in the NRC expanding efforts it had required owners take for safety system piping to also cover piping in non-safety systems.

The NRC could have required owners to fight piping degradation in safety systems and non-safety systems concurrently. But history is full of two-front wars that were lost. Instead of taking that risk, the NRC triaged the hazard: it focused first on safety system piping and then followed up on non-safety system piping.

Had the NRC totally ignored the vulnerability of non-safety system piping to erosion/corrosion until the accident at Surry, this event would reflect under-regulation.

Had the NRC compelled owners to address piping degradation in safety and non-safety systems concurrently, this event would reflect over-regulation.

By pursuing resolution of all known hazards in a timely manner, this event reflects just right regulation.

Postscript: The objective of this series of commentaries is to draw lessons from the past that can, and should, inform future decisions. Such a lesson from this event involves the distinction between safety and non-safety systems. The nuclear industry often views that distinction as also being a virtual wall between what the NRC can and cannot monitor.

As this event and others like it demonstrate, the NRC must not turn its back on non-safety system issues. How non-safety systems are maintained can provide meaningful insights on maintenance of safety systems. Unnecessary or avoidable failures of non-safety systems can challenge performance of safety systems. So, while it is important that the NRC not allocate too much attention to non-safety systems, driving that attention to zero will have adverse nuclear safety implications. As some wise organization has suggested, the NRC should not allocate too little attention or too much attention to non-safety systems, but the just right amount.

* * *

UCS’s Role of Regulation in Nuclear Plant Safety series of blog posts is intended to help readers understand when regulation played too small a role, too large a role, or just the right role in nuclear plant safety.

