Combined UCS Blogs

What Happened During the Hasty White House Review of EPA’s Science Restriction Rule?

UCS Blog - The Equation (text only) -

We already know that the production of Administrator Scott Pruitt’s rule to restrict science at the EPA was purely political, but it’s possible that there’s a whole new layer of politics that went on at the White House level as well.

Source: reginfo.gov

On Thursday, April 19, the White House Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs (OIRA) received a draft proposed rule from the EPA titled "Strengthening Transparency and Validity in Regulatory Science." Administrator Pruitt signed it promptly the following Tuesday. At first, the OMB's website showed the review as completed on Wednesday (meaning that Pruitt had signed the document before it was cleared by the White House), but later in the week OMB backdated its review completion date to Monday. That means not only that there is likely some funny business going on between the EPA and OIRA, but also that OIRA had four days (and little more than one to two full work days) to review a proposed rule that would dramatically impact the way the EPA uses science in future rulemakings.

A UCS comparison of the rule before and after its few days of OIRA review shows that it grew by four pages and that its scope was narrowed to rules considered "significant regulatory actions" and to those involving dose response data and models that underlie "pivotal regulatory science." While the docket does not currently include details on who made those changes, if OIRA staff were responsible for changing the scientific basis of this rule, there is certainly reason to be concerned. White House review under Executive Order 12866 is supposed to be limited to cost-benefit analysis and overlap with other agencies and should in no way change the scientific content of the agency's work. Interference from the White House in this area compounds the affront to scientific integrity at the EPA that this rule already represents.

We compared the start and conclusion documents from OIRA's EO 12866 review of EPA's science restriction policy and noticed that post-review changes (those in blue) included narrowing its scope to cover the dose response data and models underlying "pivotal regulatory science." Source: regulations.gov

 

The policy post-OIRA review also included definitions for “dose response data and models” and “pivotal regulatory science.” Source: regulations.gov

Not only are there questions about OIRA's role in changing the content of the rule, but the rule's mad dash through White House review is not normal even by Administrator Pruitt's standards. OIRA review of proposed rulemakings, required under President Bill Clinton's Executive Order 12866, is supposed to take under 90 days, with the possibility of extending to 120 days if absolutely needed. If we operate under the assumption that a 90-day review is an adequate amount of time for OIRA to review a rule, make sure the costs and benefits have been thoroughly analyzed, allow time for interagency review and meetings with stakeholders, and then suggest changes to the agency, exactly how inadequate is a one- to two-day review? According to OIRA's regulatory review data, in the time Pruitt has been at EPA, OIRA has reviewed 41 EPA rules that were not economically significant (including the policy in question). The average review time for those rules? 52 days. In fact, only 6 other EPA rules have gone through review in less than a week in this period, several of which were addenda to existing rules (like definitions, delays, or stays).
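For readers who want to check these numbers themselves, here is a minimal sketch of the calculation, assuming a CSV export of EO 12866 review records downloaded from reginfo.gov. The file name and column headers are illustrative guesses, not reginfo.gov's actual export format, and the February 17, 2017 cutoff corresponds to Pruitt's confirmation as administrator.

```python
# Sketch of the review-time comparison described above.
# The CSV file name and column names are assumptions for illustration;
# reginfo.gov's actual export format may differ.
import csv
from datetime import date, datetime

PRUITT_START = date(2017, 2, 17)  # Pruitt confirmed as EPA administrator
FMT = "%m/%d/%Y"

durations = []
with open("eo12866_epa_reviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        received = datetime.strptime(row["Date Received"], FMT).date()
        concluded = datetime.strptime(row["Date Completed"], FMT).date()
        if row["Economically Significant"] == "No" and received >= PRUITT_START:
            durations.append((concluded - received).days)

print(f"Non-economically-significant EPA rules reviewed: {len(durations)}")
print(f"Average review time: {sum(durations) / len(durations):.0f} days")
print(f"Reviews completed in under a week: {sum(d < 7 for d in durations)}")
```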

So what’s the problem with such a quick turnaround from the White House?

UCS has in the past taken issue with extensive delays in OIRA review, especially under the Obama administration, that have held up important science-based public health protections in regulatory limbo. While an overly long OIRA review period bogs down the regulatory process, a dramatically swift review may allow rules to be proposed without the proper analysis to back them up. This is precisely what we're now seeing with the EPA's proposal to restrict science. In it, the EPA claims the rule is not economically significant, yet cites no analysis of its potential costs and benefits. It calls for a system to make scientific data publicly available but points to no existing database that could handle all of EPA's "pivotal regulatory science." It does not include protections for personal privacy or confidential business information, and it gives the EPA administrator the ability to exempt rules from the requirements on a case-by-case basis. And finally, it reveals just how feeble the rule is by posing 25 substantive questions (almost four pages' worth) for commenters to answer in 30 days, the shortest comment period window possible for a rule.

UCS has submitted a comment to the EPA asking for an extension of this woefully insufficient comment period for such a sweeping rule, and we are joined by many other organizations doing the same. House Science Committee Ranking Member Eddie Bernice Johnson and Energy & Commerce Environment Subcommittee Ranking Member Paul Tonko were joined by 63 Democratic colleagues on a letter calling for a 90-day comment period because "regardless of viewpoint, there is agreement that the proposed rule would be a significant change in how the agency considers science in policymaking." It is imperative that Administrator Pruitt heed this call to ensure all stakeholders have a chance to participate meaningfully in this process, which has been bungled in a variety of ways since this rule began as just a twinkle in Lamar Smith's eye.

Photo: Matthew Platt/CC BY-SA (Flickr)

What is the Connection Between New Mobility and Transportation Equity?

UCS Blog - The Equation (text only) -

My name is Richard Ezike, and I work at the interface between new mobility and transportation equity. When I talk about "new mobility" in my research, I refer to what is arguably the most disruptive technology in transportation in the last century: the autonomous vehicle (AV). These cars are already being tested on America's roadways in Chandler, Arizona; Pittsburgh, Pennsylvania; and Silicon Valley. Companies like Uber, Lyft, Waymo, Ford, and General Motors are investing billions of dollars to bring this technology quickly to market, and they are touting widespread adoption within the next 5 to 10 years.

However, more discussion is needed on the impacts of these cars on transportation equity because this nexus is often ignored in the spaces where AVs are being debated and discussed. The million-dollar question is: Will AVs help or hurt the mobility of low-income people and people of color? The pursuit to tackle that question has led me here to the Union of Concerned Scientists (UCS).

My project works to address this question from two angles. First, we are working with a transportation consulting firm to study the potential impact of self-driving technology on access, equity, congestion, and transit utilization in the DC Metro Area, where I live and work. They are using a travel demand model developed by the region's metropolitan planning organization (MPO), the National Capital Region Transportation Planning Board, to predict the impacts of AVs on vehicle miles traveled, vehicle trips, and transit trips in 2040. By modifying the inputs to the model, we can simulate the impacts of self-driving cars on future transportation network performance. The detailed nature of the model gives us insight into specific neighborhoods that may gain or lose under a variety of future scenarios.
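To make the scenario approach concrete, below is a toy sketch of how modified inputs can be propagated to outputs. It is emphatically not the TPB regional model: the input names, assumed changes, and elasticities are hypothetical placeholders chosen only to illustrate the mechanics of scenario testing.

```python
# Toy scenario calculation, NOT the TPB travel demand model.
# All input names, changes, and elasticities are hypothetical placeholders.

BASELINE_2040 = {"vmt": 1.00, "transit_trips": 1.00}  # outputs normalized to 1.0

# Assumed fractional changes to model inputs under an AV scenario.
AV_SCENARIO = {
    "perceived_time_cost": -0.30,  # riding feels less burdensome than driving
    "auto_operating_cost": -0.20,  # shared AV fleets lower per-mile costs
    "parking_cost":        -0.50,  # AVs can self-park or stay in service
}

# Assumed elasticities of each output with respect to each input.
ELASTICITY = {
    "vmt":           {"perceived_time_cost": -0.5, "auto_operating_cost": -0.3, "parking_cost": -0.1},
    "transit_trips": {"perceived_time_cost":  0.8, "auto_operating_cost":  0.4, "parking_cost":  0.3},
}

def apply_scenario(baseline, scenario, elasticity):
    """Scale each baseline output by (1 + input change) ** elasticity for every input."""
    results = {}
    for output, base in baseline.items():
        factor = 1.0
        for inp, change in scenario.items():
            factor *= (1.0 + change) ** elasticity[output][inp]
        results[output] = base * factor
    return results

for output, value in apply_scenario(BASELINE_2040, AV_SCENARIO, ELASTICITY).items():
    print(f"{output}: {value:.2f} x baseline")
```

A real travel demand model does this with far more structure (trip generation, distribution, mode choice, and assignment across thousands of zones), which is what makes neighborhood-level results possible.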

Second, we are engaging stakeholders to learn their thoughts and concerns about AVs. To date, I have interviewed over 40 stakeholders including local government officials, car dealers, community leaders, and policy makers. I have asked them about the potential impacts of AVs on traffic, labor, the environment, and the economy. In early 2019, we plan to convene stakeholders to discuss our research findings, get feedback, and generate policy recommendations to share with local leaders and community groups.

Using this two-pronged approach will provide our community with both technical and community-based knowledge that will assist in the planning of how AVs can be deployed safely and equitably.

Defining transportation equity

Historically, members of disadvantaged groups (low-income residents, minorities, children, persons with disabilities, and older adults) have experienced the most negative impacts of the transportation system. These groups have lower car ownership levels, the longest commute times, and the highest transportation costs. They also tend to live near inadequate infrastructure, which creates unsafe conditions for cycling and walking and therefore an increased number of fatalities involving pedestrians and cyclists.

Low-income and minority communities are also more likely to be located near highways and other transportation facilities that produce local air pollution; to suffer from negative health effects such as asthma; and to have the least accessibility to key destinations such as parks, hospitals, and grocery stores selling healthy food. Addressing these issues requires a dedicated effort to address equity in the transportation system to provide equal access for all people.

Equity is defined as the fairness, impartiality, and justness of outcomes, including how positive and negative impacts are distributed. Within transportation and infrastructure, the decisions made in the planning stages can significantly affect the level of equity achieved in communities.

Depending on how it is deployed, autonomous vehicle technology could reduce transportation inequities; but without guidance, the detrimental effects on disadvantaged groups may only get worse. Moreover, solving these problems is not a purely technical challenge; it requires meaningful engagement and input from communities with a stake in the outcomes, so they can have a voice in the way their city is developed. Historically, public engagement has been a secondary consideration, although many MPOs are stepping up their efforts. Based on work by Dr. Alex Karner, effective engagement can be broken into three steps:

  1. Identify current unmet needs from the communities – This requires engaging with community groups to learn how MPOs can best serve residents.
  2. Provide funds to assist community groups in engagement – Engagement can be time consuming and expensive, and often community groups do not have the bandwidth in time or in funding for outreach. Therefore, the MPOs should provide resources to assist. Karner suggested raising money through state taxes or allocating from available transportation funds.
  3. Measure progress of outreach using relevant metrics – MPOs must track how effectively they are engaging communities. They need to know how many people they talked with and if they understood the material being discussed. By tracking that information, MPOs will know if their message is getting across.

Through the duration of my fellowship I have had the opportunity to interview several stakeholders to learn about how they see autonomous vehicles impacting equity. Across the board, there is a definite interest in how the broad impacts of AVs will manifest themselves in society, and at UCS my research will help to bring these various groups together. My engagement with these groups is helping to identify unmet needs, identify relevant metrics from stakeholders, and stress the importance of safe and equitable AV deployment. 

Why new mobility and equity must function together

I have talked with transit advocates who are concerned about the impacts on transit agency jobs and on public transit options in general; they worry that AVs will replace public transit without meeting the needs of transit-dependent communities, all while eliminating thousands of transit worker jobs.

I have spoken to business owners who believe the benefits of autonomous vehicles, such as increased access to the transportation system for people with disabilities and senior citizens, outweigh any potential pitfalls. I have heard viewpoints from local government officials ranging from very concerned to "we have not thought about AVs yet," and some state departments of transportation are taking a hands-off approach.

These discussions confirm that the paradigm shift is happening. Autonomous car technology is here, and billions of dollars are being spent to put these cars on the roads as fast as possible. However, the conversations that are most needed – about potential impacts on transportation equity and accessibility, the effects on public transit, and environmental considerations – are not happening quickly enough. They need to happen more often, and soon. Through my fellowship at UCS, I aim to increase this awareness and provide new research, analysis, and recommendations to advance equitable transportation outcomes.

NRC Cherry-Picking in the Post-Fukushima Era: A Case Study

UCS Blog - All Things Nuclear (text only) -

In the late 1960s, the Atomic Energy Commission (AEC), the forerunner of the Nuclear Regulatory Commission (NRC), paid the very companies that designed nuclear reactors—Westinghouse and General Electric (GE)—to test the efficacy of their own emergency cooling systems.

In the event of an accident in which a reactor loses water, uncovering the fuel rods—called a “loss-of-coolant accident”—these systems inject water back into the reactor in an attempt to prevent a meltdown. The tests that Westinghouse and GE performed were named the Full Length Emergency Cooling Heat Transfer (FLECHT) tests. The FLECHT tests simulated fuel rods undergoing a loss-of-coolant accident. The tests were intended to be as realistic as possible: bundles of 12-foot-tall rods, simulating fuel rods, were electrically heated up to reactor-accident temperatures and then inundated with cooling water.

Several of the tests were geared toward assessing how well the outer casing of fuel rods, called “cladding,” would endure in accident conditions. The cladding of fuel rods is primarily zirconium, a silver-colored metal. After the injection of water in an accident, hot-zirconium cladding is intended to endure the thermal shock of swift re-submergence and cooling. The cladding must not be stressed to its failure point. It is crucial that the fuel cladding perform well in an accident because it is a barrier preventing the release of highly radioactive materials into the exterior environment.

Figure 1. (Source: Westinghouse)

Robert Leyse, my father, a nuclear engineer employed by Westinghouse, conducted a number of the FLECHT tests. On December 11, 1970, one of those tests, designated as Run 9573, had unexpected results. In Run 9573, a section of the test bundle’s zirconium cladding essentially caught on fire. The cladding burned in steam—then, when cooled, shattered like overheated glass doused with cold water.

Mr. Leyse instructed a lab assistant to take photographs of the destroyed test bundle, one of which is displayed as Figure 1. In a report on the FLECHT tests that Mr. Leyse coauthored, Westinghouse referred to the severely burnt, shattered section as the “severe damage zone” and noted that “the remainder of the [test] bundle was in excellent condition.”

Westinghouse’s FLECHT data is nearly 50 years old yet it is still highly regarded. The AEC used some of the FLECHT data to establish regulations that remain in place to this day. Westinghouse’s report on the FLECHT tests states that data from the first 18 seconds of Run 9573—before the cladding caught fire—is valid.

Concern over the extent zirconium burns in reactor accidents

In 2009, I submitted a rulemaking petition (PRM-50-93), requesting new regulations intended to improve public and plant worker safety. PRM-50-93 contends industry and NRC computer safety models under-predict the extent zirconium fuel cladding burns in steam. In more technical terms, the petition alleges models under-predict the rates at which zirconium chemically reacts with steam in a reactor accident. I buttressed my claims by citing data from FLECHT Run 9573 and other experiments conducted with bundles of zirconium cladding.

The zirconium-steam reaction produces zirconium dioxide, hydrogen, and heat. In a serious accident, the rate of the zirconium-steam reaction increases as local cladding temperatures increase within the reactor core. As the reaction speeds up, more and more heat is generated; in turn, the additional heat increases the rate of the reaction, potentially leading to thermal runaway and a meltdown.
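For reference, the chemistry behind this feedback can be written compactly. Below is the overall reaction together with the general Arrhenius-type, parabolic rate form used in cladding-oxidation correlations; the fitted constants of specific correlations (such as Baker-Just or Cathcart-Pawel) are not reproduced here.

```latex
% Overall zirconium-steam oxidation (strongly exothermic):
\mathrm{Zr} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{H_2} + \text{heat}

% General parabolic, Arrhenius-type rate form, where w is the mass of zirconium
% reacted per unit cladding area, T is the local cladding temperature,
% R is the gas constant, and A and B are empirically fitted constants:
\frac{d\,(w^{2})}{dt} \;=\; A\,\exp\!\left(-\frac{B}{R\,T}\right)
```

The exponential dependence on temperature is what makes the feedback possible: hotter cladding reacts faster, and a faster reaction releases more heat.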

Making matters worse, the zirconium-steam reaction generates hundreds of kilograms of explosive hydrogen gas in a meltdown. In the Fukushima Daiichi accident—in which three reactors melted down—hydrogen leaked out of the reactors' containments and detonated, blowing apart reactor buildings. The release of radioactive material prompted the evacuation of tens of thousands of people and rendered a large area of land uninhabitable.

A “high priority”

In 2010, the NRC said its technical analysis of my 2009 rulemaking petition (PRM-50-93) was a “high priority.” Then, in 2011, the agency issued a press release announcing it intended to “increase transparency” in its petition review process by releasing preliminary evaluations of PRM-50-93. The announcement said the final decision on the petition would “not be issued until after the NRC Commissioners…considered all staff recommendations and evaluations.”

As part of the preliminary technical analysis of PRM-50-93, the NRC staff conducted computer simulations of FLECHT Run 9573. They compared the results of their simulations to data Westinghouse reported. However, there is a major problem with the staff’s simulations. They did not simulate the section of the test bundle that ignited. (Or if they did simulate that section, they decided not to release their findings.)

By way of an analogy: what the NRC staff did would be like simulating a forest fire and omitting trees reduced to ash and only simulating those that had been singed. After doing such a bogus simulation one might try to argue that trees actually do not burn down in forest fires. The staff basically did just that. They used the results of their simulations to argue that models of the zirconium-steam reaction are not flawed—that reaction rates are not under-predicted.

On January 31, 2013, I gave a presentation to the five commissioners who were heading the NRC at the time. They invited me to present my views in a meeting addressing public participation in the NRC’s rulemaking process. They apparently wanted my insights, because, in 2007, I raised a safety issue in a rulemaking petition (PRM-50-84) that they decided to incorporate into one of their regulations. I had pointed out that computer safety models neglected to simulate a phenomenon affecting the performance of fuel rods in a loss-of-coolant accident.

In my presentation, I criticized the staff’s computer simulations of FLECHT Run 9573. I said: “You cannot do legitimate computer simulations of an experiment that [caught on fire] by not actually modeling the section of the test bundle that [caught on fire].” In the Q and A session, Commissioner William Magwood assured me that he and the other commissioners would instruct the staff “to follow up on” my comments, including my criticism of the staff’s simulations of Run 9573. Then, five weeks after the meeting, Annette Vietti-Cook, Secretary of the Commission, instructed the staff to “consider and respond” to my comments on its review of PRM-50-93.

I hoped the staff would promptly conduct and report on legitimate computer simulations of FLECHT Run 9573. Instead, in March 2013, the staff restated that their prior, incomplete simulations of Run 9573 over-predicted the extent that zirconium burns in steam, implying that the computer safety models are more than adequate.

In November 2015, after I made a series of additional complaints with help from Dave Lochbaum of the Union of Concerned Scientists, Aby Mohseni, Deputy Director of the NRC's Division of Policy and Rulemaking, disclosed results of computer simulations of FLECHT Run 9573 that included the section of the test bundle that ignited. Those simulations drastically under-predict the temperatures Westinghouse reported for that section.

The NRC’s severe-damage-zone computer simulations of Run 9573

The NRC’s severe-damage-zone computer simulations predicted cladding and steam temperatures for the FLECHT Run 9573 test bundle, at the 7-foot elevation, at 18 seconds into the experiment. (The severe damage zone was approximately 16 inches long, centered at the 7-foot elevation of the 12-foot-tall test bundle.)

The highest cladding temperature the severe-damage-zone simulations of Run 9573 predicted is 2,350°F, at the 7-foot elevation, at 18 seconds. Westinghouse reported that at 18.2 seconds into Run 9573, cladding temperatures by the 7-foot elevation exceeded 2,500°F. Cladding temperatures by the 7-foot elevation were not directly measured by thermocouples (temperature-measuring devices); however, Westinghouse reported that electrical heaters installed in the cladding began to fail at 18.2 seconds, by the 7-foot elevation, after local cladding temperatures reached higher than 2,500°F. Hence, even considering the 0.2-second time difference, one can infer that the severe-damage-zone simulations of Run 9573 under-predicted the cladding temperature by a margin of more than 100°F (at the section of the test bundle that ignited).

(Note that there is a 0.2-second difference between the time the NRC picked for its simulations of Run 9573 and the time the electrical heaters began to fail in the experiment. In the staff's incomplete simulations of Run 9573—reported in the staff's preliminary evaluations of PRM-50-93—the highest predicted cladding temperature is 2,417.5°F, at the 6-foot elevation, at 18 seconds, and the highest predicted cladding temperature increase rate is 29°F per second, at the 6-foot elevation, at 18 seconds. From these predictions we can infer that—although the value has not been reported—the highest predicted cladding temperature increase rate would be approximately 29°F per second or less at the 7-foot elevation at 18 seconds.)
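As a rough check on that inference, using only the numbers reported above (the roughly 29°F per second predicted heating rate and the 0.2-second offset between 18 and 18.2 seconds):

```latex
\Delta T_{0.2\,\mathrm{s}} \;\lesssim\; 29\ {}^{\circ}\mathrm{F/s} \times 0.2\ \mathrm{s} \;\approx\; 6\ {}^{\circ}\mathrm{F},
\qquad
2{,}500\ {}^{\circ}\mathrm{F} \;-\; \left(2{,}350 + 6\right){}^{\circ}\mathrm{F} \;\approx\; 144\ {}^{\circ}\mathrm{F} \;>\; 100\ {}^{\circ}\mathrm{F}.
```

Even after projecting the predicted temperature forward by 0.2 seconds at that rate, the simulation falls short of the reported temperature by well over 100°F.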

In Run 9573, at the 7-foot elevation, the heat generated by the zirconium-steam reaction radiated to the local environment, heating the steam in proximity. The highest steam temperature the NRC’s severe-damage-zone simulations of Run 9573 predicted is 2,055°F, at the 7-foot elevation, at 18 seconds. Westinghouse reported that at 16 seconds into Run 9573, a steam-probe thermocouple mounted at the 7-foot elevation directly recorded steam temperatures that exceeded 2,500°F. And a Westinghouse memorandum (included as Appendix I of PRM-50-93) stated that after 12 seconds, the steam-probe thermocouple recorded “an extremely rapid rate of temperature rise (over 300°F/sec).” (Who knows how high the local steam temperatures actually were at 18 seconds; they were likely hundreds of degrees Fahrenheit higher than 2,500°F.) Hence, the severe-damage-zone simulations of Run 9573 under-predicted the steam temperature by a margin of more than 400°F (by the section of the test bundle that ignited).

The fact that the NRC's severe-damage-zone simulations under-predict the cladding and steam temperatures that occurred in Run 9573 is powerful evidence that models under-predict the zirconium-steam reaction rates that occur in reactor accidents.

Qualifying power level increases for reactors

Since the 1970s, the NRC has approved more than 150 power level increases (termed “power uprates”) for reactors in the US fleet, enabling them to generate more and more electricity. An important part of qualifying a power uprate is to provide assurance with computer simulations that emergency systems would be able to prevent a meltdown if there were a loss-of-coolant accident at the proposed, higher power level.

A computer simulation is supposed to over-predict the severity of a potential nuclear accident. A margin of safety is established when a reactor’s power level is qualified by a “conservative” simulation—one that overcompensates. Meltdowns are less likely to occur if the reactor operates at a safe power level, providing a sufficient safety margin.

The extent zirconium burns at high temperatures has a major impact on the progression and outcome of a reactor accident. If zirconium-steam reaction rates are under-predicted by computer safety models, they will also under-predict the severity of potential reactor accidents. And, if power uprates have been qualified by models under-predicting the severity of potential accidents, it is likely power levels of reactors have been set too high and emergency cooling systems might not be able to prevent a meltdown in the event of a loss-of-coolant accident.

A petition review process of more than eight years (with cherry-picking)

The NRC staff’s technical analysis of my 2009 rulemaking petition (PRM-50-93) was completed on March 18, 2016, but was not made publicly available until March 5, 2018, nearly two years later. The technical analysis signals an intention to deny PRM-50-93. It concludes with the statement: “Each of the petition’s key presumptions was investigated in detail. … The petition fails to provide any new information that supports a rule change. The NRC staff does not agree with the petition’s assertions, and concludes that revisions to [NRC regulations] or other related guidance are not necessary.”

Interestingly, an NRC staff e-mail, released in response to a Freedom of Information Act request, reveals that in August 2015—seven months before their technical analysis was completed—the staff already planned to deny PRM-50-93. At that time, the staff intended to announce their denial in August 2016.

The 2016 technical analysis of PRM-50-93 fails to discuss or even mention the results of the computer simulation of FLECHT Run 9573 that Mr. Mohseni disclosed in November 2015. Certain staff members appear intent on denying PRM-50-93 to the extent that they are willing to make false statements and omit evidence lending support to the petition's allegations. They appear determined to bury the fact that their own computer simulation under-predicts, by a large margin, the temperatures Westinghouse reported for the section of the Run 9573 test bundle that ignited.

The staff members who conducted the 2016 technical analysis of PRM-50-93 did not comply with the commissioners who directed them, in January 2013, to “consider and respond” to my criticisms of their simulation of Run 9573. The 2016 technical analysis has a section titled “Issues Raised at the Public Commission Meeting in January 2013;” however, that section fails to discuss the simulation results Mr. Mohseni disclosed in November 2015.

In April 2014, I submitted over 50 pages of comments alleging the staff’s preliminary evaluations of PRM-50-93 have numerous errors as well as misrepresentations of material I discussed to support my arguments. In my opinion, the 2016 technical analysis has the same shortcomings. I suspect that portions of the technical analysis have been conducted in bad faith. Perhaps certain staff members fear enacting the regulations I requested would force utilities to lower the power levels of reactors.

As a member of the public who spent months writing PRM-50-93, I personally resent the way certain staff members disrespect science and the efforts of the public to participate in the NRC's rulemaking process. (The NRC gives lip service to encouraging public participation. Its website boasts that the agency is "committed to providing opportunities for the public to participate meaningfully in the NRC's decision-making process.") Even worse, much worse, their cynical actions undermine public safety.

In a written decision, D.C. Circuit appeals court judges said it was “nothing less than egregious” when a federal agency took longer than six years to review a rulemaking petition. The NRC has been reviewing PRM-50-93 for longer than eight years—procrastinating as well as cherry-picking.

UCS perspective

[What follows was written by Dave Lochbaum, Director of the Nuclear Safety Project at the Union of Concerned Scientists]

I (Dave Lochbaum) invited Mark Leyse to prepare this commentary. I more than monitored Mark’s efforts—I had several phone conversations with him about his research and its implications. I also reviewed and commented on several of his draft petitions and submissions.

Mark unselfishly devoted untold hours researching this safety issue and painstakingly crafting his petition. He did not express vague safety concerns in his petition. On the contrary, his concerns were described in excruciating detail with dozens of citations to source documents. (Reflective of that focused effort, Mark’s draft of this commentary contained 33 footnotes citing sources and page numbers, supporting his 2,300-plus words of text. I converted the footnotes to embedded links, losing chapter and verse in the process. Anyone wanting the specific page numbers can email me for them.)

Toward the end of his commentary, Mark expresses his personal resentment over the way the NRC handled his concerns. It is not my petition, but I also resent how the NRC handled, or mis-handled, Mark’s sincere safety concerns. He made very specific points that are solidly documented. The NRC refuted his concerns with vague, ill-supported claims. If Mark’s safety concerns are unfounded, the NRC must find a way to conclusively prove it. “Nuh-uh” is an unacceptable way to dismiss a nuclear safety concern.

In addition to handling Mark’s safety concerns shoddily from a technical standpoint, the NRC mistreated his concerns process-wise. Among other things, Mark asked the NRC staff to explain why it had not conducted a complete computer simulation of Westinghouse’s experiment, FLECHT Run 9573. The NRC refused to answer his questions, contending that its process did not allow it to release interim information to him. I protested to the NRC on Mark’s behalf, pointing out case after case where the NRC had routinely provided interim information about rulemaking petitions to plant owners. I asked why the NRC’s process treated members of the public one way and plant owners a completely different way. Their subterfuge exposed, the NRC “suddenly” found itself able to provide Mark with interim information, or at least selective portions of that information.

The NRC completed its technical analysis of Mark’s petition in March 2016 but withheld that information from him and the public for two years. The NRC would not withhold similar information from plant owners for two years. The NRC must play fair and stop being so cozy with the industry it sometimes regulates.

If how the NRC handled Mark’s petition is the agency at its best, we need a new agency. These antics are simply unacceptable.

Regulators Should Think Twice Before Handing Out Pollution Credits for Self-Driving Cars

UCS Blog - The Equation (text only) -

A new report out by Securing America's Future Energy (SAFE) suggests that automakers should get credits toward meeting emission and fuel economy standards for connected and automated vehicles (AVs) and related advanced driver assist systems—technologies that may or may not save any fuel. Doing so would not only increase pollution and fuel use, but would seriously undermine the integrity and enforceability of regulations that have delivered enormous benefits to our environment, our pocketbooks, and our national security. The tens of thousands of traffic-related fatalities every year in the US demand that automakers and regulators continue to make our cars safer. But trying to encourage greater deployment of safety technologies by undermining pollution standards is the wrong approach.

Here’s why regulators should reject giving emissions credits to manufacturers for deploying safety and self-driving technologies.

Including emissions credits for safety and self-driving technologies in 2022-2025 vehicle standards would be a windfall for automakers, resulting in less deployment of proven efficiency technologies and more pollution.

There are more questions than answers about the potential impacts of various safety technologies and self-driving capabilities on vehicle and overall transportation system emissions, which I’ll get into more below.  But for now, let’s just take a big leap of faith and assume that some safety technologies actually do lower an individual vehicle’s emissions.

One example is adaptive cruise control.  This technology automatically adapts a vehicle’s speed to keep a safe distance from a vehicle ahead and theoretically could perform more efficiently than a human driver.  It is widely available and featured on vehicles like the Toyota Camry, Honda Accord and Ford Fusion.  One study examined this technology and found changes in efficiency could range from +3 to -5 percent during various types of driving. While there is some evidence that under certain conditions there might be a slight fuel economy benefit from this technology when it is in use, that same evidence indicates that increased fuel use and emissions are also possible.

In another recent study of self-driving cars, researchers found that while eco-driving capabilities could potentially provide savings, the increase in electric power demand, added weight, and aerodynamic impacts of sensors and computers would increase fuel use and emissions.  Both of these examples demonstrate the importance of testing and verifying any assumed change in emissions from the deployment of safety and self-driving technology as emissions reductions are anything but certain.

But even if credible testing and data were available, giving off-cycle credits for this technology within existing standards would be a giveaway to the auto industry.

Why? Adaptive cruise control is already being deployed on millions of cars – 1 in 5 new vehicles produced for the US market in model year 2017 were equipped with adaptive cruise control. Automatic emergency braking is another example, where automakers have already made commitments to make it standard on nearly all cars by 2022. Giving credits for these technologies would be a windfall for manufacturers and result in less deployment of proven fuel efficiency technologies.

In its review of the current off-cycle credit program, the International Council on Clean Transportation (ICCT) also identified this problem of crediting technology deployment that is already occurring, and concluded that the program greatly reduces the deployment of other efficiency technology. The ICCT identified the lack of empirical evidence to validate the claimed fuel economy and emissions benefits of several technologies already included in the program as another big problem. And currently there is little empirical data to validate any efficiency benefits of safety and self-driving technologies.

Providing credits for emissions and fuel consumption impacts that are difficult to measure and not directly related to a vehicle – like possible impacts on traffic congestion—would increase pollution and undermine the standards.

Expanding the off-cycle program to safety technologies that might directly impact a vehicle's emissions is just the tip of the iceberg. The off-cycle credit program, like the vehicle standards in general, is limited to emissions directly related to the performance of a vehicle. But some automakers, and SAFE, are interested in allowing credits based on potential changes in emissions from the transportation system as a whole. For example, automakers could earn credits toward compliance with vehicle standards for some future changes in traffic congestion that might result from the deployment of improved vehicle safety technologies. This would be a major change to the per-vehicle basis of the fuel economy regulations that were established in the 1970s.

There are several serious problems with including speculative, indirect emissions impacts in existing vehicle standards.

1. Providing credits for emissions reductions that may or may not ever happen in the future will increase pollution in the short term and may never result in emission reductions in the long term

We only need to look back at the flex fuel vehicle (FFV) loophole to find an example of this kind of failed policy. Automakers were given fuel economy credits for selling cars capable of running on fuel that is 85 percent ethanol (known as E85), under the theory that this would help drive E85 to market and we would use less oil. Several automakers used it as a compliance strategy and avoided investing in other fuel efficiency technologies. But the cars almost never actually used E85, which means instead of getting more efficient vehicles, we got more oil use. The increased fuel consumption resulting from the FFV loophole is estimated to be in the billions of gallons.

Crediting future emissions reductions based on hopes and dreams has been tried before and doesn’t work.

2. Ignoring the potential negative impacts from self-driving technologies is a HUGE problem.

Self-driving cars have the potential for both positive AND negative impacts on pollution and energy use.

The biggest X-factor is how drivers will respond to these new technologies, which make vehicles safer but also make them easier to drive (or not drive at all, as the case may be). A paper by Wadud et al. examined a range of direct and indirect impacts self-driving vehicles could potentially have on emissions. There are several possibilities: some could reduce emissions, while others could increase emissions dramatically (see figure). Increased emissions could result from higher highway speeds enabled by increased vehicle safety; from larger vehicles with more features, as drivers come to expect more from a car that does the driving for them; and, most importantly, from increases in the amount of vehicle travel overall. Combined, these effects could increase emissions by more than 100 percent, according to the study.

Automated vehicles could have both positive and negative impacts on energy consumption and emissions. Wadud et al.
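To see how effects like those in the figure can compound, here is a simple illustration. The individual percentages are hypothetical placeholders, not the study's estimates; the point is that several modest changes multiply into a large net change in energy use.

```python
# Illustration of how direct and indirect AV effects can compound.
# The percentages below are hypothetical placeholders, not Wadud et al. estimates.

effects = {
    "platooning / drag reduction":    -0.10,
    "smoother automated driving":     -0.15,
    "higher highway speeds":          +0.10,
    "larger, feature-laden vehicles": +0.10,
    "new users and added travel":     +0.60,
}

net = 1.0
for name, change in effects.items():
    net *= 1.0 + change
    print(f"{name:32s} {change:+.0%}")

print(f"\nNet change in energy use: {net - 1.0:+.0%}")
```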

We've already experienced increased highway speeds as vehicles have become safer with seatbelts, air bags, and a host of other safety technologies. And it's not hard to imagine increases in vehicle miles traveled as cars take over the task of driving so we can do other things. Just think about it for a minute: what different choices might you make if you didn't have to drive your own car? Living farther from work or taking that extra trip during Friday rush hour might not seem so bad anymore when you can read a book or watch a movie while your car chauffeurs you to wherever you want to go.

Based on the current scientific literature, SAFE's estimate of potential efficiency improvements from automated vehicles is misleading at best. Their analysis ignores any possible disbenefits, like increased vehicle travel, even while specifically acknowledging that AVs "can also give drivers one thing of tremendous value to most Americans – an increase in personal or productive time." The analysis also takes the upper range of efficiency benefits from a handful of studies estimated over limited driving situations and inappropriately applies them to all driving. The conclusion that a handful of safety technologies could reduce emissions by 18 to 25 percent across the entire vehicle fleet is not supported by current evidence, ignores any other effects of self-driving cars, and is not a sound basis for policymaking decisions.

My point isn't that we should stand in the way of self-driving technology and the many potential benefits it could deliver if deployed responsibly.

But vehicle standards aimed at reducing emissions and fuel consumption shouldn’t include credits for potential positive changes to transportation system emissions while ignoring the negative ones.

3. Finally, regulatory enforceability and accountability—the key to the success of today’s vehicle standards—would be severely undermined

The effectiveness of vehicle standards – of any standards, for that matter – depends on effective enforcement, which ensures that regulated entities all compete on a level playing field and that the actual benefits of the standards are realized. We've seen the importance of enforcement over the decades as automakers have been held accountable for the performance of their products. Think 'VW diesel scandal' for one, and the numerous examples of erroneous fuel economy labels (Ford and Hyundai-Kia to name just two). These enforcement actions have one important thing in common: regulators were able to perform tests on the vehicles to determine whether they were performing as the automakers claimed, and to demonstrate that they were not.

Current vehicle standards are robust because they are predicated on direct emissions and fuel savings benefits that are verifiable at the vehicle level. An automaker makes a car, the car is tested, and the automaker is held accountable for the results. How might a regulator, or an automaker, test and verify the congestion impacts of an individual Cadillac CT6 with Super Cruise?

Providing credits to automakers for emission reduction benefits that cannot be verified or attributed to an individual manufacturer, never mind an individual vehicle make or model, would be a massive change in approach to the program, introduced through a mechanism – the off-cycle credit provisions – that was never intended to be more than a small part of automaker compliance.

Where’s our insurance policy?

SAFE makes the case that giving away credits to automakers now, even without proof that these technologies reduce fuel use and emissions, is worth it because it would allow EPA and NHTSA to run a research program to understand the fuel economy impacts of self-driving technology. But why should we accept increased pollution in exchange for collecting information? A better path forward for regulators is to indicate their intention to consider the direct vehicle emissions and fuel economy impacts of safety and self-driving technology when setting post-2025 vehicle standards, and to implement a testing program now to collect the data needed to determine whether giving credits for these technologies is appropriate. This would motivate automakers to do their own testing and to work with EPA and NHTSA to develop appropriate test procedures for ensuring the claimed benefits are actually occurring.

If safety and self-driving technology off-cycle credits are a proposed solution to the current impasse over 2022-2025 vehicle standards between federal regulators, the auto industry, and California, then we all need to be clear about the costs. They would provide windfall credits to auto companies for something they are already doing, while stalling deployment of proven efficiency technologies and increasing emissions.  If indirect changes in transportation system emissions and fuel consumption are included, such as some theoretical impacts on congestion sometime in the future that may or may not happen, the move would risk undermining the foundation of the standards themselves.

We should not be forced to make a choice between improving vehicle safety and reducing emissions. We need to protect the public from vehicle crashes and protect the public from pollution. If there is proven safety technology that is saving lives, automakers should deploy it and safety regulators should require it. But moving from a regulatory structure that is built on verifiable and enforceable emission reductions to one that is based on speculation and indirect impacts is a dangerous move that should be avoided.

 

The Health and Safety of America’s Workers Is at Risk

UCS Blog - The Equation (text only) -

Saturday, April 28, may have seemed like just another Saturday. Some of us likely slept a little later and then got on to those household chores and tasks we couldn’t get to during the week. Some of us enjoyed some leisure time with family and friends. Many of us got up and went to work—maybe even to a second or third job.

But April 28 is not just another day. Here in the US and around the world, it’s Workers’ Memorial Day—the day each year that recognizes, commemorates and honors workers who have suffered and died of work-related injuries and illnesses. It is also a day to renew the fight for safe workplaces. Because too many workers lose their lives, their health, their livelihoods or their ability to fully engage in the routine activities of daily living because of hazards, exposures and unsafe conditions at work.

Unless you know someone who was killed or seriously injured on the job, you probably don’t give workplace safety much thought. Perhaps you think work-related deaths, injuries and illnesses are infrequent, or only affect workers in demonstrably risky jobs—like mining or construction. The actual statistics, however, tell a different story. (For a more detailed and visual look, see this Bureau of Labor Statistics [BLS] charts package.)

Fatalities: In 2016 the number of recorded fatal work injuries was 5,190. On average, that’s 14 people dying every day. In the United States. It’s also 7 percent more than the number of fatal injuries reported in 2015 and the highest since 2008. Most of these deaths were the result of events involving transportation, workplace violence, falls, equipment, toxic exposures, and explosions. And the 2016 data reveal increases in all but one of these event categories. That’s not going in the right direction.

Non-fatal cases: According to the BLS, private industry employers reported 2.9 million non-fatal workplace injuries and illnesses in 2016, nearly one third of which were serious enough to result in days away from work—the median being 8 days. For public sector workers, state and local governments reported another 752,600 non-fatal injuries and illnesses for 2016.

Costs: And then there's the enormous economic toll that these events exact on workers, their families, and their employers. According to the 2017 Liberty Mutual Workplace Safety Index, the most serious workplace injuries cost US companies nearly $60 billion per year.

But that's just a drop in the bucket. The National Safety Council estimates the larger economic costs of fatal and non-fatal work injuries in 2015 at $142.5 billion. Lost time estimates are similarly staggering: 99 million production days lost in 2015 due to work injuries (65 million of them from injuries that occurred that year), with another 50 million days estimated to be lost in future years due to on-the-job deaths and permanently disabling injuries that occurred in 2015.

And even these costs don’t come close to revealing the true burden, as they do not include the costs of care and losses due to occupational illness and disease. A noteworthy and widely cited 2011 study estimated the number of fatal and non-fatal occupational illnesses in 2007 at more than 53,000 and nearly 427,000, respectively, with cost estimates of $46 billion and $12 billion, respectively.

Who foots the bill and bears these enormous costs? Primarily injured workers, their families, and tax-payer supported safety net programs. Workers’ compensation programs cover only a fraction. See more here and here.

The other part of the story

As sobering as these data and statistics are, they tell only part of the story; the true burden of occupational injury and illness is far higher. Numerous studies find significant under-reporting of workplace injuries and illnesses (see here, here, here, here, and here). Reporting of occupational disease is particularly fraught, as many if not most physicians are not trained to recognize or even inquire about the hazards and exposures their patients may have encountered on their jobs.

Nor do the statistics reveal the horror, loss, pain, and suffering these injuries and diseases entail. In the words of Dr. Irving Selikoff, a tireless physician advocate for worker health and safety, “Statistics are people with the tears wiped away.”

Just imagine having to deal with the knowledge that a loved one was suffocated in a trench collapse; asphyxiated by chemical fumes; shot during a workplace robbery; seriously injured while working with a violent patient or client; killed or injured from a fall or a scaffolding collapse; or living with an amputation caused by an unguarded machine.

Or the heartache of watching a loved one who literally can’t catch a breath because of work-related respiratory disease. Or is incapacitated by a serious musculoskeletal injury. Or has contracted hepatitis B or HIV because of exposure to a blood-borne pathogen at work.

And here’s the kicker: virtually all work-related injuries and illnesses are preventable. There’s no shortage of proven technologies, strategies and approaches to preventing them. From redesign, substitution and engineering solutions that eliminate or otherwise control hazards and exposures to safety management systems, worker training programs, protective equipment, and medical screening and surveillance programs, there are multiple paths to prevention. And, as a former assistant secretary of labor for occupational safety and health, David Michaels, recently wrote in Harvard Business Review, safety management and operational excellence are intimately linked.

Historic progress now at risk

The Good News: It's important to note and remember that workplace health and safety in the US is a lot better than it was before Congress enacted the Occupational Safety and Health Act of 1970, and has improved even since 2000. This progress has resulted in large measure from the struggles of labor unions and working people, along with the efforts of federal and state agencies. Workplace fatalities and injuries have declined significantly, and exposures to toxic chemicals have been reduced.

It is also a testament to the effectiveness of health and safety regulations and science-based research. We can thank the Occupational Safety and Health Administration (OSHA), the Mine Safety and Health Administration (MSHA), and the National Institute for Occupational Safety and Health (NIOSH) for many of these protections and safeguards. We must also acknowledge and thank the persistence, energy, and efforts of the workers, unions, researchers, and advocates that have pushed these agencies along the way.

The Red Flags: There are numerous indications that this progress will be slowed or even reversed by a Trump administration intent on rolling back public protections and prioritizing industry interests over the public interest. For example:

  • Right off the bat, the president issued his two-for-one executive order requiring agencies to rescind two regulations for each new one they propose. So, to enact new worker health and safety protections, two others would have to go.
  • OSHA has delayed implementation or enforcement of several worker protection rules that address serious health risks and were years in the making—i.e., silica, the cause of an irreversible and debilitating lung disease, and beryllium, a carcinogen and also the source of a devastating lung disease.
  • OSHA has left five advisory committees to languish—the Advisory Committee on Construction Safety and Health; the Whistleblower Protection Advisory Committee; the National Advisory Committee on Occupational Safety and Health; the Federal Advisory Council; and the Maritime Advisory Committee—thus depriving the agency of advice from independent experts and key stakeholders. Earlier this week, a number of groups, including the Union of Concerned Scientists, sent a letter to Secretary of Labor Acosta asking him to stop sidelining the advice of independent experts.
  • President Trump signed a resolution that permanently removed the ability of OSHA to cite employers with a pattern of recordkeeping violations related to workplace injuries and illnesses. Yes, permanently, because it was passed under the Congressional Review Act. And Secretary Acosta recently seemed hesitant to commit not to rescind OSHA's rule to improve electronic recordkeeping of work-related injuries and illnesses.
  • Having failed in efforts to cut some worker health and safety protections and research in his FY18 budget proposal, the president is going at it again with his FY19 proposal. He is calling for the elimination of the US Chemical Safety and Hazard Investigation Board and OSHA’s worker safety and health training program, Susan Harwood Training Grants. There is, however, a tiny bit of good news for workers in President Trump’s proposed budget for OSHA; it includes a small (2.4 percent) increase for enforcement, as well as a 4.2 percent increase for compliance assistance. Of note, employers much prefer compliance assistance over enforcement activities.
  • The president's budget also proposes to cut research at the National Institute for Occupational Safety and Health (NIOSH)—the only federal agency solely devoted to research on worker health and safety—by 40 percent, and to eliminate the agency's educational research centers; its agriculture, forestry, and fishing research centers; and its external research programs.
  • He has also proposed taking NIOSH out of the CDC, perhaps combining it later with various parts of the National Institutes of Health. Never mind that NIOSH was established as a distinct entity by statute, in the Occupational Safety and Health Act of 1970.
  • The Mine Safety and Health Administration (MSHA) has also jumped on the regulatory reform bandwagon. The agency has indicated its intent to review and evaluate its regulations protecting coal miners from black lung disease. This at a time when NIOSH has identified the largest cluster of black lung disease ever reported.
  • EPA actions are also putting workers at risk. Late last year, the EPA announced that it will revise crucial protections for more than two million farmworkers and pesticide applicators, including reconsidering the minimum age requirements for applying these toxic chemicals. Earlier in the year, the agency overruled its own scientists when it decided not to ban the pesticide chlorpyrifos, thus perpetuating its serious risk to farmworkers, not to mention their children and users of rural drinking water. And the agency has delayed implementation of its Risk Management Plan rule to prevent chemical accidents for nearly two years.
  • The Department of Interior is following up on an order from President Trump to re-evaluate regulations put into place by the Obama administration in the aftermath of the Deepwater Horizon accident in 2010, which killed 11 offshore workers and created the largest marine oil spill in United States’ drilling history.
  • And then there’s a new proposal at the US Department of Agriculture that seeks to privatize the pork inspection system and remove any maximum limits on line speeds in pig slaughter plants. Meat packing workers in pork slaughter houses already have higher injury and illness rates than the national average. Increasing line speeds only increases their risk.

Remember and renew

The Trump administration makes no bones about its (de)regulatory agenda. The president boasts about cutting public safeguards and protections, and his agency heads are falling right in line. Our working men and women are the economic backbone of our nation. They produce the goods and services we all enjoy, depend on, and often take for granted. They are our loved ones, our friends, and our colleagues. They deserve to come home from work safe and healthy.

Workers' Memorial Day is a time to pause and remember workers who have given and lost so much in the course of doing their jobs. It is also a time to renew our vigilance and be ready to use our voices, votes, and collective power to demand and defend rules, standards, policies, and science-based safeguards that protect our loved ones at work. Let's hold our elected leaders and their appointees accountable for the actions they take—or don't take—to protect this most precious national resource.

This post originally appeared in Scientific American.

How Important is it for Self-Driving Cars to be Electric?

UCS Blog - The Equation (text only) -

A Waymo self-driving car on the road in Mountain View, CA, making a left turn. CC-BY-2.0 (Wikicommons).

The rapid development of self-driving technology has raised many important questions such as the safety of automated vehicles (AVs) and how they could radically alter transportation systems. These are critical questions, but AVs also have the potential to result in significant changes to the global warming emissions from personal transportation.

An interesting recent study from the University of Michigan and Ford Motor Company lays out the details of the likely changes in emissions from using an AV system on both electric and gasoline cars. The main takeaway from the study is that adding AV equipment to a car adds weight, aerodynamic drag, and electrical power consumption that leads to increased fuel consumption. There is the potential to offset emissions from more efficient driving by connected and automated vehicles, but by far the largest impact on emissions is the choice of fuel: gasoline versus electricity.

Direct emissions versus behavioral and usage changes

Switching from human control to fully automated driving will have direct effects on emissions as well as changes to the amount we use vehicles. Direct emissions changes include reductions in efficiency from factors like increased drag from sensor equipment and the power consumption of required computing and communications equipment. Positive direct impacts could include more efficient driving, such as smooth and precise acceleration control in an automated system.

Automation will also change how we use cars and how much we use them, indirectly affecting emissions, though the effect of AVs on these indirect emissions is much more speculative. While some changes, like “right-sizing” (for example, having smaller one- or two-occupant cars available for solo trips), could decrease emissions, many of the usage changes considered would increase vehicle use and therefore emissions. Making long-distance driving easier or more productive could encourage people to live farther from their jobs. Having fully automated vehicles will mean more people can use a car. The elderly, the blind, youth, and people with disabilities could switch from transit to a car, or simply add trips that otherwise would not have happened. While many of these uses of AVs would be beneficial, it’s important to understand the potential emissions from AVs and how we could minimize the total contribution of global warming pollution from personal transportation.

That’s why this new study is important: it lets us at least estimate the direct, short-term implications of AV technologies on emissions. While it doesn’t examine the potential impacts of driving more, it does shed light on the direct effects of adding these new features to cars.

AV equipment increases fuel consumption, especially for gasoline vehicles

Focusing on the physical changes to the vehicle, the addition of self-driving and sensor equipment affects the fuel consumption (and therefore emissions) of the AV in three major ways. First, the additional weight of the equipment decreases efficiency. Second, AVs that have sensor equipment like cameras and LiDAR (laser-based imaging) often require side bulges and roof-mounted equipment pods. Like a conventional cargo rack, these additions are detrimental to fuel economy because they increase the vehicle’s aerodynamic drag. Lastly, the sensors and computing equipment that enable self-driving require additional electrical power beyond that of a conventional vehicle. For a gasoline car, this means added load on the engine to power an alternator (and therefore higher gasoline consumption), while a battery electric car will have reduced overall driving efficiency (and therefore shorter range between recharges).
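
To make these effects concrete, here is a minimal road-load sketch in Python showing how added mass, added drag area, and added electrical load each raise the energy drawn from the fuel per mile. The vehicle parameters, the assumed AV increments (50 kg of mass, 0.08 m2 of extra drag area, 200 W of electrical load), and the drivetrain efficiency are generic illustrative assumptions, not values from the Michigan/Ford study.

# Minimal sketch of a constant-speed road-load model: how added mass, added
# aerodynamic drag area, and added electrical load each raise per-mile energy use.
# All parameters are generic illustrative values, not the study's inputs.

def energy_wh_per_mile(mass_kg, cda_m2, aux_power_w,
                       speed_mps=20.0, crr=0.009, rho=1.2, drivetrain_eff=0.25):
    """Energy drawn from the fuel per mile at constant speed (Wh/mile)."""
    g = 9.81
    rolling_force = crr * mass_kg * g                     # rolling resistance (N)
    aero_force = 0.5 * rho * cda_m2 * speed_mps ** 2      # aerodynamic drag (N)
    wheel_power = (rolling_force + aero_force) * speed_mps        # W at the wheels
    source_power = (wheel_power + aux_power_w) / drivetrain_eff   # W of fuel energy
    seconds_per_mile = 1609.34 / speed_mps
    return source_power * seconds_per_mile / 3600.0       # Wh of fuel energy per mile

base = energy_wh_per_mile(mass_kg=1600, cda_m2=0.70, aux_power_w=0)
with_av = energy_wh_per_mile(mass_kg=1650, cda_m2=0.78, aux_power_w=200)
print(f"Added fuel energy from the AV hardware: {100 * (with_av / base - 1):.1f}%")

Under these particular assumptions the added hardware raises per-mile energy use by roughly ten percent; the study’s estimates differ because it models real vehicles and drive cycles, but the direction of each effect is the same.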

Waymo’s AV minivan adds sensors and computing systems that increase weight, drag, and electrical power consumption. This model was used as an example of a ‘large’ sized AV system in the referenced study. Image source: Waymo

The researchers from Michigan and Ford examined three sizes of AV systems that could be added to vehicles: an AV system with sensors like a Tesla Model S, a medium-sized system with smaller external sensors similar to a Ford AV prototype, and finally a large AV system modeled after Waymo’s modified Chrysler Pacifica AV. While all AV systems have a negative impact on fuel consumption and emissions, the largest impact is seen in the increased drag from the large AV system.

AV systems can increase the global warming emissions attributed to driving. The largest impact is seen with larger AV systems, due to drag from the sensor units.

Improved driving behavior and other savings from AVs are possible in the long run

The study also points out the possibility of fuel savings from having self-driving and connected cars. These savings could come from several sources. For example, AVs could have more efficient acceleration and braking (“eco-driving”), especially if they are communicating with other cars to anticipate speed changes in traffic. AVs could also communicate with infrastructure like traffic signals to reduce idling and stop-and-go driving. On highways, groups of connected AVs could drive much closer together than a human driver could. This ‘platooning’ technology can increase fuel efficiency by reducing aerodynamic resistance, similar to the drafting that competitive cyclists and NASCAR drivers use to save energy. There is also potential for AV technology to increase fuel consumption: automated cars could drive safely on the highway at higher speeds, and higher speeds reduce efficiency.

These factors are currently harder to quantify than the impact of the AV equipment, and some of the potential benefits require having most or all cars on the road be at least connected, if not fully automated. For example, platooning would require multiple AVs traveling on the same roadway at the same time, which would require a critical mass of AVs to be deployed. The researchers in this study estimate a potential emissions savings on average of 14 percent from these technologies if fully implemented. However, they do not consider changes to vehicles that are already producing some of these benefits, such as improved aerodynamics (which gives some of the same benefits as platooning) or stop-start systems (which already act to reduce some of the adverse impacts of stop-and-go traffic and intersections).

Early AV models are more likely to have higher emissions

The study also considered the impact of the much more power-hungry equipment used in early developmental AV systems. For example, early prototypes have been reported to require in excess of 2,000 W of power, mostly for on-board computing. Going from the 200 W baseline computing system to a 2,000 W prototype system substantially increases the emissions from these early AVs (see table). This is especially true for the less-efficient gasoline-engine driven vehicles, where the increased electric power requirements would raise emissions by over 60 grams CO2 equivalent per mile. That’s equal to reducing the fuel economy of a 35 MPG car to 29 MPG, or like adding the emissions from running 10-25 iMac computers using a gasoline generator for every car. Since early AVs will not be on the road in large enough numbers to take advantage of platooning and connected vehicle savings, it is very likely that near-term AVs will contribute higher net emissions than a conventionally driven vehicle using the same fuel.

 

Emissions from AV system’s electricity use (gCO2eq/mi). The baseline system uses a 200 W computer; the prototype uses a 2,000 W computer.

AV system size | Baseline AV system, battery electric | Baseline AV system, gasoline | Prototype AV system, battery electric | Prototype AV system, gasoline
Small          | 3.0                                  | 8.0                          | 25.9                                  | 70.3
Medium         | 3.2                                  | 8.6                          | 26.1                                  | 71.0
Large          | 4.3                                  | 11.8                         | 27.3                                  | 74.1
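
For a rough sense of where per-mile numbers like those in the table come from, here is a back-of-the-envelope sketch in Python that converts an onboard electrical load into added well-to-wheel emissions for a gasoline vehicle and checks the 35 MPG versus 29 MPG comparison above. The average speed, the fuel-to-electricity conversion efficiency, and the well-to-wheel emissions factor are my own illustrative assumptions, not inputs taken from the study.

AV_POWER_W = 2000.0                     # assumed prototype computing load (watts)
AVG_SPEED_MPH = 30.0                    # assumed average driving speed
FUEL_TO_ELECTRIC_EFF = 0.30             # assumed efficiency of the engine + alternator path
GASOLINE_WTW_G_CO2E_PER_GAL = 11_000.0  # assumed well-to-wheel CO2e per gallon of gasoline
GASOLINE_ENERGY_WH_PER_GAL = 33_700.0   # energy content of a gallon of gasoline (Wh)

# Electrical energy the AV hardware needs per mile of driving (Wh/mi)
wh_per_mile = AV_POWER_W / AVG_SPEED_MPH

# Gasoline energy burned to supply that electricity, accounting for conversion losses
fuel_wh_per_mile = wh_per_mile / FUEL_TO_ELECTRIC_EFF
gallons_per_mile = fuel_wh_per_mile / GASOLINE_ENERGY_WH_PER_GAL

extra_g_per_mile = gallons_per_mile * GASOLINE_WTW_G_CO2E_PER_GAL
print(f"Extra emissions from a {AV_POWER_W:.0f} W load: ~{extra_g_per_mile:.0f} g CO2e/mile")

# The fuel-economy framing used above: a 35 MPG car dropping to 29 MPG
for mpg in (35.0, 29.0):
    print(f"{mpg:.0f} MPG -> ~{GASOLINE_WTW_G_CO2E_PER_GAL / mpg:.0f} g CO2e/mile")

With these assumed values the extra load works out to roughly 70 g CO2e per mile, in the same ballpark as the table’s prototype gasoline figures, and the 35-to-29 MPG comparison corresponds to roughly 65 g CO2e per mile.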

 

Switching from gasoline to electricity is by far the most important factor in reducing emissions

 

The choice of fuel (gasoline versus electricity) is the most important factor in reducing emissions. Emissions estimates are based on Ford Focus gasoline and battery-electric models and include ‘well-to-wheel’ emissions from fuel production, distribution, and use in the vehicle. Emissions related to vehicle or AV system production are not included in this chart.

The most important determinant of direct emissions from vehicles is not the AV system but the choice of gasoline or electricity. Choosing the electric vehicle instead of the gasoline version in this analysis reduces global warming emissions by 20 to over 80 percent, depending on the emissions from electricity generation. The addition of AV equipment only widens this difference, making it clear that electric drive is required for AVs to maximize emissions reductions.
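
As a rough illustration of why the fuel choice dominates, here is a simple Python comparison of well-to-wheel emissions per mile for a gasoline car versus a battery-electric car on grids of different carbon intensity. The fuel economy, EV efficiency, and grid intensities below are assumed values chosen for illustration, not the Ford Focus figures used in the study.

GASOLINE_WTW_G_CO2E_PER_GAL = 11_000.0  # assumed well-to-wheel CO2e per gallon
GASOLINE_MPG = 35.0                     # assumed gasoline fuel economy
EV_KWH_PER_MILE = 0.30                  # assumed EV wall-to-wheel energy use

gasoline_g_per_mile = GASOLINE_WTW_G_CO2E_PER_GAL / GASOLINE_MPG

# Assumed grid carbon intensities in g CO2e per kWh delivered
grids = {"coal-heavy grid": 800.0, "mixed grid": 450.0, "low-carbon grid": 100.0}

for name, g_per_kwh in grids.items():
    ev_g_per_mile = EV_KWH_PER_MILE * g_per_kwh
    cut = 100.0 * (1.0 - ev_g_per_mile / gasoline_g_per_mile)
    print(f"{name}: EV ~{ev_g_per_mile:.0f} vs gasoline ~{gasoline_g_per_mile:.0f} g CO2e/mile "
          f"(~{cut:.0f}% lower)")

Under these assumptions the reduction runs from roughly 25 percent on a coal-heavy grid to about 90 percent on a low-carbon grid, consistent with the 20-to-over-80-percent range described above.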

What will the future hold? Some AV companies, like Waymo (spun off from Google) and Cruise Automation (partnered with General Motors), are using EVs and have plans to continue using electric drive in their AVs. Other companies have been less progressive; Ford, for example, has announced that it anticipates using gasoline-only hybrids for its AVs. If AVs have the transformative effect on mobility and safety that many predict, it will be vital to encourage the use of cleaner electricity instead of gasoline in these future vehicles.


7 Times Scott Pruitt Stole an Idea from the Villains of Captain Planet

UCS Blog - The Equation (text only) -

Captain Planet and the Planeteers (you can be one, too!) was a staple cartoon of the early nineties that I watched when I was a child. It gave me unrealistic expectations that there was indeed a goddess named Gaia who would one day bestow upon me a magical ring with which I would protect the environment.

Only later would I find out that Gaia was actually a drag queen who would not bestow upon me a magical ring but rather the power of fierceness, a quality that is surely just as important in environmental protection.

I hadn’t watched the show for a very long time, until recently, when I was reminded of it by Scott Pruitt—the man in charge of environmental protection in the United States. After re-watching many episodes, it has become clear to me that Pruitt’s actions to undermine scientists, deny climate change science, and increase pollution across our country have clearly been taken from the series’ villains’ playbook.

So, Administrator Pruitt…

It’s time that someone gave credit to the Captain Planet villains. Here are at least seven times that Scott Pruitt has stolen their ideas.

1. Administrator Pruitt’s plan to heat up the Earth’s atmosphere by trapping the sun’s rays under a thick layer of air pollution—um, that was the villain Dr. Blight’s idea in the episode Heat Wave, where she literally traps the sun’s rays under a thick layer of toxic smog, Mr. Pruitt.

2. Allowing pollutants into streams and tributaries, which will affect people’s drinking water. You stole that idea from Captain Planet villain Looten Plunder in the episode Don’t Drink the Water in which he leads efforts to pollute the Earth’s water supplies with various contaminants.

3. Remember that time Administrator Pruitt hijacked the Environmental Protection Agency’s websites and deleted and altered a ton of scientifically backed information so the public remains ignorant of it? The Captain Planet villain Verminous Skumm sure does; in the episode Who’s Running the Show, he hijacks a national environmental television show and spreads misinformation about environmental protection.

4. When villain Looten Plunder took advantage of a disadvantaged community to continue the use of a toxic pesticide in the episode The Fine Print... ring a bell, anyone?

Yes, Administrator Pruitt really did that when he decided not to ban the toxic pesticide chlorpyrifos, which affects the neurological development of children (mostly children of color).

5. Assigning conflicted individuals to scientific positions? Clearly, Administrator Pruitt watched the episode Greenhouse Planet, in which Dr. Blight (one of the villains) is assigned to be the President’s science advisor and incorrectly informs him about the harm of greenhouse gas emissions. It’s ironic that in the episode it’s the president’s science advisor who has a conflict of interest – our president doesn’t even have an official science advisor.

Any words for that one, Neil?

6. Quashing the work of scientific experts? Yeah, Captain Planet and the Planeteers covered that too, in the episode A Perfect World, where Dr. Blight attempts to trash the work of a scientist.

Even when I try to make one of my posts humorous, it gets sad as all of this terrible adds up. I wish Administrator Pruitt could feel my sad.

7. One more and I promise that’s it. Basically, just the Trump administration destroying the environment and public health protections like when all the villains came together in Mission to Save Earth to create Captain Pollution.

Even though the Trump administration is borrowing their playbook from the villains of a nineties children’s cartoon, we all know what happens at the end of each one of these episodes—Captain Planet becomes a boss and saves the planet.

Unfortunately, Captain Planet isn’t real. That’s why we at the Union of Concerned Scientists are working so hard to ensure that science remains in its rightful place when your government leaders are making important policy decisions that protect (literally) our lives.

If you’re interested in helping, check out how to do that here! Science has afforded us a lot of protections including clean air, clean water, better health care, and the conservation of many of our critically important plants and animals. I’m not about to give that up. This sassy scientist is still in this fight!

Do Local Food Markets Support Profitable Farms and Ranches?

UCS Blog - The Equation (text only) -

Local produce, sold through direct-to-consumer channels like farmers markets and community supported agriculture programs, is often sold at a price premium. But does that premium impact farmers’ bottom line? Photo: Todd Johnson/ Oklahoma State University.

How many times have you heard that when you shop locally, farmers win? Families shop at farmers markets, school districts procure locally-grown and raised items, and restaurants curate seasonal menus at least in part because they believe they are supporting the economic viability of local producers. But do we have evidence that these local markets actually provide economic benefits to farmers and ranchers?

For the past decade, we have seen growing evidence that household and commercial buyers are willing to pay a premium for local products, and that farmers capture a larger share of the retail dollar through sales at local markets. But until recently, there was little evidence of the impact of these markets on farmers’ and ranchers’ bottom line.

To better understand the potential of local food markets, we evaluated the financial performance of farmers and ranchers selling through local markets compared to those selling through traditional wholesale markets, which may pool undifferentiated grains, animals or produce from hundreds of producers to sell to large food manufacturers or retailers. We use data provided by the U.S. Department of Agriculture’s Agricultural Resource Management Survey (ARMS), a nationally representative survey providing annual, national-level data on farm and ranch businesses. ARMS targets about 30,000 farms annually, of which about 1,000 report some local food sales.

For this research, we define local markets in two distinct categories: direct to consumer sales (such as farmers’ markets; community supported agriculture, or CSAs; and farm stands) or intermediated sales to local food marketing enterprises that maintain the product’s local identity (such as restaurants, grocery stores, or food hubs).

Local food can spur rural development

The first notable difference between farms and ranches that sell through local food markets and those that do not is that, on average, farms selling through local food markets spend a higher percentage of their total expenditure on labor (8% compared to 5%). Even more interesting is that as local food producers get larger, their share of expenditure on labor increases! (See the green bars in figure 1). This stands in contrast to the ‘efficiency’ story we have long heard in agriculture. Conventional wisdom dictates that as farms scale up, they substitute capital for labor, becoming more efficient and producing more with less. But in the case of local markets, it appears that as the volume of direct and intermediary sales grows, the hours, skills, and expertise needed to manage buyer-responsive supply chains increases, as well. This finding supports the argument that local food can serve as a rural economic development driver; farms selling through local markets require more labor per dollar of sales, thus creating jobs.

Figure 1 Share of Variable Expenses, Local Food Producers, by Scale (Bauman, Thilmany, Jablonski 2018)

 

Do these additional labor expenditures impact the profitability of local producers? To answer this question, we categorized farms and ranches that sell through local markets by size, or sales class—the smallest reporting less than $75,000 in sales, and the biggest reporting $1,000,000 or more. We then broke down each sales class by performance, using return on assets as our indicator for performance, and organized farms and ranches into quartiles (see Figure 2). This categorization allowed us to zero in on the highest performing producers of every sales class.
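
For readers curious about the mechanics of that step, here is a minimal sketch in Python (using pandas) of grouping farms into sales classes and then into return-on-assets quartiles within each class. The column names, the single middle sales class, and the unweighted quartile cut are simplifying assumptions for illustration; the actual ARMS-based analysis uses survey weights and finer sales classes.

# Minimal sketch: bucket farms by sales class, then assign return-on-assets
# quartiles within each class. Column names are hypothetical.
import pandas as pd

def classify_sales(sales):
    """Bucket a farm into a sales class (thresholds follow the post's description)."""
    if sales < 75_000:
        return "< $75k"
    elif sales < 1_000_000:
        return "$75k to < $1M"   # assumed single middle class; the study uses finer classes
    else:
        return "$1M or more"

def roa_quartiles(farms: pd.DataFrame) -> pd.DataFrame:
    """Add return on assets, sales class, and within-class performance quartile."""
    farms = farms.copy()
    farms["roa"] = farms["net_income"] / farms["assets"]
    farms["sales_class"] = farms["sales"].apply(classify_sales)
    farms["roa_quartile"] = (
        farms.groupby("sales_class")["roa"]
             .transform(lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4]))
    )
    return farms

The market-channel split shown in Figure 2 (direct only, intermediated only, or both) would be an additional grouping on top of this one.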

Though performance varies widely, we found that among all producers with more than $75,000 in sales, at least half were break-even or profitable. In every sales class (even the smallest!), farms in the top quartile reported returns over 20 percent: very strong profitability for the agricultural sector, where profit margins are generally slim.

What makes a local farm succeed?

To explore patterns in profitability a little further, we can compare various financial measures across producers with low versus high profits. Among the top performing quartile, farms and ranches that sell through intermediated channels only, or a combination of direct and intermediated channels, performed much better than those using direct markets only. This may signal the importance of intermediated markets and justify support for intermediated market development through grant programs such as the Local Food Promotion Program. Further, using more in-depth statistical analysis of local and regional producers, we found that farms and ranches selling only through direct-to-consumer markets may be struggling to control their costs, and that strategic management changes to these operations could result in significant improvements in profitability.

Figure 2 Local Food Producers Return on Assets by Sales Class and Market Channel (Quartile 4 is the most profitable) (Bauman, Thilmany, Jablonski 2018)

In summary, we see that local food markets provide opportunities for profitable operations at any scale, but that sales through intermediary markets are correlated with higher profitability when compared to producers that use only direct channels.

To learn more about the economics of local food systems (including more about this research), we encourage you to visit localfoodeconomics.com, where we have compiled a number of fact sheets on this topic. We started this community of practice in conjunction with the U.S. Department of Agriculture’s Agricultural Marketing Service and eXtension. The website and listserv serve as a virtual community in which academic, nonprofit and policy professionals can engage in conversations about the economic implications of the many activities that fall under the umbrella of local food. For the broader food system community and consumers, gaining insight into the underlying economics of how food markets work may inform decisions about how to use their food dollars in ways that positively impact their communities. We hope to see you there!

Becca B.R. Jablonski is Assistant Professor and Food Systems Extension Economist at Colorado State University.

Dawn Thilmany McFadden is Professor of Agricultural and Resource Economics and Outreach Coordinator at Colorado State University.

Allie Bauman is a Research Assistant in the Department of Agricultural and Resource Economics at Colorado State University.

Dave Shideler is Associate Professor of Agricultural Economics at Oklahoma State University.

This research is supported through the U.S. Department of Agriculture’s National Institute of Food and Agriculture (award number 2014-68006-21871).

 

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

What Does North Korea Want—and What is the US Prepared to Give?

UCS Blog - All Things Nuclear (text only) -

North Korea is not likely to negotiate in earnest unless it is convinced the United States is committed to the process. It is important that the administration put together a package of what it is willing to put on the table in response to Pyongyang’s steps.

Kim has talked about the dual goals of security and improving the economy. A key goal of early talks should be for the United States to understand what North Korea wants and what it is willing to do to get those things.

(Source: KCNA)

Kim’s first interest is likely setting up conditions that assure the survival of his regime without needing nuclear weapons. Recent press reports indicated what steps North Korea sees as important to increase its security, including:

  • stopping the inclusion of “nuclear and strategic assets” during US joint military exercises with South Korea,
  • guaranteeing that the United States will not attack North Korea with either conventional or nuclear weapons,
  • converting the armistice agreement from the Korean War into a peace treaty, and
  • normalizing diplomatic relations with the United States.

As part of normalizing relations, the United States should discuss opening a liaison office in Pyongyang and having North Korea do the same in the United States. This step was discussed in the 1990s and was expected to occur by the end of 1998, but never happened.

As noted in Part 1, North Korea stated in 2016 that denuclearization “includes the dismantlement of nukes in South Korea and its vicinity.” The United States will need to understand what it means by “its vicinity,” and whether Pyongyang sees that as including the US air base on Guam, where nuclear-capable bombers are based, or Okinawa, where nuclear storage sites may be built as part of a new US military base there.

Non-military issues

In addition to security measures, North Korea is also looking for economic and development assistance. As in past negotiations this assistance would not all come from the United States.

One step would clearly be relaxing sanctions. A second would be to remove North Korea from the list of state sponsors of terrorism. President Bush had removed it from the list in 2008, but President Trump relisted it last November. This creates a barrier, for example, to economic assistance and getting loans from the World Bank and other international institutions.

In the past there were discussions of helping North Korea grow more of its own food through assistance with fertilizer, measures to repair and improve irrigation systems, etc. Such assistance would still be important.

In past negotiations there has also been a focus on energy assistance. Frequently that took the form of shipments of heavy fuel oil, which was chosen because it could be used to produce energy but was not highly refined enough to be useful to fuel military vehicles, etc. However, its interest is certainly broader than that. In the 1990s, North Korea was interested in assistance in developing energy technologies, including sending scientists to the National Renewable Energy Laboratory. The North could also benefit from assistance in modernizing its power grid.

In the past, North Korea has also declared the right to develop nuclear energy for peaceful uses, and it is currently building a reactor that it says is intended for producing power and would not be used for military purposes. In principle, this could be done once it has rejoined the NPT and allowed the IAEA to safeguard its nuclear facilities, but given North Korea’s past actions in expelling inspectors and pulling out of the NPT, this is certain to be controversial.

North Korea has also been interested in assistance to improve its mining sector. Such a step could be very important since minerals are one of the main resources North Korea has to earn foreign exchange. A recent article notes that

North Korea has sizeable deposits of more than 200 different minerals, including coal, iron ore, magnesite, gold ore, zinc ore, copper ore, limestone, molybdenite, graphite and tungsten. All have the potential for the development of large-scale mines.

The United States could help establish a fund to assist North Korea in developing its mining technology and infrastructure, and could encourage private capital to help develop the mining sector. In 1993, Israel was negotiating with North Korea to stop missile sales to the Middle East, and assistance for its mining industry was an important part of the deal. Ultimately, Israel backed away from this agreement under US pressure since the United States was negotiating with North Korea over its nuclear program at the time.

Former Senators Nunn and Lugar have also proposed developing a program that would help employ and retrain scientists and engineers from North Korea’s military sector, and to provide technical and financial assistance for destroying and disposing of nuclear, chemical, and biological weapons and their delivery systems. This is similar to what was done under the successful Cooperative Threat Reduction program Nunn and Lugar developed after the breakup of the Soviet Union.

Finally, North Korea has stated that it wants to be able to use space in the ways other countries do—for communications, earth monitoring, resource exploration, weather forecasts, etc.—and has developed an incipient satellite launch capability. An indigenous satellite launch program could be acceptable sometime in the future when the international community has developed more trust in the North Korean regime, but not in the near term.

There are several approaches to negotiating an end to this program. One approach is for the international community to provide North Korea access to various kinds of satellite services and help with developing the expertise needed to use it, eliminating the need for it to own and operate its own satellites.

A second approach would be to set up a consortium that could help North Korea develop technical satellite expertise and design and build a satellite. The international community would then fund or heavily subsidize foreign launch services to compensate for North Korea’s lack of domestic launch capability. And in either case it could be useful to integrate North Korea into various international and regional space and satellite forums.

(Part 1 of this post)

What Does the US Want from North Korea?

UCS Blog - All Things Nuclear (text only) -

President Trump is planning to meet with North Korean leader Kim Jong-Un in May or June. In preparing for the summit, the administration must be clear about what it wants from the process—both near-term and long-term. And it needs to figure out what it is willing to put on the table to get those things.

(Source: KCNA)

Beginning talks

The current situation seems to offer about as good a stage as one can imagine for talks that could lead to meaningful changes in North Korea’s nuclear and missile programs.

In particular, North Korea has said it is willing to talk about denuclearization, which is a long-standing US pre-condition for talking. Press reports in early April reported that Pyongyang had repeated its willingness to discuss denuclearization and indicated the key things it wanted in return, which are steps to increase the security of the regime that appear similar to steps the United States agreed to under the Bush administration. And it has said it would not require US forces to leave South Korea as part of such a deal.

Moreover, North Korea has said it is ending nuclear and missile tests. It has not conducted a missile test in more than four months—which is especially noteworthy after testing at a rate of nearly twice a month in 2017. A lack of testing is meaningful since it places significant limits on North Korea’s development of nuclear weapons and long-range missiles, and it can be readily verified by US satellites and seismic sensors in the region.

There is a debate about whether “denuclearization” is a realistic long-term goal of negotiations, what that term means to North Korea, and what it would take to get North Korea to give up its weapons. It seems significant, however, that in July 2016 Pyongyang stated that denuclearization means “denuclearization of the whole Korean peninsula and this includes the dismantlement of nukes in South Korea and its vicinity” but did not say it would only give up its weapons when the United States and other countries disarm, which is the position it had taken previously.

Whether or not full denuclearization of the peninsula is possible, there is a lot to be done in the near-term that would greatly benefit US and regional security and set the conditions for denuclearization.

And the administration should remember that the alternatives to diplomacy are not good: The best is a stalemate in which the United States uses the threat of retaliation to deter a North Korean strike, just as it does with Russia and China. A military strike and response by North Korea would be a disaster for the region.

Confrontation vs. Diplomacy

The first thing the administration must decide is whether it will pursue confrontation or diplomacy in this meeting.

There is a strong feeling among some in Washington that the North Korean regime is evil and that any effort to negotiate simply helps the regime—and that the United States should not be doing that. Instead these people believe the only solution is regime change in Pyongyang. They see a face-to-face meeting at best as an opportunity to confront North Korea rather than seriously negotiate.

This issue will certainly become a prominent point debated in Washington if negotiations go forward. If President Trump wants an agreement he will have to ignore these arguments, which torpedoed negotiations under the Bush administration.

Even among those in the administration who want to engage North Korea, the prevailing idea seems to be that the United States should demand that North Korea give the United States what it wants up front before Washington will reciprocate.

For example, in his recent confirmation hearing for secretary of state, Mike Pompeo said the administration would not give North Korea “rewards” until it had denuclearized “permanently, irreversibly.” Similarly, an unnamed administration official said “the US will not be making substantial concessions, such as lifting sanctions, until North Korea has substantially dismantled its nuclear programs.”

Because of the long-standing lack of trust between the two countries, North Korea has instead called for a “phased, synchronized” implementation of any deal. This is the approach adopted at the Six Party talks in 2005, when the parties agreed to move forward “commitment for commitment, action for action.” Kim presumably wants a step-by-step process that convinces him that he will not become the next Gadhafi.

These US statements may still allow Washington to offer things early on other than sanctions relief, such as taking steps to normalize relations and remove North Korea from the list of state sponsors of terrorism. If instead the administration expects North Korea to give the United States what it wants up front—and lose its negotiating leverage before the United States addresses the issues Pyongyang brings to the table—that approach will fail.

One concern is that the United States may overestimate the leverage it has, overplay its hand at the table, and lead to a failed summit. If other countries see an intransigent US approach as preventing progress on engaging North Korea and reducing the risk it poses, that could begin to create cracks in the sanctions regime, which would reduce US leverage for substantial changes.

It’s worth remembering that in the early 2000s the George W. Bush administration’s confrontation policy derailed negotiations that appeared close to ending Pyongyang’s plutonium production and missile development, at a time when North Korea had no nuclear weapons or long-range missiles. Following that, North Korea continued these programs, and today it has both.

What is North Korea up to?

Why the new tone from Pyongyang and the limits it has announced on its nuclear and missile programs?

Some suggest this is just a ploy by North Korea to buy time to produce more fissile material and missile parts, and to try to create splits between the countries currently supporting sanctions against it with the hope of getting sanctions relief without really limiting its military capabilities in a serious way.

On the other hand, it may be that Kim understands his military buildup is unsustainable and that to stay in power he needs to turn to improving the economy, as he promised when he took power. Nicholas Kristof wrote recently that “Kim has made rising living standards a hallmark of his leadership, and sanctions have threatened that pillar of his legitimacy.” Now that he appears to feel secure in his position within the ruling elite, he may need to think about the middle class that appears to be emerging in North Korea.

He may have decided, as his father appeared to in the late 1990s, that opening to the world is his only chance for real economic growth. Not only are his nuclear and missile programs barriers to that opening, they are also two of the few things of significant value he has to take to the negotiating table.

That doesn’t mean he has decided to get rid of them any time soon. But if this is his thinking, then significantly limiting—and possibly eventually eliminating—these programs makes sense if he can get security assurances that convince him he doesn’t need these weapons.

To understand what it is dealing with, the United States will have to take steps that test to what extent the North is willing to accept meaningful limitations—such as accepting international inspectors to confirm that plutonium production and uranium enrichment facilities are shut down and beginning to be dismantled. This has happened before with North Korea’s nuclear facilities at Yongbyon, so there is a precedent. These steps are important both for understanding Pyongyang’s intent and for halting its nuclear program on the way to denuclearization.

Near-term goals

The best outcome for a meeting between the two leaders is that it will set broad goals for an agreement that addresses both countries’ security concerns and establishes a path to denuclearization. But as we’ve seen in the past, working out the details—especially on issues like inspections and verification—will be tricky and take time. So one goal of the first meeting should be to agree to a schedule of ongoing talks to give both countries an expectation of a continuing process, and a list of what issues will be on the table at future meetings.

Here are three things that should be near-term goals of the negotiations:

  1. Locking in a permanent ban on nuclear and missile tests, and satellite launches.

(Source: KCNA)

North Korea has announced that it is ending nuclear and missile tests and shutting down its nuclear test site. The United States should clarify the details and get it written down as a formal commitment.

While North Korea put this on the table even before negotiations began, people should not overlook its potential importance.

North Korea has now done a single test of a missile that in principle can reach all of US territory, several underground tests of an atomic bomb, and a single underground test of what was likely a hydrogen bomb. Given those tests, North Korea can say it has—in principle at least—the ability to hit the United States with a nuclear missile and therefore has a deterrent to a US military attack.

Indeed, in his New Year’s message this year, Kim said, “we achieved the goal of completing our state nuclear force in 2017,” adding that “the entire area of the US mainland is within our nuclear strike range, and the US can never start a war against me and our country.”

But North Korea does not yet have a fully tested capability to attack the United States with a long-range missile, and this matters. With only a single test of its Hwasong-15 missile on a lofted trajectory and no known successful test of a reentry vehicle on a long-range missile, additional tests are necessary to gain that practical capability. Similarly, after only a single test of a hydrogen bomb, it is very unlikely North Korea has a design that is small and light enough to launch on a missile, and it has little information about the reliability of the design.

This means that stopping additional nuclear and missile tests is important and meaningful. And since the United States can verify that no tests are occurring, it will know if North Korea is abiding by the agreement.

There are reasons why Kim may be happy to stop testing long-range missiles at this point. For one thing, while his single test of the Hwasong-15 missile was successful, there is no guarantee that a second test would be. A failure would undercut Kim’s claim of having a missile capability against the United States.

Moreover, gaining confidence in the missile performance would require a series of successful flight tests. The rapid increase in the range of the tested missiles during 2017 may have been possible because key components were acquired from Russia. If so, the North may be limited in how many missiles it can actually build—either to test or put in an arsenal.

While I have argued that developing a working reentry vehicle is not likely to be a technical barrier for North Korea, it has not yet demonstrated that it has one in hand for a long-range missile. Stopping further missile tests would keep it that way.

The two countries should clarify what missiles the flight ban applies to. The United States should press for it to include all missiles—ballistic and cruise—that would have a range over 300 km with a 500 kg payload, which is the MTCR limit. It would therefore apply to missiles that could reach Japan. South Korea has developed ballistic missiles with ranges up to 800 km and cruise missiles with ranges up to 1,500 km, and this flight ban would apply to the South as well. That would require South Korea’s agreement to this limit.

The United States should make clear that the ban also applies to satellite launchers. Because the technologies for satellite launchers can be used to develop long-range missiles, stopping this development is an important part of ending its missile program. Getting the North to agree to give up that program, given the civil benefits of a satellite program, is likely to require the US to arrange for the international community to provide access to space launch or satellite services in place of a domestic space launch program.

A longer-term step would be eliminating all missiles on the peninsula that fall under the flight ban. Verifying elimination would be more difficult than verifying a flight ban, but it was discussed in the negotiations with North Korea under both Clinton and Bush, and verification was put in place as part of the Intermediate-Range Nuclear Forces (INF) Treaty, which eliminated all US and Soviet ground-based missiles with ranges between 500 and 5,500 km.

Following that, the next step could be to eliminate all missiles, as well as the artillery North Korea has aimed at Seoul, as part of a broader agreement limiting conventional forces.

  2. A freeze on the production of separated plutonium and highly enriched uranium, leading to a ban

Yongbyon reactor (Source: US Senate)

A second near-term goal of negotiations should be an agreement to shut down North Korea’s nuclear reactors, which are the source of its plutonium, and have inspectors on the ground to ensure it does not extract plutonium from fuel rods that have been removed from the reactors. North Korea agreed to both steps in the 1994 Agreed Framework and verifiably did so until the Framework collapsed in 2002.

The agreement should also put international inspectors at North Korea’s known enrichment facility to verify that it is not being operated, and allow challenge inspections of other sites suspected of being used for enrichment.

Getting these agreements would not be unprecedented. During the 2005 negotiations, Pyongyang agreed to “abandoning all nuclear weapons and existing nuclear programs and returning, at an early date, to the Treaty on the Nonproliferation of Nuclear Weapons and to IAEA safeguards.” Those negotiations eventually stalled over disagreements on verification measures and inspections, which were unresolved when the Bush administration left office.

The agreement should also require Pyongyang to preserve information that in the future would allow the IAEA to construct a history of its past nuclear activities. This would allow the IAEA to determine how much fissile material North Korea had produced—and whether it was all accounted for.

As part of the Six Party talks under George W. Bush in 2008, North Korea shut down its reactor at Yongbyon and provided 18,000 documents about its plutonium production, so there is a precedent for this as well.

  3. A ban on the sale or transfer of missile or nuclear technology, or technical assistance

As part of a deal, North Korea should agree to a ban on the sale or transfer of missile or nuclear technology to other countries or groups, and a ban on providing technical assistance on these systems. Such a ban would require agreement on transparency measures to help provide confidence that such activities were not taking place. In a recent speech, Kim stated:

… the DPRK will never use nuclear weapons nor transfer nuclear weapons or nuclear technology under any circumstances unless there are nuclear threats and nuclear provocations against the DPRK.

So this could be a starting point for a discussion of these issues.

In the longer term, in addition to talking about denuclearization, the United States should focus on getting rid of North Korea’s chemical and biological weapons programs, and put restrictions on its conventional weapons. The latter would have to include restrictions on South Korean conventional weapons as well.

(The second part of this post will discuss what North Korea is likely to want from the talks.)

EPA Chief Pruitt Even Violates His Own Principles

UCS Blog - The Equation (text only) -

With Environmental Protection Agency Administrator Scott Pruitt’s job now hanging in the balance, it is a good time to recall that, just after his Senate confirmation, he gave a speech at the Conservative Political Action Conference (CPAC) that emphasized the three principles he said would stand at “the heart of how we do business at the EPA”: process, rule of law, and federalism.

A little more than a year into his tenure, he has violated all of them.

Subverting Process

“Number one,” Pruitt told his CPAC audience, “we’re going to pay attention to process.”

In fact, as we now know, Pruitt has a long track record—going back to his days in Oklahoma—of flouting official procedures when it suits him.

Most troubling is Pruitt’s disdain for EPA policy procedures, which have a considerable impact on public health. Just this week, Pruitt undercut the EPA’s long-established process for drafting strong, protective regulations by proposing that the agency no longer accept studies unless all of their underlying data are publicly available. That would mean the agency would have to ignore most epidemiological studies, which rely on private medical information that cannot and should not be shared.

Polluter-funded members of Congress have tried to pass bills instituting this restriction for years, despite the fact that it would violate the EPA’s obligation to use the best available science to protect public health. Sure enough, emails obtained by the Union of Concerned Scientists show that political appointees, not career staff or scientists, were behind the proposal, and they only considered its potential impact on industry. In response, nearly 1,000 scientists sent a letter to Pruitt asking him to back off.

Pruitt also packed the EPA’s Science Advisory Board (SAB) with industry scientists, overturning four decades of precedent by banning scientists who have received EPA grants from serving on the SAB or any other agency advisory panel. Why? Pruitt claims they have a conflict of interest. Pruitt did not renew terms for a number of respected members and dismissed several independent scientists before their terms were up, shrinking the SAB from 47 to 42 participants and more than doubling the number of its polluter-friendly members.

Likewise, Pruitt clearly has little use for standard EPA administrative procedures. The Government Accountability Office, for example, recently found that he violated federal law by ordering a $43,000 soundproof phone booth. Political appointees, it turns out, have to clear office improvement purchases over $5,000 with Congress. Unlike his predecessors, he has routinely flown first class, and so far it has cost taxpayers more than $150,000. He tripled the size of the administrator’s security team to 19 agents, and according to CNN their annual salaries alone cost at least $2 million. He has a 24-hour-a-day bodyguard. He rented a condo for $50 a night—well below market value—from the wife of an energy lobbyist who met with Pruitt last July and lobbies EPA on behalf of his clients. The list of Pruitt’s ethical infractions goes on and on.

Breaking the Rule of Law

“When rule of law is applied it provides certainty to those that are regulated,” Pruitt explained during that CPAC speech. “Those in industry should know what is expected of them. Those in industry should know how to allocate their resources to comply with the regulations passed by the EPA.”

It’s hard to argue with that. Of course industrial facility owners should be clear about their responsibility to curb emissions. Under Pruitt, however, polluters can be certain about at least one thing: There’s a good chance they won’t be prosecuted. For Pruitt, the rule of law is made to be broken.

In its first year in office, the Trump administration resolved only 48 environmental civil cases, about a third fewer than under President Barack Obama’s first EPA administrator and less than half as many as under President George W. Bush’s over the same time period, according to a February report by the Environmental Integrity Project. The Trump administration recovered just $30 million in penalties from these cases, nearly 60 percent less than the $71 million the Obama administration recovered in its first year.

A December analysis by The New York Times, comparing the first nine months of the Trump administration with previous administrations, also found a marked decline in enforcement. It determined that the EPA under Pruitt initiated about 1,900 enforcement cases, about a third fewer than during the Obama administration and about a quarter fewer than during the Bush administration over the same time frame.

Meanwhile, Pruitt—who sued the EPA 14 times to block stronger air, water and climate safeguards during his tenure as Oklahoma attorney general—is now trying to roll back environmental protections from the inside. Since taking office, he has moved quickly to delay or weaken a range of Obama-era regulations, including ones that protect the public from toxic pesticides, lead paint and vehicle emissions.

Ironically, Pruitt’s cavalier attitude about following procedures has thus far blunted his wrecking-ball campaign. “In their rush to get things done, they’re failing to dot their ‘I’s and cross their ‘T’s, and they’re starting to stumble over a lot of trip wires,” Richard Lazarus, a Harvard environmental law professor, told The New York Times. “They’re producing a lot of short, poorly crafted rulemakings that are not likely to hold up in court.”

Federalism for all but California

“So process matters, rule of law matters, but let me tell you this: What really matters is federalism,” Pruitt told the CPAC faithful. “We are going to once again pay attention to the states across the country. I believe people in Oklahoma, in Texas, in Indiana, in Ohio, and New York and California, and in all the states across the country, they care about the air they breathe, and they care about the water they drink, and we are going to be partners with these individuals [sic], not adversaries.”

California? He must have forgotten that when he lashed out at the state for embracing stronger vehicle fuel economy standards than what he and the auto industry would prefer. “California is not the arbiter of these issues,” Pruitt said in an interview with Bloomberg TV in mid-March. California sets state limits on carbon emissions, he said, but “that shouldn’t and can’t dictate to the rest of the country what these levels are going to be.”

California, which has a waiver under the 1970 Clean Air Act giving it the right to set its own vehicle emissions standards, reached an agreement with the Obama administration and the auto industry that established the first limits on tailpipe carbon emissions. The next phase of the standards calls for improving the average fuel efficiency of new cars and light trucks to about 50 miles per gallon by 2025 in lab tests, corresponding to a real-world performance of about 36 mpg. By 2030, that would reduce global warming pollution by nearly 4 billion tons, akin to shutting down 140 coal-fired power plants over that time frame.

California wants to stick with the standards. Pruitt, echoing the specious claims of auto industry trade groups, announced in early April that he wants to roll them back. Putting aside the fact that the auto industry’s own analysis concluded that carmakers can meet the 2025 targets primarily with conventional vehicles, what happened to Pruitt’s “cooperative federalism” ideal, especially since California is not acting alone?

Thirteen states, mostly in the Northeast and Northwest, and the District of Columbia have adopted California’s stricter emissions standards. Together they represent at least a third of the U.S. auto market. And in response to Pruitt’s roll-back announcement, 12 state attorneys general and 63 mayors from 26 states released a declaration supporting the stronger standards. “Such standards are particularly appropriate given the serious public impacts of air pollution in our cities and states and the severe impacts of climate change…,” the declaration reads. “If the administration attempts to deny states and cities the basic right to protect their citizens, we will strongly challenge such an effort in court.”

That declaration sounds a lot like what Pruitt endorsed at the conclusion of his CPAC speech, but of course he was referring to state efforts to weaken federal environmental safeguards, not strengthen them. “We are going to restore power back to the people,” he said. “We are going to recognize the regulatory uncertainty and the regulatory state needs to be reined in, we’re going to make sure the states are recognized for the authority they have, and we are going to do the work that’s important to advance freedom and liberty for the future. It’s an exciting time.

“The folks in D.C. have a new attitude,” Pruitt continued. “It’s an attitude that no longer are we going to dictate to those across the country and tell them how to live each and every day. It’s an attitude that says we’re going to empower citizens and the states. It’s an idea of federalism and believing in liberty.”

The CPAC crowd gave him a standing ovation, but the reception he’s now getting from both Democrats and Republicans alike is considerably cooler. At this point, Mr. Pruitt may soon find himself out of a job.

Six Things You Should Know About The EPA’s New Science Restriction Draft Policy

UCS Blog - The Equation (text only) -

Yesterday, the EPA unveiled a long-awaited policy changing how the EPA can use science in its decision making. Ostensibly about “transparency”, the policy will actually restrict the agency’s ability to use the best available science—and thereby weaken protections for public health and the environment. While many of the provisions were known in advance, there are important details worth pointing out now that we know what the policy looks like:

1. It’s clear: This is about attacking soot and smog protections

It has always been evident that current EPA leadership is keen on dismantling the 2015 ozone standard, but the new proposal makes it abundantly clear that the administration is targeting ozone and particulate pollution protections specifically. The language is prescriptive—focused narrowly on the scientific studies that link air pollutants to adverse health effects, like premature death, heart attacks, and respiratory illness. In other words, the administration is restricting use of the very studies that show the need for air pollution protections. (Breaking with scientific community nomenclature, the proposal refers to these studies as “dose response studies”.) This directly undermines the EPA’s ability to set air pollution standards at a level that is protective of public health—as the agency is legally required to do under the Clean Air Act. The proposal even calls out the 2015 ozone rule specifically, raising the idea of having the new policy retroactively applied to the rule. To be clear, the real-world implications of this policy are likely to extend far beyond ozone and particulate matter—everything from the chemical safety of consumer products to pesticide use to water quality—but the administration has shown its hand when it comes to the motivations behind this policy.

2. The EPA administrator has absolute power

Tucked into the policy proposal is a provision that gives the EPA administrator notable power over how this would be implemented. He or she gets to decide what information counts and what does not. The administrator can “exempt significant regulatory decisions on a case-by-case basis.” This is a loophole big enough to drive a tractor-trailer through (perhaps one that doesn’t meet modern emissions standards?). By allowing the EPA administrator to exempt entire decisions, or just singular studies, this provision lets political appointees pick and choose what science is “acceptable” when making (what should be) science-based policy decisions. It politicizes the evidence base, allowing studies to be considered or excluded for arbitrary reasons and making it easier for the EPA administrator to insert politics into existing science-based processes at the EPA.

3. A lot of talk, not a lot of evidence

The proposal provides little justification for the move. In addition to being a solution in search of a problem, it includes no analysis of how the change will affect the EPA’s mission-related work and no accounting of the costs or benefits of such a move. In a proposed rule for public comment, the agency is supposed to present its analysis of the effect of the rule. Do they expect some types of public protections to be harder to implement? Do they expect certain standards will change? Why? And what benefits would be gained or lost for the public? What are the costs in time and resources to the agency and the public?

This lack of clarity on the implications of the proposal is especially concerning in light of a past Congressional Budget Office analysis that found that the similar HONEST Act would cost upwards of $250 million to implement. It is clear that the current proposal would create a tremendous time and resource burden on the EPA and on scientists outside the agency, as they scramble to (needlessly) track down, collect, compile, and post (including any statistical methods needed to conceal private information) all the data, models, code, etc. for each of the hundreds of studies the EPA cites in any given decision. It is easy to see how this will add administrative red tape, not remove it.

As Senator Tom Carper and his colleagues point out, this policy will likely violate several laws, including the recently updated and bipartisan Toxic Substances Control Act, which requires the use of “best available science” in policymaking. The EPA fails to address this concern in the proposed rule.

4. But how, though? The policy is lacking in details.

The proposal is vague about how all of this would happen. How would the Administrator decide what studies would be exempt? Where would the data reside? How would the resources to manage all the data be obtained? Ensuring all this extra information is received, stored, and made publicly accessible would be no small undertaking. The proposal references a system used by the National Institutes of Health to collect health data, but there are barriers and questions about how such a system could apply to the EPA. While the administration asserts that this is about transparency, the NIH system isn’t publicly accessible. There are all kinds of concerns related to disclosing personal health and other data. How will privacy be protected, and privacy laws complied with? Or will this policy simply block the agency from using any study that has relied on protected, private data? It isn’t clear that such processes have been seriously considered by the administration, and they’re hoping the public can figure it out for them.

5. Good laboratory practices, bad idea

The policy makes vague reference to “good laboratory practices.” This sounds, well, good. But it is anything but. This particular standard has long been used to ensure that industry studies are weighted more heavily than independent studies conducted at universities, for example. Because the scientific community has its own ways of ensuring appropriate practices, many academic institutions do not meet the GLP standard. Historically, this has allowed chemicals with adverse health impacts, like BPA, to stay on the market because the EPA could rely only on industry studies that (conveniently) found no health concerns, while academic studies did. Applying these particular standards more broadly than they are currently applied could restrict the number of independent health studies that the EPA can consider in its decision making.

6. The EPA claims support for its proposal; it doesn’t hold up.

Notably, the EPA’s own officials don’t agree that this is a good idea. In emails released to the Union of Concerned Scientists last week through an open records request, assistant administrator and former chemical industry representative Nancy Beck flagged prior EPA positions on this issue, established in court cases:

While the EPA therefore strives to ensure that data underlying research it relies upon are accessible to the extent possible, it does not believe that it is appropriate to refuse to consider published studies in the absence of underlying data … If the EPA … could not rely on published studies without conducting independent analyses of the raw data underlying them, then much relevant scientific information would become unavailable for use in setting standards to protect public health and the environment.

Externally, people don’t agree with this proposal either. The EPA policy proposal cites Science magazine and reports by the Bipartisan Policy Center and the Administrative Conference of the United States as aligning with its efforts, but that’s not what those sources said.

Yesterday, Jeremy Berg, editor-in-chief of the Science family of journals, pushed back on the claim:

It does not strengthen policies based on scientific evidence to limit the scientific evidence that can inform them; rather, it is paramount that the full suite of relevant science vetted through peer review, which includes ever more rigorous features, inform the landscape of decision making. Excluding studies that do not meet rigid transparency standards will adversely affect decision-making processes.

Today, Rob Meyer of the Atlantic talked with Wendy Wagner, University of Texas School of Law professor and author of both the BPC and ACUS reports. She too challenged the claim that the reports support the EPA’s proposal.

They don’t adopt any of our recommendations, and they go in a direction that’s completely opposite, completely different. They don’t adopt any of the recommendations of any of the sources they cite. I’m not sure why they cited them.

A new source, but still dishonest

Some parts of the legislation on which the EPA proposal is based were so indefensible that the agency didn’t even bother to propose them. Large portions of the HONEST Act suggest that studies should only be considered in rulemaking if they are reproducible—as if we should expose kids repeatedly to lead or mercury to make sure that the initial studies hold up. We will need to ensure that a final rule does not further limit the science that the EPA can use through bogus reproducibility arguments.

In the end, this policy would have broad implications for the EPA’s ability to fulfill its mission of protecting public health and the environment. By restricting the science that the EPA can use to make decisions, it would force the agency to protect us with its hands tied and blinders on. This doesn’t serve transparency. It doesn’t serve efficiency. And it certainly doesn’t serve the public interest.

Hundreds of Leading Scientists Stand Up for Science Integrity and Plead for Climate Action

UCS Blog - The Equation (text only) -

Scientists have been justifiably alarmed since the early days of the Trump administration. Many have voiced concerns about the removal of climate change information from websites, the disregard of science on pesticides and air quality, and more. Enter the latest salvo: yesterday, hundreds of members of the National Academy of Sciences (NAS) signed a letter calling for the administration to reverse its decision to withdraw the United States from the Paris Climate Agreement and to restore scientific integrity to decision making.

Here are three reasons why we should pay attention:

  1. The NAS signers are top minds in their fields. Elected by their peers as leaders in their respective fields of science, all signers are members of the National Academy of Sciences (NAS). President Lincoln signed the Act of Incorporation for the NAS in 1863 to provide a service to the nation. These scientists provide voluntary service, usually at the request of Congress or agencies in the Executive Branch. They are used to thinking about the implications of their science and providing advice to the U.S. government. In this case, they got together to provide independent advice on matters of grave concern to the fate of our nation and its people.
  2. These leading scientists understand that addressing climate change is urgent. They call on the administration to reverse the decision to withdraw the U.S. from the Paris Climate Agreement. Our intent to withdraw matters because the U.S. is the second-largest emitter of carbon dioxide from fuel combustion. The United States originally supported the Paris Agreement and committed to “achieve an economy-wide target of reducing its greenhouse gas emissions by 26-28 per cent below its 2005 level in 2025.” The Trump administration has reneged on that promise. As a result, forecasts for emissions trajectories went up, placing in jeopardy the Paris Agreement goal to keep the global average temperature rise this century well below 2 degrees Celsius above pre-industrial levels. This is a threshold that scientists agree we don’t want to cross if we hope to avoid some of the worst consequences for people and other life on this planet.
  3. Considering scientific and technical input is essential for a safe and prosperous nation. In addition to noting that the U.S. is the only nation to have initiated withdrawal from the Paris Agreement, the central core of the statement is a call to restore science-based policy in government. The statement outlines the problem: “The dismissal of scientific evidence in policy formulation has affected wide areas of the social, biological, environmental and physical sciences.” Evidence abounds. Clean air protections have been weakened, likely imposing an unequal burden of injustice on those who live closest to sources of hazardous pollutants. Centers for Disease Control staff should not have to restrict the words they use to describe important public health matters and should feel free to respond to reasonable public requests for information. Experts have been reassigned to positions outside their areas of expertise. The statement also calls on the administration “to appoint qualified personnel to positions requiring scientific expertise.” Confidence in leadership at the highest levels of several agencies is eroding. The list is long…

It takes guts to step out and speak up when you see a wrong. Take note, nation: our top scientists are loudly ringing the alarm bells.

Flint, Michigan Still Waiting for Justice Four Years On

UCS Blog - The Equation (text only) -

Photo: Lance Cheung/USDA

In April of 2014, Flint, Michigan residents noticed that there was something wrong with their water. As UCS Senior Fellow and noted Boston Globe Opinion writer Derrick Jackson recounts in his lengthy report, only a month after the city of Flint switched to using Flint River water instead of Lake Huron water, community members noticed the difference. Bethany Hazard told reporter Ron Fonger of the Flint Journal that her water was murky and foamy. LeeAnne Walters, recent winner of the Goldman Environmental Prize, noticed rashes on her children. Other residents, too, noticed that the water was discolored, smelled or tasted bad, or that their families were experiencing health problems.

But state and local officials wouldn’t admit the problem or act…for a long time. In fact, it took two years before the state began delivering bottled water to residents, and then only under court order. It took another year for the EPA to provide a grant to upgrade the water system, while a federal judge ordered the state to provide another $97 million. Just this month, the Governor of Michigan, Rick Snyder, announced that delivery of bottled water to residents is at an end. Ask yourself: would you drink the city water now?

An avoidable crisis

It’s easy to ask, why did all this occur? Did the city NEED to switch water supplies? How could the health and safety of so many residents be ignored? Why did it take so long to respond to residents?

The crisis and disaster occurred because of a decision by Flint and state officials to switch from the Detroit water system to the new Karegnondi Water Authority, which was under construction, ostensibly to save money. There was no pressing need to switch systems except the promise of future savings. And since the new system wasn’t completed and the contract with Detroit wasn’t renewed, a short-term “fix” was chosen: use Flint River water, but without appropriate treatment with anti-corrosion additives.

So it was a business decision, and it didn’t need to happen. The contract with Detroit could have been extended.

But why were so many residents’ concerns ignored? The answer lies in the long history of environmental injustice in our country. The city of Flint is majority people of color, and many residents live in poverty. Time and again, health and safety impacts on these communities are ignored, as are the voices of the residents. In Flint, community members were the first to report that there was an issue with their water, and they began to organize to provide clean, bottled water well before any state intervention. And it is these most impacted communities who continue to speak up for themselves. But it was and is hard to get anyone to listen. It is easy, and appropriate, to single out the government officials from the city and state, including the governor. But what about the rest of us?

Scientists and reporters—where are you?

The residents knew what they were experiencing, but not necessarily why. Scientists like Dr. Mona Hanna-Attisha, a Flint pediatrician; EPA whistleblower Miguel Del Toral; and Marc Edwards from Virginia Tech did provide critical diagnostic evidence that helped explain what was going on.

But there is a big university community in Michigan and throughout the Upper Midwest, and many other environmental scientists around the country work on water quality. If ever a community needed our help, it was and is Flint.

And, as Derrick Jackson reports, the story was slow to get traction in the media. Ask yourself whether that would have been the case if the crisis had occurred in a wealthier, whiter community. If you have any doubts, see this report that scientists from EPA published in February about environmental racism—it’s behind a paywall, but you can read this news article on it, and another study here.

The Flint community is organized and speaking out. They still need and deserve our attention, our support and most of all, they deserve environmental justice.

Ask yourself how you can become a part of the solution; how you can help to bring environmental justice.

Better Data Are Needed to Dismantle Racism in Policing

UCS Blog - The Equation (text only) -

Photo: Tony Webster/CC BY-SA 2.0 (Flickr)

The institutionalized killing of black and brown people in the United States is not a new phenomenon. The government’s role in the overt harming of black bodies goes as far back as slavery, when patrollers (paid and unpaid) stopped enslaved people in public places, entered their quarters without warrant, and assaulted and harmed them. In the late 19th and early 20th centuries, the government further sustained public devaluation of black lives through tolerance of lynching and by failing to pass anti-lynching legislation.

Today, this institutionalized killing is illustrated by countless racist police shootings—which should be enough to prioritize police brutality on the public policy agenda. However, as we have seen through the (almost complete) failure on the part of the justice system to indict police officers involved in these murders, institutional action is not being taken to address state violence directed at black and brown people.

Dismantling racism in policing, and in other institutionalized forms, rests in part on better collection and maintenance of data. Nationally representative data on exposure to various dimensions of police brutality can be linked with individual and population health indicators to paint a clearer picture of the impact of police brutality. They can also provide more insight into the causes of racial health inequities and inform the formulation of specific policy interventions. We can and must do better at collecting data.

The effects of historical institutional racial oppression cut across several sectors of contemporary American life: health, criminal justice, civic engagement, education, and the economy. I teach Introduction to Public Health at Lehigh University. When I talk about racial inequities in health, I must frame them in the context of racism. Nor can I talk about forms of contemporary racism, such as police brutality, without implicating slavery and its horrors. Without doubt, students ask questions such as “Why did it take the government so long to abolish slavery?” or “Why was it not until 2005, more than a century after lynching began, that the United States Senate apologized for not passing anti-lynching laws?” or “Why were these laws not passed to begin with?” I typically respond by asking if we are a better society now than we were three centuries ago. Responses range from listing the benefits of the civil rights movement to framing mass incarceration and police brutality as the “new” forms of state-sanctioned structural racism.

But our collective response to police brutality should help us answer questions about why lynching laws lasted as long as they did. Police brutality, which disproportionately targets and kills black and brown people in America, is modern-day lynching. As with lynching, there are perpetrators, unconcerned onlookers, and active resisters. There is also a government that fails to take comprehensive action. In this piece, I aim to focus on what government can do.

The importance of collecting data

Collect data. Data provide the necessary evidence for understanding the scope of the problem and for taking informed action. Fortunately, the Bureau of Justice Statistics is leading federal efforts to collect more comprehensive data about arrest-related deaths. While this is a step in the right direction, there are still gaps. For example, there are no government-led efforts that mandate active surveillance and reporting of police-related incidents at the local and state levels, whether these incidents lead to death, physical injury, or disability. Real-time data from non-governmental sources such as The Counted and The Washington Post help fill this gap but indicate a lack of federal commitment to active surveillance of police brutality—a social determinant of health that disproportionately harms communities of color.

Data are the bread and butter of public policy. In addition to understanding the scope of police brutality, data are relevant for assessing its impact on health, the economy, and other sectors. My current research seeks to identify the mechanisms through which police brutality affects health. The lack of nationally representative data is a problem. In the absence of these data, though, I am conducting a qualitative case study to better understand the extent to which stress and poor mental health among Black people residing in the “inner-city” might be grounded in experiences or anticipation of police intimidation and violence.

Moving from collecting data to implementing solutions

Collecting data is important. But our government institutions must also take responsibility for their past and current roles in state-sanctioned public harming of black and brown bodies. Real action at the local, state, and federal levels is required. One step is mandating active surveillance of police actions that dehumanize individuals, such as stop-and-frisk practices. Another is funding research that seeks to understand the consequences of police brutality. A third is prioritizing and financing programs and interventions that specifically reduce police brutality and that dismantle racist systems that oppress communities of color more generally.

 

Sirry Alang is an Assistant Professor of Sociology and Health, Medicine and Society at Lehigh University. Her current research explores the connection between structural racism and racial inequities in health.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

A List of Scientific Organizations That Have Supported and Opposed Limiting What Research EPA Can Use to Make Decisions

UCS Blog - The Equation (text only) -

Photo: Adam Baker/CC BY 2.0 (Flickr)

The EPA today will announce a politically motivated draft policy to restrict the use of science in agency decisions. The draft policy is based on legislation that has repeatedly died in Congress over the past several years.

The mainstream scientific and academic organizations that have opposed new restrictions on EPA’s use of science, in alphabetical order:

The mainstream scientific and academic organizations that have supported new restrictions on EPA’s use of science, in alphabetical order:

¯\_(ツ)_/¯

The former tobacco industry-paid PR men who support new restrictions on EPA’s use of science, in alphabetical order:

The “Race” to Resolve the Boiling Water Reactor Safety Limit Problem

UCS Blog - All Things Nuclear (text only) -

General Electric (GE) informed the Nuclear Regulatory Commission (NRC) in March 2005 that its computer analyses of a depressurization event for boiling water reactors (BWRs) non-conservatively assumed the transient would be terminated by the automatic trips of the main turbine and reactor on high water level in the reactor vessel. GE’s updated computer studies revealed that one of four BWR safety limits could be violated before another automatic response terminated the event.

Over the ensuing decade-plus, owners of 28 of the 34 BWRs operating in the US applied for and received the NRC’s permission to fix the problem. But it’s not clear why the NRC allowed this known safety problem, which could allow nuclear fuel to become damaged, to linger for so long or why the other six BWRs have yet to resolve the problem. UCS has asked the NRC’s Inspector General to look into why and how the NRC tolerated this safety problem affecting so many reactors for so long.

BWR Transient Analyses

The depressurization transient in question is the “pressure regulator fails open” (PRFO) event. For BWRs, the pressure regulator positions the bypass valves (BPV in Figure 1) and control valves (CV) for the main turbine as necessary to maintain a constant pressure at the turbine inlet.

When the reactor is shut down or operating at low power, the control valves are fully closed and the bypass valves are partially opened as necessary to maintain the specified pressure. When the turbine/generator is placed online, the bypass valves are closed and the control valves are partially opened to maintain the specified inlet pressure. As the operators increase the power level of the reactor and send more steam towards the turbine, the pressure regulator senses this change and opens the control valves wider to accept the higher steam flow and maintain the constant inlet pressure.

Fig. 1 (Source: Nuclear Regulatory Commission, annotated by UCS)

If the sensor monitoring turbine inlet pressure provides a false high value to the pressure regulator or an electronic circuit card within the regulator fails, the pressure regulator can send signals that fully open the bypass valves and the control valves. This is called a “pressure regulator fails open” (PRFO) event. The pressure inside the reactor vessel rapidly decreases as the opened bypass and control valves accept more steam flow. Similar to how the fluid inside a shaken bottle of soda rises to and out of the top when the cap is removed (but for different physical reasons), the water level inside the BWR vessel rises as the pressure decreases.
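
To make the failure mode concrete, here is a toy sketch in Python of the regulator idea described above: open the bypass and control valves wider as the measured turbine inlet pressure rises above its setpoint. The setpoint, gain, and clamping are invented for illustration; this is not the control logic of any actual BWR pressure regulator, but it shows how a falsely high pressure signal would drive the valve demand fully open.

# Toy sketch of the pressure regulator idea, NOT the control logic of any
# actual BWR. The setpoint and gain below are invented for illustration.

def valve_demand(measured_inlet_pressure_psig, setpoint_psig=950.0, gain=0.02):
    """Return a combined bypass/control valve demand from 0 (closed) to 1 (fully open)."""
    # If measured pressure rises above the setpoint, open the valves wider so
    # more steam flows and the turbine inlet pressure comes back down.
    demand = gain * (measured_inlet_pressure_psig - setpoint_psig)
    return max(0.0, min(1.0, demand))

print(valve_demand(955.0))   # slightly above setpoint: valves open a little
print(valve_demand(940.0))   # below setpoint: demand clamps to closed
print(valve_demand(1500.0))  # a falsely high sensor reading drives the valves fully open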

The water level is normally about 10 feet above the top of the reactor core. When the water level rises about 2 feet above normal, sensors will automatically trip the main turbine. When the reactor power level is above about 30 percent of full power, the turbine trip will trigger the automatic shut down of the reactor. The control rods will fully insert into the reactor core within a handful of seconds to stop the nuclear chain reaction and terminate the PRFO event.

The Race to Automatic Reactor Shut Down

The reactor depressurization during a PRFO event above 30 percent power actually starts two races to automatically shut down the reactor. One race ends when high vessel water level trips the turbine, which in turn trips the reactor. The second race ends when low pressure in the reactor vessel triggers the automatic closure of the main steam isolation valves (MSIV in Figure 1). As soon as sensors detect the MSIVs closing, the reactor is automatically shut down.

BWRs do not actually stage PRFO events to see which parameter wins the reactor shut down race. Instead, computer analyses are performed of postulated PRFO events. The computer codes initially used by GE had the turbine trip on high water level winning the race. GE’s latest code shows MSIV closure on low reactor vessel pressure winning the race.
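
Since no one stages a real PRFO event, one crude way to picture the race is to compare how long each automatic trigger would take to reach its setpoint. The sketch below is purely illustrative: it assumes the water level rises and the pressure falls at constant rates after the valves fail open, and the rates and setpoints are invented. The qualified codes GE uses model the thermal hydraulics in far more detail.

# Illustrative sketch only -- NOT a licensed transient analysis code.
# It assumes simple linear trajectories after the valves fail open.

def first_scram_trigger(level_rise_rate_ft_per_s, depressurization_rate_psi_per_s,
                        initial_pressure_psig=1000.0,
                        high_level_trip_ft=2.0,          # high level trip: ~2 ft above normal
                        msiv_low_pressure_psig=850.0):   # hypothetical MSIV closure setpoint
    """Return which automatic response reaches its setpoint first, and when (seconds)."""
    time_to_level_trip = high_level_trip_ft / level_rise_rate_ft_per_s
    time_to_msiv_closure = ((initial_pressure_psig - msiv_low_pressure_psig)
                            / depressurization_rate_psi_per_s)
    if time_to_level_trip <= time_to_msiv_closure:
        return "turbine trip on high water level", time_to_level_trip
    return "MSIV closure on low reactor pressure", time_to_msiv_closure

print(first_scram_trigger(0.2, 5.0))   # slow depressurization: high water level wins
print(first_scram_trigger(0.2, 30.0))  # fast depressurization: MSIV closure wins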

The New Race Winner and the Old Race Loser

The computer analyses are performed for reasons other than picking the winner of the reactor shut down race. The analyses are performed to verify that regulatory requirements will be met. When the winner of the PRFO event reactor shut down race was correctly determined, the computer analyses showed that one of four BWR safety limits could be violated.

Figure 2 shows the four safety limits for typical BWRs. The safety limits are contained within the technical specifications issued by the NRC as appendices to reactor operating licenses. GE’s latest computer analyses of the PRFO event revealed that the reactor pressure could decrease below 785 pounds per square inch gauge (psig) before the reactor power level dropped below 25 percent—thus violating Safety Limit 2.1.1.1. The earlier computer analyses non-conservatively assumed that reactor shut down would be triggered by high water level, reducing reactor power level below 25 percent before the reactor pressure decreased below 785 psig.

Fig. 2 (Source: Nuclear Regulatory Commission)

Safety Limit 2.1.1.1 supports Safety Limit 2.1.1.2. Safety Limit 2.1.1.2 requires the Minimum Critical Power Ratio (MCPR) limit to be met whenever reactor pressure is above 785 psig and the flow rate through the reactor core is above 10 percent of rated flow. The MCPR limit protects the fuel from being damaged by insufficient cooling during transients, including PRFO events. The MCPR limit keeps the power output of individual fuel bundles from exceeding the amount of heat that can be carried away during transients.

As with picking the winner of the reactor shut down race, BWRs do not slowly increase fuel bundle powers until damage begins and then back them down a smidgen or two. Computer analyses of transients also model fuel performance. The results from those analyses establish MCPR limits that guard against fuel damage during transients.

The computer analyses examine transients from a wide, but not infinite, range of operating conditions. Safety Limit 2.1.1.1 defines the boundaries for some of the transient analyses. Because Safety Limit 2.1.1.1 does not permit the reactor power level to exceed 25 percent when the reactor vessel pressure is less than 785 psig, the computer analyses performed to establish the MCPR limit in Safety Limit 2.1.1.2 do not include an analysis of a PRFO event for high power/low pressure conditions.

Thus, the problem reported by GE in March 2005 was not that a PRFO event could violate Safety Limit 2.1.1.1 and result in damaged fuel. Rather, the problem was that if Safety Limit 2.1.1.1 were violated, the MCPR limit established in Safety Limit 2.1.1.2 to protect against fuel damage could no longer be relied upon. Fuel damage may, or may not, occur as a result of a PRFO event. Maybe, maybe not is not prudent risk management.
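
To restate the logic of Safety Limit 2.1.1.1 described above in code form, here is a minimal sketch using the 785 psig and 25 percent values quoted from the technical specifications. (Typical BWR technical specifications also fold in a low core flow condition, which is omitted here to keep the sketch simple.)

# Minimal restatement of the Safety Limit 2.1.1.1 logic described in this post:
# when reactor pressure is below 785 psig, thermal power must not exceed 25
# percent of rated.

def safety_limit_2_1_1_1_met(thermal_power_pct, reactor_pressure_psig):
    """Return True if the low-pressure safety limit is satisfied."""
    if reactor_pressure_psig < 785.0:
        return thermal_power_pct <= 25.0
    return True  # above 785 psig, Safety Limit 2.1.1.2 (the MCPR limit) governs instead

# The condition GE flagged: a PRFO event drops pressure below 785 psig while
# power is still well above 25 percent.
print(safety_limit_2_1_1_1_met(thermal_power_pct=80.0, reactor_pressure_psig=700.0))  # False
print(safety_limit_2_1_1_1_met(thermal_power_pct=20.0, reactor_pressure_psig=700.0))  # True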

The Race to Resolve the BWR Safety Limit Problem

The technical specifications allow up to two hours to remedy an MCPR limit violation; otherwise, the reactor power level must be reduced to less than 25 percent within the next four hours. This short time frame implies that the race to resolve the BWR safety limit problem would be a dash rather than a marathon.

Fig. 3 (Source: Nuclear Regulatory Commission)

The nuclear industry submitted a request to the NRC on July 18, 2006, asking that the agency merely revise the bases for the BWR technical specifications to allow safety limits to be momentarily violated. The NRC denied this request on August 27, 2007, on grounds that it was essentially illegal and unsafe:

Standard Technical Specifications, Section 5.5.14(b)(1), “Technical Specifications (TS) Bases Control Program,” states that licensees may make changes to Bases without prior NRC approval, provided the changes do not involve a change in the TS incorporated in the license. The proposed change to the TS Bases has the effect of relaxing, and hence, changing, the TS Safety Limit. An exception to a stated TS safety limit must be made in the TS and not in the TS Bases. In addition,  a potential exists that the requested change in the TS Bases could have an adverse effect on maintaining the reactor core safety limits specified in the Technical Specifications, and thus, may result in violation of the stated requirements. Therefore, from a regulatory standpoint, the proposed change to the TS Bases is not acceptable. [emphasis added]

and

… the staff is concerned that in some depressurization events which occur at or near full power, there may be enough bundle stored energy to cause some fuel damage. If a reactor scram does not occur automatically, the operator may have insufficient time to recognize the condition and to take the appropriate actions to bring the reactor to a safe configuration. [emphasis added]

In April 2012, the nuclear industry abandoned efforts to convince the NRC to hand wave away the BWR safety limit problem and recommended that owners submit license amendment requests to the NRC to really and truly resolve the problem.

Forget the Tortoise and the Hare—the Snail “Wins” the Race

On December 31, 2012, nearly eight years after GE reported the problem, the owner of two BWRs submitted a license amendment request to the NRC seeking to resolve the problem. The NRC issued the amendment on December 8, 2014. Table 1 shows the “race” to fix this problem at the 34 BWRs operating in the US.

Table 1: License Amendments to Resolve BWR Safety Limit Problem

Reactor | License Amendment Request | License Amendment | Original Reactor Pressure | Revised Reactor Pressure
Susquehanna Units 1 and 2 | 12/31/2012 | 12/08/2014 | 785 psig | 557 psig
Monticello | 03/11/2013 | 11/25/2014 | 785 psig | 686 psig
Pilgrim | 04/05/2013 | 03/12/2015 | 785 psig | 685 psig
River Bend | 05/28/2013 | 12/11/2014 | 785 psig | 685 psig
FitzPatrick | 10/08/2013 | 02/09/2015 | 785 psig | 685 psig
Hatch Units 1 and 2 | 03/24/2014 | 10/20/2014 | 785 psig | 685 psig
Browns Ferry Units 1, 2, and 3 | 12/11/2014 | 12/16/2015 | 785 psig | 585 psig
Duane Arnold | 08/06/2015 | 08/18/2016 | 785 psig | 686 psig
Clinton | 08/18/2015 | 05/11/2016 | 785 psig | 700 psia
Dresden Units 2 and 3 | 08/18/2015 | 05/11/2016 | 785 psig | 685 psig
Quad Cities Units 1 and 2 | 08/18/2015 | 05/11/2016 | 785 psig | 685 psig
LaSalle Units 1 and 2 | 11/19/2015 | 08/23/2016 | 785 psig | 700 psia
Peach Bottom Units 2 and 3 | 12/15/2015 | 04/27/2016 | 785 psig | 700 psia
Limerick Units 1 and 2 | 01/15/2016 | 11/21/2016 | 785 psig | 700 psia
Columbia Generating Station | 07/12/2016 | 06/27/2017 | 785 psig | 686 psig
Nine Mile Point Unit 1 | 08/01/2016 | 11/29/2016 | 785 psig | 700 psia
Oyster Creek | 08/01/2016 | 11/29/2016 | 785 psig | 700 psia
Perry | 11/01/2016 | 06/19/2017 | 785 psig | 686 psig
Nine Mile Point Unit 2 | 12/13/2016 | 10/31/2017 | 785 psig | 700 psia
Brunswick Units 1 and 2 | None found | None found | 785 psig | Not revised
Cooper | None found | None found | 785 psig | Not revised
Fermi Unit 2 | None found | None found | 785 psig | Not revised
Grand Gulf | None found | None found | 785 psig | Not revised
Hope Creek | None found | None found | 785 psig | Not revised
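
For a rough sense of the timeline, the short sketch below does simple date arithmetic on a few rows of Table 1. GE’s notification date is approximated as March 1, 2005, since only the month is given; this is back-of-envelope bookkeeping, not anything from an NRC analysis.

from datetime import date

# A few rows from Table 1: (license amendment request, license amendment issued)
examples = {
    "Susquehanna Units 1 and 2": (date(2012, 12, 31), date(2014, 12, 8)),
    "Hatch Units 1 and 2": (date(2014, 3, 24), date(2014, 10, 20)),
    "Nine Mile Point Unit 2": (date(2016, 12, 13), date(2017, 10, 31)),
}

ge_report = date(2005, 3, 1)  # GE notified the NRC in March 2005 (day approximated)

for reactor, (requested, issued) in examples.items():
    print(f"{reactor}: {(issued - requested).days} days from request to amendment, "
          f"{(issued - ge_report).days} days after GE's report")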

 

UCS Perspective

BWR Safety Limits 2.1.1.1 and 2.1.1.2 provide reasonable assurance that nuclear fuel will not be damaged during design bases transients. In March 2005, GE notified the NRC that a computer analysis glitch undermined that assurance.

The technical specifications issued by the NRC allow BWRs to operate above 25 percent power for up to six hours when the MCPR limit is violated. GE’s report did not reveal that the MCPR limit had been violated at any BWR, but it did state that the computer methods used to establish the MCPR limits were flawed.

There are only four BWR safety limits. After learning that one of the few BWR safety limits could be violated and determining that fuel could be damaged as a result, the NRC monitored the glacial pace of the resolution of this safety problem. And six of the nation’s BWRs have not yet taken the cure. Two of those BWRs (Brunswick Units 1 and 2) do not have GE fuel and thus may not be susceptible to this problem. But Cooper, Fermi Unit 2, and Hope Creek have GE fuel. It is not clear why their owners have not yet implemented the solution.

The NRC is currently examining how to implement transformational changes to become able to fast-track safety innovations. I hope those efforts enable the NRC to resolve safety problems in less than a decade; way, way less than a decade. Races to resolve reactor safety problems must become sprints, not leisurely paced strolls. Americans deserve better.

UCS asked the NRC’s Inspector General to look into how the NRC mishandled the resolution of the BWR safety limit problem. The agency can, and must, do better, and the Inspector General can help the agency improve.

California’s Next Climate Change Challenge is Water Whiplash

UCS Blog - The Equation (text only) -

Today the journal Nature Climate Change published results of a groundbreaking paper that explores the changing character of precipitation extremes in California. The eye-opening results indicate that while overall precipitation levels will not change significantly in the next decades, the state has already entered a period of increased extreme precipitation events that will continue to present tremendous challenges to ensuring stable water supplies. My colleague Dr. Geeta Persad, Western States Senior Climate Scientist at UCS, reflects on the meaning of the results below.

***

The hills on the east side of the San Francisco Bay are a lush, perfect green at this time of year. I drink them in greedily during my daily commute to and from work. Especially because, like any Californian, I know that this greenness will leave with the wet season.

As climate change transforms the water landscape for our state, the greenery feels particularly precious. A new paper out of UCLA’s Institute for Environment and Sustainability today suggests that volatility in California’s water resources is only going to get worse. Its findings drove home for me how much smarter we need to get about managing climate change impacts on water in California.

Water whiplash

Climate change is not just a slow and steady trend affecting water conditions in our state. One of the key findings of the paper, led by Dr. Daniel Swain, is that “precipitation whiplash” – a rapid transition from very dry conditions to very wet conditions – is likely to increase with climate change.

Why does this matter? This kind of whiplash is exactly what we’ve experienced over the last few years. We’ve transitioned from the worst drought in California’s recorded history to two wet seasons that produced around $1 billion in flood-related damage and repairs. These projections could mean an increase in extreme wildfire risk and weakened, dried-out soil followed by extreme rainfall and runoff events—the perfect storm of ingredients to produce mudslides like the ones that devastated Montecito earlier this year.

Simulations in this new paper also indicate that, between now and 2060, almost the entire state has at least a 66.6% chance of experiencing a precipitation event like the one that created the 1862 California flood, which transformed the Central Valley into an inland lake. This means that in the next 40 years, our largest urban centers are more likely than not to experience unprecedented flood events that our modern water management systems and infrastructure have never had to deal with.
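
As a rough back-of-envelope (my own arithmetic, not a calculation from the paper), a cumulative chance of about 66.6% over roughly 40 years corresponds to an annual chance of a little under 3 percent, if one assumes a constant, independent probability each year:

# Back-of-envelope only, not a calculation from the paper: what constant,
# independent annual probability gives a ~66.6% chance of at least one
# 1862-scale event over roughly 40 years?
cumulative_chance = 0.666
years = 40
annual_chance = 1 - (1 - cumulative_chance) ** (1 / years)
print(f"Implied annual chance: {annual_chance:.1%}")  # roughly 2.7% per year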

Climate change means more than a change in averages

This paper is especially important because it quantifies climate change impacts on California precipitation—like precipitation whiplash and extremes—that really matter for water management. As long as we keep planning only for averages, we won’t be managing many of the catastrophic risks that climate change creates for water management in California.

In Swain and his coauthors’ simulations, the tripling of extreme precipitation risk, sharp increases in precipitation whiplash, and uptick in extreme dry seasons happen even while average precipitation barely changes. Plus, their projections show a strong shortening of the wet winter season and expansion of the dry summer season statewide. That could create a need for more water storage to bridge between the wet and dry seasons, even without a change in the total amount of water we get each year.

We have a very long way to go to adequately plan and build a safe and reliable water system for the reality of how climate change will impact our water resources and infrastructure.  Studies like this one and others that have come out over the past several years highlight that we have to fundamentally transform how we think about the role of climate in water management in California. Climate change’s influence on the character of California’s water is complicated, but the more of that complexity we integrate into our decision-making, the more likely we are to develop management strategies that avoid the worst outcomes. Luckily, papers like this one show that we have the science to do so.

***

Geeta’s reflections highlight the need to accelerate and intensify work that has recently begun in California. We are just at the beginning of figuring out how to manage water for changing climatic conditions. Since 2015, UCS has been highlighting how our current infrastructure is not built for increased drought and flooding in conjunction with more precipitation falling as rain rather than snow. We have also shown that climate change is highlighting the critical importance of increased groundwater management to meet our needs. The UCLA report’s findings on the increase in extreme events further underscore the urgent need for change.

Our highly engineered water systems in the western states are built for a seasonal regime that is fast disappearing. They are designed to store melting snowpack in the spring for farms and cities to use in the summer and fall. As the drought of 2012-2016 showed, we can no longer count on precipitation and snowpack to store enough water in dry years. Furthermore, during the last two wet years we found that our systems were in some cases inadequate to deal with extreme events, most dramatically demonstrated by the near-failure of Oroville Dam in February 2017.

Rethinking how we build for a new normal of extremes

The need to change how we build and manage water infrastructure is only one example of how we need to think differently about the built environment now that dramatic changes from a warming world are taking hold. Yet few of the people responsible for how we build roads, dams, canals, bridges, and buildings know what to do with the science that is emerging. That is why UCS sponsored legislation in 2016, AB 2800 (Quirk), to bring scientists and engineers together to come up with recommendations (due to be released late this summer) on how science can better inform our building decisions.

Global warming is going to necessitate a wholesale rethinking of how we approach building and maintaining our communities far into the future. Translating science into a form that can be used by engineers and architects is only one facet of the problem, however. It will also require rethinking everything from updating building codes and standards, to improving coordination between local, state, and federal levels of government, to approaching cost/benefit calculations of projects with climate change factors included, to fixing chronic underinvestment in disadvantaged communities so that they can better deal with future challenges.

A future of “whiplash” weather in California is but a microcosm of, and a warning about, the much more uncertain and hazardous conditions that await as the world gets warmer. We are fortunate that science is improving our ability to forecast these changes, but we need policies and programs to make changes based on what we are learning. Time is very short: this is now a problem of our present, not our future.

Photo: Zack Cunningham / California Department of Water Resources

Japan’s Nuclear Hawks Could Block US-North Korean Agreement on Denuclearization

UCS Blog - All Things Nuclear (text only) -

Momentum has been building for a productive meeting between President Trump and Kim Jong-un that could lead to an agreement on North Korean denuclearization. But after speaking with Japanese Prime Minister Shinzo Abe, Trump warned the world that he might cancel or walk out of the meeting if “it is not going to be fruitful.”

US President Donald Trump and Japanese Prime Minister Shinzo Abe shake hands at a press conference concluding two days of talks.

What did Mr. Abe tell Mr. Trump that precipitated the warning?  The prime minister may have reminded the president that his Nuclear Posture Review, which the Japanese Foreign Ministry strongly endorsed, included US promises to increase the role of US nuclear weapons in Asia. The ministry could be trying to prevent any weakening of those promises from becoming part of an agreement with North Korea on denuclearization.

Defining Denuclearization

US and foreign observers have disagreed about the meaning of the term. But North Korea has made it clear that it considers denuclearization a mutual responsibility. The United States has acknowledged reciprocal denuclearization obligations in the past, but they were limited to the Korean land mass.

US negotiators should be aware that North Korean conditions for a credible security guarantee may include a slightly broader definition of US denuclearization obligations and some additional US relaxation of its nuclear posture in Asia. In July 2016 Pyongyang stated that denuclearization means “denuclearization of the whole Korean peninsula and this includes the dismantlement of nukes in South Korea and its vicinity.”

This would not be an unreasonable request. Nuclear-capable US aircraft and submarines patrolling in the region are just as threatening to North Korea as US nuclear weapons stationed on the peninsula itself. The United States has used displays of regional nuclear capabilities, such as nuclear-capable bombers deployed to Guam, to threaten North Korea in the past. North Korean threats to attack Guam with medium range missiles were a response to those displays, and a prominent part of the tense fall run-up to this spring’s negotiations.

If North Korea were to ask for a broadening of reciprocal US obligations to denuclearize the region as a condition for relinquishing its nuclear capabilities, the United States may have to walk back some aspects of the extended nuclear deterrence commitments it made to Japan during the Obama administration and cancel plans to further enhance those commitments—plans included in the Trump administration’s Nuclear Posture Review.

Japanese Nuclear Preferences

On 25 February 2009, Minister Takeo Akiba, who headed the political section of Japan’s embassy in Washington, presented a document to a US congressional commission stating that President Obama had assured Prime Minister Aso, at a meeting in Washington the day before, that the United States would honor the Japanese Foreign Ministry’s request to make nuclear deterrence “the core of Japan–US security arrangements.” The document contained a list of US nuclear weapons capabilities the ministry believed were needed to make that assurance credible.

The list included US nuclear weapons that could be deployed in the region, including nuclear-capable cruise missiles on US attack submarines that patrol in Asia and nuclear-capable aircraft on the island of Guam. A conversation about the list between Mr. Akiba and commission co-chair James Schlesinger included consideration of deploying US nuclear weapons on US military bases on the Japanese island of Okinawa. Mr. Akiba, who is now Japan’s Vice Minister of Foreign Affairs, explained that domestic political conditions in Japan made deployment in Okinawa problematic. But he also noted that there is a constituency within Japan’s Foreign Ministry that supports deployment and he appeared to agree to construct storage facilities for US nuclear weapons in Okinawa in anticipation of eventual deployment when political conditions in Japan change.

The Obama administration permanently retired the nuclear-capable cruise missile the United States once deployed on US attack submarines patrolling in Asia; President George H.W. Bush had removed those missiles from service in 1992. But Obama reportedly agreed to compensate for the loss of this capability by making US nuclear weapons available for deployment in Asia aboard dual-capable aircraft. The Trump administration, noting the importance of the capability to deploy US nuclear weapons in Asia, plans to build a new submarine-launched nuclear-capable cruise missile to replace the one its predecessors removed from service and retired.

Reciprocal Verification

The United States expects North Korea to agree to verifiable measures to halt the development of new nuclear weapons, eliminate its existing nuclear weapons and dismantle its ability to reconstitute its nuclear weapons program in the future. It is only reasonable to expect that North Korea would require credible assurances that the United States will not introduce or threaten to introduce US nuclear weapons into the region in the future.

The United States could agree to such a request without diminishing its ability to provide extended nuclear deterrence to its Asian allies with its strategic nuclear forces, which do not need to enter the region to be effective. But it would have to forgo whatever psychological advantages it presumes to obtain by maintaining the ability and expressing the will to deploy US tactical nuclear weapons in Asia if deemed necessary.

South Korea seems to be prepared to make this concession in the interest of avoiding a war with the North. But Trump’s unexpected threat to cancel or walk out of a summit meeting with Kim Jong-un, announced while standing next to Japan’s prime minister after two days of meetings, suggests Abe may have told the US president that exchanging the option to deploy US tactical nuclear weapons in Asia for a deal on denuclearization with North Korea would not be “fruitful.”

Here are the “Transparency” Policy Documents the EPA Does Not Want You to See

UCS Blog - The Equation (text only) -

Photo: US Department of Defense

On April 17th, the Union of Concerned Scientists obtained EPA records through three separate Freedom of Information Act (FOIA) requests demonstrating that a proposed Trojan horse “transparency” policy that would restrict the agency’s ability to use the best available science in decision-making is driven by politics, not science. The records also, embarrassingly, showed that EPA officials were more concerned about the release of industry trade secrets than about sensitive private medical information.

Three days later, EPA officials removed the records from an online portal where anyone could review them.

Today, UCS restored public access by posting almost all of the responsive documents (more than 100 out of the 124 responsive records) related to the so-called “secret science” policy.

The documents obtained by UCS provide insight into how political appointees and industry interests, not science, are driving EPA’s pursuit of administratively implementing failed anti-science legislation.

EPA removed the documents after extensive reporting on the contents, including by reporters at POLITICO, The Hill, E&E News, Reuters, and Mother Jones. In each of those stories, which I encourage you to read, journalists highlighted how EPA officials are attempting to administratively implement failed anti-science legislation advanced by House Science Committee Chairman Lamar Smith, to benefit industries the agency is in charge of regulating and at the public’s expense.

The documents provided a window into the considerations of many agency officials and showed that a policy that would fundamentally shift the way EPA uses science was driven exclusively by political appointees, not scientists.

A number of documents on other topics were also included in the records responsive to our public records request; we are still reviewing those. However, as a result of EPA’s actions, public access to them was denied.

My colleague and I spent much of our Friday afternoon trying to figure out why the documents were taken down. We repeatedly reached out to the agency and were informed that the records were removed because of concerns about “privacy information” and “attorney-client communication.” Before posting the documents online, we went through all of the records and removed any documents that could be considered private in nature (family pictures, for example) or that could represent such privileged communication.

The irony here is not lost on me, as EPA tries to hide records that are critical to understanding the policy development process while officials try to develop a policy about “transparency” in the agency’s use of science.

The agency on Thursday sent a proposed policy to the White House for review. This means that a policy to restrict independent science can be announced any day now. These documents are critical to reporting around the motivation for the policy and to evaluate EPA Administrator Scott Pruitt’s claims of improving transparency in policymaking at the agency.

So, in the spirit of FOIA’s presumption of openness, we believe it is our responsibility to restore public access to these documents. It is up to us, the public, to watchdog the EPA and hold agency officials accountable.

You can find the documents here.
