Combined UCS Blogs

Don’t Make the Same Mistake on Iran that Bush Made on North Korea

UCS Blog - All Things Nuclear (text only) -

Press reports say President Trump will likely not certify Iranian compliance with the Iran nuclear deal in the near future, setting up a situation in which Congress can reimpose sanctions and effectively end US compliance with the deal.

(Source: US State Dept.)

Since the agreement includes several other countries, that would significantly weaken the deal but would not end it.

Still, that the United States would undermine the agreement—which administration officials acknowledge Iran is abiding by—is incredibly short-sighted. It goes against the advice of President Trump’s senior advisors and essentially the whole US security policy community. It erodes US credibility as a treaty partner in future negotiations.

Killing the deal would throw out meaningful, verified limits on Iran’s ability to make nuclear weapons because the president doesn’t think the agreement goes far enough.

The US did this with North Korea, and it was a disaster

The US did this before—with North Korea—and that led to the crisis we are in today.

In 2001, when the Bush administration took office, there was an agreement in place (the Agreed Framework) that verifiably stopped North Korea’s production of plutonium for weapons and put international inspectors on the ground to make sure it was not cheating. This stopped Pyongyang from making fissile material that could be used for dozens of nuclear weapons, and provided the world valuable information about an intensely opaque country.

Also by 2001 North Korea had agreed to stop ballistic missile tests—which was readily verified by US satellites—as long as negotiations continued. This was also meaningful since it would cap Pyongyang’s missile capability at a range of only 800 miles.

Former Secretary of Defense William Perry, who was closely involved in the negotiations with Pyongyang, has said he believes at that point the United States was a couple months from reaching an agreement that would have ended the North’s nuclear and missile programs. This was years before North Korea had done any nuclear tests or long-range missile tests.

Instead of capturing these important restrictions and building on them, the Bush administration—like Trump today—argued these limits were flawed because they did not go far enough to rein in the whole range of activities the United States was concerned about. Bush stopped the talks and eventually let the constraints on North Korea’s nuclear and missile programs fall apart, bringing us to where we are today: facing a North Korea with hydrogen bombs and long-range missiles.

One reason the Bush administration gave for stopping implementation of the Agreed Framework was that Pyongyang had a fledgling uranium enrichment program that was not captured by the agreement. US negotiators knew about that program in the 1990s, and were watching it, but decided that ending North Korea’s operating plutonium-production capabilities and getting inspectors on the ground was the crucial first step, and with that in place the uranium program could be addressed as a next step. The Agreed Framework was not meant to be all-encompassing—it was an important, logical step toward solving the bigger problem that was too complex to be solved all at once.

The Iran deal was similarly seen by those negotiating it as a meaningful, achievable step toward solving the bigger issues that could not be addressed all at once. And it has been successful at doing that.

Drifting toward disaster

In the case of Iran, as well as North Korea, President Trump is taking provocative steps that go against the advice of his senior advisors—and in many cases simply defy common sense. The stakes are extremely high in both cases. Dealing with them requires an understanding of the issues and potential consequences, and a long-term strategy built on realistic steps and not magical thinking.

If Trump de-certifies the Iran agreement, he will be tossing the fate of the deal to Congress. Congress needs to heed the advice the president is not taking. That means it should listen to Secretary of Defense James Mattis; Gen. Joseph Dunford, chair of the Joint Chiefs of Staff; Secretary of State Rex Tillerson; and others who believe it is in the best interests of the United States to continue to support the agreement.

We find ourselves in a situation in which the whims of the president are escalating conflicts that potentially put millions of lives at risk and create long-term security risks for the United States, and no one appears to have the ability to rein him in and stabilize things. That situation should be unacceptable to Congress and the US public. If this situation continues, it could go down as one of the darkest periods of US history.

Well-Deserved Recognition: ICAN Wins Nobel Peace Prize

UCS Blog - All Things Nuclear (text only) -

For most of my professional life going back to the late 1980s, I have been a nuclear weapons organizer/campaigner.  It’s my life’s work.  Over all these years, no group of campaigners has impressed me more than the good folks with the International Campaign to Abolish Nuclear Weapons (ICAN).  Their skill, passion, energy, professionalism, and unrelenting doggedness are truly inspiring in our mutual pursuit of a safer world free of nuclear weapons.

I am not the only one who feels this way, and today I am so pleased to join a global chorus of folks honoring and congratulating ICAN on being awarded the Nobel Peace Prize for its “work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its ground-breaking efforts to achieve a treaty-based prohibition of such weapons.”

It is hard to overstate how significant an achievement it was to get 122 nations to join together and adopt this treaty, one vigorously opposed by all of the nuclear weapons states and those under their nuclear protection.

To this day, the many supporters of the US nuclear status quo—both within and outside of the government—are full of excuses for not acting and not aggressively pursuing disarmament.  Even worse, the United States seems to be going in the wrong direction with all of the talk of, and plans for, new more usable nuclear weapons and the rebuilding of the entire US nuclear arsenal at a cost that is sure to exceed $1 trillion of our tax dollars. The international discussion that ICAN has been leading about nuclear weapons and humanitarian consequences is even more important in that context.

Similarly, it’s well past time for a debate on the morality of threatening millions of innocent civilians in the name of national security.  And who thinks it’s OK that one person has the power and authority to effectively end humanity?

What ICAN and many of us are saying is: let’s get serious, folks (we are looking at you, nuclear weapons states), about nuclear disarmament before our luck runs out.

But for now, let’s raise our glasses and congratulate and honor everyone at ICAN and elsewhere who wakes up every day and works so hard—against such incredible odds—to prevent nuclear war and make the world a safer, better place.  I thank you.  My children thank you.

Would Jim Bridenstine Be a Down to Earth NASA Administrator?

UCS Blog - The Equation (text only) -

Credits: NASA-JPL/Caltech/GSFC/University of Montana SMAP website

Let’s get right to it. Understanding the dynamics of our Earth, including disasters like hurricanes and droughts, has never seemed more important. As if on cue, we have a confirmation hearing for the NASA Administrator nominee coming down the pike. Is President Trump’s nominee, Representative Jim Bridenstine (R-OK), the right fit?

There are a number of things that members of Congress should be looking for as they go into the confirmation hearing. If members of Congress want a NASA that will be able to advance understanding of the formation and impacts of disasters like Hurricanes Irma, Harvey, and Maria or droughts in the Upper Midwest, they need a NASA Administrator who will prioritize Earth science research, including research on climate change and land use/land cover change. Accomplishing this in an administration that has politicized – and at times obfuscated – climate science will not be easy.

The job will require a NASA Administrator who can tell science from politics, and whose main objective is to advance the former.

Critical Attribute #1 – A NASA administrator who will not drop NASA’s Earth science research

Bridenstine would come into the NASA Administrator position, most often held by a scientist or a space professional, with no formal science or engineering training.

His main qualifications include his service as a pilot in the U.S. Navy, his service on the House Science, Space, and Technology and Armed Services Committees, and his leadership of the Tulsa Air and Space Museum and Planetarium.

In 2016, Bridenstine introduced a piece of legislation that should worry anyone who wants NASA to better understand natural disasters. In H.R. 4945, Bridenstine recommends significantly altering NASA’s mission by stripping out all Earth science-related work from Congress’s declared policy and purpose for NASA:

Bridenstine’s recommended changes to Congress’s policy and purpose statement for NASA, as specified in his 2016 proposed legislation, “The American Space Renaissance Act”.

As Administrator, would Bridenstine seek to extract Earth science research from NASA’s work, or would he commit at the confirmation hearing to supporting this critical arm of the institution?

Critical Attribute #2 – A NASA administrator who understands and will promote climate science

Bridenstine has made a number of public remarks that both question the well-accepted human cause of climate change and are simply incorrect. Members of Congress need to hold him accountable for these woefully inaccurate statements, such as:

“I would say that climate is changing. It is always changing. There were periods of time long before the internal combustion engine when the Earth was much warmer than it is today. Going back to the 1600s, we have had mini Ice Ages from then to now.”

Reality: We are living in between two ice ages – one that ended roughly 11,500 years ago, and one that is yet to come. Ice ages happen because of changes in the Earth’s orbit around the Sun. If people weren’t emitting so many greenhouse gases into the atmosphere, the Earth would be slowly beginning to cool right now.

But that’s not the case.

Instead, the Earth’s average surface temperature is the warmest it has been in the past 1,400 years in the Northern Hemisphere (where it is possible to make this kind of measurement). Atmospheric carbon dioxide (CO2) levels are the highest in at least 800,000 years.  Sure, there was a period of cooling in some portions of the world between ~1400 and 1900 (not an actual ice age, but a period that is affectionately known as the Little Ice Age). However, by using both basic physics (more heat-trapping gases in the atmosphere = more heat trapped) and sophisticated computer models, scientists know that the warming they have seen since the mid twentieth century is the result of human-caused global warming emissions.

Members of Congress also need to inquire about this false statement:

“Here’s what I would tell you. That if you look at the Chinese and the Russian and the Indian production of carbon emissions, it is overwhelmingly massive compared to the carbon footprint of the United States of America.”

Reality: Currently, the United States is the second largest producer of global warming emissions in the world, behind China and ahead of Russia and India, and has produced more global warming emissions than any other country since preindustrial times.

And this one:

“Again, I am not opposed to studying it [climate change]. What you’ll find, though, is that the space-based assets that are studying climate change are not in agreement with the terrestrial assets that are studying climate change. In fact, the space-based assets are not corroborating some of the data.”

Reality: Scientists have examined trends in the Earth’s average surface temperature using satellite observations of the troposphere (the lower atmosphere), weather balloon and ocean buoy measurements, information from weather stations, and more – and they all show that the Earth’s surface temperature has increased significantly since the 19th century. Furthermore, the latter part of Bridenstine’s statement is a claim made by skeptics of climate science that has been debunked many times.

Can Bridenstine explain these statements and demonstrate an accurate understanding of climate science?

Measurements of Earth’s changing climate. Each colored line represents an independent measurement of an aspect of the Earth’s climate. IPCC AR5

Critical Attribute #3 – A NASA administrator who will be able to differentiate science from politics

Bridenstine’s public remarks suggest that his current understanding of Earth science is largely informed by politically charged skeptics of climate change research.

Given that Bridenstine would enter into the Administrator position with no formal science education, it is particularly important that members of Congress test his ability to differentiate science from politics.

Members of Congress should not underestimate the quandary they will find themselves in if NASA does not continue these critical Earth science research activities. The products of these endeavors form the basis of our nation’s weather forecasts, lead to new technologies that drive our economy forward, and help protect American lives, infrastructure, and investments. Doing away with or demoting these activities is a risk they should not be willing to take.

Anyone who does not support Earth science research at NASA should not be confirmed as Administrator.

 

 

Original document created by Rachel Licker using the text of H.R. 4945.

Source: IPCC Working Group I, Fifth Assessment Report, Frequently Asked Questions.

Much to Grouse About: Interior Department Calls for Changes That Could Threaten Sage Grouse Protection

UCS Blog - The Equation (text only) -

The sage grouse's survival is entirely dependent on sagebrush. Photo: Jennifer Strickland, USFWS

That the current administration places very little value on the merit of robust scientific evidence when considering its actions (or inactions) is no longer shocking, but it remains an intolerable practice. In this week’s episode of “How is the Trump Administration Dismantling Science-Based Protections?”, we visit the Interior Department’s decision to formally reconsider a widely heralded Obama-era agreement for protections of the greater sage grouse in the West.

On Thursday, the Interior Department published a formal notice of intent to rework 98 sage grouse management plans across the quirky bird’s 11-state range. This change comes after a mere 60 days of deliberation by the Interior Department’s internal Sage-Grouse Review Team (appointed by Secretary Ryan Zinke) and Sage-Grouse Task Force (representatives of the governors of the eleven Western states) – and much to the chagrin of the many stakeholders who worked for several years to craft a cooperative land use agreement in an effort to protect the sage grouse and its habitat.

What’s the deal with the sage grouse?

The sage grouse is the chicken of the “Sagebrush Sea” — an ecosystem which is “suffering death by a thousand cuts”, as former Secretary of Interior Sally Jewell put it. Habitat fragmentation, invasive species, and wildfires in the sagebrush have all contributed to the decline of this magnificent bird.

Importantly, Secretary Jewell worked to put in place federal-state partnerships in order to protect the sage grouse. In 2010 the FWS proposed listing the sage grouse under the Endangered Species Act because of the threats to its survival. After much input from stakeholders and the public, the agency in 2015 chose NOT to list the species and instead put efforts into state management plans, assuring us all that states could put programs in place to ensure the bird’s protection.  With Secretary Zinke’s moves, we’re now paving over (perhaps literally) those state protection plans, leaving the sage grouse at least as vulnerable as it was when the FWS proposed listing it under the Endangered Species Act.

The sage grouse has long been caught in the crosshairs of political controversy, especially when it comes to undermining the science behind conservation efforts. For example, in 2004, Julie MacDonald, a political appointee at the Fish and Wildlife Service (FWS), altered scientific content in a report examining the vulnerability of the greater sage grouse, which was subsequently presented to a panel of experts that recommended against listing the bird under the Endangered Species Act (ESA)(read my colleagues’ thoughts on political interference in sage grouse conservation efforts here and here).

Ignoring the science

The Sage-Grouse Review Team (SGRT) recommendations include potentially removing or modifying the boundaries of critical habitat called sagebrush focal areas (SFAs), as well as setting population targets and captive breeding, and modifying or issuing new policy on fluid mineral leasing and development. Also worth noting is that an Obama-era moratorium on mining claims in six Western states recently expired, with no indication of renewal from Secretary Zinke.

The problem with the Interior changing the conservation plans is twofold: 1) the motivation for reviewing the sage grouse management plans was to “ease the burden on local economies” by opening protected lands to development, which could have negative impacts on already rapidly-dwindling sage grouse populations, and 2) reopening the plans could spell more trouble for recovery efforts and potentially force FWS to list the sage grouse under the ESA in the future, which is precisely what states wanted to avoid. The conservation plan is critical, but it only works with the agreed upon protections in place.

The decision to undo years of collaboration and compromise between federal, state, local, and tribal governments, NGOs, scientists, industry, landowners, ranchers, and hunters in a matter of two months sends a loud message to the public that economic considerations prevail over scientific evidence, even at the cost of an entire ecosystem and the species dependent upon it.

The SGRT recommendations ignore the science and put the entire sagebrush landscape at risk, much to the detriment of the sage grouse. Wyoming Governor Matt Mead is critical of the new plan, concerned that it ignores scientific consensus. “We’ve got to have good science lead the way, and that trumps politics,” Mead said. “Let’s look at what the states have done, and what biologists, folks who know this, are telling us.”

Sage advice

We cannot allow our government to irresponsibly cater to the oil and gas industry at the expense of our wildlife and public lands. Instead, we must urge the Department of the Interior to focus its efforts on collaborative, science-informed management of the sage grouse and its habitat.

Photo: Jennifer Strickland, USFWS/Wikimedia Commons

Why Are So Many Car Companies Making Big EV Announcements?

UCS Blog - The Equation (text only) -

If you’ve been following automotive news lately, you might have noticed a trend: major car brands are announcing their plans to transition to electric vehicles.

This is quite a string of announcements in the last few months from some major players in the automotive industry! Why is this happening now and what does it mean for the industry and the environment?

International and domestic pressure to clean up cars and trucks

To answer the question of why now, let’s look at another list of headlines from this year:

These countries (and state) are in different stages of enacting limits on gasoline and diesel-powered vehicles, but the trend is clear: if you want to be part of the future in the biggest automotive markets you need to have a transition plan from petroleum to electric vehicles.

Even beyond these limits on internal combustion engines altogether, many jurisdictions are strengthening the emissions standards for vehicles, meaning auto companies need to produce cleaner and more efficient cars and trucks. Electric vehicles can of course be a part of automakers’ efforts to comply with air pollution and global warming regulations.

Cleaner vehicles, fuels needed to reduce emissions

Transportation has recently eclipsed electricity generation as the largest source of global warming emissions in the US.  Governments around the world are concerned not only with the carbon emissions from petroleum-powered vehicles, but also with the Volkswagen emissions scandal, which has heightened awareness of the air pollution from vehicle tailpipes. Electric vehicles, when paired with cleaner electricity, are an excellent solution to reduce pollution and global warming emissions from transportation.

In our most recent analysis, the average electric vehicle in the US produces global warming emissions equivalent to those of a 73 MPG gasoline car. And the trend in the US has been towards cleaner electricity, meaning these electric cars will likely get even cleaner over time. So these plans by General Motors and others to vastly increase their EV offerings could mark a significant transition to much cleaner transportation.
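To see roughly where a figure like “73 MPG equivalent” comes from, here is a minimal sketch of the arithmetic, not the actual UCS methodology: the input values for EV efficiency, grid emissions intensity, and gasoline CO2 content are illustrative assumptions.

```python
# Rough sketch of how an "MPG-equivalent" figure for an EV can be estimated.
# All inputs are illustrative assumptions, not numbers from the UCS analysis.

GASOLINE_CO2_G_PER_GALLON = 8_887   # approximate tailpipe CO2 from burning one gallon of gasoline
EV_KWH_PER_MILE = 0.30              # assumed electricity use of a typical EV, per mile
GRID_G_CO2_PER_KWH = 410            # assumed average emissions intensity of the electric grid

# Emissions attributable to driving the EV one mile on grid electricity
ev_g_co2_per_mile = EV_KWH_PER_MILE * GRID_G_CO2_PER_KWH

# Fuel economy a gasoline car would need to match the EV's per-mile emissions
mpg_equivalent = GASOLINE_CO2_G_PER_GALLON / ev_g_co2_per_mile

print(f"EV emissions: {ev_g_co2_per_mile:.0f} g CO2 per mile")
print(f"MPG equivalent: {mpg_equivalent:.0f} MPG")  # ~72 MPG with these assumed inputs
```

With cleaner grid electricity (a lower grams-per-kWh figure), the same EV’s MPG-equivalent rises, which is why electric cars will likely get even cleaner over time.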

 

 

Excitement tempered by automakers’ work to weaken regulations

Looking only at the headlines about large automakers’ EV plans, it would seem as though they have embraced the need for cleaner vehicles and fuels wholeheartedly. However, this is not the case.

The automakers’ lobbying groups, led by the Alliance of Automobile Manufacturers, convinced the US EPA to re-review its recently finalized 2022-2025 global warming emission standards for cars and light trucks. Even as their trade groups work to weaken the fuel economy and global warming pollution standards, individual manufacturers have recently announced moves to increase their number of EV models, including General Motors, Ford, and BMW.  But as they tout their plans for cleaner cars (and get good press), they are actively opposing US efficiency standards already in place. And they are also opposing international regulations, such as GM CEO Mary Barra’s pointed pushback against China’s efforts to require electric vehicles.

The increasing number of electric vehicles being announced by automakers around the world is good news and certainly a step in the right direction. But these intentions aren’t enough. We need the automakers to make sure these vehicles are a success, putting them at the center of their showrooms and marketing efforts as they do with gasoline-powered cars and trucks today. And they certainly need to stop actively opposing the efforts of policymakers and regulators to clean up transportation and reduce emissions.

Who Would Lose with New Suniva/SolarWorld Solar Tariffs? Just About Everybody

UCS Blog - The Equation (text only) -

A recent decision by the US International Trade Commission (USITC) in favor of two solar manufacturers means that new tariffs on solar cells and panels could be coming. As the reactions from companies and organizations across the economy—and across the political spectrum—make clear, that’s bad news for just about everyone, including you and me.

The solar tariff case

Solar means jobs. As long as we don’t mess things up. (Credit: John Rogers)

The case was brought by Suniva and SolarWorld Americas, two foreign-owned US manufacturing operations that had hit rocky patches in recent years. The companies applied to the USITC under Section 201 of the Trade Act of 1974, which basically says that “domestic industries seriously injured or threatened with serious injury by increased imports” can ask the USITC for “import relief.”

That might seem like a pretty low bar—competition is never easy, whether it’s domestic or foreign, and some of that competition could indeed be serious—but Section 201 has been used only once in the 21st century (in 2002, in a short-lived attempt to protect the steel industry, one that, because of higher steel prices, harmed consumers and was estimated to have destroyed more jobs than it created).

It’s not lost on anybody, though, that this latest petition comes at a time when we have a president who is no friend of trade, and is hungry for tariffs.

The relief that the two petitioners are asking for—sizeable new tariffs on both solar modules and the cells that manufacturers (yes, US manufacturers) might assemble into modules—would put a definite dent in solar’s incredible momentum in recent years. More importantly, for a president who professes to be about jobs, it would very likely, as with the 2002 case, kill more jobs than it saved or created.

Even so, on September 22, the bipartisan USITC voted 4-0 in favor of the petition, determining:

…that increased imports of crystalline silicon photovoltaic cells (whether or not partially or fully assembled into other products) are being imported into the United States in such increased quantities as to be a substantial cause of serious injury to the domestic industry producing an article like or directly competitive with the imported article.

How do I love thee not? Let me count the ways…

The reaction to both the original petition and the recent USITC decision has been notable both in the breadth of organizations and people reacting negatively and in their near unanimity in condemning these moves. Here’s a sampling of reactors and reactions.

The solar industry – Those opposed to Suniva-SolarWorld include just about the whole rest of the US solar industry. Manufacturing jobs account for only 15% of the industry’s 260,000 jobs. For solar project developers, sales forces, installers, and even other manufacturers, new tariffs mean increased costs and, likely, diminished prospects for success. As SEIA (the Solar Energy Industries Association) put it:

The ITC’s decision is disappointing for nearly 9,000 U.S. solar companies and the 260,000 Americans they employ… An improper remedy will devastate the burgeoning American solar economy and ultimately harm America’s manufacturers…

Indeed, SEIA has claimed that, if the petitioners are successful in their appeal to the USITC, “88,000 jobs will be lost nationwide, including 6,300 jobs in Texas, 4,700 in North Carolina and a whopping 7,000 jobs in South Carolina.”

The US solar industry is about manufacturing, and a whole lot more. (Source: National Solar Jobs Census 2016)

Bipartisan voices — Before the recent vote, a bipartisan group of governors of leading solar states—Colorado, Massachusetts, Nevada, and North Carolina—sounded the alarm in a letter to the commission:

The requested tariff could inflict a devastating blow on our states’ solar industries and lead to unprecedented job loss, at steep cost to our states’ economies. According to a study conducted by GTM Research, if granted, the tariff and price floors would cause module prices to double, leading solar installations—both utility-scale and consumer-installed—to drop by more than 50 percent in 2019. At a time when our citizens are demanding more clean energy, the tariff could cause America to lose out on 47 gigawatts of solar installations, representing billions of dollars of infrastructure investment in our states.

Conservative groups – From the “strange bedfellows” department came the news that opponents also include conservative groups who don’t like the idea of mucking with trade, and particularly not in defense of two relatively minor companies. The Heritage Foundation, for example, spoke against what it said was “a case that could undermine the entire U.S. solar energy industry.”

Solar jobs; red dots indicate “manufacturer/supplier”. It’s about a lot more than modules. (Source: SEIA National Solar Database)

Likewise, the American Legislative Exchange Council (ALEC), not usually on the same side of arguments as renewable energy companies or advocates, cited the broader solar industry’s impressive job tally and job progress in recent years, and the risks to even the manufacturing piece of that:

Many of those [260,000] workers are employed by other solar companies that have successfully figured out how to prosper in this growing industry. Over 38,000 solar workers are employed in manufacturing positions at firms domestically making solar components like inverters, racking systems and more…

Those 38,000 manufacturing jobs might disappear if artificially high input costs price the entire industry out of existence.

Source: National Solar Jobs Census 2016

A broad coalition – The Energy Trade Action Coalition, formed in response to this Section 201 threat by SEIA, solar companies, ALEC, Heritage, plus utilities, retailers, and others, reacted to the recent decision by going after the petitioners themselves:

The ITC decision to find injury is disappointing because the facts presented made it clear that the two companies who brought this trade case were injured by their own history of poor business decisions rather than global competition, and that the petition is an attempt to recover lost funds for their own financial gain at the expense of the rest of the solar industry.

Security experts – For security types, the risks have to do with our military preparedness, resilience, and assurance; more than a dozen former members of the US military, including a lieutenant general and a rear admiral, weighed in with the USITC on the fact that “[t]his dramatic cost increase could potentially jeopardize the financial viability of planned and future solar investments on or near domestic military bases.” This could put at risk bases, missions, and critical services.

And the list goes on.

Not everyone is opposed, of course. Along with the petitioners themselves, a coalition of labor, manufacturing and agricultural interests, the Coalition for a Prosperous America, has spoken out in support of the Suniva-SolarWorld move, saying that the coalition “strongly believes that relief is needed in the face of an Asian import surge to prevent the complete collapse of a critical industry, the manufacture of solar panels”:

Thousands of workers have lost good paying U.S. jobs as a result [of overproduction by international module manufacturers]. That these severe effects occurred during a period of booming U.S. [solar] demand, and despite two successful solar trade cases, is all the more troubling.

But national opinion is overwhelmingly on the other side. Even Suniva’s majority owner, Hong Kong-based Shunfeng International Clean Energy, is purportedly against Suniva’s crusade.

Credit: U.S. Department of the Interior

What’s next

With the September 22 commission decision that the petitioners were indeed seriously hurt by imports, the next step is the “remedy” phase, which starts with various parties weighing in to say what they think the fix should be.

Flush with (and surprised by?) the success of their ITC petitions, Suniva and SolarWorld have backed down a little in their demands… but only a little. Others are pushing for a “cure” much closer to a placebo, in the hopes of minimizing the damage to (other) US companies, US consumers, and American jobs.

The USITC then needs to make a recommendation to President Trump, by mid-November. And then the president needs to decide where this goes.

Meanwhile, SolarWorld has said it’s planning to ramp up production given the recent decision. The president of SolarWorld Americas is quoted as saying:

With relief from surging imports in sight, we believe we can rev up our manufacturing engine and increase our economic impact… [W]e at SolarWorld are prepared to scale up our world-class manufacturing operations to produce leading solar products made by more American workers.

That commitment to leaping right back in is a little hard to believe, given the uncertainties that remain while this plays out. The 2002 Section 201 case around steel tariffs ended in failure the following year, after a purported loss of 200,000 American jobs.

It’s the president’s call

What’s clear, though, is that this is potentially a pivotal moment in solar’s trajectory in this country. The US solar industry is about much more than manufacturing, and even the manufacturing sector is about more than cells and modules.

President Trump could take a tariff sledgehammer to the shining solar piece of our nation’s impressive clean energy momentum, favoring a small piece of the industry regardless of the damage to the rest. That would mean harming a sector that has been arguably the best story of job creation and economic growth over the last 10 years. Destroying US jobs while pretending he’s all about creating them.

Or our president could take minimal or no action, send out a victorious tweet or two, and let the US solar industry—in all its dimensions—continue to do its thing. Creating American jobs, not killing them. Strengthening our energy security, not weakening it. And benefiting millions of US customers with greater affordability and access to solar.

Let’s go with option B.

What’s Tax “Reform” Got to Do with Science and Public Well-being?

UCS Blog - The Equation (text only) -

Photo: USCapitol/Flickr

In the days since the “Big Six” group of Congressional leaders and Trump administration officials unveiled the outlines of their tax “reform” proposal, there’s been a fierce debate—and rightly so—over who stands to win and who stands to lose. Will the average working American get anything significant from this tax plan, or are most of the benefits skewed towards the wealthy and profitable corporations?  More on this in a minute.

What’s gotten less attention is the impact of this plan on the public science enterprise and the well-being of all Americans.

An unprecedented assault

Federal government investments in science research and innovation have led to discoveries that have produced major benefits for our health, safety, economic competitiveness, and quality of life.  This includes MRI technology, vaccines and new medical treatments, the internet and GPS, earth-monitoring satellites that allow us to predict the path of major hurricanes, clean energy technologies such as LED lighting, advanced wind turbines and photovoltaic cells, and so much more. The work of numerous federal agencies to develop and implement public and worker health and safety protections against exposure to toxic chemicals, air and water pollution, workplace injuries, and many other dangers has also produced real benefits.

These essential programs are already under unprecedented assault. UCS president Ken Kimmell has called President Trump’s proposed FY18 budget “a wrecking ball to science.” Others at UCS have detailed the devastating impacts of Trump’s proposed budget cuts on the Environmental Protection Agency, the Department of Energy, the Department of Agriculture, the Federal Emergency Management Agency, the National Oceanic and Atmospheric Administration, worker health and safety, the Forest Service, and early career scientists.

UCS and our allies are pushing back hard on these proposed budget cuts, and we remain vigilant to ensure that when Congress takes final action on the FY18 appropriations bills in December, these irresponsible cuts will be rejected.

All these programs (along with veterans’ care, homeland security, transportation and other infrastructure, law enforcement, education, and many other core government programs) fall within the non-defense discretionary (or NDD) portion of federal spending, which has been disproportionately targeted for spending cuts over the last decade. As an analysis by Paul Van de Water of the Center on Budget and Policy Priorities points out, “NDD spending in 2017 will be about 13 percent below the comparable 2010 level after adjusting for inflation (nearly $100 billion lower in 2017 dollars).”

Even if the draconian Trump budget cuts are beaten back, the very real need to increase spending on entitlement programs such as Social Security and Medicare, along with a push by many in Congress to maintain (or increase) defense spending, will continue to squeeze NDD expenditures in the years ahead.

Creating long-term pressure on essential programs

Here’s where the Republican tax plan comes in, as it will almost certainly reduce government revenues substantially and add to the national debt. While Treasury Secretary Steven Mnuchin told ABC News that the tax plan would generate higher economic growth rates and “will cut the deficit by $1 trillion,” few independent economists agree with that rosy outlook.

The Committee for a Responsible Federal Budget estimates the plan could increase the deficit by $2.2 trillion over the next decade; CRFB president Maya MacGuineas cautioned that “tax cuts shouldn’t be handed out like Halloween candy,” and said they “certainly don’t pay for themselves.”

Senate Republicans openly acknowledge that the tax plan will increase the deficit; the Budget Committee resolution that they plan to put before the full Senate for a vote later this month contains reconciliation instructions to the Finance Committee that would allow the deficit to increase “by not more than $1.5 trillion over the next 10 years.”

Deficit spending is sometimes justified, such as for investments in infrastructure, education, public health, and other forms of physical and human capital that more than pay back over time, or to kick-start the economy when unemployment is high. But that’s not the case here; as discussed below, the bulk of the benefits from this plan would flow to the wealthiest Americans, with low- and middle-income Americans receiving only modest direct benefits, if any.

Moreover, the resulting increase in the federal deficit would lead to louder calls for cuts in programs that benefit low- and middle-income Americans, including food assistance programs, student loans, unemployment insurance, economic development, and worker retraining.  As another analysis by the Center on Budget and Policy Priorities put it, “the majority of Americans could ultimately lose more from the program cuts than they would gain from the tax cuts.”

The government needs more revenue, not less

Looking down the road, it’s clear that the aging of the American population, continued increases in health care costs, the need to replace crumbling infrastructure, and other factors are creating pressure for federal spending to increase substantially over the next few decades.

The Center on Budget and Policy Priorities estimates that to accommodate these factors, federal spending will need to grow from 20.9 percent of gross domestic product (GDP) to 23.5 percent of GDP by 2035. This is largely driven by increased costs for Social Security, Medicare, and Medicaid; CBPP projects that defense and non-defense discretionary spending will decrease somewhat as a share of GDP over the next couple of decades. As the CBPP report observes, the need to increase federal spending is “hardly a controversial notion. Budget plans from such diverse organizations as the National Academy of Sciences, the Bipartisan Policy Center, and the American Enterprise Institute have reached the same conclusion.”

To keep the national debt from growing faster than the overall economy, CBPP estimates that annual budget deficits need to be held to an average of 3 percent of GDP; this in turn means that federal revenues should increase from some 17.8 percent of GDP in 2016 to at least 20.5 percent in 2035. There are any number of ways to do this, from closing special interest loopholes in the tax code to putting a tax on carbon dioxide emissions or other forms of pollution. Of course, given the current political realities in Washington, no one expects a serious discussion of this issue anytime soon; the current challenge is just to avoid making the situation worse.

Tax fairness: the rhetoric and the reality

President Trump and Republican leaders insist that their aim is to provide tax relief for the middle class, and that taxes won’t be cut for wealthy Americans; President Trump even asserted that this tax plan is “not good for me. Believe me.”

But a preliminary analysis of the framework by the Tax Policy Center found otherwise. While acknowledging that several details remain to be filled in, TPC estimates that in 2018 under the “Big Six” plan, “taxpayer groups in the bottom 95 percent of the income distribution would see modest tax cuts, averaging 1.2 percent of after-tax income or less. The benefit would be largest for taxpayers in the top 1 percent (those making more than $730,000), who would see their after-tax income increase 8.5 percent.”

Over half of the total benefit of the tax cuts would accrue to taxpayers in the top 1 percent, increasing to nearly 80 percent of the benefits by 2027. Others have examined how the elimination of the alternative minimum tax, the abolition of the estate tax, and several other provisions of the plan would personally benefit President Trump—and his heirs.

Private interests vs. the public good

It’s clear that the stakes in the tax debate now under way in Washington are not just about the critical issue of whose tax bills go down (or up) and by how much. The outcome will also have an impact on our ability to maintain America’s global leadership on scientific and medical research and technology innovation, improve air and water quality, avert the worst impacts of climate change (and cope with the impacts we can’t avoid), upgrade our transportation, energy, and communications infrastructure, and many other important issues.

It’s hard to dispute the need for real tax reform—a plan that clears away the dense thicket of special interest loopholes and simplifies the tax code, in a way that’s equitable to all Americans. But that’s not what’s on offer right now—instead we’re seeing a drive to give trillions of dollars in handouts to profitable corporations and the wealthiest Americans, while laying the groundwork for deep cuts in a broad range of important federal programs down the road.

Our elected officials can – and should – do much better than this; if they’re unwilling to, they should observe the Hippocratic oath, and “first do no harm.”

 

 

Is Your Representative Setting Us Up for Another Dieselgate?

UCS Blog - The Equation (text only) -

Remember dieselgate? The Volkswagen scandal that led to huge emissions of harmful air pollution from its cars, criminal charges, and a $30 billion mea culpa? Well, dieselgate may be small compared to the new emissions scandal that is playing out across the country. This time, however, the emissions cheating would be explicitly allowed by Congress.

As with the VW scandal, it involves so-called emission defeat devices – equipment that shuts off a vehicle’s emissions control system, allowing the car to spew hazardous pollution into the air. These defeat devices are marketed to amateur racers (and sometimes the general public who think it’s fun to “roll coal” and blow black smoke at Priuses). Manufacturers of these defeat devices are pushing Congress to let them off the hook for selling products that are used illegally in our communities, and so far many in Congress are siding against clean air.

What do defeat devices do and who wants them?

All vehicles on public roads must have pollution control systems to remove dangerous air pollutants such as particulate matter (PM), nitrogen oxides (NOx), and smog precursors (carbon monoxide and hydrocarbons) from vehicle exhaust. And this is a really good thing. The EPA estimates that current pollution control systems will prevent up to 2,000 premature deaths, avoid 2,200 hospital admissions, and eliminate 19,000 asthma attacks annually because some of these pollutants cause lung cancer, heart disease, and respiratory harm.

These emission control systems can, however, be turned off by defeat devices which are frequently marketed as “tuners”, “oxygen sensor simulators” or “exhaust gas recirculation delete kits”.

Why would someone want to turn off their vehicle pollution controls? One popular reason is for amateur car racing. We’re not talking NASCAR here, as purpose-built race cars are already exempt from this requirement. Instead these are local races where people “convert” their regular cars into race cars to use at tracks.  And if people want to modify a car that they use just for racing so that it goes a little faster on the track, it’s probably not that big of a deal.

Out of the millions of vehicles on the road, only a tiny fraction of them are modified to be used in racing competitions. However, if people bypass the emission controls on cars they use on our streets on a regular basis, that’s a different story: it imposes unnecessary pollution on the drivers’ neighbors and it’s against the law. So if device manufacturers are knowingly selling defeat devices for off-track use, they should be prosecuted.

How big of a deal could this be?  Big. One settlement that the EPA made with H&S Performance states that they sold over 100,000 devices and that the pollution from those devices would be nearly TWICE the NOx pollution put out by VW diesel cars from 2008 until they were caught in 2015.[i] 

One company, double dieselgate.  It’s staggering.

It turns out that there are hundreds, if not thousands, of companies who are willing to sell people defeat devices that they can put on their own cars.  We don’t have a complete handle on the number of devices sold, or how much extra pollution they are spewing out into our communities. But based on the emissions from just H&S Performance, it has the potential to be HUGE. And if manufacturers and retailers of these devices are marketing these defeat devices to the general public for use on our roads, the emissions, and therefore health, impacts could be enormous.

So, what does this have to do with Congress?

Manufacturers of defeat devices have a vested interest in making it difficult for regulators to stymie the illegal use of these defeat devices since the more they sell, the bigger their profits. There are bills in the House (H.R. 350) and Senate (S. 203) called the “RPM Act” that would make it very difficult for the EPA to go after manufacturers of these defeat devices who are clearly selling to people who are using these on their everyday vehicles. It is critical that the EPA maintain the ability to stop manufacturers who aren’t playing by the rules.

In a recent hearing about the RPM Act in front of the House Energy and Commerce Committee, Alexandra Teitz, a consultant for the Sierra Club, dubbed this “DIY Dieselgate”, which is incredibly apt.

There are a lot of Senators and Representatives supporting this bill because the trade association for the manufacturers who make these devices (and other aftermarket parts) is putting in a lot of effort on Capitol Hill. The manufacturers see a challenge to their business model and profitability. And they have put a lot of effort into convincing amateur racers, wrongly, that the EPA intends to stop all amateur racing or take their race cars.

The manufacturers are selling this bill as a clarification of existing law, when in actuality it will make it very hard, if not impossible, for the EPA to do its job and ensure that all Americans have access to clean air – and one way it does that job is by prosecuting manufacturers who are clearly selling these defeat devices to individuals who are not using them solely for racing. We need to make sure members of Congress are aware that they would be voting for legislation that puts the health of their constituents at risk.

Allowing amateur racers to modify a small number of vehicles that are solely used at the track is one thing – but sanctioning mass marketing of emissions defeat devices that are resulting in deadly air pollution in communities across the country is another. Check out the list of cosponsors for the House and Senate bills to see if your representative is on the bill. If so, please call your representative and ask that they withdraw their support for the RPM Act.

[i] The settlement agreement notes 71,669 short tons (or 65,017 metric tons) of NOx emissions over the lifetime of vehicles with H&S Performance defeat devices installed.  An analysis by MIT researchers estimates excess NOx emissions of 36,700 metric tons between 2008 and 2015 from non-compliant 2.0L VW vehicles.
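For readers who want to check the “nearly twice” comparison in the text above, here is a minimal sketch of the arithmetic using only the figures quoted in this footnote (the unit conversion is the standard short-ton-to-metric-ton factor):

```python
# Arithmetic behind the "nearly TWICE" comparison, using only the figures quoted above.

SHORT_TON_TO_METRIC_TON = 0.907185          # standard conversion factor

hs_nox_short_tons = 71_669                  # H&S Performance settlement figure, lifetime NOx
hs_nox_metric_tons = hs_nox_short_tons * SHORT_TON_TO_METRIC_TON   # ~65,000 metric tons

vw_nox_metric_tons = 36_700                 # MIT estimate of excess NOx from 2.0L VW vehicles, 2008-2015

ratio = hs_nox_metric_tons / vw_nox_metric_tons
print(f"H&S devices: {hs_nox_metric_tons:,.0f} metric tons of NOx")
print(f"VW 2.0L vehicles: {vw_nox_metric_tons:,} metric tons of NOx")
print(f"Ratio: {ratio:.2f}x")               # ~1.77, i.e. nearly twice the VW excess NOx
```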

 

 

Sociological Gobbledygook or Scientific Standard? Why Judging Gerrymandering is Hard

UCS Blog - The Equation (text only) -

In Tuesday’s historic Supreme Court case, the question asked was how to identify and remedy unconstitutional partisan gerrymandering, where electoral district boundaries are drawn so as to benefit one political party’s voters over others.  The phrase uttered during oral argument that is getting the most attention is Chief Justice Roberts’ assessment of the various techniques that have been proposed to measure it: “sociological gobbledygook.”  It’s a funny way to describe Roberts’ apparent distaste for mathematical, as opposed to legal, explanations, but it also reveals a serious problem for the use of scientific evidence in the court.

Let’s look at the evidence.

One of the core issues in these cases, as I’ve previously discussed, involves the discovery of “workable standards.”  To be workable, a standard must identify a constitutional (fundamental) harm, as opposed to a de minimis (minor) harm, so as not to inundate the court with cases.  Further, the standard must be capable of being practically applied by justices who are not themselves scientists.

Whether tests for the standard of partisan symmetry (the equal treatment of voters regardless of which party they support) are workable was the primary point of contention when Justice Roberts made his remark.

In describing his concern about judicial overreach into the political process, Roberts proclaimed that “you’re taking these issues away from democracy and you’re throwing them into the courts pursuant to, and it may be simply my educational background, but I can only describe as sociological gobbledygook.”

On the one hand, Roberts is identifying a serious problem that needs to be addressed by scientists in the courtroom.  Statistics can be manipulated and are open to interpretation in ways that other forms of legal evidence are often not.

In many cases, both parties trot out potentially motivated “experts” to exchange criticisms in specialized language, leaving judges to make decisions based on evidence that their educational background does not train them for.  Consider two examples taken directly from yesterday’s argument.

The term “false positives” was used by the defense (the state of Wisconsin) to refer to the inaccuracy of one way to measure symmetry, the efficiency gap.  “False positive” refers to a Type I error, when the test for something (like pregnancy, using a urine test that measures levels of the hormone chorionic gonadotropin) turns up positive, but has not actually occurred (no fertilized egg embedded in the uterus, which produces the hormone).  Pregnancy tests have about a 3% false positive rate.  But back to gerrymandering.

In this case, the claim of “false positive” was misapplied, and expanded to describe any state with a significant efficiency gap where the plan was not drawn by the state legislature.  That is, the defense implied that districting plans not drawn by parties (those drawn by courts through litigation or by commissions, etc.) could not be biased.  But the efficiency gap is not a test of who draws a districting map; it is a measure of bias.
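To make concrete what the efficiency gap measures, here is a minimal sketch of the wasted-votes calculation proposed by Stephanopoulos and McGhee, using made-up district vote totals; note that nothing in the computation depends on who drew the map.

```python
# Minimal sketch of the efficiency gap (wasted-votes) calculation.
# The district vote totals at the bottom are made up for illustration.

def wasted_votes(votes_a: int, votes_b: int) -> tuple:
    """Wasted votes for parties A and B in one two-party district.

    The losing party wastes every vote it cast; the winning party wastes
    every vote beyond the bare majority needed to carry the district.
    """
    threshold = (votes_a + votes_b) // 2 + 1
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

def efficiency_gap(districts):
    """Net wasted votes (Party A minus Party B) as a share of all votes cast."""
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        wa, wb = wasted_votes(votes_a, votes_b)
        wasted_a += wa
        wasted_b += wb
        total += votes_a + votes_b
    return (wasted_a - wasted_b) / total

# Hypothetical plan: Party A is "packed" into one landslide district and
# "cracked" across four narrow losses -- 54% of the votes, 20% of the seats.
plan = [(45, 55), (45, 55), (45, 55), (45, 55), (90, 10)]
print(f"Efficiency gap: {efficiency_gap(plan):+.1%}")  # positive here means the plan disadvantages Party A
```

The sign convention is arbitrary; what matters is that the measure summarizes how efficiently each party’s votes translate into seats, regardless of whether a legislature, a court, or a commission drew the districts.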

Even randomly drawn maps using computer simulations can result in quite biased plans, depending on the underlying geographic distribution of voters.  None of the justices seemed to pick this up.  Justice Alito, responding to such claims, expressed grave concern about “the dozens of uncertainties about this whole process.”

Worse still was Chief Justice Roberts’ mistaking of symmetry for “proportional representation, which has never been accepted as a political principle in the history of this country.”

Partisan symmetry is explicitly not a test of proportionality in election results (where a party receives the same percentage of seats as its percentage of votes).  In fact, symmetry was intentionally designed as an alternative standard of testing the principle of political equality in U.S. elections, because proportionality is a higher standard than what the Constitution demands.

These mistakes might have been avoided through a more thorough reading of the many scientific briefs offered to the court for review.  Nevertheless, the burden is on scientists to communicate our work clearly and concisely to non-experts; otherwise this problem will only persist.

On the other hand, several of the Justices had a strong grasp of how scientific standards operate within the voting rights framework.  Justice Kagan, for example, correctly noted that both partisan symmetry and the one-person, one-vote standard (prohibiting unequally populated districts) address the dilution of voting strength for individual voters as a function of statewide plans, not single electoral districts.

Moreover, Justice Sotomayor, responding to the defense’s claims about inaccuracies in estimating the impact of Wisconsin’s plan, pointed out that “every single social science metric points in the same direction.”  That is the sort of understanding about probabilistic estimates that scientists need to convey to judicial authorities.  It is how scientists forecast everything from economic growth to health epidemics and weather patterns.  The Justice continued, noting that the same types of statistical estimates were used to create Wisconsin’s maps in the first place, and that “it worked.  It worked better than they even expected, so the estimate wasn’t wrong.  It was pretty right.”

Judges have their work cut out for them if the Supreme Court finally provides a means by which political parties can be restrained from advancing their partisan interests at the expense of voters’ fundamental right to an equally weighted vote.  But it is up to the scientific community to work with the judiciary in the appropriate application of statistical evidence.  The consequences, which feed back through the entire policy-making process, make it well worth the effort.

Mr. President: Puerto Rico Is, Indeed, Living a “Real Catastrophe”

UCS Blog - The Equation (text only) -

Citizen Soldiers of Puerto Rico Army National Guard, alongside residents of the municipality of Cayey, clear a road after the destruction left by Hurricane Maria through the region, Sep. 30.

Recently, President Trump visited Puerto Rico to meet with federal and Commonwealth officials coordinating the relief and recovery effort in the wake of Hurricane María.  At an Air National Guard base, the president held a briefing where he congratulated the first responders on the ground there. Following that, President Trump toured neighborhoods and a church, where he threw paper towels at the crowd, much as a celebrity would at a sporting event.

I know well the neighborhoods he toured, as they are in my hometown of Guaynabo (goo-aye-nah-bow), a relatively wealthy city of almost 98,000 inhabitants near San Juan.  As I said in a previous blog post, I am relieved that my city–at least the part where my neighborhood lies–was not among the most heavily hit, yet most areas have not been so lucky.

President Trump would have done well to have visited not just Guaynabo, but the hardest-hit areas where a real catastrophic humanitarian crisis—contrary to what he implied yesterday in Puerto Rico—is indeed ongoing.

He could have seen, for example, a destroyed bridge connecting the rural towns of Morovis and Ciales, or a flooded residential subdivision in the eastern coast, where the storm surge came in with force. Not too far from Guaynabo is the San Juan working class neighborhood of Puerto Nuevo, which was heavily flooded as well. Or further inland in the mountainous region of the central and eastern Cordillera Central, he could have witnessed the ravaging defoliation of the El Yunque tropical forest, a vital part of the rich ecosystem of Puerto Rico. An aerial survey of the Levittown suburb in the northern plains would have shown him the effects of widespread flooding in one of the largest suburbs in the Commonwealth.

Even before a visit, just checking with NASA would have given the president an accurate idea of the scope of damage in the San Juan area, as this damage assessment of buildings done with remotely-sensed imagery shows, or the widespread loss of electricity. Maybe that would have made President Trump think twice before declaring that due to the latest official death toll (34 so far, but likely to rise), the situation in Puerto Rico is not “a real catastrophe like Katrina.”

President Trump responds with scorn to the humanitarian plight of Puerto Ricans

But during the visit to the territory, the president did not go to any of the hard-hit areas, had nothing but scorn for the people of Puerto Rico, and seemed to be more worried about money than about the lives and well-being of 3.4 million U.S. citizens.

In one of his first remarks during the visit, President Trump claimed that Puerto Rico was “throwing our [federal] budget out of whack”; the federal assistance provided so far totals just $35 million. In contrast, the National Hispanic Leadership Agenda, a coalition of national Hispanic organizations in the U.S., has called on Congress and the President to, among other things, provide $70 billion for Puerto Rico and the U.S. Virgin Islands.

The president’s visit does nothing to assuage the fears among Puerto Ricans that Puerto Rico will be left behind in the wake of Hurricanes Irma and María.

A few days before his visit, President Trump suggested that Puerto Ricans want “everything to be done for them” in relation to emergency aid and relief. That assertion can’t escape being lumped together with the long-standing racist social construction of Puerto Ricans—especially those of African descent—as lazy and willfully dependent on the government. That characterization is also patently untrue: the massive mobilization of Puerto Ricans in the territory, along with the Puerto Rican diaspora in the United States, demonstrates anything but lazy complacency or an expectation that the federal government will do everything for them. I know because over the last few weeks, I have been part of relief efforts in my local community and witnessed the solidarity of people everywhere—both of Puerto Rican origin and otherwise.

The president’s unfortunate comments were followed by a personal attack on the mayor of San Juan, Carmen Yulín Cruz, for her very candid rebuttal of the acting head of Homeland Security’s comments that the federal response to the Puerto Rico crisis was a “good news story” when thousands of people still have no running water or electricity.

The president has an obligation to provide aid and comfort to millions of U.S. citizens in Puerto Rico and the U.S. Virgin Islands

The demand for immediate action that Puerto Ricans and many others in the U.S. are making of Congress and the President is the demand of 3.4 million American citizens (and another 100,000 in the USVI), whose well-being and safety both Congress and the President have an obligation to provide for in the face of such a catastrophe.

President Trump’s callousness and harsh words for Puerto Ricans contrast sharply with the swift approval of an emergency aid package for Texas and Florida in the wake of Hurricanes Harvey and Irma.

I watch these events unfold, and the response of Congress, but especially of President Trump, with a mix of anger and sadness. It is simply incomprehensible that, almost two weeks after Hurricane María made landfall, Puerto Rico and the U.S. Virgin Islands still have not received an emergency aid package to speed up recovery and avert a worsening of the humanitarian crisis unfolding there.

That is why I joined together with my colleagues at Voces Verdes – Latino Leadership in Action to elevate our voices to demand that Congress and the president provide an initial emergency spending package for Puerto Rico and the US Virgin Islands. We are not alone in this demand. The National Hispanic Leadership Agenda (NHLA) just sent a letter to Congress and the President with other demands in addition to the emergency relief package.

Multiple sectors have mobilized to aid Puerto Rico in recovering from the devastation and misery brought by this hurricane season. It is inspiring to see the way private individuals, including Puerto Rican musicians and actors, have stepped in to help. However, they should not have had to take on something that should be the responsibility of the president and Congress. Their solidarity and quick action contrasts sharply with the victim-blaming rhetoric the President has engaged in.

The president is supposed to be a unifying voice in times of national emergency. Instead, President Trump has chosen to visit ravaged Puerto Rico to make a mockery of human suffering in the face of catastrophic extreme weather. The federal agencies he commands need the resources to continue doing their jobs adequately (as they have been doing within the limitations of the aid that has been provided) to help Puerto Rico and the U.S. Virgin Islands. That job can best be done if the president and Congress direct adequate resources to FEMA and other federal agencies. His scorn and jarring tone with people who have lost everything make that task even more monumentally difficult than it already is.

Congress Could Help Farmers, Prevent Pollution, and Reduce Flood and Drought Damage. Will They?

UCS Blog - The Equation (text only) -

U.S. Department of Agriculture (USDA) Natural Resources Conservation Service (NRCS) Soil Conservationist Garrett Duyck and David Brewer examine a soil sample on the Emerson Dell farm near The Dalles, OR. USDA NRCS photo by Ron Nichols.

The news lately has been full of Congressional battles—healthcare, the debt ceiling, and now tax “reform” (ahem)—and it’s starting to seem like Congress is only interested in blowing things up. But a huge legislative effort is gaining steam on Capitol Hill, one that is likely to have general bipartisan support, though you probably haven’t heard nearly as much about it. I’m talking about the next five-year Farm Bill—which really should be called the Food and Farm Bill, as it shapes a sprawling economic sector worth more than 5 percent of US GDP, and which Congress must reauthorize by September 30, 2018.

In this first of a series of posts on the 2018 Farm Bill, I look at how this legislation could do more to help farmers conserve their soil, deliver clean water, and even reduce the devastating impacts of floods and droughts, all of which would save taxpayers’ money.

Farm conservation works

Since 1985, the Farm Bill has promoted stewardship of soil, water, and wildlife by directing funding to a variety of US Department of Agriculture (USDA) conservation programs. These programs provide financial incentives and technical assistance for farmers and ranchers to protect their soil and store carbon by planting cover crops, reduce fertilizer and pesticide use by rotating a mix of crops, capture excess fertilizer and add wildlife habitat by planting perennial prairie strips in and around vast cornfields, and even take environmentally sensitive acres out of farming altogether.

Recent UCS analysis has shown that farm practices like these lead to positive environmental outcomes while maintaining or increasing farmers’ yields and profits and saving taxpayers’ money.

And our latest report, Turning Soils into Sponges, reveals a surprising additional benefit: growing cover crops and perennial crops can make farmers and downstream communities more resilient to the effects of floods and droughts. The report demonstrates that these practices—which keep living roots in the soil year-round—result in healthier, “spongier” soils that soak up more water when it rains and hold it longer through dry periods. Using these practices, farmers can reduce rainfall runoff in flood years by nearly one-fifth, cut flood frequency by the same amount, and make as much as 16 percent more water available for crops to use during dry periods. But farmers need help to do it.

A changing climate demands more conservation, not less

So it was a real step backward when the 2014 Farm Bill cut the very programs that help farmers build healthy soil and prevent pollution. That bill cut the USDA’s Conservation Stewardship Program (CSP), for example, by more than 20 percent. A USDA official recently told a Senate committee that CSP is “greatly oversubscribed” and must turn away thousands of farmers who want to participate.

(Incidentally, the Senate will hear this week from President Trump’s nominee to lead the USDA’s conservation efforts, whose conservation record as Iowa Secretary of Agriculture has been mixed.)

Meanwhile (surprise!) the problems that on-farm conservation can help solve are not going away by themselves. Midwestern farm runoff has led to deteriorating water quality from Iowa to the Gulf of Mexico. And climate change will only worsen water quality and increase the frequency and severity of floods and droughts.

The latter is particularly bad news for farmers, and for all of us. A new report from the USDA’s Risk Management Agency, which operates the taxpayer-subsidized federal crop insurance program, shows that losses from drought and flooding were to blame for nearly three-quarters of all crop insurance claims paid to farmers and ranchers between 2001 and 2015.

Farmers are adopting conservation practices, and policy support is growing

For example, earlier this year researchers at Iowa State University released the results of their 2016 Iowa Farm and Rural Life Poll, which asked farmers across the state about conservation practices they used between 2013 and 2015. Nearly half (44 percent) reported an increase in the use of practices to improve soil health, with 20 percent reporting they’d increased their use of cover crops.

Meanwhile, the National Farmers Union (NFU), which represents family farmers and ranchers across the country, has become increasingly vocal about the need for USDA programs and research to help farmers build soil health and cope with climate change. And taxpayer advocates have lent their voices to the call for stronger requirements for on-farm conservation as a condition of participating in the federal crop insurance program (so-called conservation compliance). A number of states have undertaken healthy soil initiatives, and some observers expect soil health to get more attention in this Farm Bill, as it should.

Congress: Don’t ask farmers to do the impossible

To recap: farm conservation works, farmers want to do it, and we all need more of it to cope with a changing climate and the floods, droughts, and escalating costs it will bring. So why wouldn’t Congress invest more?

As usual, budget-cutting fever is the problem. The Trump administration’s proposed USDA budget reductions shocked farmers and their allies in Congress last spring, cowing even the powerful Republican chair of the Senate agriculture committee, who warned that the 2018 Farm Bill will need to “do more with less.” That’s a silly thing to say, of course…with most things in life, doing more requires, well, more. For farm conservation, that means financial incentives and technical assistance for more farmers and more acres, along with more monitoring to ensure that it’s getting results.

That’s why UCS joined with NFU and two dozen other organizations in outlining our collective conservation priorities for the 2018 Farm Bill. These include a substantial increase in funding for USDA conservation programs including CSP, along with additional monitoring and evaluation of outcomes, better enforcement of conservation compliance, and improvements in the federal crop insurance program to remove barriers to conservation.

As Congress debates the Farm Bill in the coming months, UCS will be urging them to see farm conservation programs for what they are—critical programs to help farmers stay profitable today while preventing pollution, improving resilience, and avoiding more costly problems down the line.

In short, an excellent investment in our future.

Why Going 100% Electric in California Isn’t as Crazy as it Might Seem

UCS Blog - The Equation (text only) -

Electric vehicle charging stations line the perimeter of San Francisco's City Hall. Photo: Bigstock.

California’s top air pollution regulator, Mary Nichols, made headlines last week after making comments to a Bloomberg reporter about the possibility of banning gasoline cars in California.  Shortly after that, California Assembly member Phil Ting announced he would introduce state legislation to do just that. Skeptics may raise their eyebrows, but if California is going to meet its long-term climate and air quality goals, then nearly all future cars and trucks must be powered by renewable electricity and hydrogen. The good news is the state is already on this path.

Our health and our climate depend on vehicle electrification

It’s no secret that widespread vehicle electrification is needed to meet California’s climate and air quality goals. In 1990, the first Zero Emission Vehicle program was adopted – an acknowledgment that vehicles with zero tailpipe emissions were necessary to ensure healthy air in a state with a growing population and a whole lot of cars.

Climate change has only added to the importance of vehicle electrification, which takes advantage of the efficiency of electric motors and the ability to power vehicles with renewable electricity or hydrogen (fuel cell vehicles have an electric motor and zero tailpipe emissions, similar to battery electric cars).

The state’s recent assessment of vehicle technologies needed to meet our climate and air quality goals shows the importance of widespread vehicle electrification, suggesting that all new car sales should be electric by 2050 (including plug-in hybrids, or PHEVs).  A national assessment, Pathways to Deep Decarbonization in the United States, and a California assessment also point out that a large-scale transition to electric vehicles (EVs) is needed to achieve the level of emission reductions required to avoid dangerous climate change.

Figure 1: From a presentation by staff to the Air Resources Board in March 2017 showing that by 2050 the majority of cars on the road – and all of new car sales – are powered by electric motors.

Banning gasoline and diesel gains popularity  

In the wake of VW’s Dieselgate, and with the impacts of climate change becoming more and more apparent, banning the sale of internal combustion vehicles is becoming a popular policy choice around the world, with France, Britain, India, and China all making big splashes with recent commitments to eliminate such vehicles at some point in the future.

With these strong commitments gathering steam, someone might ask if California is somehow losing its leadership on EVs.  California isn’t losing its leadership; it’s starting to share it with many more parts of the globe.  This is great news, as increased global demand for EVs will help drive down technology costs for everyone and help automakers recoup their investments in EV technology faster.

But is going to 100% electric vehicles practical? It might be hard to imagine a time when every car at your local dealership will be electric. But there are reasons to be bullish on the future of EVs. Battery prices are dropping, with estimates that EVs could reach cost parity with gasoline vehicles sometime in the 2020s. And recent announcements by major manufacturers like Ford, GM, Volvo, VW, and others about expanding electric vehicle line-ups over the next 5 years indicate the industry is betting on growth opportunities.

Figure 2: As recently noted in a blog by my colleague David Reichmuth,  battery costs are declining and approaching the point where EVs achieve cost parity ($125-150 per kWh).

California is taking the right steps to make electric cars an option for more and more drivers

In addition, California is implementing policies to support the deployment of EVs.  There’s a long list, but some of the most critical are direct consumer rebates, incentives targeting low- and moderate-income households, utility investments to support the deployment of EV charging infrastructure, the Low Carbon Fuel Standard, and the Zero Emission Vehicle program, which requires automakers to bring EVs to market. Meanwhile, California’s relatively clean electricity grid means that driving an EV results in global warming emissions equivalent to a 95 mile-per-gallon gasoline car. As California increases its reliance on electricity from renewable sources, emissions will continue to decline.
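For readers curious how an equivalence like that 95-mile-per-gallon figure is computed, here is a minimal Python sketch of the underlying arithmetic. The grid intensity, vehicle efficiency, and per-gallon gasoline emissions values below are illustrative assumptions (roughly California-like), not UCS’s official methodology or numbers.

# Rough sketch of an emissions-equivalent MPG calculation for an EV.
# All input values below are illustrative assumptions, not official figures.

GASOLINE_CO2E_PER_GALLON = 11_000   # grams CO2e per gallon, including upstream emissions (assumed)
GRID_CO2E_PER_KWH = 350             # grams CO2e per kWh delivered, California-like grid (assumed)
EV_KWH_PER_MILE = 0.33              # EV energy use, wall to wheels (assumed)

def equivalent_mpg(grid_g_per_kwh, ev_kwh_per_mile):
    """MPG a gasoline car would need to match the EV's per-mile emissions."""
    ev_g_per_mile = grid_g_per_kwh * ev_kwh_per_mile
    return GASOLINE_CO2E_PER_GALLON / ev_g_per_mile

print(f"{equivalent_mpg(GRID_CO2E_PER_KWH, EV_KWH_PER_MILE):.0f} mpg-equivalent")
# With these assumptions the result is roughly 95 mpg-equivalent.

The point of the sketch is simply that the cleaner the grid gets, the higher the equivalent figure climbs, which is why emissions from EV driving will keep falling as renewables grow.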

Long-term goals must be matched with near-term action

Adopting a ban on gasoline and diesel cars would certainly send a strong long-term signal that powering electric vehicles with clean energy is our ultimate destination. It could focus policy makers’ and regulators’ efforts on supporting the transition and give automakers, charging companies, utilities, and entrepreneurs a vision and long-term target for the future to guide their investments.

However, it’s the near-term efforts to make EVs more accessible to all Californians that will accelerate the transition. That means expanding current programs targeted toward individuals and businesses who buy or use new and used cars and increasing access to charging. And it also means supporting electrification for those who rely on other modes of transportation too (see my colleague Jimmy’s blog on electric buses).

A future without internal combustion engine cars is consistent with a future of clean air and minimized climate impacts. Ultimately, for a transition to a clean, electric transportation system to succeed, the system needs to be better than the one we have today. And it’s the policies we implement today that will drive the investments needed to reach a tipping point, a point where choosing the EV is a no-brainer for whoever is shopping for a car.

 

Can Science (and The Supreme Court) End Partisan Gerrymandering and Save the Republic? Three Scenarios

UCS Blog - The Equation (text only) -

Photo: Wikimedia Commons

On October 3, the US Supreme Court will hear a case concerning the state of Wisconsin’s legislative districts that could resolve a pending constitutional crisis and dramatically improve electoral representation.

At the center of the dilemma is the applicability of a scientific standard to measure discrimination resulting from district boundary manipulation. What’s new in this case is that social scientists have developed a standard. But what the court will do with it is anybody’s guess. So let’s guess.

We have a scientific standard that is discernible and manageable

Social scientists have been hard at work since 2004, when the Supreme Court issued a fragmented, 5-4 decision in Vieth v. Jubelirer holding that “plaintiffs failed to establish a standard” to determine when partisan gerrymandering has gone too far.  The analytical tools for estimating various forms of partisan discrimination have dramatically improved since Vieth, as described in one of the many amicus briefs submitted to the court.

Consensus has emerged around partisan asymmetry as a scientific standard that is both discernible (logically grounded in constitutional protections) and manageable (so that courts can apply it). It measures any difference in the percentage of seats that a given percentage of voters (say 50%) receive, depending on what party they vote for. Asymmetries can be easily estimated with actual election results and computationally simulated vote swings across districts, along with measures of statistical confidence.

Similarly, the mean-median test, comparing each party’s actual vote share in its median district to overall mean vote share, is another way of estimating asymmetries between voters. There are important theoretical and methodological differences between various measures, including the efficiency gap, which compares “wasted” votes between parties. But all are empirically accurate at identifying partisan bias where it matters most: in competitive states where voters from one party have a major seat advantage.
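To make these measures concrete, here is a minimal Python sketch that computes a mean-median difference and an efficiency gap from a list of district-level two-party vote shares. The district numbers are hypothetical, equal turnout across districts is assumed, and the sign conventions shown are only one of several used in the literature.

from statistics import mean, median

# Hypothetical districts: Party A's share of the two-party vote in each.
district_shares = [0.44, 0.46, 0.47, 0.48, 0.62, 0.71, 0.75]

def mean_median_difference(shares):
    """Mean minus median of Party A's vote shares.
    A positive value means Party A's median district lags its average,
    a sign its voters are packed into a few districts."""
    return mean(shares) - median(shares)

def efficiency_gap(shares):
    """Party A's wasted votes minus Party B's, as a share of all votes.
    Wasted votes: every losing vote, plus winning votes beyond 50 percent.
    Positive values indicate the map disadvantages Party A."""
    wasted_a = wasted_b = 0.0
    for share_a in shares:
        share_b = 1.0 - share_a
        if share_a > 0.5:
            wasted_a += share_a - 0.5
            wasted_b += share_b
        else:
            wasted_a += share_a
            wasted_b += share_b - 0.5
    return (wasted_a - wasted_b) / len(shares)  # each district weighted equally

print(f"mean-median difference: {mean_median_difference(district_shares):+.3f}")
print(f"efficiency gap: {efficiency_gap(district_shares):+.3f}")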

However, the fact that a standard has emerged is no guarantee that it will be adopted. Attention will focus on convincing Justice Anthony Kennedy, who welcomed the discovery of “workable standards” as the swing vote in Vieth. His level of satisfaction with these results is likely to drive the justices toward one of the following three scenarios.

Scenario one: Kennedy keeps the Supreme Court out of the thicket

In a crushing defeat for plaintiffs and electoral reformers in both parties, Justice Kennedy is unpersuaded, leading to another 5-4 decision in which the more liberal justices (Ginsburg, Breyer, Kagan, and Sotomayor) agree that symmetry is a workable standard, but they don’t have the votes. A plurality of the court’s conservatives either dismiss outright the idea that courts ought to be entering the political thicket of partisan competition, or they reassert a version of Antonin Scalia’s Vieth opinion, holding that symmetry is a standard measuring discrimination against parties, not people, with only the latter having constitutional rights (although it has been demonstrated that symmetry reflects individual political equality).

Kennedy writes a concurring opinion with the conservatives, articulating a more nuanced failure on the part of plaintiffs to specify “how much is too much,” as both plaintiffs and most of the scientific briefs submitted explicitly placed responsibility for specifying a threshold of unconstitutional discrimination with the courts. Kennedy could also point to in-fighting among political scientists over our favored measures as a lack of consensus. Talk about a tragedy of the commons.

Scenario two: Wisconsin’s districts are thrown out, but the real work is left for future courts

A focused interrogation by Kennedy results in a majority opinion that overturns Wisconsin’s gerrymandered map. Several measures of bias are incorporated into a multi-pronged test that verifies that 1) the district boundaries caused the observed discrimination (asymmetry), and 2) the extent of asymmetry is not likely to be reduced through changing voter preferences. That is, even a “wave” of public opposition would still allow the entrenched party to hold power.

However, the majority does not go so far as to prescribe a general threshold for “how much is too much” gerrymandering. There is no precise level of necessary asymmetry, or responsiveness, or competitiveness specified that constitutes a violation of equal protection or free speech. Standards are left to emerge through future cases, of which there are many. Some version of this outcome seems most likely, given the scientific consensus, the level of extreme gerrymandering witnessed in the 2011 redistricting cycle, and the bipartisan response to it.

Scenario three: A precise standard is adopted with clear direction for lower courts

In this third, and probably least likely scenario, the justices not only establish a multi-prong test to identify unconstitutional partisan discrimination, they also specify the degree of relief that discriminated voters are entitled to. The question of “how much is too much” discrimination is answered precisely through a specific measure, either when asymmetry would result in a single seat change, or change in majority control of a legislative body, or a mean-median difference greater than 5 percent (which is rare) or an efficiency gap greater than 7 percent (which is also rare), etc.

The court could apply the breadth of knowledge that we have to specify thresholds of tolerance, below which any hypothetical districting plan would be invalidated. But because the process of districting involves balancing numerous conflicting principles, such as geographic compactness and partisan bias, the justices are unlikely to go this far, at this time. And only time will tell if a more cautious approach will be adequate.

Can a constitutional crisis be averted?

If the Supreme Court fails to rein in partisan gerrymandering, the fundamental democratic principle of majority rule is undermined. The Electoral College has enabled minority control over the executive, and majority control of the Senate has been determined by a minority of voters due to the underrepresentation of large states.

In 2018, a majority as large as 56 percent of Americans could vote against the governing party (currently the Republican Party) in the House of Representatives, while they retain control of a majority of (gerrymandered) seats. It is up to the Supreme Court to re-establish a republic “of the people.”

President Trump is About to Give a Speech That Directly Undermines Science

UCS Blog - The Equation (text only) -

Next week, President Donald Trump is going to deliver a speech highlighting his only major policy “achievement” to date: sidelining science and rolling back critical public health, safety, and environmental protections. You’re probably going to hear a lot about the president’s absurd executive order that requires agencies to cut two regulations (aka public health protections that provide us clean air, safe consumer products, and more) for every new one issued, and you’ll probably hear some muddled thinking and misinformation about the cost of regulations and all the rules the administration has reversed.

The president might even frame all this deregulatory talk as “winning.”

But this is not what his speech is about. This speech is about President Trump and his administration sidelining science-based safeguards, stripping away vital public health, safety, and environmental protections from the American people. These are regulations that keep our air and water clean, our food safer to eat, our household products and our kids’ toys safer to play with, and our workers safer at work. And it is these regulations that can and should have the greatest positive impact on low-income communities and communities of color, who are often disadvantaged and facing some of the worst public health and environmental threats.

Deregulation = Real world impacts

We’ve already seen the administration’s deregulatory policies in action. Earlier this year, the administration delayed updates to the Risk Management Program, designed to enhance chemical risk disclosure from industrial facilities and improve access to information for first responders, workers, and fenceline communities, all while encouraging the use of safer technologies.

After Hurricane Harvey hit Houston, Arkema’s chemical plant in Crosby exploded, highlighting the importance of this public protection. People were forced to stay away from their homes, first responders suffered injuries and weren’t informed about the dangerous chemicals being stored there (and are now suing Arkema), and Harris County had to divert critical resources from hurricane recovery efforts to respond to the explosion (and is now also suing Arkema).

This is just one example of how sidelining science and rolling back safeguards can negatively impact communities across the country. In a recently released report, UCS chronicled several examples of how the administration has delayed many science-based rules and weakened protections from hazards at work and home. This is what the president’s speech is about.

Science-based policymaking ¯\_(ツ)_/¯

This administration has shown zero interest in evidence-based policymaking. Even when it comes to rolling back regulations, the administration has used inaccurate information to support its actions. In other instances, it has simply used misleading information to support its delay tactics. The Environmental Protection Agency (EPA), whose mission is to protect human health and the environment, has an administrator who is only interested in meeting with representatives from regulated industries, instead of meeting with independent scientists and communities who need the federal government to step up and implement strong protections.

What the administration is focused on, though, is using any means available to it to invalidate public health protections that took years to develop.

All this flies in the face of how science-based policymaking should happen. You look at the threat, the scientific and technical evidence, and then figure out how to mitigate it and ensure the public is not in danger. You don’t arbitrarily decide which public protections should stay in place and which should be rolled back. Nor should our government only take input from vested interests who favor their bottom line over protecting the public.

But that is the Trump Doctrine on regulations. And for this reason, scientists need to continue to watchdog the administration’s actions and hold agencies accountable, to ensure that we have science-based protections in place and that policies are based on facts, not politics.

Threats are threats. They cannot be addressed only when another public protection is no longer on the books. In the future, if the Food and Drug Administration were to issue a rule to ensure safe food, should the EPA be forced to roll back standards for clean water?

The bottom line is when it comes to protecting public health, the ideas championed by President Trump make no sense. Regulations matter, and protecting the system of evidence-based policymaking matters. The only thing President Trump’s speech will be good for is to show the American people how many losses we have taken in the first 10 months of this administration.

One Lesson For DOE From Harvey & Maria: Fossil Fuels Aren’t Always Reliable

UCS Blog - The Equation (text only) -

Photo: Chris Hunkeler/CC BY-SA (Flickr)

The US Department of Energy has proposed that paying coal plants more will make the grid reliable. But last month, three feet of rain from Hurricane Harvey at a coal plant in Fort Bend County, Texas, complicated the messaging around the reliability of fossil fuels in extreme weather. The vulnerability of power grids to storm damage is also on horrible display in Puerto Rico in the aftermath of Hurricane Maria.

Past studies by the Union of Concerned Scientists have highlighted risks from worsening storms and grid issues. The demonstrated risks are in the wires, not the types of power plants.

The damage and hardships in Puerto Rico are expected to exceed past US storm impacts when measured by the number of people who lost power and the duration of the outage. Those earlier storms stirred efforts to make the power system more reliable and resilient to extreme weather.

Recently, new debates have arisen regarding the more contentious but less-relevant (and erroneous) argument that “base-load” plants are the single best provider of grid reliability. In a market where coal-burning plants are losing money and closing, coal’s champions argue that a long list of reliability features of coal are unique and valuable. Now that the owner of the W.A. Parish plant in south Texas reported it shifted 1,300 MW of capacity from coal to gas due to rainfall and flooding disrupting power plant operations in the aftermath of Hurricane Harvey, yet another of these claims about the unique advantages of coal for electricity has been muddied by facts.

Plant owner NRG reported to the Public Utility Commission of Texas that W.A. Parish units 5 and 6 were switched to burn natural gas due to water saturating the coal. The subbituminous coal stored on site is supposed to be a reliability advantage, according to those pushing coal. As that debate heats up (the DOE is seeking vague and unspecified changes to compensation in the electricity markets for plants that have a fuel supply on-site), the too-simple notion that reliability is created by power plants rather than grid operations that integrate all sources will be put to the test.

Some policymakers have asserted that solid fuel stored on-site is superior to natural gas, wind, and solar. Oil is a player too: although it’s a very small part of the electricity fuel supply in the mainland US, that’s not the case in places like Puerto Rico, Hawaii, or the interior of Alaska, where it’s the primary fuel.

People in Puerto Rico use oil to fuel private back-up generators. This too is not unique. Hospitals, police stations, and other pieces of critical infrastructure have historically relied on backup generators powered by fossil fuels for electricity supply during blackouts. However, this requires steady and reliable access to fuel. Puerto Rico is now experiencing a fuel supply crisis, as challenges throughout the supply chain have made it extraordinarily difficult to keep up with demand around the island. After Sandy damaged the New Jersey–New York metropolitan area, many subsequent crises arose because so many back-up generators there failed, in some cases because of inadequate fuel deliveries.

Fortunately, renewable energy and battery storage technology have advanced rapidly in the aftermath of Sandy and the Japanese earthquake that destroyed the Fukushima nuclear plant. Solar panels combined with energy storage are now a viable alternative to back-up generators. This combination has a great advantage over back-up oil burning: it provides economic savings all year, as well as serving in an emergency. Even apartment buildings and low-income housing can gain the benefits of solar-plus-storage as a routine and emergency power supply.

Puerto Rico has a great solar resource, and the sun delivers on schedule without regard to the condition of the harbors or roads. Additional back-up power supplies there should be built from solar-plus-storage, so the people depending on electricity need not worry about fuel deliveries, gasoline theft, or dangers from fuel combustion. In Texas, the grid has already absorbed more wind power than any other US state. The next energy boom in Texas will be solar.

These are real resiliency and reliability improvements.

Pointless Delay to the Added Sugar Label Keeps Consumers in the Dark

UCS Blog - The Equation (text only) -

In another frustrating example of undermining science-based protections, the FDA this morning proposed delaying compliance for revisions to the Nutrition Facts label.

Most food companies were supposed to roll out their revised labels by July 2018. This delay would mean that those initial, larger companies would have until January 2020 and smaller companies until January 2021.

I have been dreading this official announcement all year and hoping—as more and more products I see in stores have updated their labels—that the FDA would acknowledge that its original rule was perfectly reasonable and has already given companies ample time to comply.

In December, food industry leaders proposed two different riders to draft House appropriations legislation that would have delayed the rule. Luckily, those failed to make it into final language.

Then, in April, at his confirmation hearing, now-FDA Commissioner Scott Gottlieb implied that he might delay the revised nutrition facts label. I urged Gottlieb to keep the compliance dates that were part of the final rule issued in 2016.

Once confirmed, Gottlieb was faced with what I would consider a pretty clear-cut decision: implement a rule that was based on clear science about the public health consequences associated with excessive added sugar consumption—one that was also supported by the expert-driven Dietary Guidelines recommendations—or cave to industry wishes to delay the rule, even though the majority of food companies would have had until 2019 to make the new changes to their labels, and larger food companies like Mars, Inc. and Hershey Co. have already met the deadline or are on track to meet it.

In fact, according to the Center for Science in the Public Interest, at least 8,000 products from a variety of companies already bear the new label.

A few months later, the FDA announced its intention to push back compliance dates, but there was no formal decision or indication of how long the delay would be. I, again, urged Gottlieb not to take a step backward on food label transparency by delaying the new label.

Despite what some food companies would have you believe, they have had plenty of time to accept the science on added sugar consumption and to give consumers the information for which they’ve been clamoring. The FDA first began its work to revise the nutrition facts label in 2004, and the proposed rule, which included the added sugar line, was issued in 2014. Industry has had over ten years to give consumers the information they want to make informed decisions, and to acknowledge the mounting evidence that excessive sugar consumption can lead to adverse health consequences, including heart disease, obesity, diabetes, and hypertension.

Instead, as we demonstrated in a 2015 analysis of public comments on the FDA’s proposed rule, the majority of unique comments supported the rule (99 percent of those supportive comments came from public health experts), while 69 percent of the comments opposing the rule came from the food industry. The companies’ reasons for opposition included flimsy arguments about consumers’ ability to understand nutrition labels.

Last week, we signed onto a letter along with twenty other science, public health, and consumer organizations urging Gottlieb to let the rule move forward. As we wrote in the letter, this delay means that “an entire cycle of the Dietary Guidelines for Americans will have passed without the federal government’s premier public-health regulatory agency taking final action to implement a major recommendation of the Guidelines.”

It also means that consumers will have to continue to guess how much of the sugar in their food is added, gambling on healthy food purchasing decisions. While asking the agency to delay its labeling rules, the sugar industry seems to understand that it’s actually time to reformulate and meet consumer demand for healthier products to win consumers’ trust. A surefire way to win our trust would have been to move forward with the label, not force us to wait another year and a half for information we have the right to know.

The FDA’s failure to follow the science and listen to public health experts, including HHS staff who helped write the most recent Dietary Guidelines, is incredibly disappointing. We will be weighing in on this decision with comments that will be accepted for 30 days after October 2nd and will update you on how you can tell the FDA to rescind its rule to delay the enforcement dates for added sugar labeling.

Pruitt Guts The Clean Power Plan: How Weak Will The New EPA Proposal Be?

UCS Blog - The Equation (text only) -

News articles indicate that the EPA is soon going to release a “revised” Clean Power Plan. It is very likely to be significantly weaker than the original CPP, which offered one of the country’s best hopes for reducing carbon emissions that cause global warming.

EPA Administrator Scott Pruitt and President Trump have made no secret about their intent to stop and reverse progress on addressing climate change, so there’s every reason to expect that the revised CPP will be fatally flawed and compromised.

Here’s how we’ll be evaluating it.

How we got here

In August 2015, the EPA issued final standards to limit carbon emissions from new and existing power plants, a historic first-ever step to limit these emissions. Those standards, developed under the Clean Air Act, came about as a result of a landmark 2007 Supreme Court ruling and subsequent Endangerment finding from the EPA.

The final Clean Power Plan (CPP) for existing power plants was projected to drive emissions down 32 percent below 2005 levels by 2030, while providing an estimated $26 billion to $45 billion in net benefits in 2030.

In March 2017, President Trump issued an executive order blocking the Clean Power Plan. He claimed to do so to promote “energy independence and economic growth” (despite the fact that the US transition to cleaner energy continues to bring significant health and economic benefits nationwide). The EPA then embarked on a process of implementing the EO, including initiating a review of the CPP.

The US Court of Appeals for the DC Circuit has granted two stays in court challenges related to the CPP, the most recent of which was issued on August 8 for a 60-day period.  These stays were specifically to give the EPA time to review the rule; this in no way changes the agency’s “affirmative statutory obligation to regulate greenhouse gases.”

The EPA is currently expected to issue a revised CPP by October 7, aiming to head off litigation on this issue. Of course, if the plan it issues is a weak one, as is likely to be the case, there is no question that court challenges will continue.

EPA’s most recent status update filed with the DC Circuit confirms that the agency has sent a draft rule to the Office of Management and Budget, and Administrator Pruitt expects to sign the proposed rule in fall 2017. This will begin a comment period on the new draft rule before it can be finalized.

Five Metrics for Assessing the Revised Clean Power Plan Proposal

While we don’t yet know exactly how the proposed rule will look, there are some key things we’ll be watching for:

1. Will the revised plan cut power sector carbon emissions at least as much as the original CPP?

Not likely. Reports indicate that reductions might be limited to what can be achieved through measures at individual power plants, such as efficiency improvements. (Power plant efficiency improvements, known as ‘heat rate improvements,’ reduce the amount of fuel energy consumed per unit of electricity generated.)

The associated carbon reductions are going to be relatively small compared to what could be achieved through a power sector-wide approach—including bringing on line cleaner generation resources, increasing demand-side energy efficiency, and allowing market-based trading—as was adopted in the original Clean Power Plan. For the final CPP, the EPA estimated that, on average nationally, a fleet-wide heat rate improvement of approximately 4 percent was feasible, which would result in a fleet-wide CO2 reduction of about 62 million tons in a year. (For context, US power sector CO2 emissions in 2016 were 1,821 million metric tons.)
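As a rough back-of-the-envelope check on that scale, the short Python sketch below multiplies an assumed coal share of power sector emissions by a 4 percent improvement. The coal share and the proportionality assumption are simplifications for illustration, not the EPA’s modeling.

# Back-of-the-envelope scale check for CO2 savings from a fleet-wide heat rate improvement.
# Assumes emissions fall roughly in proportion to fuel burned per MWh (a simplification).

US_POWER_SECTOR_CO2_MMT = 1821   # 2016 US power sector CO2, million metric tons (from the text above)
COAL_SHARE_OF_SECTOR_CO2 = 0.7   # assumed share of power sector CO2 coming from coal plants
HEAT_RATE_IMPROVEMENT = 0.04     # ~4 percent fleet-wide improvement (from the text above)

coal_co2_mmt = US_POWER_SECTOR_CO2_MMT * COAL_SHARE_OF_SECTOR_CO2
savings_mmt = coal_co2_mmt * HEAT_RATE_IMPROVEMENT
print(f"Rough annual savings: {savings_mmt:.0f} million metric tons CO2")
# With these assumptions: ~51 million metric tons, the same order of magnitude
# as the ~62 million tons the EPA estimated for the final CPP.

Either way, the savings amount to only a few percent of total power sector emissions, which is the point of the paragraph above.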

2. Will it promote renewable energy while heading off an overreliance on natural gas?

An approach that’s limited to carbon reductions at current fossil-fired power plants will miss one of the biggest opportunities to lower power sector emissions: ramp up cheap renewable energy!

The original CPP explicitly called out a role for renewable energy in helping to cost-effectively bring down carbon emissions. UCS analysis shows how boosting renewable energy can help cut emissions affordably while bringing consumer and health benefits. Simply switching from coal to gas, while it does lower carbon emissions at the power plant, is just not going to be enough to achieve the deep cuts in power sector emissions we ultimately need from a climate perspective. Boosting the contribution from renewable energy can help limit the climate, economic and health risks of an overreliance on natural gas.

3. Will it lowball the harms posed by climate change?

Administrator Pruitt seems to understand that legally the EPA is required to regulate carbon emissions and he cannot simply do away with the CPP without replacing it. But will the new plan actually recognize the magnitude of the damages that climate change poses?

Earlier this year, President Trump also issued an executive order undercutting the use of the social cost of carbon (SCC), which measures the costs of climate change (and the benefits of cutting carbon emissions). The SCC served as a proxy for measuring the dollar benefits of carbon reductions from the original CPP. If the re-proposed CPP uses an artificially low SCC, that would fly in the face of the latest science and economics.

4. Will it actually help coal miners get their jobs back?

Not very likely, a fact that even coal company executive Robert Murray and Senator Mitch McConnell have admitted. Market trends are continuing to drive a historic transition away from coal-fired power that is unlikely to change just by getting rid of the CPP.

If the Trump administration and Congress are serious about helping coal miners and coal mining communities, they should invest in real solutions—worker training, economic diversification and other types of targeted resources—to help these communities thrive in a clean energy economy, as my colleague Jeremy Richardson writes.

5. Will it increase pollution?

If the revised proposal attempts to maintain or increase the amount of coal-fired power, that will lead to more air, water and toxic pollution.

In addition to being a major source of carbon emissions, coal-fired power plants are a leading source of emissions of nitrogen oxides, sulfur dioxide, particulate matter, and mercury, among other types of harmful pollution. These pollutants cause or exacerbate heart and lung diseases and can even lead to death. Mercury can affect the neurological development of babies in utero and young children. The Clean Power Plan would have delivered significant health benefits through reductions in these co-pollutants.

Clean energy momentum will continue

Despite Administrator Pruitt’s attempts to undermine the CPP, clean energy momentum will continue nationwide. The facts on the ground are rapidly changing. Market trends continue to drive down coal-fired power because coal is an increasingly uncompetitive option compared to cleaner options like natural gas and renewable energy.

That’s why Xcel Energy CEO Ben Fowke recently said, “I’m not going to build new coal plants in today’s environment,” and “We’re investing big in wind because of the tremendous economic value it brings to our customers.”

It’s why Appalachian Power’s Chris Beam also said,

At the end of the day, West Virginia may not require us to be clean, but our customers are (…) So if we want to bring in those jobs, and those are good jobs, those are good-paying jobs that support our universities because they hire our engineers, they have requirements now, and we have to be mindful of what our customers want. We’re not going to build any more coal plants. That’s not going to happen.

The pace of renewable energy growth is particularly striking, with new wind and solar installations outstripping those of any other source of power, including natural gas.

And as my colleague Julie McNamara recently pointed out, energy efficiency is one of the top electricity resources in the US, and in fact was the third-largest electricity resource in the United States in 2015.

That’s why more and more states, cities and businesses are doubling down on their commitment to renewable energy and the goals of the Paris Climate Agreement, saying ‘We’re Still In!’

For all of you who care deeply about our nation’s transition to clean energy, please ask your state legislators to push for more renewable energy even as the Trump administration tries to turn back progress.

We still need robust federal policies

Despite the promising market trends, there’s no denying we need robust federal policies to accelerate the current clean energy momentum and cut US carbon emissions faster and deeper to meet climate goals.

The reality is that the original CPP itself was not strong enough, though it was a pivotal step in the right direction. The US will need to do more, both in the power sector and economy-wide to cut emissions in line with the goals of the Paris Agreement.

A weakened CPP would be a sad step back in our efforts to address global warming. At a time when the risks of climate change are abundantly clear—just consider this year’s terrible hurricane and wildfire seasons—this is no time to delay action.

Administrator Pruitt: Do your job

Mr. Pruitt continues to show a blatant disregard for the mission of the agency he heads, while pandering to fossil fuel and other industry interests. Weakening the power plant carbon standards is just the latest in a long string of actions he has taken to undermine public health safeguards that were developed in accordance with laws Congress has passed.

Furthermore, he has repeatedly attacked the role of science in informing public policy. Perhaps most egregiously, he continues to deny the facts on climate change. (If he is genuinely interested in understanding the latest science, he need look no further than the US National Academy of Sciences.)

Administrator Pruitt, stop hurting our children’s health and future. Do your job—and start by setting strong carbon standards for power plants.

 

Photo: justice.gov

Happy 40th, SNAP! Celebrating Four Decades of Effective Nutrition Assistance

UCS Blog - The Equation (text only) -

Happy birthday to the Supplemental Nutrition Assistance Program as we know it!

SNAP, it’s hard to believe it was only 40 years ago that President Carter made you into a better, stronger safety net by signing the Food Stamp Act of 1977. Of course, you’re grown now, and you know it takes more than one person to make a law. You were really born out of the hard work and bipartisanship of Senators George McGovern and Bob Dole—two legislators who loved effective anti-hunger legislation very, very much, and who improved the Food Stamp Act of 1964 by eliminating required payments for food stamp users and fine-tuning eligibility.

Naturally, some things have changed over 40 years

Like your name. You went through that phase where everybody called you “food stamps,” and we supported you, but “SNAP” really does suit you better.

You’ve also seen a host of changes come and go related to program eligibility, work requirements, and nutrition education funding—many of which continue to be subjects of debate.

And technology keeps barreling forward. You’ve seen the amazing things it can do—watching as schools handily adopt data matching technologies you’d never dreamed of having—and some days you feel like you’re getting the hang of it, like when you finally transitioned from paper stamps to an electronic benefit system. (Other days you’re calling your daughter-in-law because you once saw her set up a Roku in ten minutes and boy could you use her help with this.)

But some things have stayed the same

You’ve been there for the American people, unfailingly, through all the ups and downs of economic recovery and recession, changes in administration and leadership, and even that time Representative Steve King said that mean and totally untrue thing about you right to your face. (Sorry again. No one likes him, if that makes you feel better.)

You were there when the 2008 recession hit and 2.6 million Americans lost their jobs—many unexpectedly—and in the years that followed, as “middle-class” jobs became harder and harder to come by and people really needed you for a while.

And even now, amid the devastation of hurricanes and flooding, you are providing food to those who desperately need it through the Disaster Supplemental Nutrition Assistance Program.

Despite what people say, you’re not just a program for “the poor.” You’re a program for all of us, because we are all vulnerable to the unexpected, economic crises and natural disasters included, and you understand that.

The best thing about getting older?

Take it from an organization that hit 40 a few years ago—the best thing about getting another year older is realizing that the people you’ve supported, through thick and thin, are here to support you too.

And one of the best things about the farm bill is that it gives us a chance to do just that.

On behalf of the 21 million American households you serve, and the millions more who know you’ll be there when they need you: Happy Birthday, SNAP.

Nuclear Plant Risk Studies: Then and Now

UCS Blog - All Things Nuclear (text only) -

Nuclear plant risk studies (also called probabilistic risk assessments) examine postulated events like earthquakes, pipe ruptures, power losses, fires, etc. and the array of safety components installed to prevent reactor core damage. Results from nuclear plant risk studies are used to prioritize inspection and testing resources–components with greater risk significance get more attention.

Nuclear plant risk studies are veritable forests of event trees and fault trees. Figure 1 illustrates a simple event tree. The initiating event (A) in this case could be something that reduces the amount of reactor cooling water like the rupture of a pipe connected to the reactor vessel. The reactor protection system (B) is designed to detect this situation and immediately shut down the reactor.

Fig. 1. (Source: Nuclear Regulatory Commission)

The event tree branches upward based on the odds of the reactor protection system successfully performing this action and downward for its failure to do so. Two emergency coolant pumps (C and D) can each provide makeup cooling water to the reactor vessel to replenish the lost inventory. Again, the event tree branches upward for the chances of the pumps successfully fulfilling this function and downward for failure.

Finally, post-accident heat removal examines the chances that reactor core cooling can be sustained following the initial response. The column on the right describes the various paths that could be taken for the initiating event. It is assumed that the initiating event happens, so each path starts with A. Paths AE, ACE, and ACD result in reactor core damage. The letters added to the initiating event letter define what additional failure(s) led to reactor core damage. Path AB leads to another event tree – the Anticipated Transient Without Scram (ATWS) event tree because the reactor protection system failed to cause the immediate shut down of the reactor and additional mitigating systems are involved.

The overall risk is determined by the sum of the odds of pathways leading to core damage. The overall risk is typically expressed something like 3.8×10-5 per reactor-year (3.8E-05 per reactor-year in scientific notation). I tend to take the reciprocal of these risk values. The 3.8E-05 per reactor-year risk, for example, becomes one reactor accident every 26,316 years—the bigger the number, the lower the risk.
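
To make that arithmetic concrete, here is a minimal sketch in Python of how the core damage paths in Figure 1 sum to an overall core damage frequency, and how that frequency converts to the “one accident every N years” form used above. Every number in it is invented for illustration; none comes from an actual plant study.

# Illustrative event-tree arithmetic for the Figure 1 layout (all numbers hypothetical)
freq_A   = 1e-3   # assumed pipe-rupture initiating event frequency, per reactor-year
p_fail_B = 1e-5   # assumed failure probabilities on demand:
p_fail_C = 1e-2   #   B = reactor protection system, C and D = emergency coolant pumps,
p_fail_D = 1e-2   #   E = post-accident heat removal
p_fail_E = 1e-3

# Core damage paths (the AB path hands off to the separate ATWS event tree and is omitted here)
path_AE  = freq_A * (1 - p_fail_B) * (1 - p_fail_C) * p_fail_E              # pumps work, heat removal fails
path_ACE = freq_A * (1 - p_fail_B) * p_fail_C * (1 - p_fail_D) * p_fail_E   # pump C fails, D works, heat removal fails
path_ACD = freq_A * (1 - p_fail_B) * p_fail_C * p_fail_D                    # both pumps fail

core_damage_frequency = path_AE + path_ACE + path_ACD   # per reactor-year
print(f"Core damage frequency: {core_damage_frequency:.1e} per reactor-year")
print(f"Roughly one core damage event every {1 / core_damage_frequency:,.0f} reactor-years")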

Fault trees examine reasons for components like the emergency coolant pumps failing to function. The reasons might include a faulty control switch, inadequate power supply, failure of a valve in the pump’s suction pipe to open, and so on. The fault trees establish the chances of safety components successfully fulfilling their needed functions. Fault trees enable event trees to determine the likelihoods of paths moving upward for success or downward for failure.
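
As a companion sketch, a fault tree for one of the emergency coolant pumps might combine its basic failure causes (the kinds listed above) through an OR gate, assuming the causes are independent; the resulting failure probability is what the event tree would use for that pump’s downward branch. The individual probabilities here are, again, made up purely for illustration.

# Illustrative fault tree for emergency coolant pump C (hypothetical basic-event probabilities)
basic_events = {
    "control switch fails": 1e-3,
    "power supply unavailable": 5e-3,
    "suction valve fails to open": 4e-3,
}

# OR gate for independent basic events: P(fail) = 1 - product of (1 - p_i)
p_pump_works = 1.0
for p in basic_events.values():
    p_pump_works *= (1.0 - p)
p_pump_fails = 1.0 - p_pump_works

print(f"Pump failure probability on demand: {p_pump_fails:.2e}")
# This value would feed the event-tree sketch above in place of the hand-picked p_fail_C.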

Nuclear plant risk studies have been around a long time. For example, the Atomic Energy Commission (forerunner to today’s Nuclear Regulatory Commission and Department of Energy) completed WASH-740 in March 1957 (Fig. 2). I get a kick out of the “Theoretically Possible but Highly Improbable” phrase in its subtitle. Despite major accidents being labeled “Highly Improbable,” the AEC did not release this report publicly until after it was leaked to UCS in 1973; UCS then made it available. One of the first acts by the newly created Nuclear Regulatory Commission (NRC) in January 1975 was to publicly issue an update to WASH-740. WASH-1400, also called NUREG-75/014 and the Rasmussen Report, was benignly titled “Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants.”

Fig. 2. (Source: Atomic Energy Commission)

Nuclear plant risk studies can also be used to evaluate the significance of actual events and conditions. For example, if emergency coolant pump A were discovered to have been broken for six months, analysts can set the chances of this pump successfully fulfilling its safety function to zero and calculate how much the broken component increased the risk of reactor core damage. The risk studies would determine the chances of initiating events occurring during the six months emergency coolant pump A was disabled and the chances that backups or alternates to emergency coolant pump A stepped in to perform that safety function. The NRC uses nuclear plant risk studies to determine when to send a special inspection team to a site following an event or discovery and to characterize the severity level (i.e., green, white, yellow, or red) of violations identified by its inspectors.
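
A rough sketch of that kind of calculation, reusing the illustrative numbers from the event-tree sketch earlier (the “pump A” in this example plays the role of one of the pumps labeled C or D in Figure 1): set the broken pump’s failure probability to one, since it has zero chance of fulfilling its function, recompute the core damage frequency, and scale the increase by the six-month exposure window. All values remain hypothetical.

# Conditional risk of a degraded condition (illustrative numbers only)
def core_damage_frequency(p_fail_C, p_fail_D=1e-2, freq_A=1e-3, p_fail_B=1e-5, p_fail_E=1e-3):
    # Sum of the Figure 1 core damage paths, per reactor-year
    path_AE  = freq_A * (1 - p_fail_B) * (1 - p_fail_C) * p_fail_E
    path_ACE = freq_A * (1 - p_fail_B) * p_fail_C * (1 - p_fail_D) * p_fail_E
    path_ACD = freq_A * (1 - p_fail_B) * p_fail_C * p_fail_D
    return path_AE + path_ACE + path_ACD

baseline = core_damage_frequency(p_fail_C=1e-2)  # pump assumed available as designed
degraded = core_damage_frequency(p_fail_C=1.0)   # pump broken: zero chance of performing its function
exposure = 0.5                                   # six months, expressed in reactor-years

added_risk = (degraded - baseline) * exposure    # incremental core damage probability over the outage
print(f"Baseline: {baseline:.1e}/yr  Degraded: {degraded:.1e}/yr  Added risk: {added_risk:.1e}")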

Nuclear Plant Risk Studies: Then

In June 1982, the NRC released NUREG/CR-2497, “Precursors to Potential Severe Core Damage Accidents: 1969-1979, A Status Report,” that reported on the core damage risk from 52 significant events during that 11-year period. The events included the March 1979 meltdown of Three Mile Island Unit 2 (TMI-2), which had a core damage risk of 100%. The effort screened 19,400 licensee event reports submitted to the AEC/NRC over that period, culled out 529 events for detailed review, identified 169 accident precursors, and found 52 of them to be significant from a risk perspective. The TMI-2 event topped the list, with the March 1975 fire at Browns Ferry placing second.

The nuclear industry independently evaluated the 52 significant events reported in NUREG/CR-2497. The industry’s analyses also found the TMI-2 event to have a 100% risk of core damage, but disagreed with all the other NRC risk calculations. Of the top ten significant events, the industry’s calculated risk averaged only 11.8% of the risk calculated by the NRC. In fact, if the TMI-2 meltdown is excluded, the “closest” match was for the 1974 loss of offsite power event at Haddam Neck (CT). The industry’s calculated risk for this event was less than 7% of the NRC’s calculated risk. It goes without saying (but not without typing) that the industry never, ever calculated a risk to be greater than the NRC’s calculation. The industry calculated the risk from the Browns Ferry fire to be less than 1 percent of the risk determined by the NRC—in other words, the NRC’s risk was “only” about 100 times higher than the industry’s risk for this event.

Fig. 3. Based on figures from June 1982 NRC report. (Source: Union of Concerned Scientists)

Bridging the Risk Gap?

The risk gap from that era can be readily attributed to the immaturity of the risk models and the paucity of data. In the decades since these early risk studies, the risk models have become more sophisticated and the volume of operating experience has grown exponentially.

For example, the NRC issued Generic Letter 88-20, “Individual Plant Examination for Severe Accident Vulnerabilities.” In response, owners developed plant-specific risk studies. The NRC issued documents like NUREG/CR-2815, “Probabilistic Safety Analysis Procedures Guide,” to convey its expectations for risk models. And the NRC issued a suite of guidance documents like Regulatory Guide 1.174, “An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis.” This is but a tiny sampling of the many documents issued by the NRC about how to conduct nuclear plant risk studies—guidance that simply was not available when the early risk studies were performed.

Complementing the maturation of nuclear plant risk studies is the massive expansion of available data on component performance and human reliability. Event trees begin with initiating events—the NRC has extensively sliced and diced initiating event frequencies. Fault trees focus on performance at the component and system level, so the NRC has collected and published extensive operating experience on component performance and system reliability. And the NRC compiled data on reactor operating times so that failure rates can be developed from the component and system data.

Given the sophistication of current risk models compared to the first generation risk studies and the fuller libraries of operating reactor information, you would probably think that the gap between risks calculated by industry and NRC has narrowed significantly.

Except for being absolutely wrong, you would be entirely right.

Nuclear Plant Risk Studies: Now

Since 2000, the NRC has used nuclear plant risk studies to establish the significance of violations of regulatory requirements, with the results determining whether a green, white, yellow, or red finding gets issued. UCS examined ten of the yellow and red findings determined by the NRC since 2000. The “closest” match between the NRC and industry risk assessments was for the 2005 violation at Palo Verde (AZ), where workers routinely emptied water from the suction pipes for emergency core cooling pumps. The industry’s calculated risk for that event was 50% (half) of the NRC’s calculated risk, meaning the NRC viewed the risk as double what the industry calculated. And that was the closest the risk viewpoints came. Of these ten significant violations, the industry’s calculated risk averaged only 12.7% of the risk calculated by the NRC. In other words, the risk gap narrowed only a smidgen over the decades.

Fig. 4. Ratios for events after 2000. (Source: Union of Concerned Scientists)

Risk-Deformed Regulation?

For decades, the NRC has consistently calculated nuclear plant risks to be about 10 times greater than the risks calculated by industry. Nuclear plant risk studies are analytical tools whose results inform safety decision-making. Speedometers, thermometers, and scales are also analytical tools whose results inform safety decision-making. But a speedometer reading one-tenth of the speed recorded by a traffic cop’s radar gun, a thermometer showing a child to have a temperature one-tenth of her actual temperature, or a scale measuring one-tenth of the actual amount of chemical to be mixed into a prescription pill would be unreliable tools that could not responsibly continue to be used for safety decisions.

Yet the NRC and the nuclear industry continue to use risk studies that clearly have significantly different scales.

On May 6, 1975, NRC Technical Advisor Stephen H. Hanauer wrote a memo to Guy A. Arlotto, the NRC’s Assistant Director for Safety and Materials Protection Standards. The second paragraph of this two-paragraph memo expressed Dr. Hanauer’s candid view of nuclear plant risk studies: “You can make probabilistic numbers prove anything, by which I mean that probabilistic numbers ‘prove’ nothing.”

Oddly enough, the chronic risk gap has proven the late Dr. Hanauer totally correct in his assessment of the value of nuclear plant risk studies. When risk models permit users to derive results that don’t reside in the same zip code, let alone the same ballpark, the results prove nothing.

The NRC must close the risk gap, or jettison the process that proves nothing about risks.
