Combined UCS Blogs

Offshore Wind’s Next Steps: 6 to Watch For


Credit: Ad Meskens

Things certainly aren’t dull in the world of offshore wind these days. Between new legislation to kick-start offshore wind markets, new bids to meet states’ demand for projects, and new markets getting set to open up, momentum just keeps building. Here are six near-term things I’m watching for.

1. New York’s first 800 megawatts

The journey of 9,000 megawatts, it might be said, starts with the first 800. Thanks to Governor Andrew Cuomo, the Empire State has the most ambitious offshore wind target in the nation, and it is working to make good on that goal. That included issuing a request for proposals (RFP) for the first 800 or so megawatts late last year, with bids due in February.

Developers responded in a big way, with four proposing a total of 18 projects. Any one of those developers would bring some serious overseas experience to bear on the US market.

Decisions about which project or projects to go forward with could come out as early as this week, so I’m definitely watching for those.

2. A 2,000-megawatt target in Connecticut

Siemens 2.3-megawatt offshore wind turbine installation at the Baltic 1 Offshore Wind Farm, Baltic Sea, Germany, September 1, 2010. (Photo by Walt Musial/NREL)

The Constitution State’s house of representatives earlier this month passed a bill, in strong bipartisan fashion, to have the state contract for up to 2,000 megawatts (MW) of offshore wind. Now it’s up to the senate, where a vote could also happen this week.

Gov. Ned Lamont, who recently announced a $93 million public-private partnership to upgrade New London’s port to handle offshore wind, is poised to sign the 2,000-MW mandate when the senate does its thing. Stay tuned.

3. New Jersey’s first 1,100 megawatts

Meanwhile, just down the coast, the Garden State has been busy re-building offshore wind momentum since Governor Phil Murphy came into office in January 2018. That has included NJ issuing its own RFP in January, to find the first 1,100 of the state’s 3,500-MW target.

As in NY, the NJ RFP attracted strong interest from international players, with bids from three developers. And, as in NY, a decision about the first project(s) could be coming any day now.

4. Massachusetts’s next 1,600-megawatt pull

The Bay State was the first out of the gate with a big legislative pull, putting in place a 1,600-MW requirement in 2016. A follow-on 2018 law asked Governor Charlie Baker’s administration “to investigate the necessity, benefits and costs of requiring distribution companies to conduct additional offshore wind generation solicitations of up to 1,600 MW,” and execute if things look good—in other words, to bring the state’s total up to 3,200 MW.

We and many others weighed in during that study, and it’s due to be wrapped up and presented to the legislature shortly.

5. Massachusetts’s next 800-megawatt bid

Meanwhile, Massachusetts’s first 1,600 MW chunk is moving along, with near-term things-to-watch-for of its own. The project selected to satisfy the first 800 MW of that, Vineyard Wind, has recently gotten state approvals for its contracts with Massachusetts utilities, and for the transmission line connecting it to the state’s electricity grid.

And now the RFP for the second half of the first 1,600 megawatts (stay with me now…) is out, released last week. So watch for the bids of up to 800 MW, due in August, and the project selection, ‘long about November.

6. California leases

And, while a lot of the spotlight is on the Northeast and Mid-Atlantic, other parts of the country are well worth keeping an eye on, too. Take California, for example, where the federal Bureau of Ocean Energy Management (BOEM) is looking at three wind areas off the central and northern parts of the state. Fourteen companies have indicated an interest in one or more of those areas.

And, equally importantly, a broad group of stakeholders is engaging to make sure that as offshore wind happens on the West Coast, it’s done right.

And more

Those are six things I’m watching for in terms of offshore wind’s next steps, but this is far from a comprehensive list. Maine, Rhode Island, Maryland, Delaware, Virginia, North Carolina, and the Great Lakes, for example, should also be on folks’ radar screens, along with technological developments and happenings overseas.

Because however and wherever it’s happening, offshore wind development is well worth watching.

Photos: Ad Meskens; Dennis Schroeder/NREL

You Can’t Ignore the Future: 5 Reasons Climate Science Looks Beyond 2040


Photo: Julian Osley/Geograph

Yesterday it was reported that the Trump administration is redoubling its efforts to undermine climate science. James Reilly, head of the US Geological Survey, reportedly instructed the agency’s scientists to limit projections of climate impacts to just 2040. Studies typically project out to 2100. It is nearly the end of May 2019. Failing to look beyond 2040 is like pretending a baby born today won’t live past 21. As with many life plans, like mortgages signed onto today, climate science routinely looks past the year 2040. Here are five reasons why:

Figure 1. Carbon dioxide lingers a long time in the atmosphere.

  1. Due to Earth’s carbon cycle, carbon dioxide (CO2) released by burning coal, oil, and gas today will be trapped in the atmosphere for decades to thousands of years. The more that is released, the longer it lingers in the atmosphere (see Figure 1 and the sketch just after this list).
  2. Climate change is largely “baked in” over the next decade, and emissions pathways start to diverge after that (see Figure 2). This means that without near-term changes, some of the climate impacts we would see would be irreversible even if we decreased atmospheric carbon dioxide later in the century.
  3. Governments around the world, including the US and the federal agencies that comprise the US Global Change Research Program (including the Department of the Interior, of which the USGS is a part), monitor and report the human activities that overload our atmosphere with carbon, other heat-trapping gases, and aerosols. These climate calculations ensure business leaders; planners; and local, state and national governmental leaders have the most up-to-date tools needed to make informed decisions on behalf of people living in the US.
Figure 2. Global carbon emissions and associated global average temperature change.

  4. Most parties to the United Nations Framework Convention on Climate Change, as stated in the goals of the Paris Agreement, have committed to holding the increase in Earth’s global average surface temperature to well below 2 degrees Celsius (3.6 degrees Fahrenheit). We won’t stop working on limiting that rise after 2040.

  5. Congress, back in the first Bush Administration, recognized the need for examining climate in the long-term. The Global Change Research Act of 1990, which established the US Global Change Research Program, states that the research plan (emphasis added) “shall provide for, but not be limited to the following research elements: (1) Global measurements, establishing worldwide observations necessary to understand the physical, chemical, and biological processes responsible for changes in the Earth system on all relevant spatial and time scales… (4) Predictions, using quantitative models of the Earth system to identify and simulate global environmental processes and trends, and the regional implications of such processes and trends.” Furthermore, the Act requires that the scientific assessment (the National Climate Assessment) “analyz[e] current trends in global change, both human-inducted [sic] and natural, and projects major trends for the subsequent 25 to 100 years.”
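
To make the first point concrete, here is a minimal Python sketch of how slowly an emitted pulse of CO2 leaves the atmosphere. The multi-exponential fit and its coefficients are a standard approximation from the carbon-cycle literature (Joos et al. 2013), not figures from this post, so treat the output as illustrative.

# A minimal sketch of why CO2 emitted today matters far beyond 2040: the
# airborne fraction of a CO2 pulse decays on several timescales at once,
# and a sizable share effectively never leaves on human timescales.
# Coefficients are approximate values from a standard multi-exponential
# fit (Joos et al. 2013); they are illustrative, not from this article.
import math

A = [0.2173, 0.2240, 0.2824, 0.2763]        # pulse-response weights
TAU = [float("inf"), 394.4, 36.54, 4.304]   # decay timescales, in years

def airborne_fraction(t_years):
    """Fraction of an emitted CO2 pulse still airborne after t years."""
    return sum(a * math.exp(-t_years / tau) for a, tau in zip(A, TAU))

for t in (0, 21, 100, 500, 1000):
    print(f"after {t:>4} years: {airborne_fraction(t):.0%} of the pulse remains")

After 21 years (the distance from today to 2040), roughly 60 percent of the pulse is still in the air, which is exactly why projections that stop at 2040 miss most of the story.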

In other words, the science allows us (and the law requires us) to look beyond the next 21 years to gauge the coming impacts we may face as the climate crisis mounts. As long as agencies follow the laws of the US, the Department of Defense, NASA, the Department of Commerce (including NOAA), the Department of the Interior, the Department of Health and Human Services, and the other agencies that contributed to the National Climate Assessment should continue to provide the latest evidence and update outlooks up to and continuing past 2040 to ensure the health, safety and economic prosperity of those living in the US.

Figure sources: IPCC; USGCRP NCA4 Vol II

Drops, Ripples, Waves: Reflections on a year of science advocacy from the 2018 UCS Science and Democracy Fellows (Part 1)


In response to the increasing political attacks on science, in 2018 the Union of Concerned Scientists launched the Science and Democracy Fellowship to support scientists in becoming local advocacy leaders. We were selected for the inaugural six-month program to mobilize our local communities, in partnership with UCS, in confronting federal attacks on science.

Who are we, and what did we do?

We are five early- and mid-career scientists from Indiana (Adrienne), Maine (Shri), Missouri (Emily), Montana (Lindsay), and Nevada (Tim) who organized actions and events within our respective local communities to stand up for science-based policies at the local and federal level. Some common themes emerged as we reflected on our collective lessons-learned, which we’ll share in a two-part blog series.

  1. Being a constituent gives you the right to engage. Start with one small step; each action you take will empower you to do more.
  2. Develop inclusive relationships.
  3. Be explicit in your ask and prepare to be adaptive to the response.

We continue to integrate these ideas in our advocacy and invite you to listen to our experiences in our own voices below.

1. Being a constituent gives you the right to engage. Start with one small step; each action you take will empower you to do more.

Tim – Advocacy doesn’t have to be complicated or be some huge project. Small efforts, like letters to the editor (LTEs), are manageable acts of advocacy that achieve modest action goals when people don’t have much time or experience. Success on these smaller projects builds momentum that can support larger ones. I found that once my group had completed a smaller effort, it was easier to focus on a bigger goal.

Adrienne – I agree, Tim. Leading with small examples lowers the bar for others to get engaged. For example, if you are trying to get others to call their congresspeople, are you also calling every week? Even better, can you get someone else to make a call with you during your coffee break? LTEs can be short, but should be timely, which can be challenging to accomplish as a solo act. Can you get one other person to join you in writing a letter, getting them engaged, splitting the work, and enhancing inclusivity?

Shri – The template for writing LTEs provided by UCS made it so easy! Once I had a local paper in my hand, it took about 5 minutes of skimming to find an article I could reply to. Then it was as simple as plugging in sentences according to the template. The biggest barrier during our LTE party was the fact that none of us actually read the paper! We realized that this was an important way to be connected to the local community. Writing an LTE is a great place to start. If I were to do it over again, I would get a bunch of people in the same room with a stack of papers. We could easily go through the whole process of skimming the paper, finding the article, and writing the letter in an hour.

Emily – Remember that newspapers are struggling with shrinking budgets! Many editors and  reporters will be happy that you’re offering your own perspective in the form of an op-ed or LTE, and they don’t have to use vital resources to track a story down themselves.

Lindsay – It’s easy to make excuses that prevent you from engaging. It’s just as easy to engage. An LTE is a mere 150 words, and we all have 150 words to share on a topic we care about! I put off writing an LTE for months; when I finally did it, it was a breeze. Having community members and representatives reach out to me on social media in response was validating, and made me feel silly for putting off such a simple, effective task for so long.

2. Develop inclusive relationships.

Shri – Build inclusive relationships, make connections, and use your network. Consider who is at the table, be grateful to those who show up, and make the effort to reach out to the people who are not at the table, whose voices are underrepresented. When you prepare to take an action, take a step back and identify who is impacted, then make moves to raise their voices. This could mean putting your efforts on hold to support what they already have going on. Be intentionally inclusive and proactive about addressing equality. It’s helpful to make meaningful connections by keeping the ask low-pressure, simple, and sincere.

Tim – I think of this one as a network circle. A journey around my network circle includes members of science advocacy groups, such as the Nevada watchdogs; the communities who have a specific science-supported goal; and the audience or recipient of the action or advocacy goal. I agree with Shri and the other fellows that inclusion is essential to success. This inclusion was key to identifying the important science questions that we submitted for our Senate debate.

Emily – As Tim has said, a “network circle” is a great visual that really speaks to the core importance of advocacy communities. I envisioned a “ripple” when thinking about the network I hoped to build; as in, how far do my advocacy concerns resonate outside my immediate circle? Also, in thinking about diversity, equity, and inclusion (DEI), it’s important to ask, “Who am I not seeing ‘at the table’? Who is missing, and why?” I’m learning not to assume that the people who show up to your actions are the only ones interested. Many others may be constrained by time, resources, or a feeling they don’t belong. But the strongest network circle is going to be the one that captures as many voices as possible.

Lindsay – Diversity, equity, and inclusion were at the forefront of all of my planning, but I struggled to incorporate them effectively in my actions. This struggle led me to organize a discussion, open to the community, on the challenges and importance of creating inclusive spaces. My takeaway is that this is not always an easy task, but it is a task worth every ounce of energy. Be willing to learn in public spaces and learn from your neighbors.

3. Be explicit in your ask and prepare to be adaptive to the response.

Tim – In my experience, people have limited time but do want to get involved. That being said, the more specific the tasks or asks you present, the greater the chance of involvement and success. I held two contrasting group events and saw a noticeable difference in engagement when I was able to drill down to one or a few items for each person to accomplish, rather than an open format that left individuals to work out the details of a larger project.

Shri – Use examples to show what a finished piece looks like, i.e., an LTE or letter to one’s representatives.

Emily – After you’ve made a specific ask, as Shri and Tim mention, it might be time to “adapt to the response.” Remember that your advocacy goals may differ from those of the groups or individuals you are working with, but be prepared to lean into that difference. A community member who’s been engaged for longer than you has insider knowledge, as well as needs and constraints that are important to heed.

Each of us joined the Fellowship as a single drop, so to speak, but in joining our advocacy efforts with each other, and in engaging with members of our local communities, we made ripples in advancing science and policy advocacy in our respective states. Over time, these advocacy ripples became waves and influenced science and policy at a higher level. Just remember – we started as drops. Now, that little drop could be YOU.

Stay tuned for the second part of our series where we focus on how to make organizing and advocacy a sustainable endeavor–even while juggling work, school, and life.

 

 

Shri A. Verrill grew up in the western foothills of Maine and holds an M.S. in Biology from the University of Southern Maine, where she gained expertise in wetland science focusing on coastal salt marsh and estuarine ecology. Shri is currently a Habitat Restoration Project Manager with the Downeast Salmon Federation, and has lobbied at both the state and federal levels with the Maine Chapter of the Sierra Club, the Natural Resources Council of Maine, and the Downeast Science Watchdogs.

 

Lindsay Wancour works with Swan Valley Connections, a collaborative conservation and education nonprofit, as their Field Program Coordinator. Originally from Michigan, Lindsay moved to Montana after graduating from Michigan State University and served in AmeriCorps’ Montana Conservation Corps. She then went on to complete her M.S. in Environmental Science at the University of Montana, focusing on community engagement in watershed health. After completing her UCS fellowship, she started a UCS Western Montana Local Team and has continued her advocacy work with her newly formed team.

 

Adrienne Keller is a PhD student in the Evolution, Ecology and Behavior program in the Department of Biology at Indiana University, where she studies forest carbon and nutrient cycling. Adrienne holds an M.S. in Resource Conservation from the University of Montana and a B.A. in Biology and Geography from Macalester College (St. Paul, MN). In addition to her research in ecosystem ecology, Adrienne is an active member of the newly formed, grassroots organization Concerned Scientists @ IU.

 

Tim Rafalski is a Ph.D. candidate in the Computer Science department at the University of Nevada, Las Vegas. He works under Dr. Andreas Stefik conducting empirical studies—designing, running, and implementing programming language experiments—to validate scientific computing design and organization. Outside of the lab, Tim is a math and science tutor for students from elementary school through college, and he helps organize and participates in community-elevating educational events.

 

Emily Piontek is seeking her master’s degree in Human Dimensions of Natural Resource Management at the University of Missouri – Columbia. She believes that climate solutions and common-pool-resource protections require a combination of political action and the fostering of place-based environmental values in our communities. In her classes and as a research assistant, she studies the relationship between human behavior and natural resources.

 

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Energy Collision Coming: Technology Evolved, Why Haven’t Utilities?


Photo: Famartin/Wikimedia Commons

Our modern economy depends on electricity, the miracle technology of the 19th century. Many old policies and practices of the electric utility industry have stuck with us into the 21st century. Electricity has had heroes and villains along the way, as well as enormous accomplishments of engineering, public service, and safety. But while economics and public attitudes have changed about many things since the first electric bill was sent in January 1883, some tools, techniques, and attitudes in the utility industry have changed far less. To serve society and maintain a healthy environment, we need a utility industry open to modern ideas and new approaches.

Maybe it’s because the industry was based on a monopoly business model for 100 years, or because the invisible product requires engineering, or because there is a tension between the public interest and companies shrouding themselves in the name “public service company”… but whatever the reason, there is a utility company culture that often appears paternalistic or patronizing. New technologies owned by consumers could make our energy supply safer and less expensive if the old utilities better recognized customer and public interests and investments.

Today in the U.S., many changes are bearing down on electric utilities that demand a modern grid approach, such as state and city clean energy goals, and customers adding rooftop solar and electric vehicles.  But the old utility company tendency remains, with assurances to one and all that the large, sophisticated corporation has society’s best interests in mind, and there’s no need to pay too close attention. Sometimes that doesn’t quite work out.

Exhibit A: PJM

By some measures, the largest and most sophisticated U.S. utility is PJM, the grid operator for the region from New Jersey west to Chicago and south to Virginia, Kentucky and North Carolina. PJM provides the command and control, the market rules, and the financial clearing for wholesale electricity and high voltage transmission used by 65 million people in 13 states plus D.C. PJM is regulated by the Federal Energy Regulatory Commission (FERC).

But when we look at recent developments making headlines for PJM, it seems clear that the public is still getting an arrogant utility, one that is treading on state policies with too little care for consumers’ wallets.

Witness the case of Green Hat, a financial trading firm with a questionable history (and a headquarters addressed to a UPS store). In response to Green Hat’s unusual and ultimately failed financial activity, PJM made a series of mistakes, from misunderstanding collateral and risk management to incorrectly believing that a Green Hat pledge would suffice or that the situation couldn’t get worse. Green Hat’s unchecked speculation in PJM markets ended in default, which led to the departure of the PJM CFO and some $400 million in financial obligations for consumers. An independent report cited “an unwarranted air of confidence” as a contributing factor to this fiasco.

Witness a level of arrogance when PJM fails to report its political contributions, a disclosure required by law to allow proper public oversight.

Witness the fight PJM has entered by taking to FERC a request to exclude zero-carbon and renewable energy plants from its capacity market, which creates the inventory of plants that count for reliability and transmission infrastructure. PJM holds that state clean energy policies that support clean air and the climate by helping these plants are an unacceptable interference with the market. (This is the “MOPR” rule, for folks following closely.)

The PJM proposal for changes in rules (known as “MOPR”) will move state-supported resources out of the capacity market, resulting in higher costs for consumers.

A particularly old-style approach from PJM is seeking additional revenues for coal plants with a vague argument that “fuel security” is a thing we need. (It isn’t.) PJM continues to push this debate despite meeting all reliability standards and recently boosting the power plant payments already used for reliability. In addition, PJM has rejected ideas that would look at winter needs separately from summer needs, saying over and over that there is ample supply in winter. See the graph below.

PJM pays for year-round supplies, even though winter needs are lower.

Change is here; what’s the right thing to do?

Meanwhile, changes abound in the energy landscape. Public opinion favoring clean energy is higher than ever, and states as well as local governments and corporations are adopting policies for higher levels of clean energy, even 100%. Homeowners (2 million in the US) have taken up solar as a DIY energy policy. In fact, even PJM anticipates that current mandatory Renewable Portfolio Standards will lead the PJM region to add 25,000 MW of wind and 12,000 MW of solar resources, with roughly 8,000 MW of that total behind the meter, by 2034. Thanks to state policy leadership! The economics of and investment in renewable energy, demand response technology, and storage are all on the upswing across the US. But PJM has taken steps to hold each of these down. As PJM stays with old approaches to fuel supply, a year-round capacity construct that over-supplies in winter, and an embrace of old state-sponsored resources while rejecting new ones, a collision is coming.

Elected officials and public interest advocates are championing lower costs and clean energy with a variety of state policies. These policies often address market failures and unpriced environmental externalities. Energy policy is relevant to local economic development, job creation and pollution-reduction. PJM objects to the states making policies that set priorities in energy production.

The way forward

There will be a public interest, and involvement, in the utility industry. PJM will need the participation of assertive state agencies and public interest groups that can look beyond the screen PJM offers through its external affairs efforts. PJM could change, and decide to facilitate the existing state RPS laws and demand-side policies. Change could also be driven in a constructive way by Congress and FERC, or it could come as a gradual erosion of engagement as local jurisdictions go a different way than the PJM vision. Policymakers and industry have got to keep up with the times. Changing public needs, technology, and economics cannot be ignored forever. The record shows that PJM has more to do for consumer protection, transparency, and the enabling of a clean energy transition. In the words of Martin Luther King, Jr., these are times of challenge and controversy for the utility industry.


The Wettest 12 Months: New Analysis Shows Spikes in Flood Alerts in the US


Photo: Staff Sgt. Herschel Talley/Nebraska National Guard

April 2019 marked the wettest 12-month period in the United States since recordkeeping began 124 years ago, breaking the previous record set from May 2015 to April 2016. In most places in the continental US, by April 2019 it had already rained more than the annual average for the 20th century. This week, storms are dumping up to a foot of rain on northern and central parts of the US. It’s evident that extreme precipitation events are getting more extreme, and that climate change is one of the culprits.

But what does this mean for flood risks? We already know that more frequent, heavier rainfall is causing a higher risk of flooding. As of May 21, 2019, hundreds of counties in the Great Plains and Midwest were under active flood or flash flood warnings and advisories (in light green, dark green, and dark red on the National Weather Service (NWS) map below). But it’s unclear what the “wettest 12-month period” ever on record means in terms of flood risks to the population, so we took a look at flood alerts, warnings, and advisories issued by the NWS to get some clarity. A key function of the NWS is to communicate to government agencies and the public when life-threatening extreme weather events are likely or certain to occur. One key way the NWS does that is by issuing alerts based on meteorological forecasting models and data.

Active flood or flash flood warnings and advisories (in light green, dark green, and dark red) on May 21, 2019.

Flood risks are increasing – and climate change is playing a large role

One useful way that climate scientists assess change over time is by looking at the change in climatological variables (e.g., precipitation) between a recent period and a historical period – the difference is called a “climate anomaly.” I’ve taken that concept and applied it to calculating the difference between the average number of flood alerts issued per 12 months from 1986 to 2017 (the historical period) and the number of alerts issued between May 2018 and April 2019—what NOAA just declared to be the wettest 12-month period on record (the recent period).
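
For readers who want to reproduce the gist of this calculation, here is a minimal Python sketch. The file name and column layout are assumptions for illustration only; the actual analysis drew on NWS alert records archived at the Iowa Environmental Mesonet.

# A minimal sketch of the county-level "alert anomaly" described above,
# assuming a hypothetical CSV ("nws_alerts.csv") with one row per flood
# watch/warning/advisory and columns: county_fips, issued (a timestamp).
# The file and column names are illustrative, not the author's actual data.
import pandas as pd

alerts = pd.read_csv("nws_alerts.csv", parse_dates=["issued"])

# Historical baseline: average alerts per county per 12 months, 1986-2017.
hist = alerts[(alerts.issued >= "1986-01-01") & (alerts.issued < "2018-01-01")]
baseline = hist.groupby("county_fips").size() / 32  # 32 years in the baseline

# Recent period: the record-wet 12 months, May 2018 through April 2019.
recent = alerts[(alerts.issued >= "2018-05-01") & (alerts.issued < "2019-05-01")]
recent_counts = recent.groupby("county_fips").size()

# Anomaly = recent 12-month count minus historical 12-month average.
anomaly = recent_counts.sub(baseline, fill_value=0)
share_above = (anomaly > 0).mean()
print(f"{share_above:.0%} of counties with any alerts exceeded their historical average")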

My calculations for the continental US—based on data from NWS alerts archived at the Iowa Environmental Mesonet—are alarming. I found that out of 3,108 counties in the continental US, 71 percent (2,197) had more flood watches, warnings, and advisories during the last 12 months than the average for the 1986-2017 historical period. Many of the counties with the largest estimates of increased alerts (in red on the map) host large populations in metropolitan areas, for example Los Angeles (CA), Indianapolis (IN), Chicago (IL), Phoenix (AZ), Dallas and southeastern Texas, Sioux Falls (SD), Nashville (TN), Columbia (SC), Asheville (NC), and Pittsburgh and Philadelphia (PA). Risks are high in these urban regions, many of which have large impermeable surface areas that can contribute to flood risks. And many have insufficient flood protection to deal with increased water volumes during the heaviest precipitation events.

Change in National Weather Service flood watches, warnings, and advisories

71 percent of counties in the CONUS had more flood watches, warnings, and advisories during the last 12 months than the average for the 1986-2017 historical period

Slicing the data by region over time also shows that regional patterns are similar to the observed changes in heavy precipitation reported in the National Climate Assessment over the longer 1958-2016 period (the 99th percentile precipitation panel in the linked figure), which found the largest increases in the Northeast and Midwest. These regions also experienced the largest rapid increases in flood watches, warnings, and advisories, with the Great Plains, Southeast, and Alaska also showing some rapid increases.

Flood watches, warnings, and alerts issued by NWS, 1986-2018

We are clearly in a new normal in terms of extreme precipitation, and that raises questions about the new normal of flood risks. Certainly, the Arctic is getting hotter while the US Northeast has been chilly and gloomy; the March bomb cyclones contributed a ton of snow to Midwestern places that quickly flooded as temperatures rose shortly after. And did you see that it snowed in New England in May?

But besides climate factors, the increase in flood alerts also points to increased risk to people from new development in areas of the country that previously had smaller populations. Recall that extreme weather alerts are issued by NWS to protect lives and property, so extreme weather in unpopulated areas does not typically trigger an alert. But as more places are populated, the people and infrastructure there become vulnerable to flood risks. That seems to be the case in places with large population growth over the last few decades. Case in point: four of the counties with 50 or more alerts in the 12 wettest months on record are Arizona counties that experienced large population growth since the 1980s, and Maricopa County, where metro Phoenix is located, had the second-highest number of alerts (Yavapai County is #1).

And along the Mississippi River’s banks, scientists say that farmland, residential, and commercial development on floodplains, along with too much reliance on levees and other forms of flood protection, have given people a false sense of security. Yet that security is eroding, because the floodgates along the Mississippi have had to be opened more frequently. Floodwaters are intentionally diverted away from urban locations and onto agricultural or rural areas, often leading to economic damages in the agricultural sector.

Obviously, a complex set of factors contributes to localized increases in flood risk. It’s not lost on me that even though Arizona counties had some of the largest increases, the Southwest as a region had the smallest growth in NWS alerts, as expected for a semi-arid region. However, when it does rain there, the rain is more likely to be intense and to overwhelm infrastructure designed for the flash flood flow rates of the past. We know that climate is a growing contributor to risks in many parts of the country. And these risks are recurring in the same places, as the map above shows, which is exacerbating impacts on people and property.

Building resilience to increased flood risks from climate and other factors

With flood risks growing, it’s urgent to do more to help build resilience. Building resilience to flood risks requires integrated actions and resources from the federal family of agencies (e.g., FEMA, HUD, the Corps of Engineers, USGS, NOAA, NASA, DOD), and NWS flood warnings are just one part of this. A few obvious places to start would be for the federal government to consider climate risks in flood risk mapping and make other improvements to it, invest in pre-flood risk reduction measures, think carefully about where and how we develop, preserve non-paved areas, and pay attention to low-income and other vulnerable communities who are typically overburdened with climate risks such as flooding. In addition, the country relies on a network of government agencies, universities, non-government organizations, and the private sector for advancing and communicating the science on extreme precipitation and riverine flooding to communities, policymakers, planners, and engineers. Congress should continue to support adequate funding for the following key agencies and programs:

  • The National Oceanic and Atmospheric Administration and the National Aeronautics and Space Administration provide weather forecasting and scientific research on extreme weather events and a changing climate.
  • The US Geological Survey leads the Federal Priority Stream gauges program (part of the larger National Streamflow Network), Flood Inundation Mapping program, and the 3D Elevation Program, which the nation depends on for accurate flood risk mapping and planning.
  • FEMA provides flood risk mapping.
  • The Centers for Disease Control and Prevention and the Environmental Protection Agency provide valuable resources for families and communities to help them stay safe and healthy before, during, and after floods.

Funding should be increased for both the stream gauge and mapping programs. Flood risk maps exist for only about one-third of the nation, and many of these are out of date and limited in scope. Congress and the states, working with federal and state agencies, could take three critical actions to address the science and data needs: expanding research on extreme precipitation events, increasing the river gauge network, and ramping up flood mapping programs.

Finally, the analysis presented here fills one critical gap in tracking cumulative flood warnings – a task that should be routinely done by NWS and communicated to the public by FEMA. All of these actions led by government agencies can help strengthen the science that keeps us safe from emerging flood and climate-related risks.

Map: weather.gov

PREPA’s Agreement is Terrible for Puerto Rico


Jose Jimenez Tirado/Getty Images file

Lea en español >

A new agreement on Puerto Rico Electric Power Authority’s (PREPA) debt represents a major setback for the future of the island.

 

It’s not new that PREPA is in bankruptcy and that the priority of Gov. Rosselló is its privatization. It’s also not new that Puerto Ricans have been worried about the possible disastrous consequences that the privatization can generate. These include excessive increases in electricity rates and the exacerbation of public health and environmental problems due to the improper handling of ashes, air pollution, and emissions causing the climate crisis.

These concerns are being confirmed with the recent announcement of the agreement reached between the Fiscal Control Board, the majority of PREPA’s bondholders, a PREPA bond insurer and Rosselló’s government.

To explore the implications of this agreement I talked with Dr. Agustín Irizarry, a professor of Electric Engineering at the University of Puerto Rico in Mayagüez.

What is this agreement about?

Through this agreement, a “debt charge” will be imposed to cover the deficit inherited from PREPA. This charge must be paid by all PREPA users from this summer until 2067.

Additionally, the debt charge will apply to those who currently have their own generation systems or who install them in the future.

Why is the PREPA agreement concerning?

For several reasons:

  • Puerto Ricans will pay more than double the value of PREPA’s debt. The agreement establishes that Puerto Ricans must pay the debt charge for 47 years to cover a deficit of close to $9 billion. The current rate of 22 cents per kilowatt-hour (kWh) will rise 2.8 cents/kWh in 2020 (before the election); it will rise 4.55 cents/kWh starting in 2043 and will remain there for more than 20 years. This means that for a debt of about $9 billion, Puerto Ricans will pay more than $23 billion, not including between $100 million and $200 million to cover administrative expenses. (A rough arithmetic sketch follows this list.)
  • Autonomous generation users will also have to pay the debt charge. This is a moment in which people are searching for alternatives to avoid going through what so many had to endure: living months and months without power after Hurricane Maria. That’s why, after Maria, everybody wants a solar system on their roof with a battery to store the energy. Hurricane Maria demonstrated the tremendous vulnerability of our centralized electric system, so autonomous generation systems should be promoted. On the contrary, the agreement establishes that the debt charge will also apply to those who own or install their own generation systems. Those who start generating their own energy with solar panels on or after September 30, 2020 must pay the charge immediately after installing the system. And those who installed their own generation systems before this date and are connected to the grid must start paying the debt charge for the energy they produce as of 2040.
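
To see how a modest per-kWh surcharge compounds into more than double the debt, here is a rough arithmetic sketch in Python. The surcharge schedule comes from the agreement as described above; the annual billed-sales figure is an assumption chosen for illustration (roughly 13 terawatt-hours per year reproduces the article’s $23 billion total).

# A rough arithmetic sketch of the debt charge described above. The
# surcharge schedule is from the article; ANNUAL_SALES_KWH is an
# assumption (about 13 TWh of billed sales per year roughly reproduces
# the article's $23 billion total; actual sales will vary year to year).
ANNUAL_SALES_KWH = 13e9  # assumed island-wide billed sales per year

total = 0.0
for year in range(2020, 2068):                 # 2020 through 2067
    charge = 0.028 if year < 2043 else 0.0455  # $/kWh, per the article
    total += charge * ANNUAL_SALES_KWH

print(f"cumulative debt-charge payments: ${total / 1e9:.1f} billion")
# roughly 2.6x the ~$9 billion deficit, before $100-200 million in admin costs
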
Satellite images of Puerto Rico at night before and after Hurricane Maria.

Puerto Rico in the dark after Maria.

What are the choices?

The agreement guarantees the payment of the debt but does not offer any alternative for:

  • increasing the reliability of the electrical network,
  • reducing air pollution by improving the health of Puerto Ricans, and
  • reducing the emissions that produce the climate crisis.

Electricity rates have not risen since 1989, contributing in part to the lack of investment in the electricity infrastructure. This has had a negative impact on the quality of service.

The agreement should not be signed, because it only benefits bondholders. Instead, a planned rate increase should serve to settle the debt within 15 years, improve the reliability of the electric grid, and help Puerto Ricans in their transition to a decentralized system. This decentralized system should provide optimal service and respond to the challenges of our time. This will be a key step toward increasing the resilience of the system in preparation for natural disasters such as Maria.

How can we prevent PREPA’s agreement from moving forward?

The agreement must first pass through the legislature, the energy commission and the bankruptcy court before being approved. We must alert the public so that they know what is being proposed and act to prevent the approval of this agreement. Only by doing this will we be able to protect our energy future.

The Basics of Integrated Resource Planning in California


Photo: Elena Koycheva/Unsplash

Energy experts geek out over a process known as Integrated Resource Planning. It’s not widely followed by the general public, but Integrated Resource Plans (“IRPs”) determine where consumers’ electricity will come from, how clean that power will be, and whether states will meet their clean energy and climate goals. In California, IRPs are key to decarbonizing the electricity sector and turning the state’s climate goals into reality.

Why this process?

The purpose of IRPs is to develop a path forward that meets renewable energy goals and global warming emissions reduction targets. Current law requires 60% of California’s electricity to come from renewable sources, such as wind and solar, by 2030. Current law also requires California to reduce global warming emissions to 40% below 1990 levels by 2030. Electricity providers must spell out in their IRPs how they will meet these goals while simultaneously minimizing costs, ensuring grid reliability, and minimizing the impact of air pollution on California’s disadvantaged communities.

In California, integrated resource planning was mandated by a 2015 state law. The law requires investor-owned utilities, community choice aggregators, and almost all electric service providers to develop an IRP every two years and submit those plans to the California Public Utilities Commission for approval. Publicly owned utilities are required to develop a plan every five years and submit it to the California Energy Commission.

How does it work?

To ensure that California achieves all its clean energy and climate goals, the California Public Utilities Commission (CPUC) has developed an integrated resource planning process that repeats every two years. (The California Energy Commission has a separate process for publicly owned utilities that is not discussed here.) The CPUC’s process goes like this:

  1. The CPUC sets a global warming emissions reduction target for California’s electricity sector. The “40% below 1990 levels by 2030” requirement applies to the entire state, and since it is easier to reduce emissions from the electricity sector than from other sectors of the economy (e.g. transportation, agriculture, and industry), the electricity sector contribution to the state-wide requirement must be frequently reevaluated to ensure that California’s emissions reductions remain on track.
  2. The CPUC performs electricity grid modeling of the entire state to determine the amounts and types of new resources (e.g. wind, solar, and batteries) that are necessary to achieve the global warming emissions reduction target while meeting future electricity needs. This modeling is used to develop an overall plan for the state’s electricity sector. (A toy version of this kind of modeling appears just after this list.)
  3. Electricity providers create individual IRPs, illustrating how they will reduce their global warming emissions by providing customers with additional clean electricity while minimizing costs, ensuring grid reliability, and minimizing air pollution in California’s disadvantaged communities. Electricity providers must demonstrate that they are doing their part to reduce emissions as part of the statewide plan.
  4. The CPUC collects the individual plans from all electricity providers and aggregates them into a combined plan. The CPUC then compares its original plan (from Step 2) to this new plan to make sure that California will still meet its goals if the state’s electricity providers all follow their individual plans.
  5. Lastly, the CPUC brings all this planning to life by implementing new policies and authorizing electricity providers to develop clean energy projects.
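
Step 2 is, at its core, a least-cost capacity-expansion problem: choose how much of each resource to build so that projected demand is met and the emissions target is respected, at minimum cost. The Python sketch below is a deliberately tiny illustration of that idea; every number in it is invented, and the CPUC’s actual modeling (the RESOLVE model) is vastly more detailed.

# A deliberately tiny sketch of the least-cost capacity-expansion idea
# behind Step 2. All numbers (costs, capacity factors, emission rates,
# demand, and the emissions cap) are invented for illustration only.
from scipy.optimize import linprog

techs = ["solar", "wind", "gas"]
cost_per_mw = [0.9e6, 1.3e6, 0.7e6]    # assumed annualized $/MW built
cap_factor = [0.25, 0.35, 0.55]        # assumed average output fraction
tons_co2_per_mwh = [0.0, 0.0, 0.4]     # assumed emission rates

demand_mwh = 50e6     # annual energy that must be served (assumed)
co2_cap_tons = 2e6    # sector emissions target (assumed)

# Annual energy produced per MW built:
mwh_per_mw = [cf * 8760 for cf in cap_factor]

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, where x is MW built.
A_ub = [
    [-m for m in mwh_per_mw],                               # serve demand
    [e * m for e, m in zip(tons_co2_per_mwh, mwh_per_mw)],  # emissions cap
]
b_ub = [-demand_mwh, co2_cap_tons]

res = linprog(c=cost_per_mw, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
for name, mw in zip(techs, res.x):
    print(f"{name}: build {mw:,.0f} MW")

With these made-up inputs, the cheap but emitting resource is built only up to the emissions cap, and clean resources fill the rest of demand, which is the basic trade-off the real modeling navigates at far greater resolution.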

At the end of the day, the last step is the most important part. No matter how much planning you do, planning by itself doesn’t reduce global warming emissions. California’s electricity providers must follow through with their plans and buy more clean energy in order to achieve all the goals of the integrated resource planning process.

The CPUC’s integrated resource planning process has five steps. The entire process repeats every two years.

What’s the latest?

The California Public Utilities Commission recently completed the first two-year cycle of its integrated resource planning process. Importantly, the approved plan does not call for any new natural gas power plants – instead, it paves the way for 12 gigawatts of new, clean resources to come online by 2030, including solar, wind, geothermal, and battery storage. (For reference, California already has roughly 30 gigawatts of renewable generation capacity, which generates approximately one-third of the state’s electricity.)

The CPUC has also set the stage for the next cycle of the integrated resource planning process with several new features:

  • Natural gas power plant study: The next cycle will focus more closely on existing natural gas power plants and the extent to which they are required for maintaining reliability over the coming decade. Last year, the Union of Concerned Scientists conducted a study indicating that, of all the gas plants in the California Independent System Operator territory (which covers 80% of California), roughly a quarter could be retired without negative consequences. This coming integrated resource planning cycle will include a similar study to better understand how many natural gas power plants really need to stay online through 2030.
  • Planning for 100% clean electricity: California passed a law last year that sets a goal for all electricity sold to customers to be 100% carbon-free by 2045. The next cycle of the integrated resource planning process will begin to study the investments necessary to achieve this bold goal.
  • Procurement track: The CPUC has stated its intention to begin a “procurement track” that will run parallel to the integrated resource planning process. The main motivation behind the procurement track is to make sure that California is developing the clean energy projects necessary to meet its renewable energy and decarbonization goals; if progress stagnates, the Commission would be able to mandate more clean energy projects through the procurement track. The procurement track will serve as a safety net that ensures sufficient clean energy progress even if an individual electricity provider fails to follow its IRP, or if, collectively, the IRPs of all the state’s electricity providers do not add up to meet the state’s renewable energy or decarbonization goals.

With the next cycle already underway, California is continuing to rapidly decarbonize its electricity sector, with integrated resource planning helping to show the way.


Children’s Health Research Centers Protect Our Kids. The EPA Just Defunded Them.


Photo: EPA

Almost 19 years ago, the Environmental Protection Agency (EPA) entered into a partnership with the National Institute of Environmental Health Sciences (NIEHS) to invest in children’s health. The EPA lauded this history just last October, noting the immeasurable value and singular focus of improving the health of children across every community. The partnership established a joint program to competitively fund community-based Children’s Centers across the country—centers where teams of researchers and child health experts come together to study and reduce environmental health risks to children today and into the future: research on toxic substances linked to illnesses such as asthma, cancer, autism, and attention deficit hyperactivity disorder (ADHD) that can rob children and their families of the many joys of childhood and affect a child for a lifetime.

Over its history, the partnership program has awarded 46 grants totaling over $300 million to Children’s Environmental Health and Disease Prevention Research Centers from coast to coast (see map of grantees over time, pp. 10-11). And the research has been prolific (the EPA’s word, not mine), with more than 2,500 publications contributing to the store of knowledge on environmental impacts on children’s health.

In its 2017 impact report, the agencies detail the compelling need for this research and the impressive accomplishments to date. So it’s a real disappointment—although not really surprising—that the EPA has decided to pull out of this decades-long partnership and stop funding grants to children’s environmental health research centers. More on this below, but first…

The need for children’s health research

Children are not little adults; they are uniquely vulnerable to environmental risks. Their organs and systems are rapidly developing; they eat, drink, and breathe more than adults relative to body mass; and their behaviors make them more susceptible to environmental exposures. In addition, prenatal exposure to environmental toxicants can result in preterm births, low birth weight, and birth defects that can impact a child’s health and quality of life well into the future.

The American Lung Association (ALA) reports that nearly 141.1 million people live in counties where monitoring shows unhealthy levels of ozone or particle pollution, or both. That’s 43.3% of our nation’s population breathing unhealthy air. This helps explain why some 6.2 million children in the US have asthma (for more on childhood asthma, see info from the ALA and the Centers for Disease Control). In addition to the very real social impacts that asthma has on children (and their parents)—like missing school, being unable to fully engage in outdoor activities, the often frightening visits to emergency departments, and the need to take medication—are the enormous economic costs. A recent study of the economic costs of pediatric asthma in the US found direct costs totaling $5.92 billion, with average annual costs per child ranging from $3,076 to $13,612.

There are equally compelling data on other serious health outcomes in children exposed to environmental toxins, from cancer and neurodevelopmental disorders to acute and chronic impacts associated with lead, arsenic, pesticides, and toxins in consumer products, like phthalates and BPA.

The accomplishments of the Children’s Centers

The EPA has lauded the accomplishments of the partnership, noting that “Through their groundbreaking work, the Children’s Centers have pushed the boundaries of clinical, field, and laboratory–based research. The centers have been hubs for research across disciplines critical to children’s health—from medicine, toxicology, genetics, epidemiology, and biology to social science, statistics, and informatics. The research has been disseminated through thousands of publications in diverse and peer-reviewed journals. The research findings lay a critical foundation for reducing health risks and improving quality of life for children and adults” (Impacts Report, page 12).

Examples of groundbreaking research include:

  • Effects of the pesticide chlorpyrifos on children’s brains
  • Effects of maternal exposure to air pollution on preterm birth and reduced birth weight
  • Associations between leukemia and exposure to tobacco smoke, pesticides, paint, organic solvents, polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and PAHs
  • Cognitive and behavioral effects associated with prenatal exposure to airborne polycyclic aromatic hydrocarbons (PAHs)
  • Potential links between air pollution, pesticides, occupational exposures, phthalates, and risk of autism spectrum disorder in children
  • How chemicals in plastics and household items affect reproduction and onset of puberty

In addition, the centers go well beyond publishing their research in scientific journals. They serve as community resources, engaging in outreach, communication, and collaboration with community partners and organizations to disseminate vital scientific information on the ground. Their research also provides critical information for public health and environmental policy (for example, see here, here, here, here, here).

Another step backwards on children’s health

While the EPA’s decision to stop funding grants through the Children’s Environmental Health Centers may seem like a head-scratcher (or like looking a gift horse in the mouth, as my dear old dad would have said), it’s actually not so surprising.

Findings from long-term studies of the effects of chemicals on child health often pose economic, regulatory, and reputational headaches for the chemical industry and other producers. And industry interests are certainly top of mind for the Trump administration. The decision to cut this funding is just the latest in a pattern of rolling back public health and science-based protections at the EPA.

For example, the agency has decided that it’s no longer “appropriate and necessary” to regulate mercury and air toxics emissions from power plants. Yes, mercury! That well-known bad actor, especially hazardous to pregnant women, to the neurological development of their fetuses and to young children—causing impairments that can last a lifetime.

And let’s not forget the agency’s decision to override its own science on the damaging effects of the pesticide chlorpyrifos on children’s developing brains. The Ninth Circuit Court of Appeals saw it differently and ordered the EPA to finalize its proposed ban on chlorpyrifos, determining that the EPA’s 2017 refusal to ban the chemical was unlawful because the agency failed to justify keeping chlorpyrifos on the market while the scientific evidence very clearly pointed to the link between chlorpyrifos exposure and neurodevelopmental damage to children, as well as risks to farmworkers and users of rural drinking water.

Research today—healthy kids and adults tomorrow

Protecting our children from environmental harms is surely something we can all agree on. Doing the science to understand, reduce, and prevent these risks to children’s health takes knowledge, skills, interdisciplinary collaboration, and commitment. Researchers in the centers have certainly demonstrated all of that over the past two decades. But they need financial resources to continue this important and evolving work.

While we call out the EPA for its regressive decision, let’s also give a shout-out to the National Institute of Environmental Health Sciences (NIEHS). Though it alone will be hard pressed to close the financial gap left by the EPA’s abrupt exit from the partnership, the agency has publicized plans to fund five grants for new “Children’s Environmental Health Translation Centers.”

And let’s join together as advocates for children’s health and our collective future to let our elected leaders know that continued funding for such centers of excellence is in our national interest. To me, it’s a no-brainer.


How Scientists are Responding to On the Media’s Reporting on Researcher Harassment


The prolific public radio show On The Media last week explored how open records laws are used to disrupt public interest research at public universities and examined the challenges of creating laws that allow for scrutiny while protecting free speech. The reporter, Alana Casanova-Burgess, treated the issue with the complexity it deserves, and in the days since, I’ve heard from several professors around the country by email and phone.

In case you haven’t followed the issue, a recap: companies and activists increasingly seek private correspondence of public university researchers through open records requests, hoping to find information they can take out of context to confuse the public and discourage public interest research. North Dakota and Rhode Island recently modernized their open records laws, and a California legislator introduced a bill to improve California’s law. The legislation was recently put on hold while supporters and opponents work out their differences.

For some researchers who heard the show, the chilling feeling was all too familiar. Several academics doing great research reached out to me to thank me for being interviewed for the story. Not one wanted to go public for fear of putting a target on their back.

“I teach at a public university in Texas,” wrote one scientist. “Although I don’t get any grant funding, I am super careful about what I put on my syllabus and in emails. All I know is how much I self-censor—not an easy thing for me to do when I teach about [public] health.”

One of my undergraduate degrees is in rhetoric and communication theory, so how information and misinformation spreads has always been a strong interest and area of study for me. And I’m so thrilled that the California legislation has helped amplify a national conversation about this issue. Through On the Media and other reporting, millions of people have been introduced to the issue of harassment of researchers. And through this public conversation, we are finding where there is common ground.

“You have a bunch of groups that all think they’re doing what’s good for inquiry and democracy,” Berkeley Law Professor Claudia Polsky, who wrote a recent law review article about this issue, told the show. “We’re all making the same core moral claim but it leads us to a very different perspective on records requests.”

That’s one of the reasons I believe there is a long-term solution to this problem: since we all want to protect research and promote accountability, we just need to agree, to the extent practicable, on the right mechanisms that can get us there. UCS won’t support legislation that goes too far and compromises accountability. But we still want to see laws that protect the ability of scientists to pursue policy-relevant research regardless of the results.

Electric Power Authority Deal Is Terrible for Puerto Rico

UCS Blog - The Equation (text only) -


A new agreement on the debt of the Puerto Rico Electric Power Authority (Autoridad de Energía Eléctrica, or AEE) represents a major setback for the island's future.

It is not news that the AEE is bankrupt, or that privatizing it is a priority for Governor Rosselló's administration. Nor is Puerto Ricans' concern about this process new, given the dire implications it could have.

Those implications include runaway increases in electricity rates and the worsening of public health and environmental problems caused by the improper handling of coal ash, air pollution, and the emissions driving the climate crisis.

These concerns are being confirmed by the recently announced agreement on restructuring the AEE's debt, reached among the Fiscal Oversight Board (Junta de Control Fiscal, JCF), the majority of AEE bondholders, an AEE bond insurer, and the Rosselló administration.

To explore the implications of this agreement, I spoke with Dr. Agustín Irizarry, professor in the Department of Electrical Engineering at the University of Puerto Rico at Mayagüez.

 

What does the agreement do?

The agreement imposes a fee called the "debt charge" to cover the AEE's inherited deficit. All AEE customers will have to pay this charge starting this summer and continuing through 2067.

In addition, the debt charge will apply to those who currently have, or who install in the future, their own electric generation systems.

 

Why is the AEE agreement worrying?

For multiple reasons:

  • Puerto Ricans will pay more than double the value of the AEE's debt. The agreement establishes that for 47 years Puerto Ricans must pay the debt charge to cover a deficit of roughly $9 billion. The current rate of 22 cents per kilowatt-hour (kWh) will rise by 2.8 cents/kWh in 2020 (before the elections); starting in 2043 the surcharge will be 4.55 cents/kWh and will stay there for more than 20 years. This means that for a debt of roughly $9 billion, Puerto Ricans will pay more than $23 billion, not counting another $100 million to $200 million to cover administrative expenses (see the rough calculation sketched after this list).
  • Customers who generate their own power will also pay the debt charge. This is a moment when people are looking for alternatives so they never again spend months without power, as so many did after Hurricane María. That is why, after the hurricane, everyone wants a home solar system with a battery to store energy. The centralized electric system has shown enormous vulnerability, which is exactly why self-generation should be encouraged. Instead, the agreement establishes that the debt charge will also apply to those who have, or who install, their own generation systems. Anyone who starts generating their own energy with solar panels after September 30, 2020 will have to begin paying the charge immediately. And those who installed their own generation before that date and are connected to the grid will have to begin paying the debt charge on the energy they produce starting in 2040.
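To get a feel for the scale of these numbers, here is a minimal back-of-the-envelope sketch, in Python, of the debt-charge arithmetic in the first bullet. The annual sales figure (about 16 billion kWh) and the simplified two-step surcharge schedule are illustrative assumptions, not the official repayment schedule:

# Rough check of the debt-charge arithmetic (a sketch, not the official
# repayment schedule). Annual sales and the two-step surcharge schedule
# are illustrative assumptions.
ANNUAL_SALES_KWH = 16e9  # assumed AEE retail sales per year

def total_debt_charge(start=2020, step_year=2043, end=2067,
                      early_rate=0.028, late_rate=0.0455):
    """Sum the surcharge collected each year, in dollars."""
    total = 0.0
    for year in range(start, end + 1):
        rate = early_rate if year < step_year else late_rate
        total += ANNUAL_SALES_KWH * rate
    return total

print(f"Collected 2020-2067: ${total_debt_charge()/1e9:.0f} billion")

Under these assumptions the surcharge collects on the order of $29 billion against a roughly $9 billion debt, consistent in magnitude with the more-than-$23-billion figure above; actual sales and the real surcharge schedule will differ.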
Satellite images of Puerto Rico at night before and after Hurricane María.

Puerto Rico in the dark after María.

What alternatives are there?

The agreement as conceived guarantees payment of the debt, but it offers nothing to:

  • increase the reliability of the electric grid,
  • reduce air pollution, improving Puerto Ricans' health, and
  • reduce the emissions driving the climate crisis.

Electricity rates have not gone up since 1989, and that has contributed in part to the lack of investment in electric infrastructure, with the corresponding impact on the quality of service.

What should happen is this: an agreement that benefits only the bondholders should not be signed. Instead of signing this deal, a planned rate increase should be used to pay off the debt within 15 years, improve the reliability of the electric grid, and help Puerto Ricans transition to a decentralized system that provides excellent service and responds to the challenges of our time. That would be a key step toward increasing the system's resilience to natural disasters like María.

 

How can the AEE agreement be stopped?

Before it can take effect, the agreement must first pass through the legislature, the energy commission, and the bankruptcy court. We must alert the public so people know what is being proposed and act to prevent this agreement's approval. Only then can we protect our energy future.


Jose Jimenez Tirado / Getty Images file

If 3M Really Cares About the PFAS Science, Here’s How They Should Move Forward

UCS Blog - The Equation (text only) -

Photo: Dylan McCord/US Navy

Well, well, well, 3M. I’m glad to hear you are concerned about the science of PFAS, but let’s put some walk to that talk.

Let me explain. As I was googling several PFAS-related search terms last week, I kept seeing a targeted ad at the top of the results titled "3M | Believe Science, not opinion: We proactively minimized PFAS impact, investing $50 million in carbon filtration systems." Clicking it brought me to a piece on the Washington Post website titled "Why We Support Regulation of PFAS. Here's How to Move Forward," written by John Banovetz, 3M's Senior Vice President of Research and Development and CTO. It reads like an opinion piece (but presumably without the rigorous review required to get on the actual op-ed pages of the Post) and leans heavily on messaging about supporting science and "thoughtful" regulation.

Well, 3M, I know you think that your company has pulled out all the stops by spending a grand total of $100 million in testing water sources and $50 million in installing carbon filtration systems, but no one is impressed. Not only is it pocket change for a company with $33 billion in sales in 2018, but it doesn’t even come close to covering the past, current, and future health costs of the toxic chemical burden imposed on millions of bodies thanks in part to your company. Clearly you’ve noticed the magnitude of this issue, since your 2018 financial statement spends over six pages listing the many lawsuits involving your company related to PFAS.

Since you don’t seem to have a grasp on what being a responsible company looks like, I have created this handy roadmap for you to follow as you think about how you can help inspire confidence in drinking water. Here’s how 3M needs to move forward:

1. Stop lying about the science (and everything else)

We already know you covered up the science on the impacts of PFOA and PFOS decades ago, so why not just admit what you have long known: that PFOA and PFOS are hazardous and linked to a range of health problems, and that data on replacement PFAS point to similar impacts. You can post ads in every newspaper in the country asserting your interest in science and science-based policies, but you won't gain any trust until you start being honest about the mess you've made and start contributing to solutions that aren't just band-aids. You can also help build knowledge about the scope and magnitude of PFAS contamination by endorsing legislation like the PFAS Detection Act, which would charge the USGS with conducting nationwide sampling for PFAS in waterways, wells, and soil.

2. Clean up your mess

Once you start leveling with the public about your role in this public health emergency, you can support proactive legislative solutions instead of lobbying against them, pledge to swiftly and thoroughly clean up already contaminated Superfund sites, and work with scientists across the country to figure out how best to remediate those sites and safely dispose of PFAS waste. A good start would be to endorse the PFAS Action Act, which would designate PFAS as hazardous substances under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). That would feed into the PFAS Right To Know Act, which would require the EPA to list PFAS under its Toxics Release Inventory, letting communities know how much PFAS nearby facilities release. Giving communities information on where and how much PFAS are being released, and then creating a trigger for cleanup of contaminated sites, would get the ball rolling on protecting our water.

3. Pay for the damage done

As you clean up the sites that are clearly your responsibility, you'll pay what you owe instead of shifting the burden onto small communities that are currently paying their local water utilities to filter their water, or to connect them to an alternate supply, for contamination that is not their fault. You'll also pay for the technology that water utilities need to test for the variety of PFAS used in your products. You put it into the environment; it's now your job to figure out how to find it so it can be rooted out. Remember, it's your fault, so you need to fix it. The House has introduced infrastructure legislation, the LIFT America Act, that would jumpstart this process by providing $2.5 billion in grants to PFAS-impacted communities to help clean up their water. What about a dollar-for-dollar match from 3M?

4. Don’t let it happen again

In order to protect public health and the environment and ensure the long-term viability of your company, you need to seek alternatives for the entire class of PFAS. The phase-out of PFOA and PFOS was just a baby step. The body of evidence on chemicals within this class makes a strong case for regulating PFAS as a class, since their toxicological properties are similar. You need to accept this and dedicate research and development resources to developing alternatives while you phase out the manufacture and use of short-chain PFBS and whatever's next in the PFAS pipeline.

A suite of legislative solutions is just the start

According to your Washington Post piece, you support the EPA setting a national standard for PFOA and PFOS but have issues with the pieces of legislation on the table. There are some very important bills out there that will allow EPA to use the science to inform regulatory pathways under the Safe Drinking Water Act, Clean Water Act, and CERCLA. These bills would not create shortcuts to regulate PFAS without agency officials making decisions based on the best available science. Whether that science is convenient for you is another story.

If you really believe that science should determine PFAS regulation, then we agree and the next steps are very clear. I spent last Wednesday on Capitol Hill with a group of people adversely affected by PFAS. There were stories of disease and death, stories of lives that were altered forever by you and your industry’s negligence with these chemicals. You would be best served to listen to these stories, take them seriously and take action to ensure more people don’t face the same fate.

On Wednesday, May 15th I was fortunate enough to meet this inspiring group of people impacted by PFAS contamination who traveled to DC from across the country to advocate for legislative action on PFAS.

There are people who can’t drink their water, can’t swim in their lakes, can’t fish or hunt on their land, can’t sell their property, can’t pursue their dreams, and can’t say goodnight to their loved ones thanks to the chemicals you unleashed into the environment. You owe it to them to change course.

Xcel Energy’s Plan to Eliminate Coal and Boost Solar in Minnesota

UCS Blog - The Equation (text only) -

Photo: Zbynek Burival/Unsplash

Today, Xcel Energy released a preliminary plan to phase out its remaining coal-fired power plants in Minnesota and replace them primarily with wind, solar, and energy efficiency—moving the company forward toward its goal of 100% carbon-free electricity by 2050.

Part of the plan involves a consensus proposal joined by the Union of Concerned Scientists, other clean energy organizations, and the Laborers International Union of North America.

Below are some of the noteworthy items included in the consensus proposal and Xcel’s plan—and how they relate to Minnesota’s clean energy future.

1. Coal plant retirements

Xcel Energy will propose a 2028 retirement date for the Allen S. King coal-fired power plant and a 2030 or earlier retirement date for the Sherco 3 coal unit. Significantly, these plants are the company’s last coal-burning power generators in Minnesota for which Xcel has not yet announced retirement dates.

In the meantime, Xcel will also reduce coal usage at the Sherco 2 unit by committing to seasonal operation of the plant, a concept that my colleague Joe Daniel has written about here. Seasonal operation of coal plants has saved other utilities' customers money, and it promotes the grid flexibility Xcel needs to integrate more renewable energy in the future.

Reducing and phasing out coal burning delivers major cuts in carbon pollution as well as public health benefits from lower soot and smog emissions (see our Soot to Solar report from last fall on Illinois coal plants).

2. Massive growth in solar power

Minnesota currently has close to 1,100 megawatts of installed solar power statewide. As part of the agreement with clean energy organizations, Xcel will propose the acquisition of 3,000 megawatts of solar power to add to its system by 2030. This is enough to provide 20 percent of Xcel's energy, powering the equivalent of more than 417,000 homes and further reducing carbon pollution from the electric sector. This would come on top of the 1,850 megawatts of wind power Xcel is adding by 2022.
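Those figures check out on the back of an envelope. Here is a short Python sketch; the capacity factor and per-home consumption below are assumed round values, not Xcel's planning inputs:

# Back-of-the-envelope check of the homes-powered figure (a sketch;
# the capacity factor and household usage are assumptions).
SOLAR_MW = 3000
CAPACITY_FACTOR = 0.18   # assumed average output fraction for Minnesota solar
HOURS_PER_YEAR = 8760
HOME_USE_MWH = 10.8      # assumed annual electricity use per home, MWh

annual_mwh = SOLAR_MW * CAPACITY_FACTOR * HOURS_PER_YEAR
print(f"Annual output: {annual_mwh/1e6:.1f} million MWh")
print(f"Homes powered: {annual_mwh/HOME_USE_MWH:,.0f}")

With these assumptions, 3,000 MW would generate about 4.7 million MWh a year, enough for roughly 438,000 homes, in line with the more-than-417,000-homes figure above.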

3. Big commitment to energy efficiency

According to the Center for Energy and Environment, in 2018 Xcel achieved a record amount of energy efficiency: more than 680 gigawatt-hours of electricity savings, or about 2.35% of sales (well exceeding the state's 1.5% energy savings target). In the consensus proposal, Xcel commits to plan for electricity savings above that 2018 level in every year of the coming decade (2020-2029).

This ambitious goal is based on the Minnesota Department of Commerce’s statewide energy efficiency potential study and would allow Minnesota to potentially join other states that are achieving 2-3% per year in efficiency savings.

Investing in energy efficiency helps utilities avoid more expensive measures and helps reduce customers' energy bills, which promotes energy affordability and reduces the energy burden.

4. Support from labor

The Laborers District Council of Minnesota and North Dakota (“LIUNA Minnesota”) joined the consensus proposal alongside UCS and other clean energy organizations. As part of the agreement, Xcel will commit to a request-for-proposals process for solar projects that maximizes local job creation and participation in apprenticeship programs.

5. The role of gas and nuclear

Gas-fired power plants are not clean resources and investing in them is a risky proposition for electric utilities. The Union of Concerned Scientists is part of a group challenging approval of Minnesota Power’s plan to build a new gas plant in Superior, Wisconsin.

However, we and other signatories to the consensus proposal are supporting Xcel’s acquisition of the Mankato Energy Center, an existing gas plant currently owned by Southern Company. Why?

Xcel already buys power from the Mankato plant, and the acquisition is being pursued in combination with the above aspects of an overall plan to decarbonize the company’s generation portfolio. Our analysis is that acquisition of the Mankato plant will not have significant impacts on greenhouse gas emissions and will help Xcel phase out its Minnesota coal plants by:

  • Reducing system costs associated with early coal retirements and incentivizing the decarbonization of sectors outside the electricity sector;
  • Displacing the need for large additions of gas combustion turbine generation in the 2030s and 2040s; and
  • Putting a large carbon emitter (the Mankato plant) under the oversight of the Minnesota Public Utilities Commission, an important step in ensuring beneficial resource planning for a carbon-free future.

Utilities are unfortunately rushing to build new gas infrastructure despite there being enough gas capacity online to meet demand. Still, Xcel Energy is not backing off from its commitment to be net carbon-free by 2050 and emissions from the Mankato plant will fall under that cap if the acquisition is approved and Xcel owns the plant.

With respect to nuclear, while it is not part of our consensus proposal, Xcel’s preliminary plan also includes an expectation of relicensing its Monticello nuclear plant and operating it at least until 2040. (To date, no nuclear reactor in the United States has received approval from the Nuclear Regulatory Commission to extend its operating license beyond 60 years, but three applications are currently pending.) This concept will require close examination by stakeholders and regulators on whether it is the most cost-effective path toward a 100% carbon-free electricity future and whether the plant can continue to operate safely beyond 60 years.

What are the next steps?

The consensus proposal will be reviewed by the Minnesota Department of Commerce and other stakeholders in a proceeding currently pending before the Minnesota Public Utilities Commission.

Stakeholders also can weigh in on Xcel's preliminary plan prior to the company's integrated resource plan filing, slated for July 1, 2019.

Finally, the measures outlined by Xcel Energy show that a low-carbon electricity system is achievable in Minnesota and should further support the legislature’s consideration of clean energy measures that I blogged about earlier this month, including establishing a goal for 100% carbon-free electricity by 2050.


California’s Infrastructure Earns a C-. We Need More Equitable and Climate-Safe Infrastructure Now

UCS Blog - The Equation (text only) -

Last week, California’s infrastructure got its report card. Engineers from Region 9 of the American Society of Civil Engineers (ASCE) evaluated the state of our roads, dams, electric grid, schools, and other critical infrastructure, as they do every six years. This time around, the Golden State earned a grade of ‘C-,’ or “mediocre and requires attention.”

As the mother of an infant, this assessment struck a special chord with me. I count on the quality and reliability of our roads, water and wastewater systems, and electric grid to help me keep my daughter safe from harm and provide an environment where she can thrive. Many other parents do, too.

These expectations seem reasonable. They will, however, become even harder to meet in the face of continued underinvestment and disinvestment in communities and more frequent and severe climate-related extreme events here in California and beyond. These issues must be key considerations in infrastructure decisions and solutions moving forward.

Making the grade

California's 'C-' grade is better than the nation's 'D+' but worse than the 'C' the state earned in its previous evaluation in 2012 – despite billions of dollars in state and local investments since then. What's more interesting and sobering are the individual sector analyses. Our roads, energy systems, stormwater systems, and levees all received poor grades ('D,' 'D-,' 'D+,' and 'D,' respectively). A couple of the many stated reasons:

  • Electricity outages affected nearly 4 million Californians per year on average between 2008 and 2013, and roughly 3 million people in 2017. One study found that California had the most reported outages of any state in 2015, 2016, and 2017.
  • A significant portion of our stormwater drainage infrastructure pre-dates the 1940s and requires repair or replacement for continued use and protection of communities.

ASCE estimates it would take investments of hundreds of billions of dollars over the next couple of decades to upgrade these and other critical sectors to a 'good' condition, or 'B.' Recent state and local bonds and voter-approved propositions, including SB 1 for transportation and Prop 1 and Prop 68 for water, provide a sizable down payment toward this goal.

Moving beyond averages and historical trends

California is a massive state and home to nearly 40 million people. While a single grade can serve as a helpful benchmark, it also masks the varying quality of infrastructure throughout the state that contributes to disparities in health, economic opportunities, and quality of life. As a result of decades of underinvestment and disinvestment in low-income communities and communities of color, families are left relying on infrastructure (or the lack thereof) that fails to meet even basic needs. One example is the low-income unincorporated communities in the San Joaquin Valley that lack access to clean, safe, and affordable drinking water due to pollution, groundwater depletion, and insufficient wastewater treatment and disposal systems. Examples exist for transportation, schools, and other sectors as well. Infrastructure solutions should address these inequities and prioritize investments in the communities that need them most.

In addition, we need our infrastructure to function during extreme weather events, from floods to droughts and wildfires to heat waves. The best available science reminds us that no sector or region will be left untouched as these events become more severe and frequent due to climate change. Efforts to improve and expand our critical infrastructure must plan for this new reality, rather than continuing to assume that the past is a good predictor of the future. The good news is that the work of the AB 2800 Climate-Safe Infrastructure Working Group provides a useful framework for the necessary fundamental shifts in design, planning, investments, operations and maintenance.

Remembering why

Budget conversations are continuing in Sacramento. Discussions on a federal infrastructure package are moving forward with the introduction of the LIFT America Act, thanks to the leadership of House Energy and Commerce Committee Chairman Frank Pallone, Jr. During this process, it's important for policymakers to keep in mind the people that the state's and nation's infrastructure is meant to serve. UCS will be watching closely to see what and who they prioritize.

We look forward to working with Governor Newsom’s administration and the US House of Representatives and US Senate on equitable, clean, and climate-safe solutions. Our state and nation must invest in such infrastructure as if our safety, quality of life, and livelihoods depend on it – because they do.

Hot Arctic and a Chill in the Northeast: What’s Behind the Gloomy Spring Weather?

UCS Blog - The Equation (text only) -

When temperatures hit the 80s Fahrenheit in May above latitude 40, sun-seekers hit the parks, lakes, and beaches, and thoughts turn to summer. By contrast, when temperatures lurk in the drizzly 40s and 50s well into flower season, northerners get impatient for summer. But when those 80-degree temperatures visit latitude 64 in Russia, as they just did, and when sleet disrupts Mother’s Day weekend in May in Massachusetts, as it just did, thoughts turn to: what is going on here?

Hot arctic

Before we jump into the science, let's take a quick look at the unusual spring weather. This past weekend, Russia was the scene of record-high temperatures. A city above the Arctic Circle—Arkhangelsk—recorded a high of 84 degrees Fahrenheit on May 11 at the Talagi Airport weather station. The average high in Arkhangelsk this time of year is around 54 degrees Fahrenheit.

Gloomy weather

Meanwhile in the Northeast US, try having a conversation that doesn’t loop back to the endlessly gloomy, chilly, unseasonable weather. When gloomy weather becomes such a dominant topic of conversation in a region, a form of citizen science is occurring, and it tells you something: it is unusual, it is anomalous, it is downright wacky.

Many locations are not seeing the sun nearly as much as is normal for this time of year, as memory serves and science confirms. The Long Island town of Islip, New York, recorded its longest streak of rainy days on record from April 20 to May 7. In Boston, it rained on 21 days this April.

It's not just the Northeast: repeated rain events left much of the contiguous US ranked in the 99th percentile for soil moisture on May 14, including many of the Plains states (South Dakota, Nebraska, Kansas, Oklahoma, and Texas) and most states eastward. This continues the pattern of high soil moisture ranking percentiles seen in recent months (see January-April 2019 in Figure 1). Soil moisture ranking percentiles are relative to the 1948-2000 climatology.

As of this writing, headlines with exasperated tones are wondering when winter will truly depart.

In one of those articles, Jason Samenow describes the abnormal late-May forecast of snow, hail, tornadoes, flooding, and excessive heat for different parts of the contiguous US over the coming days.

Figure 1. Continental US monthly soil moisture ranking percentile for January-April 2019. Repeated rain events left a large portion of the contiguous US ranked in the 99th percentile for soil moisture on May 14. Source: CPC NCEP NOAA

Damages

Unfortunately, the consequences of these gloomy, chilly, and rainy or snowy conditions are very real in terms of damages, both personal and in the larger economy. People are taking time away from work—lost labor hours—to deal with them. People are pumping water out of basements and throwing away cherished items lost to water damage.

Some of the flooding is from intense storms like the two rare interior US bomb cyclones that caused flooding and prompted governors to spring into action, calling on the National Guard. There is a current backlog of unmet disaster relief requests. Some of the flooding is from water tables rising since relentless repeated rain events don’t allow the soil enough time to dry out.

The natural and human-driven aspects of flooding are critical to tease apart so we can better prepare our communities for the flood risk of today and the changing flood risks of the decades ahead. This is especially important when investing dollars in infrastructure that are anywhere near surface water or groundwater (also known as the water table).

No words. It’s May 14. #snow #isurrender #uncle #youcanleavenow #vt pic.twitter.com/8d5ouh51G7

— Anson Tebbetts (@anson_ag) May 14, 2019

Eurasian October snow cover extent indicator

It may seem counter-intuitive, but the story of the strange weather unfolding this spring in the US is related in part to snow last October in Eurasia. This indicator—the Eurasian October snow cover extent indicator—is proving to be worthy of additional attention by US weather geeks. The good news is that the scientists who were paying attention to the Eurasia snow extent behavior during October, along with a host of other indicators, gave advanced warning of the emerging US winter and spring weather pattern for 2018/2019. Winter sports enthusiasts rejoiced and sought the snow-peaked slopes of Colorado and Utah.

The bad news is that it can feel extremely bouncy, swinging between record-breaking cold and record flooding with only temporary relief over these past months. It can feel like riding a seesaw. But the lasting memory is the major pattern, and that becomes the talk of the region: terrific winter snowpack, tragic flooding, and a gloomy Northeast.

You may wonder about the Eurasian snow extent indicator and the broader connections. I encourage those who want to know to spend some time clicking on the links here, or the links in earlier blogs that point to even more information (see here, here, here, and here). These describe how Arctic sea ice decline, particularly in the Barents-Kara seas north of Scandinavia and Russia, contributes to ocean and atmosphere behavior; which contributes to Eurasian snow cover extent behavior; and ultimately to a wavy jet stream with episodic cold outbreaks over winter and spring in the Northern Hemisphere, including the US.

Here is an example of the science, as Judah Cohen explains: "There is a growing consensus that it is Barents-Kara sea ice in the late fall and early winter that has the greatest impact across Eurasia. Therefore, low Barents-Kara sea ice in November for example, favors a strengthened Siberian high, increased poleward heat flux, a weak stratospheric Polar Vortex and finally a negative Arctic Oscillation. An important point regarding the Siberian high is that it strengthens or expands northwest of the climatological center. For low snow cover and/or high sea ice the opposite occurs." Translation: a weakened polar vortex means more cold outbreaks deep into US territory, like this past winter and spring.

We know that burning coal, oil, and gas, and the resulting global warming, have caused dramatic declines in Arctic summer sea ice extent (the minimum occurs in September). It takes longer to cool a warmer-than-normal Arctic Ocean enough to grow new sea ice, or to thicken remnant ice, in the following October and November. With each successive decade we are more likely to see years with low Barents-Kara sea ice extent, which keeps weather geeks monitoring a jargon-heavy list of indicators (sea ice extent, Eurasian snow cover extent, the stratospheric polar vortex, the El Niño Southern Oscillation, the North Atlantic Oscillation, the Arctic Oscillation, and more) to improve US seasonal outlooks.

This is little consolation to those throwing out their flood-soaked cherished items from Kansas to Maine this spring season.

Photo: Climatereanalyzer.org CPC NCEP NOAA

A Stroller Debacle at CPSC Politicizes Child Safety and I Have No Chill

UCS Blog - The Equation (text only) -

I’m a self-proclaimed transparency nut. But now that I’m a mom, my need for information has grown exponentially. I want a label on baby food that tells me how much added sugar is in it. I want to know whether my daughter’s car seat or mattress contains organohalogen flame retardants. And I certainly want to know whether the stroller I’m using to cross busy DC streets is safe. But apparently that last bit is none of my business and that’s okay with some federal regulators who care more about acquiescing to industry wishes than keeping kids safe.

President Trump’s CPSC turns child safety into a partisan issue

The Washington Post recently reported that despite the evidence and staff scientists’ opinions that the Consumer Product Safety Commission (CPSC) should recall a jogging stroller shown to result in injuries to children (and their parents), the commission worked with the company, Britax, to avoid the measure. From 2012 to 2018, over 200 documented injuries came to CPSC by way of its reporting mechanism, saferproducts.gov, leading agency staff scientists to pursue an investigation that lasted nearly a year. The agency’s health sciences division found that children could suffer “potentially life-threatening injuries” from the common issue of front-wheel detachment. CPSC staff ran engineering tests, put together injury reports, and pored over epidemiological data, eventually starting the recall process by issuing a preliminary determination that the front wheel of the stroller was a “substantial product hazard.”

But right as this was happening, the agency was transitioning from a Democratic to a Republican chair and majority on the 5-member commission. President Trump named Ann Marie Buerkle acting chair of the CPSC, and she awaits Senate confirmation for the position. Buerkle was appointed to the CPSC by President Obama in 2013 and has a history of siding with companies peddling unsafe products. According to sources within the agency, she kept information about the ongoing investigation from the Democratic commissioners long enough that key decisions about the potential recall would happen after more Republican commissioners, including Dana Baiocco and Peter Feldman, were appointed.

When it came time to vote on the settlement with Britax, the two minority commissioners wrote a dissent that called it "aggressively misleading" to consumers. The company got off the hook by promising to initiate a public-safety campaign and offer replacement parts to customers. But the cherry on top of this story is that the replacement parts Britax sent to customers to deal with the strollers in question were also defective. Not only did the company get out of the hassle of a recall, it has since maintained that there was no defect in the product and has accused the parents reporting injuries of using it wrong. I mean, come on! This is exactly why the strollers should have gone through the full recall process to begin with.

Further, the value of a child’s life should not be decided based on political affiliation. Republicans and Democrats alike should be able to band together to hold companies accountable to keep our kids safe, not align with the companies who seem to care more about playing the blame game than engineering safe products.

The value of consumer product regulation

There is nothing in this world that I want protected more than my daughter's life. That's why I value the mission of the CPSC and the work that has been done to help improve the safety of consumer products since its inception. Last year, UCS wished the Consumer Product Safety Improvement Act a happy 10-year anniversary. That law addressed a long list of issues with product safety and transparency and gave the agency the power it needed to enforce provisions that keep us safe. We've come so far in getting rid of lead paint in children's toys, requiring safety standards for manufacturers of cribs and other children's furniture, and making it easier for consumers to share their experiences with the agency directly. It's a relief to know that there's a government agency holding companies accountable for the safety of the products they put on the market and that we buy for ourselves and our children. It's one less thing for parents to worry about.

That's part of why it's so infuriating to see CPSC commissioners with agendas thwart the very mission of the agency. Marietta Robinson, a CPSC commissioner from 2013 to 2018, wrote in a letter to the editor responding to the Washington Post report: "The agency was formed more than 45 years ago for the very purpose of protecting consumers from unreasonably dangerous products such as the Britax stroller." Without an official recall, people who buy these strollers used or from third-party sellers on Craigslist or Facebook Marketplace are rolling the dice. I say this as someone who is currently browsing those sites for used strollers and finding listing upon listing for these models without any disclaimer about their safety issues.

With no posting on the CPSC’s website, consumers have to rely on a Washington Post investigation to make a purchasing decision. This is unacceptable. It’s a clear demonstration of the importance of regulators looking out for public health and safety, not the bottom lines of the regulated industry. CPSC commissioners need to listen to their staff recommendations and stop politicizing consumer safety measures, and Senators need to take a long, hard look at Buerkle’s history and this case in particular when her confirmation vote comes up.

Photo: John and Christina/CC BY-NC-SA 2.0 (Flickr) Craigslist

5 Reasons Why HB 6, Ohio's Nuclear Plant Subsidy Proposal, Should Be Rejected

UCS Blog - The Equation (text only) -

Photo: Nuclear Regulatory Commission

Last November, UCS released Nuclear Power Dilemma, which found that more than one-third of existing nuclear plants, representing 22 percent of total US nuclear capacity, are uneconomic or slated to close over the next decade. This included the Davis-Besse and Perry plants in Ohio that are owned by Akron-based FirstEnergy Solutions. Replacing these plants with natural gas would cause emissions to rise at a time when we need to achieve deep cuts in emissions to limit the worst impacts of climate change.

When we released our report, my colleague Jeff Deyette described how a proposal backed by FirstEnergy to subsidize its unprofitable nuclear plants in Ohio was deeply flawed and did not meet the conditions recommended in our report. Ironically, by providing a blatant handout to the nuclear and fossil fuel industries at the expense of renewable energy and energy efficiency, the latest proposal to create a "Clean Air Program" in Ohio (House Bill 6) is bad for consumers, the economy, and the environment.

Here are five reasons why this proposal is flawed and should be rejected:

1. HB 6 doesn’t protect consumers

HB 6 would provide incentives to maintain or build carbon-free or reduced emission resources that meet certain criteria. The state’s Legislative Budget office estimates the new program would cost $306 million per year, collected through a dedicated monthly charge on consumer electricity bills. Monthly costs range from $2.50 for a typical residential customer to $2,500 for large commercial and industrial customers.

HB 6 doesn't require FirstEnergy Solutions to demonstrate need, or limit the amount and duration of the subsidies to protect consumers and avoid windfall profits, as recommended in our report. It simply sets the starting price at $9.25/MWh and increases that value annually for inflation. In 2018, Davis-Besse and Perry generated 18.3 million megawatt-hours of electricity, according to the U.S. Energy Information Administration. This means FirstEnergy Solutions' nuclear plants would receive approximately $170 million per year in subsidies, or 55% of the total. As explained below, the rest of the money would likely go to upgrading Ohio's existing coal and natural gas plants.
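The subsidy arithmetic is easy to verify. Here is a quick Python sketch using the figures cited above (starting credit, 2018 generation, and the Legislative Budget Office cost estimate):

# Rough check of the HB 6 subsidy arithmetic using the figures cited above.
CREDIT_PER_MWH = 9.25    # HB 6 starting price, $/MWh
GENERATION_MWH = 18.3e6  # Davis-Besse + Perry 2018 output (EIA)
PROGRAM_COST = 306e6     # Legislative Budget Office annual cost estimate

subsidy = CREDIT_PER_MWH * GENERATION_MWH
print(f"Nuclear subsidy: ${subsidy/1e6:.0f} million/year "
      f"({subsidy/PROGRAM_COST:.0%} of the program total)")
# -> Nuclear subsidy: $169 million/year (55% of the program total)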

2. HB 6 is a bait and switch tactic to gut Ohio’s clean energy laws

But here's the rub. HB 6 would effectively gut the state's renewable energy and energy efficiency standards to pay for the subsidies for Ohio's existing nuclear, coal, and natural gas plants. It would make the standards voluntary by exempting customers from the charges that fund these affordable and successful programs unless they chose to opt in. This could result in a net increase in emissions and a net loss of jobs in Ohio over time.

This political hit job is outrageous, but not at all surprising. It is just the latest in a long series of efforts by clean energy opponents to roll back Ohio's renewable and efficiency standards over the past five years. Combined with the stringent setback requirements for wind projects adopted in 2014, these actions have had a chilling effect on renewable energy development and explain why renewables provided a paltry 2.7% of Ohio's electricity generation in 2018 (see figure below). In contrast, renewables provided 18% of U.S. electricity generation in 2018, and wind power alone provided more than 15% of electricity generation in 11 states.

The sponsors of HB 6 go one step further and make the false claim that their proposal will save consumers money. While the charges appearing on consumer bills might be less, this ignores the much greater energy bill savings consumers have been realizing through investments in energy efficiency. In addition, the cost of wind and solar has fallen by more than 70 percent over the past decade, making them more affordable for consumers and competitive with natural gas power plants in many parts of the country. It also ignores the energy diversity benefits of renewables and efficiency in providing a hedge against natural gas price volatility. Many Ohio legislators continue to put their heads in the sand and refuse to embrace the new reality that renewables and efficiency are cost-effective for consumers.

Energy efficiency programs are especially important for low-income households. By lowering their energy bills, they have more money to spend on food, health care and other necessities. It also reduces the need for assistance in paying heating bills. Unfortunately, legislators like Energy and Natural Resources Committee Chair Nino Vitale are proposing to provide handouts to large corporations at the expense of easing the energy burden for low-income households, which are also disproportionately affected by harmful pollution from coal and natural gas power plants.

3. HB 6 creates a false sense of competition

While renewable energy technologies are technically eligible to compete for funding under HB 6, several criteria would effectively exclude them:

  • It excludes any project that has received tax incentives like the federal production tax credit or investment tax credit, a criterion that rules out nearly every renewable energy project.
  • Eligible facilities must be larger than 50 MW, which excludes most solar projects, and wind projects must be between 5 MW and 50 MW, smaller than most existing utility-scale wind projects in the state.
  • Eligible projects must receive compensation through organized wholesale energy markets, which excludes smaller customer-owned projects like rooftop solar photovoltaic systems.

When combined with the rollback of the renewable standard, these absurdly stringent criteria would create too much uncertainty for renewable developers to obtain financing to build new projects in Ohio.

4. HB 6 will increase Ohio’s reliance on natural gas

While HB 6 could temporarily prevent the replacement of Ohio's nuclear plants with natural gas, gutting the renewables and efficiency standards would undermine the state's pathway to a truly low-carbon future by locking in more gas generation as coal plants retire. Over the past decade, natural gas has grown from 1.6% of Ohio's electricity generation to more than 34% in 2018 (see figure). A whopping 40,000 MW of new natural gas capacity was added during this time, mostly to replace retiring coal plants. In contrast, the shares of nuclear and renewable generation have each increased only slightly, by 2-3 percentage points.

Ohio’s Increasing Reliance on Natural Gas for Electricity

 

While natural gas has lower smokestack emissions than coal, the production and distribution of natural gas releases methane emissions—a much more potent greenhouse gas (GHG) than carbon dioxide. To achieve the deep cuts in emissions that will be needed to limit the worst impacts of climate change, Ohio will need to reduce its reliance on natural gas. Gutting the state’s renewables and efficiency standards would take away the most cost-effective solutions for achieving this outcome.

5. HB 6 includes no safety criteria or transition plans

HB 6 does not require FirstEnergy's nuclear plants to meet strong safety standards as a condition for receiving subsidies, as recommended in our report. While Davis-Besse and Perry are currently meeting the Nuclear Regulatory Commission's (NRC) safety standards, as measured by its reactor oversight process (ROP) action matrix quarterly rating system, both plants have had problems with critical back-up systems during the past two years that put them out of compliance.

The nuclear industry has been trying to weaken the ROP for years. For example, the industry has been advocating for combining the first two columns of the action matrix, which would essentially put all nuclear reactors in the top safety category. My colleague Ed Lyman, acting director of the UCS Nuclear Safety Project, is working to stop the NRC from changing the ROP to make it a less meaningful and transparent indicator of plant safety. Our report recommends that policymakers monitor the situation and adjust subsidy policies if the NRC weakens its standards.

HB 6 also does not include any transition plans for affected workers and communities to prepare for the eventual retirement of the nuclear plants. These plans are needed to attract new investment, replace lost jobs and rebuild the tax base.

A better approach

On May 2, House Democrats announced an alternative "Clean Energy Jobs Plan" that would address many of the problems with HB 6. The plan would modify the state's Alternative Energy Standard (AES) by increasing the renewable energy contribution from 12.5% by 2027 to 50% by 2050, and would fix the onerous setback requirements that have been a major impediment to large-scale wind development. It would expand the AES to maintain a 15% baseline for nuclear power. In addition, it would improve the state's energy efficiency standards, expand weatherization programs for low-income households, and create new clean energy job training programs.

This proposal is similar to the laws recently passed in Illinois, New York and New Jersey that provided financial support for distressed nuclear plants while simultaneously strengthening renewable energy and energy efficiency standards. While our report shows that the subsidies for some of these nuclear plants may have been too generous, these policies have prevented plants from closing and resulted in a wave of new investment in wind, solar, and efficiency projects.

With more than 112,000 clean energy jobs in 2018, Ohio ranks third in the Midwest and eighth in the country. Ohio added nearly 5,000 new clean energy jobs in 2018.  While most of the clean energy jobs are in the energy efficiency industry, Ohio is also a leading manufacturer of components for the wind and solar industries.

To capitalize on these rapidly growing global industries, lawmakers in Ohio should reject HB 6 and move forward with a real clean air program that ramps up investments in renewables and efficiency and achieves the deep cuts in emissions that are needed to limit the worst impacts of climate change.

Three Ways Federal Infrastructure Policy Can Speed Up Our Clean Energy Transition

UCS Blog - The Equation (text only) -

Photo: John Rogers

May 13 marked the beginning of Infrastructure Week and, as you might have heard, there might be at least one thing that Republicans and Democrats agree on: the need to invest in our nation's aging infrastructure to remain competitive and build a more resilient, equitable system. This includes the electricity sector, where we must decarbonize our electricity supply, address growing threats to system resilience from climate change, and invest in the research and development of technologies that will power our growing clean energy economy. Here are three ways a federal infrastructure policy package could help make this happen.

Unlock investments in our electric transmission system

Transmission lines are the backbone of our electricity supply. As we transition to clean energy, we also need to invest in a more efficient and resilient transmission system.

Transmission lines are critical to delivering electricity from where it’s generated to where it’s consumed, and as the nation transitions from centralized fossil-fueled power to more dispersed renewable energy resources, we need to invest in our transmission system to efficiently carry renewable energy to our light switches and build resilience against challenges such as extreme weather events and cyberattacks.

Research shows that these investments provide benefits to consumers that outweigh the costs. But a number of hurdles remain, including complex and often dysfunctional planning and approval processes, and a failure of focused leadership at the top – namely Congress and the Federal Energy Regulatory Commission (FERC).

To address these issues, Congress should declare it a national priority to upgrade our nation’s electricity transmission system and direct FERC, which oversees our bulk electric supply, to prioritize transmission planning in furtherance of a zero-carbon, more resilient electricity supply.

Congress should also authorize and fund the Department of Energy (DOE) to provide technical assistance to state and local authorities that evaluate and approve transmission projects and to develop a national transmission plan that includes recommendations on how to take advantage of existing rights of way like railroad corridors and interstate highways.

Accelerate battery storage deployment

Battery storage can make the electricity system more reliable, affordable, secure, and resilient to extreme events – all while smoothing the way for high levels of renewable energy. This is why experts agree that energy storage should be a top federal priority – both to speed up deployment of current technologies and develop the next generation of this resource.

Current storage technologies are ready for targeted, cost-effective deployment to integrate renewable energy, offset transmission system investments, and replace fossil-fuel-powered plants, particularly those located in urban areas that impose significant public health burdens on surrounding communities. To achieve all that battery storage can offer for a clean, resilient electricity supply, Congress should fund tax incentives for battery storage investments to mobilize the private sector, while also providing grant programs for deployment in underserved communities where storage can displace fossil fuels and reduce local pollution.

Congress also has a role in funding a diverse body of research on the next generation of storage technologies that would put the United States back in a global leadership position, attract private investments, create jobs, and provide significant value to the electricity sector.

Support the infrastructure build out that will fuel the offshore wind boom

The U.S. offshore wind industry is about to take off, but federal investments in our infrastructure are necessary to make sure we're ready.

The U.S. offshore wind industry is experiencing significant growth. Robust winds, relatively shallow waters, and lots of energy demand near the coast combine to make the Central and Northern Atlantic prime for offshore wind development. Several east-coast states – led by New York, New Jersey, Massachusetts, and Maryland – are moving to procure offshore wind, pushing U.S. demand to more than 17,000 megawatts (MW). Recent estimates put the value of the U.S. offshore wind supply chain at nearly $70 billion with the potential to create hundreds of thousands of jobs.

But building out the offshore wind industry requires coordination among federal, state, local, and tribal authorities, and among a multitude of interests including commercial and recreational fishing, the Department of Defense, seagoing navigation, and compliance with protections for migratory birds and marine mammals, just to name a few. At the same time, U.S. waters offer a new set of technical challenges compared to the European offshore wind industry, which has matured over the past several years. And at this early point in the U.S. industry's growth, we don't have the ports, ships, and crews necessary to support it.

All of this calls for a proactive and robust federal role in the build out of our offshore wind industry. Ongoing coordination of stakeholders to identify prime offshore wind sites and open them for development while maintaining environmental safeguards is necessary. Research and development of the next generation of offshore wind turbines and the transmission grid to carry that clean energy to load centers must be funded. And federal funding to states and local communities is critical to not only build the ships, ports and other equipment necessary for offshore wind development, but to do it in a way that improves the efficiency and lowers the environmental impacts on local communities.

Infrastructure touches nearly every aspect of our lives, including our electricity supply and the potential to transition to a clean, equitable, and more reliable and resilient system. A federal infrastructure package presents an opportunity to pass ambitious climate solutions at the federal level. These should be national priorities, and any federal infrastructure package should reflect that urgency.

Photos: John Rogers; James Ferguson/Wikimedia Commons; Derrick Jackson

ExxonMobil, Chevron, and ConocoPhillips Climate Risk Reports Miss the Mark

UCS Blog - The Equation (text only) -

Photo: nickton/CC BY-NC 2.0 (Flickr)

In the next three weeks, the CEOs of major fossil fuel companies around the world are going to stand before their shareholders and tell them everything is fine when it comes to climate change.

To back up this preposterous claim (everything is not fine), the CEOs will point to their in-house climate risk analyses, which all ignore the need for fossil fuel companies to drastically and rapidly reduce their greenhouse gas emissions in order to keep the global average temperature increase to 1.5°C and avoid the worst impacts of climate change. As we’ve shown in our 2018 Climate Accountability Scorecard, UCS statements, and blog posts, major fossil fuel companies continue to make insufficient progress on climate.

A set of shareholders at ExxonMobil is so tired of the company's failure to act that they have issued a call for shareholders to vote against all board members, a move reserved for extreme cases, such as the 2017 proxy vote recommendation against Wells Fargo's board after employees had been pushed to open fraudulent accounts.

To prepare ourselves, we’ve done a deep dive into three major climate risk reports: ExxonMobil’s 2019 Energy & Carbon Summary (our expectations were low); Chevron’s Update to Climate Change Resilience; and ConocoPhillips’s Managing Climate-Related Risks, and created a detailed table comparing the reports on their climate statements and actions.

Overall, the oil and gas companies miss the mark in these reports, downplay the urgency of climate change and the depth of emissions reductions that are needed, and generally assume that they’ll continue to come out on top. We’ve summed up a few of the highlights below.

What’s a climate risk report?

In the last few years, shareholders have successfully pressured companies to report on their climate change risks. For example, how ExxonMobil might adapt if policymakers enact regulatory policies, like a carbon tax or cap-and-trade system; if solar and wind become so cheap that fossil fuel demand declines; and if sea level rise and heat waves impact refineries and other company facilities.

Companies usually only publish these reports after extensive shareholder engagement, or if such a report is requested through a shareholder proposal at the annual meeting (where shareholders vote on directors and the CEO’s pay package) and receives a majority of shareholder votes.

In 2017, 62% of ExxonMobil's shareholders called for the company to issue a report outlining its strategy for operating in a 2-degree-Celsius-constrained economy. Since then, Chevron, Anadarko, ConocoPhillips, and a few other oil and gas majors have issued similar reports in response to shareholder resolutions or engagement.

Climate risk reports light on details

Each of these companies is telling shareholders that it is the best equipped and best prepared to handle any sort of climate risk — be it regulations, a change in demand, or hurricanes/flooding/drought/fires/rising seas/insert your favorite climate impact here.

While companies are disclosing more climate risks than they have previously, they still haven’t listed any specific, measurable metrics that would allow shareholders to verify the companies are doing enough.

Most significantly, none of these three companies has laid out an emissions reduction plan that encompasses the full life cycle of its oil and gas products, from extraction, production, and refining to transport and end use.

As landmark climate science reports have stressed, not all fossil fuel assets are burnable if the world wants to avoid the worst effects of climate change. Perhaps that explains why ExxonMobil quietly downgraded its claim that “90 percent” of its assets will be produced to “the substantial majority” – language that is both extremely vague and concerning for investors.

Unambitious emissions reductions goals

All three of these companies have put out some sort of quantitative emissions reductions goal. ConocoPhillips was one of the first carbon majors to come out with a firm target, even if it is underwhelming.  Chevron, after years of refusal, has put out a startlingly unambitious methane goal and linked it to high-level bonuses, and ExxonMobil has merely “announced greenhouse gas reduction measures that are expected to result” in a 15 percent decrease in methane emissions by 2020.

ConocoPhillips and Chevron have only put forward intensity targets, which means they can hit their targets by decreasing emissions per barrel even if their total emissions increase.  None of these three companies has included the emissions from the end use of its products – when they are ultimately burned – in its targets, even though these emissions make up around 80 percent of each company’s total emissions.
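To see what an intensity target can hide, here is a minimal arithmetic sketch in Python. The figures are invented for illustration and are not drawn from any company’s disclosures:

# Invented figures: the intensity target is met (per-barrel emissions
# fall about 9%), yet total emissions still rise because production
# grows faster than the per-barrel improvement.
barrels_now, barrels_future = 1_000_000, 1_300_000    # production up 30%
intensity_now, intensity_future = 0.43, 0.39          # tCO2e per barrel

total_now = barrels_now * intensity_now
total_future = barrels_future * intensity_future
print(f"{total_now:,.0f} vs {total_future:,.0f} tCO2e")  # 430,000 vs 507,000

In other words, an intensity target is fully compatible with rising absolute emissions whenever output grows faster than per-barrel emissions fall.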

Undervaluation of renewables

The ExxonMobil, Chevron, and ConocoPhillips reports undervalue the role of renewables, claim that oil will be a big part of the energy mix no matter what, and are full of undeserved self-congratulations.  Most importantly, none of these three companies takes responsibility for the emissions that come from the burning of its products or acknowledges the need to urgently and drastically reduce emissions.

At this point, it’s well established that solar and wind are becoming the lowest-cost options for energy. Just look at New Mexico, where a renewable energy company put forward the most cost-effective plan for supplying electricity to the state. Yet ExxonMobil undervalues the expected penetration of renewables, and has announced that it’s doubling down on technological improvements to keep us below 2°C, even while conceding that those tech options are not currently working. This seems like a questionable strategy for a company that spent only $9 billion on low-carbon investments in the last 19 years, but $30 billion on oil and gas exploration in 2019 alone.

Chevron dedicated over half its “update” report to “actions and investments,” which include a fair number of renewable energy venture investments, but with no details on the dollar amounts invested, the time frames for expected implementation, or the emissions reductions anticipated. This section of Chevron’s report also includes the dubious claim that the company is contributing to the “Zero Hunger” Sustainable Development Goal because its natural gas operations produce nitrogen, which is used in fertilizer, as a byproduct.

ConocoPhillips’s report avoids the whole “how will you reduce emissions” question almost entirely – an odd choice for a company that ranks among the top 10 contributors to greenhouse gas emissions since the start of the Industrial Revolution.

Still squirming out of responsibility for reducing emissions

ExxonMobil quietly admitted that its products are part of the problem of climate change, and that there is only so much it can do without making major changes to its business model (a little late to the party on that one). Chevron, meanwhile, subtly claimed that it isn’t the worst fossil fuel company out there and that everyone should therefore stop asking it to align its business model with the Paris Agreement – an agreement Chevron simultaneously claims to support.

Shareholder showdown at 2019 annual meetings

UCS will be attending the annual meetings of all three companies, including ConocoPhillips’s virtual meeting this morning. ConocoPhillips has continued to engage shareholders this year and has no climate-related shareholder proposals on the ballot. Chevron has engaged with a number of investors and had several shareholder resolutions withdrawn in exchange for commitments to address the issues raised, and it also persuaded the SEC to let it exclude a shareholder resolution calling for Paris-aligned climate targets. ExxonMobil, for its part, is facing down what could be a shareholder revolt, with prominent institutional shareholders calling for votes against all board members, support for a shareholder proposal to separate the roles of CEO and board chairman, and support for a climate-related board committee. We’ll be reporting back on our in-person attendance at the Chevron and ExxonMobil annual meetings later this month.


Improving Transparency and Disclosure of Conflicts of Interest for Science Advisory Committees

UCS Blog - The Equation (text only) -

Members of the USDA Advisory Committee on Beginning Farmers and Ranchers, in a December 2010 photo. USDA photo

On Wednesday this week, the Senate Committee on Homeland Security and Governmental Affairs will hold a mark-up hearing on the Federal Advisory Committee Act Amendments of 2019, introduced by Sen. Portman (R-OH). And before you stop reading: yes, this is a science issue. The proposed amendments are intended to improve the transparency of the federal advisory committee process, including science advisory committees of scientists from outside government, and to disclose and reduce the impacts of conflicts of interest on those committees.

My colleagues and I have written extensively about recent problems with science advisory committees: many aren’t meeting, some are rife with conflicts, and some really have lost the capacity to provide independent advice for agencies across the government. That’s a serious problem for science-based policymaking. Science advice plays a crucial role in helping ensure our government makes science-based decisions on everything from air pollution standards to new drug approvals to worker safety protections.

What’s new

The proposed amendments require agencies to open nominations for committee positions, select members from those nominations and publicize the selections, and clearly distinguish independent scientists from those representing a particular interest group. They also require disclosure of conflicts of interest to the agency and the public, and greater transparency of the meetings themselves. In addition, political party affiliation could not be used as a criterion for selection to a committee – a useful requirement, since such political litmus tests have been used to distort and stack advisory committees under previous administrations.

When it comes to addressing conflicts of interest in government science advice, disclosure and transparency are good things, but they require effort. Under the bill, scientists serving on committees would need to file conflict-of-interest statements and meet government ethics rules. I have had to do so for numerous committees, and I can’t say the paperwork is enjoyable. But if you believe the committee’s work is important, you sigh and fill it out, and then realize that whining about it was more work than just doing it.

And yes, agencies would have to put in the effort to make more information public, but this kind of transparency would bring agencies more in line with the spirit of the Federal Advisory Committee Act. And in the overall work of an agency, this is not that big a burden.

But there is opposition, particularly, it seems, from the biomedical community. The NIH is already exempt for the purposes of grant proposal reviews, but for other advisory committees, disclosing conflicts of interest is viewed as an overwhelming hurdle that will discourage participation on panels. I just don’t see it.

Starting to fix what’s broken

The Federal Advisory Committee Act Amendments are an attempt to fix an advisory process that is, in this administration, too often captured by regulated industry. Conflicts of interest are at the core of that problem, and transparency is one way to push back. Are the amendments perfect? No. There are still issues of diversifying panels, clarifying the roles of committee members with conflicts of interest, adequately recognizing participation, providing better institutional support and encouragement for panelists, and keeping the transparency requirements as unburdensome as possible. You can see our ideas on improving advisory committees here.

But these amendments go in the right direction. We, as scientists, need to keep building and maintaining public trust in our work and, ultimately, in decisions based on science. Spending a little time on disclosure will not go amiss.

The National Academies Illustrates the More Nuanced Value of Transparency in Science

UCS Blog - The Equation (text only) -

Photo: Another Believer/Wikimedia Commons

Ever think about reproducibility in science? Turns out you’re not alone! The National Academies of Sciences, Engineering, and Medicine (NAS) just spent a year and a half studying the status quo and has released some important findings. This week an NAS committee released a report, Reproducibility and Replicability in Science, that EPA Administrator Andrew Wheeler, Interior Secretary David Bernhardt, and OMB Acting Director Russell Vought should really read. The group of experts was charged with answering questions about reproducibility and replicability, a study mandated by the 2017 American Innovation and Competitiveness Act. The report offers two key takeaways that are incredibly important for federal agency heads to understand as they issue sweeping policies that invoke these scientific concepts under the guise of transparency.

Reproducibility and replicability are important but not the be-all and end-all of good science

The NAS committee was charged with defining reproducibility and replicability across scientific fields. Reproducibility is obtaining consistent results using the same input data; the same computational steps, methods, and code; and the same conditions of analysis. Replicability is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.

While the report acknowledges that reproducible and replicable studies help to generate reliable knowledge, it is also very clear throughout that these standards can be features of a scientifically rigorous study but are not necessarily essential. The committee writes, “A predominant focus on the replicability of individual studies is an inefficient way to assure the reliability of scientific knowledge. Rather, reviews of cumulative evidence on a subject, to assess both the overall effect size and generalizability, is often a more useful way to gain confidence in the state of scientific knowledge.”

There are many reasons why a study might not be reproducible or replicable, not the least of which is the need to protect the privacy, trade secrets, intellectual property, and other confidentiality concerns associated with the underlying data. Challenges also arise when studying environmental hazards: we must use observational data for studies of air and water pollution, and it is often not possible or ethical to recreate the conditions under which people were exposed to a contaminant.
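To make the distinction concrete, here is a toy sketch in Python; it is our own illustration, not an example from the NAS report. Fixing the input data, the code, and the random seed makes a computation reproducible; replication would instead mean collecting new data and checking whether the answer is consistent:

import random
import statistics

def bootstrap_mean(data, n_resamples=1000, seed=42):
    # The fixed seed means the same data plus the same code give the
    # same result on every run: computational reproducibility.
    rng = random.Random(seed)
    estimates = [statistics.mean(rng.choices(data, k=len(data)))
                 for _ in range(n_resamples)]
    return statistics.mean(estimates)

sample = [2.1, 2.4, 1.9, 2.8, 2.3]   # hypothetical measurements
print(bootstrap_mean(sample))        # identical output on every run
# Replicability, by contrast, asks whether an independently gathered
# new sample yields a consistent estimate.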

As my colleague, Andrew Rosenberg, explained in a recent blog:

“Maybe we all learned that doing an experiment in a lab many times over can give you confidence in the results and that is the “scientific method.” Made sense in grade school. But lots and lots of critical scientific information and even analyses are not “reproducible” in this sense. Take, for example, the impact of a toxic pollutant on a local community. Should we release it again to see if it is really harmful? Or the study of a natural disaster? Should we wait for it to happen again to reproduce the results? The Environmental Data and Governance Initiative illustrated the many real-world examples of scientific studies that are neither feasible nor ethical to reproduce.”

In the EPA’s proposed restricted science rule issued last April, EPA argues that part of the reason for the policy is to allow regulators to better determine that key findings are “valid and credible.” It claims that the benchmark against which validity and credibility are measured is the reproducibility and replication of studies.

But as EPA fails to understand, and as the NAS committee rightfully points out, “reproducibility and replicability are not, in and of themselves, the end goals of science, nor are they the only way in which scientists gain confidence in new discoveries.” The report explains that policy decisions should be based on the body of evidence rather than any one study (replicable or not), and likewise that one study should not be used to refute evidence backed by a large body of research. Further, systematic reviews and meta-analyses, whereby large bodies of evidence are evaluated together, are an important method of increasing confidence in scientific results (a minimal sketch of how such pooling works follows below). The EPA and other agencies should have the flexibility to use their own criteria to judge the rigor and validity of the science informing rules as applicable, and should not rely on reproducibility and replicability as the principal criteria of scientific credibility.
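Here is a minimal sketch, in Python with invented numbers, of the fixed-effect (inverse-variance) pooling commonly used in meta-analyses; it shows the statistical sense in which cumulative evidence supports stronger conclusions than any single study:

import math

def pooled_effect(effects, std_errors):
    # Fixed-effect meta-analysis: weight each study by 1/SE^2,
    # so more precise studies count for more.
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same exposure-response effect:
effects = [0.30, 0.25, 0.40]
std_errors = [0.10, 0.08, 0.15]

effect, se = pooled_effect(effects, std_errors)
print(f"pooled effect = {effect:.3f} +/- {se:.3f}")
# The pooled standard error (about 0.058) is smaller than any single
# study's, which is why a body of evidence is more trustworthy than
# one study alone.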

Challenges of transparency and reproducibility in science are best handled within the research community, not the White House or EPA

Improvements in transparency can be and are being made by researchers, journals, funders, and academic institutions, and the report gives many neat examples of ongoing efforts. It certainly is not one agency’s job to solve issues around transparency in science; indeed, no agency could do so even if it tried. The recommendations of the report are aimed at scientific institutions: educate researchers and ensure best practices in recordkeeping and transparency that may lead to more reproducible and replicable studies. Nowhere does it suggest that federal agencies that are users of such science should be involved in deciding how transparent authors must be. The scientific community needs to drive the bus.

End users of scientific information are not in a position to address challenges in the scientific community at large, especially considering the lack of infrastructure and resources needed to ensure privacy protections for sensitive data within agency rulemaking. Instead of making sweeping transparency requirements that would limit the government’s ability to use the best science, the report recommends that funding agencies invest in the research and development of open-source tools, and in related trainings for researchers, so that transparency is fostered at the beginning of the scientific process instead of being used as an opportunity to exclude crucial public health studies that have already been conducted.

No crisis of reproducibility, no time for complacency

During the report release webinar, the study authors summarized their findings by saying that there isn’t a crisis of reproducibility, but neither is it a time to be complacent about issues related to transparency in science. That is a fair assessment of the situation, and one the EPA should keep in mind as it reviews the 590,000 public comments it received on its restricted science proposal. There are absolutely ways we can use technology to improve recordkeeping and transparency throughout the scientific process so that researchers can better build on one another’s findings and advance knowledge. Smart minds at NAS and elsewhere are already working on this, and the committee report highlights some of the ways it is happening thanks to the leadership of academic institutions, funders, and journals. Government has a role to play in funding the infrastructure that will foster more open and accessible science and in arming researchers with the tools to abide by best practices. The EPA, DOI, and OMB should listen to the scientific community and learn how best to accomplish that task. There is no role for the White House or federal agencies like the EPA to issue sweeping, prescriptive rules that limit the way science is used to inform regulations.

