UCS Blog - The Equation (text only)

How Dangerous is New EPA Chief Andrew Wheeler? Very. Here’s Why.

Photo: Senate EPW

With Scott Pruitt’s resignation as administrator of the Environmental Protection Agency amid a slew of ethics scandals, environmentalists who long campaigned for his ouster should be careful what they wish for.

That is because the acting administrator of the EPA is now Andrew Wheeler, formerly the agency’s second-in-command. Nominated by President Trump and narrowly confirmed in April by the Senate, Wheeler came into the job with a record that is the polar opposite of the EPA’s stated mission “to protect human health and the environment.”

Andrew Wheeler: Coal lobbyist

Andrew Wheeler comes to the top EPA post as an unabashed inside man for major polluters on Capitol Hill. Wheeler lobbied for coal giant Murray Energy, serving as a captain in that company’s bitter war against President Obama’s efforts to cut global warming emissions and enact more stringent clean air and clean water rules.

When Pruitt, as Oklahoma attorney general, sued the EPA 14 times between 2011 and 2017 on behalf of polluting industries, coal giant Murray Energy was a petitioner or co-petitioner in half of those cases. Wheeler was the company’s lobbyist from 2009 until last year.

Notably, Wheeler accompanied Murray Energy’s CEO, Robert Murray, to the now-notorious meeting last year with Energy Secretary Rick Perry, the one in which Murray handed Perry a 16-point action plan ostensibly designed to “help in getting America’s coal miners back to work.” That plan ultimately became the framework of a proposal by Perry to bail out struggling coal and nuclear power plants (Wheeler was also a nuclear industry lobbyist).

That particular proposal was shot down by federal regulators, but with Pruitt’s help, the Trump administration has made inroads on most of that plan’s 16 points, with devastating consequences for the environment—including the US pullout from the Paris climate accords, the rejection of Obama’s Clean Power Plan, and slashing the staff of the EPA down to a level not seen since the 1980s attacks on the agency by President Reagan.

Wheeler has denied helping Murray draw up that document, but he certainly shares its sentiments, telling a coal conference in 2016, “We’ve never seen one industry under siege by so many different regulations from so many different federal agencies at one time. This is unprecedented. Nobody has ever faced this in the history of the regulatory agenda.”

Andrew Wheeler: Longtime Inhofe aide

If it weren’t enough that a top coal lobbyist is now at the helm of the agency charged with protecting the nation’s environmental health, it bears noting that Wheeler’s vigorous lobbying career came after serving as a longtime aide to the Senate’s most vocal climate change denier, Oklahoma’s James Inhofe.

After the Trump administration announced Wheeler’s nomination to the agency in April, Inhofe hailed Wheeler as a “close friend.” That closeness was evident last year when Wheeler held a fundraiser for Inhofe, as well as for Senator John Barrasso of Wyoming, chair of the Senate Environment and Public Works committee, which advanced Wheeler’s nomination by a party-line 11-10 vote. The Intercept online news service reported that Wheeler held the fundraisers even after press accounts revealed that he was under consideration to be Pruitt’s second in command.

Up until now, Wheeler has largely managed to escape the harsh scrutiny that has forced the withdrawal of some Trump appointees—such as Michael Dourson, whose close ties to industry doomed his nomination to oversee chemical safety at EPA, or Kathleen Hartnett White, who spectacularly flamed out with her blatant skepticism about the sources of climate change, once calling carbon dioxide, a key greenhouse gas, the “gas of life.”

In contrast to these colleagues, Wheeler has so far stuck to dry, carefully brief statements that climate change is real, while endorsing Trump’s withdrawal from the Paris climate agreement. He even tried to play the good Boy Scout. After Senator Tom Carper of Delaware recited Scouting’s commitment to conservation, Wheeler said, “I agree with you that we have a responsibility in the stewardship of the planet to leave it in better shape than we found it for our children, grandchildren, and nephews.”

Wheeler’s long track record of lobbying suggests precisely the opposite. But Pruitt’s reign was so mercifully short that many of his efforts to roll back critical vehicle emissions standards and the Clean Power Plan, and end full scrutiny of toxic chemicals common in household products, were only in their beginning stages. When Wheeler was a lobbyist behind the scenes, it was easy for him to help industry erode the EPA’s science-based mission of protecting public health and the environment.

As the face of an EPA roiling with disillusion and dissent among its scientists, he will not find it so easy to do the bidding of his former masters. This is his chance to act like an administrator for the people, not an abdicator on behalf of industry.

Note: This post is adapted from an earlier version that appeared April 6, 2018, when Andrew Wheeler was nominated to be deputy administrator for the Environmental Protection Agency.

Utilities Should Invest in Electric Vehicle Infrastructure

Photo: SanJoaquinRTD/Wikimedia Commons

For more than a century, our cars and trucks have been fueled almost exclusively by oil. Today, electric vehicles (EVs) give us the potential to power our vehicles with a diverse set of energy sources, including clean and renewable energy. But to make that happen, we need to build the infrastructure that can keep our vehicles fueled and make owning an electric vehicle as convenient as owning a conventional car.

Across the country, many utilities are stepping up to build the EV infrastructure that we need. Some recent investments include:

  • The California Public Utilities Commission recently approved $738 million in electric vehicle infrastructure proposed by PG&E, SCE, and SDG&E, including hundreds of millions for charging heavy-duty vehicles such as buses and trucks.
  • Utilities in Maryland have recently proposed a $104 million investment in charging infrastructure that would create 24,000 charging stations across the state.
  • The Massachusetts Department of Public Utilities recently approved a $45 million investment by Eversource. A comparable investment by Massachusetts’ other major utility National Grid is still pending in front of the DPU.
  • Ohio has recently approved a $10 million pilot for electric vehicle charging stations.

These investments raise important public policy questions. What electric vehicle infrastructure is most important to speed up adoption? How should we design electricity rates to maximize the value of electric vehicles to ratepayers and the grid? How can our infrastructure best support all types of electric vehicles, including heavy duty electric vehicles such as trucks and buses? How can we use infrastructure to support electrification of shared vehicle fleets?

Today, the Union of Concerned Scientists is releasing a fact sheet outlining 10 principles that we see as particularly important to guide utility investment in electric vehicle infrastructure. In this fact sheet, we argue that utility investment in electric vehicle charging infrastructure is important public policy and ultimately a good deal for ratepayers.

Why should utilities invest in electric vehicle infrastructure?

Electric vehicles (EVs) represent both an enormous opportunity and a significant challenge for our utilities. Converting our vehicle fleet to electricity could add as much as 1,000 terawatt-hours of demand to our electric grid, an increase of about 25 percent over current levels. If managed correctly, this large and flexible load could significantly increase the efficiency of our electric system, benefiting not only EV drivers but all ratepayers through lower costs.
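
That 25 percent figure is easy to sanity-check. The back-of-envelope sketch below uses rough approximations of our own (vehicle count, annual mileage, EV efficiency, and total US generation), not numbers from the fact sheet:

    # Rough, assumed inputs; none of these come from the UCS fact sheet.
    vehicles = 250e6          # approximate US light-duty fleet
    miles_per_year = 12_000   # typical annual mileage per vehicle
    kwh_per_mile = 0.3        # typical EV efficiency

    ev_demand_twh = vehicles * miles_per_year * kwh_per_mile / 1e9  # kWh -> TWh
    us_generation_twh = 4_000  # approximate current US annual generation

    print(f"{ev_demand_twh:.0f} TWh per year")  # 900 TWh per year
    print(f"{ev_demand_twh / us_generation_twh * 100:.1f}% of current generation")  # 22.5%

Those rough inputs land at roughly 900 terawatt-hours per year, the same ballpark as the figure above.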

In the long run, widespread deployment of EVs could also be a source of energy storage, filling a critical need as our electricity system moves away from fossil fuels toward intermittent sources of power, such as wind and solar. Without proper management of EV charging, however, the additional power needed to fuel EVs could require significant new capacity, increasing pollution and imposing additional costs on ratepayers.

Building more EV infrastructure will help more people and businesses make the switch to electric vehicles, saving money and reducing emissions. Consumer studies have consistently found that inadequate access to charging infrastructure remains one of the most pressing obstacles to EV adoption. We have had over a hundred years to build the massive infrastructure necessary to support our gasoline and diesel vehicles. Creating an EV charging network that can compete with our oil infrastructure will require tens of thousands of new charging stations.

What principles should guide utility investments?
  • Provide chargers where people live and work. Most EV charging happens at home, and as affordable, long-range EVs are becoming available, overnight home charging can provide drivers with all the charge they need on most days. So providing universal access to home charging is a top priority. Workplace charging can be a valuable perk that can spur adoption through personal and professional networks.
  • Create a network of high-speed chargers along highways. While most charging will happen at home, a network of fast chargers along highways—capable of recharging an EV in 30 minutes or less—will be a critical component of our infrastructure, allowing EV drivers to access charging for road trips and emergency uses.
  • Maximize benefits to ratepayers and the grid. EVs can provide significant benefits to ratepayers and improve the efficiency of the electric grid if charging occurs during times of low demand or high production of renewable energy. Utilities should create policies that encourage drivers to charge their vehicles during these “off-peak” hours (a simple illustration follows this list).
  • Establish fair electricity rates for EV charging. EV charging rates should be fair and transparent, and should provide value to EV drivers. High demand charges can make it difficult to create a viable business model for high-speed charging stations, which can be particularly important for electrification of heavy-duty and shared vehicles.
  • Support electrification of trucks and buses. Heavy-duty vehicles such as trucks and buses are major contributors to global warming pollution as well as to local air pollution, such as emissions of NOx and particulate matter that cause significant health problems. Investments in charging infrastructure and station equipment can help make these technologies cost effective for fleet managers and transit agencies.
  • Support electrification of new mobility services. Ride hailing services such as Uber and Lyft play an increasing role in our transportation system and must be electrified. Utilities should work with these companies and others to ensure that they have the charging infrastructure and rate design that they need to move to EVs.
  • Ensure low-income communities benefit from electrification. Integration of EVs into ride- and car-sharing networks, installation of more charging stations in apartment buildings, and electrification of transit and freight vehicles can help ensure that low-income residents benefit from the transition to electric transportation.
  • Create an open and competitive market for EV charging. Utilities should work with the auto industry and suppliers of charging equipment to ensure that we retain a competitive market for EV charging that encourages innovation and consumer choice and provides EV drivers with a consistent, high quality experience.
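
To make the rate-design principles concrete, here is a minimal sketch using invented rates; it does not quote any actual utility tariff:

    # Invented time-of-use rates for illustration only.
    OFF_PEAK_RATE = 0.12  # $/kWh, e.g. overnight
    ON_PEAK_RATE = 0.40   # $/kWh, e.g. late afternoon

    def annual_charging_cost(kwh_per_year, off_peak_share):
        """Cost of a year of EV charging, given the share done off-peak."""
        on_peak_share = 1 - off_peak_share
        return kwh_per_year * (off_peak_share * OFF_PEAK_RATE +
                               on_peak_share * ON_PEAK_RATE)

    # A typical EV using ~3,600 kWh/year (12,000 miles at 0.3 kWh/mile):
    print(f"${annual_charging_cost(3600, off_peak_share=1.0):,.0f}")  # $432 all off-peak
    print(f"${annual_charging_cost(3600, off_peak_share=0.0):,.0f}")  # $1,440 all on-peak

With a spread like this, the same year of driving costs roughly a third as much when charging shifts overnight, which is exactly the incentive the principles above call for.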

Taken together, universal access to residential charging, widespread availability of workplace charging, and high-speed chargers along critical transportation corridors can make driving an EV cheaper, cleaner, and more convenient than any other car. And inducing smart charging and integration with renewables can ensure that the transition to EVs makes our grid stronger and more efficient, saving ratepayers millions in the process.

We encourage utilities and agencies to move forward with ambitious projects to build out EV infrastructure and create the clean transportation system that we need.

Photo: SanJoaquinRTD

Keep Your Paws Off: Three Ways Congress is Preying on Endangered Species Protections

The endangered marbled murrelet. Photo: R. Lowe/USFWS

It seems there is a doggedly persistent contingent of lawmakers in Congress whose life goals include defunding, weakening, ignoring, and overhauling endangered species protections. Their tactics are varied: sidelining science in favor of industry interests, attaching harmful riders to “must-pass” spending bills, and introducing legislation whose insidious intentions are masked by semantics. Here is a quick rundown of current endangered species attacks:

  • Last week, the Union of Concerned Scientists sent a letter to the House Conference Committee for the National Defense Authorization Act (NDAA) asking members to keep Utah Representative Rob Bishop’s anti-science rider out of the NDAA for Fiscal Year 2019. The amendment arbitrarily blocks federal Endangered Species Act (ESA) protections for the endangered or threatened American burying beetle, sage grouse, and lesser prairie chicken. In this case, decisions to assign protective measures to vulnerable wildlife would be made at the behest of short-term political interests (i.e., oil and gas development), violating the science-based process by which the ESA successfully operates.
  • This past Monday, Senate Environment and Public Works Committee Chairman John Barrasso introduced draft legislation to “strengthen” and “modernize” the Endangered Species Act. It moves to allow states greater authority over endangered species decisions, including listing, delisting, species recovery plans, and habitat conservation. Why is this a bad move? State resource constraints, insufficient laws, lack of political will, and final veto power over scientific decisions are among the most notable concerns. Considering that Senator Barrasso had the support of the Western Governors’ Association, it isn’t a stretch to worry about states making concerted efforts to dismiss species protections in the name of development.
  • The House Interior and Environment and House Energy & Water appropriations bills for Fiscal Year 2019 both contain poison-pill riders that would prohibit the listing of the imperiled greater sage grouse and remove protections for red wolves and Mexican gray wolves.

Since the Endangered Species Act’s passage in 1973, the Fish and Wildlife Service has prevented the extinction of 99 percent of listed species. Despite the Act’s many successes over the years, there are those who have trouble seeing past their own immediate interests. These attacks on the Endangered Species Act are not new, but they are as urgent as ever. Please tell your members of Congress to oppose any anti-science riders affecting endangered species. If you are a scientist, consider joining almost 1,500 other scientists in signing on to our letter to Congress.

I would like to acknowledge and thank my colleague Amy Gutierrez, legislative associate for the Center, for her legislative research and input. 

Photo: US Fish and Wildlife Service

Black Lung Resurgence: Without Action, Taxpayers Will Foot the Medical Bills

Photo: Peabody Energy/Wikimedia Commons

I’ve written previously about my family’s experience with black lung and how the disease is making a frightening resurgence. A bit like a miner’s headlamp in the darkness, two recent federal reports and several federal scientific studies shine a light on the disease and its implications—and policymakers should take notice.

Critical benefits to miners and their families

Congress set up the Black Lung Disability Trust Fund in 1978 to provide benefits to coal miners who have become permanently disabled or terminally ill due to coal workers’ pneumoconiosis, or black lung, as well as to their surviving dependents. The Trust Fund protects miners and their families when no liable company can be identified or held responsible. This might happen if a miner had multiple employers, or if the responsible company went out of business. The U.S. Department of Labor, which manages the Trust Fund, estimates that in FY 2017, 64 percent of beneficiaries were paid from the Trust Fund, totaling $184 million in benefits. In short, the Trust Fund provides critical benefits to miners and their families in cases where mining companies can’t or won’t pay.

The Trust Fund is financed primarily through a per-ton excise tax on coal produced and sold domestically. The original legislation set the tax at 50 cents per ton of underground-mined coal and 25 cents per ton of surface-mined coal (but limited to 2 percent of the sales price). In 1986, Congress raised the excise tax to $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal (up to a limit of 4.4 percent of the sales price)—but at the end of this year, the tax will revert to its original 1978 values. Unfortunately, Trust Fund expenditures have consistently exceeded revenues, despite several actions by Congress to put the Trust Fund on solid financial footing. As a result, to meet obligations in any given year, administrators are forced to borrow from the U.S. Treasury. For these reasons, Congress requested a review of the Trust Fund’s finances and future solvency from the Government Accountability Office (GAO), an independent, nonpartisan agency that works for Congress to assess federal spending of taxpayer money.
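
Concretely, the per-ton tax works like a flat rate with a price cap. Here is a minimal sketch of that calculation using only the rates and caps cited above; the sale price is an invented example:

    def excise_tax_per_ton(sale_price, underground=True, current_rates=True):
        """Excise tax owed on one ton of coal, in dollars."""
        if current_rates:
            per_ton = 1.10 if underground else 0.55  # 1986 rates
            cap = 0.044 * sale_price                 # 4.4% of sales price
        else:
            per_ton = 0.50 if underground else 0.25  # original 1978 rates
            cap = 0.02 * sale_price                  # 2% of sales price
        return min(per_ton, cap)

    # Underground coal at a hypothetical $40 per ton:
    print(f"{excise_tax_per_ton(40.0):.2f}")                       # 1.10 today
    print(f"{excise_tax_per_ton(40.0, current_rates=False):.2f}")  # 0.50 after reversion

At prices like this example the caps don’t bind, so letting the rates revert would cut the revenue collected on each ton by more than half.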

GAO offers a wake-up call

The GAO completed its report and released its findings last month—and the results should serve as a wake-up call to Congress. The chart below shows the impact on the Trust Fund of having to borrow year after year to make up for the shortfall in excise tax revenue relative to benefit payments: the accumulation of outstanding debt.

This front-page chart of the GAO report shows that if the excise tax decreases to 1978 levels at the end of 2018, as current law provides, the Trust Fund’s debt will exceed $15 billion by mid-century.

GAO looked at the impact of a few different policy choices, including adjustments to the excise tax rate and debt forgiveness, both of which Congress has used in previous changes to the Trust Fund. In 2008, for example, about $6.5 billion in debt was forgiven (hence the large decrease in debt in the chart above). Unfortunately, that didn’t solve the Trust Fund’s solvency problem, because subsequent coal excise tax revenue was less than expected, thanks to the 2008 recession followed by declining coal production resulting primarily from increased competition with natural gas.

GAO calculated how much money would need to be appropriated by Congress to balance the Trust Fund by 2050 under various assumptions for the excise tax. The chart below summarizes the results succinctly: Increasing the current excise tax by 25 percent would require no debt forgiveness, but allowing the current tax to expire would require $7.8 billion of taxpayer money to balance the Trust Fund by 2050.

Figure 10 from the GAO report (p.30), showing the scale of the problem of outstanding debt in the Trust Fund. Analysts calculated the level of debt forgiveness needed to balance the Trust Fund by 2050, assuming that Congress makes a single lump sum payment in 2019 to pay down the debt. In other words, the bottom bar means that, if Congress allows the current tax rate to expire but also forgives $7.8 billion in existing debt in 2019, then by 2050 the Trust Fund would be balanced (meaning that the remaining debt would have been repaid and annual payments would equal annual revenues).
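
The mechanics behind charts like these are simple compounding. The toy model below shows how an annual gap between benefit payments and revenue snowballs once each shortfall is covered by interest-bearing borrowing; all of the inputs are invented for illustration and are not GAO’s:

    def project_debt(initial_debt, annual_revenue, annual_benefits,
                     interest_rate, years):
        """Outstanding debt after borrowing each year's shortfall."""
        debt = initial_debt
        for _ in range(years):
            debt *= 1 + interest_rate                 # interest on existing debt
            debt += annual_benefits - annual_revenue  # new borrowing for the gap
        return debt

    # Invented inputs: $4B starting debt, $100M annual shortfall, 3% interest.
    print(f"${project_debt(4e9, 300e6, 400e6, 0.03, 32) / 1e9:.1f}B")  # $15.6B

Even a modest recurring shortfall, left to compound for three decades, produces debt on the multi-billion-dollar scale the GAO charts describe.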

Assumptions matter

As with any projection of what might happen in the future, the results depend on the assumptions made by the analyst. GAO conducted a credible and sound analysis—based on reasonable, defensible, middle-of-the-road assumptions—to assess the solvency of the Trust Fund. Key drivers are projected revenues expected from future coal production and projected expenditures for future beneficiaries.

Of course, neither of these things is known with much certainty. Indeed, there are compelling reasons to believe that the scale of the Trust Fund’s insolvency could be much worse:

  • For one thing, coal production could be lower than what GAO assumed, meaning less revenue from the excise tax. GAO used the U.S. Energy Information Administration’s reference case, which shows coal production essentially flat through 2050. But note that this is likely an optimistic assumption: if natural gas prices remain low, or if more renewable sources of energy come online as expected thanks to continuing cost declines, coal production could continue its decade-long decline for the foreseeable future. And despite current federal politics, there is momentum for deep decarbonization to address the climate crisis.
  • Even more alarming, the emerging crisis of new black lung cases in Appalachia is not included in the analysis. GAO assumed that the growth rate in new black lung cases is -5.8 percent, based on historical data on the number of new Trust Fund beneficiaries. In other words, GAO assumed that the Trust Fund will keep adding beneficiaries, but that the number of new cases will shrink each year, as it did in the recent past. With the very recent surge in black lung cases, combined with the fact that the disease can’t be detected in the lungs until after about a decade of exposure, this assumption is not likely to hold true.

NIOSH and NAS weigh in on science and solutions

Black lung is completely preventable, and as a result of federal standards limiting miners’ exposure to coal dust, by the late 1990s the disease had become rare. However, as NPR has reported (here, here, here, here, and here), in just the last few years Central Appalachia has seen a surge in new cases of complicated black lung, an advanced form of the disease. National Institute for Occupational Safety and Health (NIOSH) investigators found 60 new cases of the disease at a single radiology clinic in Kentucky in just 18 months. By comparison, NIOSH’s monitoring program detected only 31 cases nationally from 1990 to 1999. NIOSH researchers also identified 416 new cases in Central Appalachia from 2013 to 2017. NPR’s ongoing investigation puts the number of new cases in Appalachia since 2010 at around 2,000, roughly 20 times official government statistics.

What’s responsible for the spike in reported cases of black lung? For one thing, the national monitoring program historically has a low participation rate, and while the resurgence of the disease shows up in the national monitoring data, the cluster identified in Kentucky was discovered separately. And because it takes years for the disease to manifest in a miner’s lungs, it’s difficult to connect the disease to specific exposure or mining practices. NIOSH researchers suggest that changes in mining practices may be exposing miners to greater amounts of silica dust from cutting through rock formations to access thin or deep coal seams.

On the heels of the GAO report and the NIOSH investigations, the National Academies of Sciences, Engineering, and Medicine (NAS) released an independent report looking at coal industry approaches to monitoring and sampling the coal dust levels that miners are exposed to. The NAS report concludes that compliance with federal regulations limiting the exposure of miners to coal dust has reduced lung diseases over the last 30 years, but that compliance has failed to achieve “the ultimate goal of the Coal Mine Health and Safety Act of 1969”—eradicating coal dust exposure diseases such as black lung. The NAS goes on to say, “To continue progress toward reaching this goal, a fundamental shift is needed in the way that coal mine operators approach [coal dust] control, and thus sampling and monitoring.” The report recommends a systematic investigation of how changes in mining operations may have increased exposure to silica dust, the development of better monitoring devices (especially for silica), and increased participation in the NIOSH monitoring program.

Congress must act—and fast

The good news is that there is the start of a solution to the funding of black lung benefits already in sight: the RECLAIM Act. If enacted, RECLAIM would free up $1 billion in existing money from the Abandoned Mine Lands (AML) fund to put people to work cleaning up degraded mine lands and spurring local economic development in communities that need it most. How is this separate fund and separate problem connected to black lung benefits?

In short, Congressional budgetary rules require that any time taxpayer money is spent, it must be offset by budget cuts or additional revenue elsewhere. RECLAIM’s champion, Rep. Hal Rogers (R-KY), identified the extension of the coal excise tax at current levels for an additional ten years as the “offset” for the $1 billion in spending from the AML fund. It doesn’t matter that these two initiatives are—and will remain—separate programs with their own funding streams.

But the two issues are intertwined—the surge in new cases of black lung is happening in the same region where communities are struggling to deal with the legacy of past mining operations and simultaneously trying to chart a new economic future. Addressing all these issues simultaneously is the sort of win-win-win policy solution that doesn’t come around too often.

The astute reader will notice, however, that extending the coal excise tax for ten years is insufficient to address the Trust Fund’s long-term solvency problem, as the charts above demonstrate. Passing the RECLAIM Act, therefore, is merely the first step; legislators must also consider actually increasing the coal excise tax. That would ensure that the responsible parties—coal companies—pay for the damages inflicted on real people and real families, instead of leaving taxpayers holding the bag. And with black lung set to reach epidemic levels in the coming years, Congress must act now to strengthen the fiscal health of the Trust Fund—to protect the health and well-being of miners and their families in the face of an uncertain future.

UPDATE (5 July 2018): The original version of this post misstated the year when the current coal excise tax was established. The current coal excise tax of $1.10/$0.55 per ton was established in 1986 and extended at current levels in 2008.

Photo: Peabody Energy/Wikimedia Commons. Charts: GAO.

Climate Change is the Fastest Growing Threat to World Heritage

Aerial view of the Great Barrier Reef, Australia. Photo: Lock the Gate Alliance (Flickr)

Nineteen extraordinary places were added to UNESCO’s World Heritage list this week, including Buddhist temples in South Korea, the forests and wetlands that form the ancestral home of the Anishinaabeg people in Canada, and the ancient port city of Qalhat in Oman. But amid all the congratulations and good feeling that come with adding sites to the list of the world’s most important places, there was little or no serious talk about the implications of climate change. Last year, the 21-nation World Heritage Committee, the Convention’s governing body, raised the alarm about climate change, called for stronger efforts to implement the Paris Agreement and increase the resilience of World Heritage properties, and promised to revise its own decade-old climate policy. In Bahrain, however, the issue received short shrift, making it vital that the Committee make it a key agenda item at its next meeting in 2019.

Climate threats were not anticipated when the Convention was signed in 1972

Added to the World Heritage list in 2018, Pimachiowin Aki in Canada, part of the ancestral lands of the Anishinaabeg people. Photo: Bastian Bertzky/IUCN

Adopted at the General Conference of UNESCO in 1972, the World Heritage Convention’s core mission is to protect and conserve the world’s most important natural and cultural heritage. Back in 1972, there was no hint that climate change would become the systemic threat to World Heritage sites that it has since proved to be. To be inscribed on the World Heritage List, a protected area must demonstrate Outstanding Universal Value (OUV) under at least one of ten criteria. For example, in the US, the Statue of Liberty is listed under two criteria, as a “masterpiece of the human spirit” and as a “symbol of ideals such as liberty, peace, human rights…”. Yellowstone National Park is listed under four criteria, including for its scenic splendor, unparalleled geothermal activity, intact large landscape, and role as a refuge for wildlife.

If a site should come under threat from, for example, mining, deforestation or urban development, it can be added to the List of World Heritage in Danger, with the possibility of being de-listed if the problems are not addressed. This year, Kenya’s Lake Turkana was added to the Danger List, because of an immediate threat from upstream development of the Gibe III Dam in Ethiopia.

Climate change is a major threat to the OUV of many World Heritage properties, but the Danger List does not seem an appropriate tool for addressing the issue, as no single state party can address the threat on its own. Neither does the nomination process for new World Heritage sites require any assessment of whether the OUV may be degraded as a result of climate change. It seems absurd that site nomination dossiers, which are extremely detailed, take years to complete, and must include comprehensive management strategies, are under no obligation to include even the most basic assessment of climate vulnerability. Consequently, UCS is working with partners to identify ways to better respond to climate risks within the World Heritage system.

Climate change is the fastest growing threat to World Heritage

At a workshop in Bahrain last week, we asked a group of natural and cultural World Heritage site managers from around the globe whether they were experiencing climate impacts at the sites where they work: 21 of 22 said yes, and 16 of the 22 described actions they are taking to monitor or respond to climate change. That makes sense, because we know from the IPCC (Intergovernmental Panel on Climate Change) and a host of country- and site-level studies that the impacts of climate change are everywhere. But it also drives home the point that this issue is not getting the attention it needs at the higher levels of the Convention. Climate impacts are clearly being under-reported by states parties under the Convention’s official mechanisms, the State of Conservation (SOC) reports. Meanwhile, IUCN’s World Heritage Outlook 2 report, published in 2017, identified climate change as the biggest potential threat to natural World Heritage and estimated that one in four sites is already being impacted. That, too, must be an underestimate. Virtually all properties are likely being impacted in some way; the key questions are how severe the threat to OUV is for each site, and over what time-scale.

UCS, with UNESCO and the United Nations Environment Programme (UNEP), has published 31 representative case studies of World Heritage properties being impacted by climate change, including Yellowstone National Park and the Galapagos Islands. In Bahrain, we heard many new stories about how climate change is affecting World Heritage properties: the immediate risk of flooding and erosion to the Islands of Gorée and Saint-Louis in Senegal, vulnerability to changes in rainfall patterns at Petra in Jordan, and the potential loss of cave paintings and petroglyphs in Tasmania. The historic city of George Town in Penang, Malaysia, suffered unprecedented damage from a typhoon in 2017, the kind of extreme storm the area has rarely had to face in the past.

Map showing highest level of heat stress for the 29 World Heritage reefs during the third global coral bleaching event. Image: NOAA Coral Reef Watch/UNESCO

Although a 2014 independent analysis of long-term sea level rise vulnerability identified 136 out of 700 cultural World Heritage sites as at risk, the only group of World Heritage properties for which a comprehensive scientific assessment of climate risk has been undertaken is the coral reefs. There are 29 World Heritage reefs, including Australia’s Great Barrier Reef, the Belize Barrier Reef, and Papahānaumokuākea in the Hawaiian archipelago. According to UNESCO’s 2017 analysis (Scott Heron and Mark Eakin, both of NOAA, were coordinating lead authors, along with Fanny Douvere of the World Heritage Centre), coral in 21 of the 29 properties (72%) has experienced severe or repeated heat stress during the past three years. Projecting impacts into the future under the IPCC’s RCP 8.5 scenario, with a global average temperature increase of 4.3°C by 2100, twice-per-decade severe bleaching would be apparent at 25 of the 29 World Heritage reefs by 2040.

Why we need a Climate Vulnerability Index for World Heritage

What is needed is a simple, standardized methodology for top-line rapid assessment of climate vulnerability that would work for all World Heritage sites, whether listed for natural, cultural or mixed values. Such a tool would enable the World Heritage Committee to determine which World Heritage properties are most immediately at risk from climate change, where the problems will likely be in the future, and where resources are most urgently needed for more detailed assessment and monitoring, and to undertake resilience and adaptation activities. The methodology needs to be repeatable so that periodic reviews can be undertaken.
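
To make the idea tangible, here is a purely hypothetical sketch of what a rapid, repeatable scoring scheme could look like. The factors, scales, and arithmetic are invented for illustration and are not the proposed CVI methodology:

    from dataclasses import dataclass

    @dataclass
    class SiteAssessment:
        exposure: int           # 0-3: magnitude of climate drivers at the site
        sensitivity: int        # 0-3: how readily the site's OUV degrades
        adaptive_capacity: int  # 0-3: capacity to manage and respond

        def vulnerability(self) -> int:
            # Exposure and sensitivity raise the score; adaptive capacity
            # offsets it. Clamped to a 0-6 scale for comparability.
            raw = self.exposure + self.sensitivity - self.adaptive_capacity
            return max(0, min(6, raw))

    # A low-lying coastal site: high exposure, fragile fabric, weak management.
    print(SiteAssessment(3, 3, 1).vulnerability())  # 5

The appeal of something this simple is that a site manager could complete it quickly, the scores would be comparable across natural and cultural sites, and the assessment could be repeated as conditions change.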

Island of Saint-Louis, Sénégal – a World Heritage site at immediate threat from sea level rise. Photo: Dominique Roger/UNESCO

To meet this need, a Climate Vulnerability Index (CVI) for World Heritage properties has been proposed. If adopted by the World Heritage Committee, it has the potential to influence responses to climate change at the world’s most important natural and cultural heritage sites. The concept emerged at an expert meeting on the Baltic island of Vilm, Germany, in 2017, in which UCS participated, and was proposed in the meeting outcome document. That meeting was called in response to a decision at the World Heritage Committee in Krakow earlier in 2017 to prioritize climate action and resilience, to investigate the implications for the OUV of World Heritage sites, and to revise the Convention’s decade-old climate policy.

At the Bahrain meeting of the World Heritage Committee, the CVI concept was presented at a side event organized by two of the Committee’s three official advisory bodies, IUCN and ICOMOS (the International Council on Monuments and Sites), in which UCS participated, and at a meeting of the ICOMOS Climate Change & Heritage Working Group co-organized by UCS at the National Museum of Bahrain. The CVI idea is gaining traction. Its value to the Committee would be that it could help quickly identify thematic groups of properties at risk – such as Arctic sites, coastal archaeology, or high mountain ecosystems – then provide for a deeper dive into all sites within a threatened category, flagging individual sites in need of urgent action or further assessment at the national level. Critical to the success of the CVI is that it can be applied to both natural and cultural sites, so that a methodology that works for coral reefs can also work for earthen architecture or cave paintings.

Outside of the side events and the workshops of the advisory bodies and NGOs, where it was a bigger topic than ever before, climate change was hardly mentioned in the plenary sessions of the World Heritage Committee. Only Committee members Trinidad & Tobago and Australia substantively raised the issue, the latter offering an amendment to the Bahrain decision document which was adopted without objection, and which requires the revised climate policy to be presented at the 43rd Committee meeting in Azerbaijan in 2019. Now there is a window of opportunity for civil society to influence the policy revision, and for the vulnerability index concept to move forward. It’s an opportunity that, if taken, could influence how the World Heritage Convention deals with climate change for decades to come.


Ever Heard of Microgrids? They’re Awesome—Here’s Why

For most of us, when the power fails, the lights stay out until the grid gets fixed. Regardless of personal cost, or degree of inconvenience, or magnitude of disaster looming close behind, only the utility can re-flip that switch.

Power out, and powerless.

That is astounding.

In so many areas of our lives, we trust systems, but also make backup plans. Banks plus sock drawers, grocery stores plus canned goods, water taps plus gallons in the back; we belt-and-suspender proudly, mitigating risks on the daily.

Yet not so for electricity. When it comes to the grid, the vast majority of us solely rely upon a massive centralized system, which means we benefit from economies of scale when it works, and stagger under catastrophes of fail when it doesn’t.

Shouldn’t there be a backup plan?

Well for a growing number of people, there is.

As my colleagues and I detail in a new interactive map, more and more communities are turning to microgrids to buttress their electricity needs, enabling them to keep the power on even if the grid shuts off.

Here, a pathway to resilience: power to the people, by the people, starting from the ground up.

Why microgrids?

The devastating consequences of severe power outages have been achingly front of mind as of late. An upright world, suddenly toppled over into upheaval everywhere.

Utilities are working on ways to help the grid better handle severe storms. Credit: dakine kane/Creative Commons (Flickr)

Given our increasingly electrified day-to-day, power outages are threatening to result in costs that we just can’t afford to pay.

As a result, there’s been heightened attention on how to do better—how to keep the power on, instead of shutting off.

But that discussion has been focused nearly exclusively on the grid. On the power plants feeding it, and the types of fuel feeding them. On the wires strung high above, and the pipelines buried deep below. On the trees and wind and fires and flood that knock and knock and knock.

Which is all critically important work, and something we invest a lot of time in ourselves. But the truth is, no matter how good we make the grid, the power will still go out. Less frequently, and for far shorter amounts of time, but still it will blink off. Why? Because the world’s largest machine isn’t too big to fail—it’s simply too big not to.

Thus, a conundrum: We know we can’t afford to fail, and we know that still we will. Something’s got to give.

Enter microgrids.

Microgrids are…

A power system in miniature.

They can be teeny tiny micro small, held in the space of just one hand, or they can really stretch that micro moniker far, linking whole campuses and communities as one.

Microgrids come in two main forms:

  • Islanded microgrids are fully untethered from the grid. For these systems, every day is Microgrid Day, supporting everything from pumps in pastures to highway road signs, emergency response units to whole towns unto themselves.
  • Islandable microgrids, on the other hand, are systems connected to the broader grid that can also run alone. These microgrids hum along in harmony—until the lights go out. Then, a spot of light in a sea of dark as the system shuts the failure out and solely self-supplies.
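
For a feel of that islandable behavior, here is a toy sketch of the control logic; it is an illustration, not a real microgrid controller:

    class IslandableMicrogrid:
        """Stays synced to the grid when it's healthy; islands when it fails."""

        def __init__(self):
            self.islanded = False

        def on_grid_status(self, grid_up):
            if not grid_up and not self.islanded:
                self.islanded = True   # open the interconnection, shut the failure out
                print("Outage detected: islanding, self-supplying from local generation")
            elif grid_up and self.islanded:
                self.islanded = False  # resynchronize and reconnect
                print("Grid restored: reconnecting to the wider grid")

    mg = IslandableMicrogrid()
    mg.on_grid_status(grid_up=False)  # storm knocks the grid out
    mg.on_grid_status(grid_up=True)   # utility restores service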

And about that supply. Here’s where the real promise begins. Because although any type of resource works, the diesel generators many have long turned to leave a lot to be desired. In addition to spewing out health-harming pollutants, they also require reliable access to fuel in the midst of surrounding disaster. What’s more, because they’re so infrequently used, they’re often prone to failure at the exact moment they’re needed most.

Students check out solar panels as part of Florida’s SunSmart E-Shelter Program. Credit: Florida Solar Energy Center.

Solar-plus-storage, on the other hand, shines brilliantly bright as the face of many future systems, cleanly and reliably and affordably bringing power to the people. And, not just when the power goes out. Indeed, these systems can actually save communities money in the many, many hours when they’re not in island mode by generating electricity and lowering bills all throughout the year.

Sure do sound like some sharp-looking suspenders to me.

But jump to take a look, and be the judge yourself!

Micro grids, mammoth potential

We recently put together the map above, highlighting microgrid stories from all across the country. We want to illustrate just a few of the ways in which microgrids already serve—and increasingly will serve—to bring power back to the people.

You should zoom around and explore for yourself, but here, a few quick highlights from the route: a pioneering island in Alaska; a policy in Massachusetts that looks forward, not back; a grocery chain in Texas that elicits tears of joy; and a new form of disaster response that’s powered by the sun.

And our map just scratches the surface.

Microgrids are supporting military installations and first responders, schools and hospitals, emergency shelters and wastewater treatment plants.

They keep gasoline stations pumping along evacuation routes, and experiments running in labs.

They serve individuals, they serve critical facilities, they serve communities.

And, what’s more, they have the potential to serve many, many more. The further the costs of renewables and energy storage plummet, the more accessible these benefits-generating, resilience-boosting, risk-mitigating win-win-win solutions will be.

Our nation’s electricity grid is an incredible resource, and one we all benefit from keeping in the very best of shape.

But we don’t have to put all our eggs in one basket. There are some services, some people, some needs that simply cannot allow for electricity access to be left to chance. Especially because we don’t have to.

Microgrids are here and ready to help. Let’s make sure that when the lights go out, every community has the chance to flip that switch themselves.

Photos: dakine kane/Creative Commons (Flickr); Florida Solar Energy Center (http://www.fsec.ucf.edu/en/education/sunsmart/index.html)

Pruitt’s Science Advisors Urge Him to Let Them Review His So-Called Transparency Initiative

Photo: Gage Skidmore/CC BY-SA 2.0 (Flickr)

One week after issuing its letters on EPA’s spring and fall regulatory agendas, the Science Advisory Board (SAB) posted a letter to Administrator Pruitt urging him to charge the SAB with reviewing the flawed restricted science rule before taking further action on the proposal, given the weighty scientific considerations that transparency at the agency entails. This is a strong statement coming from the Administrator’s very own science advisors, 18 of whom were hand-selected by Pruitt himself.

The letter calls out the agency for not including agency and outside scientists in the development of the proposal, writing that “the precise design of the proposed rule appears to have been developed without a public process for soliciting input from the scientific community.” The result is plain: many considerations around making sensitive, confidential data public were simply ignored in the rulemaking.

Among the issues the SAB intends to explore in its review of the rule are:

  • How data restrictions could have impacts on regulatory programs at the agency, thus affecting regulatory costs and benefits with long-term implications.
  • How much of the confidential human subject data cannot, and should not, ever be made public for legal and ethical reasons.
  • How reanalyses of data, like the Harvard Six Cities study, can be done rigorously without public access to data and models.
  • How expert panels are already vetting science at the EPA without reanalyzing the original data and methods.

Emails we obtained through FOIA revealed that political appointees, not scientists, crafted this policy. It serves no scientific purpose, undermines the EPA’s work, and has drawn wide condemnation from scientists, which is why it needs further scrutiny by the SAB to determine what its impacts on the agency and on the public would be.

It’s important for EPA to have this kind of advice, especially on a rule with such extreme ramifications for the way the agency will be able to consider the best available science. Attempts to politicize, weaken, or simply ignore the SAB and other advisory committees under this administration jeopardize important dialogues like this one, which is why we’ve been monitoring and pushing back against these types of attacks.

We expect Administrator Pruitt to take this formal call for review seriously and defer agency action on the rule until SAB review is complete and EPA has the chance to review and respond to its recommendations. He should thus act immediately to call on his advisors and seek the input of the scientific community and the greater public that has so far been absent from the EPA’s process for this rule.

Rapid Warming is Creating a Crisis for Arctic Archaeology

An old whaling site on Svalbard, Norway. Photo: Adam Markham

There are at least 180,000 archaeological sites in the Arctic. Many are already being lost to climate change, and virtually all of them are vulnerable. A new study by an international group of archaeologists and experts (including from the National Park Service and UCS), published in the journal Antiquity, provides the first synthesis of climate threats to the Arctic region’s unique archaeological record. The cold and wet conditions in the Arctic have resulted in extraordinary preservation of organic materials such as bone, fabrics, animal skins, and wooden tools for hundreds or thousands of years. But the Arctic is warming twice as fast as the global average, and the changing conditions are proving disastrous for many archaeological sites.

Jørgen Hollesen, lead author of the new study and an archaeologist at the National Museum of Denmark, has demonstrated at Qajaa in West Greenland that warming soil temperatures and changes in soil moisture are accelerating microbial decay of organic archaeological materials. Also according to Hollesen, at some Thule Culture grave sites in southern Greenland, where organic remains including mummies, kayaks, and hunting implements were present as late as the 1970s, recent field work has revealed that little or no organic material remains.

Coastal erosion is washing away our heritage

Perhaps the most urgent issue in Arctic archaeology is coastal erosion. Permafrost thaw, changes in the freeze/thaw cycle, and wave action during storms are combining to accelerate erosion processes. In some parts of the Arctic, the loss of the seasonal sea ice that protects the coastline from winter storms is also a major factor.

On Alaska’s North Slope, co-author Anne Jensen is engaged in a major rescue effort at Walakpa, studying and documenting the archaeology of land occupied by semi-sedentary Alaska Natives for at least 4,000 years that is now eroding at an alarming rate, taking structures, artifacts, and graves with it. Severe erosion is also wiping out archaeological sites on the East Siberian Sea coast and in northwestern Canada, where the most important sites of the aboriginal inhabitants, the Inuvialuit, are endangered. “We’re losing the history of large areas of Canada,” study co-author Max Friesen of the University of Toronto told the Globe and Mail. The site of Nuvugaq on the Mackenzie River delta, for example, where 17 large houses and a communal structure used by an Inuit bowhead hunting group known as the Nuvugarmiut were first reported by the Franklin Expedition in 1826, has already been completely washed away by thawing permafrost and storms.

A 2016 photo of the remains of a large Inuvialuit house on the Tuktoyaktuk Peninsula on Canada’s Beaufort Sea coast, which has since been completely washed away. Photo: Max Friesen

Loss of sea ice, tundra fires and uncontrolled development

Also directly threatening archaeological sites in the Arctic are worsening tundra fires and the spread of shrubby vegetation as temperatures warm. Additionally, loss of sea ice in the Arctic is opening the region to more shipping traffic, military activity and industrial and urban development. It is also enabling increased tourism, including on larger cruise ships. The potential for uncontrolled tourism development causing damage to archaeology in a warming Arctic is very real. Tour companies will likely seek new landing areas for small boats carrying more visitors into fragile areas in the high Arctic, and in parts of the region there is expected to be increased pressure from tourists walking on sites, camping and using motorized vehicles.

Treasure hunting and looting of archaeological sites are also becoming a more serious problem with warming. Co-author Vladimir Pitulko of the Russian Academy of Sciences has documented “mining” of mammoth ivory at important “kill sites” in Siberia, where poachers use high-pressure pumps to extract ivory from the thawing ground to sell on the black market. The increased number of tourists in the Arctic means that more people are able to casually pick up and keep (often illegally) artifacts they find eroded from coastal sites or melting ice patches and glaciers. And increased storm damage and erosion mean that more artifacts are emerging.

A rapid assessment is needed to prioritize actions

In the face of unprecedented changes to the Arctic environment, the study authors argue that there is an urgent need to rapidly assess the vulnerability of key Arctic archaeological sites and develop strategies for prioritizing the use of scarce resources most effectively. With every storm, important archaeological remains are being washed into the ocean, whilst throughout the region organic materials are being rapidly lost to decay in warming soils after being preserved for centuries. Undoubtedly the assessment that there are 180,000 archaeological sites in the Arctic is an underestimate, and many important sites are likely to be lost or damaged before they have even been recorded. The impact of climate change on Arctic archaeology represents a catastrophe for world heritage, and one that requires urgent mitigation and adaptation action to respond to the scale of the crisis.

Ocean Conservation Is Still Significantly at Risk Despite Backtrack on NOAA Mission Change

Aquaculture pens off the coast of Maine. Photo: NOAA National Ocean Service

Last week, following press attention to a presentation on new directions for the National Oceanic and Atmospheric Administration (NOAA), the agency’s acting administrator, Adm. Tim Gallaudet, quickly backtracked and stated that the mission would not fundamentally change.

That’s a good thing, but there are other signals coming out of the Trump administration that point to a real change in priorities at NOAA and other agencies; I pointed out some of those earlier this week. Here I want to turn to some of the specific priorities the Admiral discussed in his presentation. One of the highlighted strategic priorities relates to reducing our trade deficit in seafood. The President has spoken of trade deficits as if the US loses money whenever we run one, but most economists don’t see it that way. The US imports about 90% of the seafood we consume, so we do have a substantial trade deficit in seafood. On the other hand, American businesses from retailers to restaurants make a lot of money from seafood products, well beyond the imported value. They are able to market a much wider range of products, at a range of prices, for a food source that generally contributes to a healthy diet. And the production of US seafood has made huge strides toward sustainability. That progress was hard fought and hard won by the industry and by government acting in the public interest.

So what are the priorities for the Trump administration? Here is the slide the Admiral presented. As someone who has spent much of my career in ocean science and fisheries management, I find several things jump out at me.

The first is “Permit Fishing in Marine Monuments” within 90 days. That will do absolutely nothing to address the seafood trade deficit, because it will literally allow access for just 11 boats across the two monuments, but it will substantially undermine conservation, because those few boats can do a lot of damage to fragile reefs and seamounts. It will also send a broad, negative signal to the rest of the world that the US is backing away from protecting marine ecosystems, particularly in offshore areas. And that will happen just as the rest of the world is finally beginning to negotiate a new agreement for the conservation of the high seas.

Secondly, and no less important, the slide says that NOAA will reach 50 deregulatory actions in the next 30 days. Since virtually all of NOAA’s regulatory work concerns fisheries, marine mammals, and endangered species, this says to me that the agency will be in rapid retreat on actions that conserve marine ecosystems. I was formerly the Deputy Director of NOAA Fisheries. I know for a fact that there is not some large number of useless regulations just lying around. I also know that the only way for fishing businesses and communities to be successful is if overfishing is ended and the ocean is healthy. So deregulation means we will stop protecting marine resources. Greater exploitation will result, and less sustainable oceans. There really isn’t another way to interpret this.

Then there are some truly puzzling items on the list, such as “Propose Vessel Financing Rule”. If that means reducing subsidies for fishing capacity, that’s good. If it means re-introducing federal financing, tax breaks and loan guarantees for new fishing vessels it is exactly the wrong way to go. The taxpayers have spent millions to reduce fishing capacity in an effort to end overfishing and recover depleted stocks. I sincerely hope we are not going to reverse that direction now.

And then, there are several potentially concerning points related to expanding marine fish farming, or aquaculture. Release a plan, host a summit, provide grants: all of that seems OK, but the goal on subsequent slides is to increase aquaculture production three-fold in ten years. On its face that may be fine, but it depends on how, where, and when; the slides don’t say. Aquaculture may be a good way to produce more seafood, but it requires large inputs of wild-caught fish to feed the farmed fish—and that fish meal is largely imported. Various forms of cage culture also require exclusive use of ocean space, which creates its own challenges in resolving competing uses, because the US ocean is a busy place with lots of different users. And finally, it requires strict care to ensure that issues of contamination of wild stocks, disease, waste, chemical use, and others are dealt with up front to ensure sustainability. As with all animal culture, there are challenges. Is NOAA going to address those? In just 180 days?

There are other priorities that seem to suggest that NOAA will reduce protections for endangered species (“ESA streamlining rule”) and protections for marine mammals in the Gulf of Mexico (“Marine Mammal Protection Rule for Gulf Energy Development”) that also worry me greatly.

Ocean conservation, like democracy, cannot be taken for granted; you have to work at it. We need to watch how things develop at NOAA very closely and be ready to raise our voices again if the conservation part of the NOAA mission gets short shrift. Congress needs to demand answers about the real direction of NOAA programs and how the money it has appropriated is being spent. Scientists need to scrutinize the proposals and how well they are supported by scientific evidence. And concerned people everywhere should continue to speak out, in their communities and to their elected officials as well as NOAA regional officials, about the need to safeguard the public interest. That’s NOAA’s job.

Photo: NOAA National Ocean Service

Six Key Facts Ignored in Dismissal of California Climate Suits vs. Fossil Fuel Companies

Photo: Alan Grinsberg/Flickr

This week, U.S. District Judge William Alsup dismissed lawsuits by San Francisco and Oakland seeking to hold fossil fuel companies accountable for their contributions to climate change. Judge Alsup’s ruling dangerously rested on balancing climate harms with fossil energy benefits, deferred to legislative- and executive-branch solutions that major fossil fuel companies have spent millions opposing, seriously underplayed the role of ExxonMobil and others in spreading disinformation about climate science and policy, and punted on the question of who should pay for climate damages.

With apologies to Harper’s Index, here’s a Fossil Fuel Company Climate Liability Index, followed by six key facts illuminated by these numbers.

FACT: Fossil fuel companies are the main beneficiaries of our fossil-fueled energy system and economy

The benefits of fossil fuels have not accrued equitably, and the fossil fuel industry has compounded its advantage by externalizing its costs. Major fossil fuel companies profit handsomely from an energy system dependent on their products. The five defendants netted more than $40 billion in 2017 alone. Maintaining the status quo may be in the fossil fuel industry’s interest, but a low-carbon pathway is necessary to protect the climate, and renewable energy is increasingly competitive. Judge Alsup’s ruling presented a false choice between climate action and energy access, echoing the rhetoric of defendants ExxonMobil and Chevron that fossil fuels are necessary to support economic growth, health, and education in low-income countries.

FACT: Fossil fuel companies exercise undue political influence to block policy solutions

The defendant companies have invested some of their enormous profits in political contributions, direct lobbying, and indirect lobbying through such trade associations and industry groups as the American Petroleum Institute (API) and the National Association of Manufacturers (NAM). NAM’s ironically named Manufacturers’ Accountability Project (MAP), the fossil fuel industry’s attack dog on climate liability lawsuits, did a happy dance about the dismissal of the suit.

Thus, the judge’s conclusion that climate change is a matter for the legislature and the executive branch is a Catch-22 that makes my brain hurt. Yes, Congress and the White House should take decisive action to curb climate change, but the fossil fuel industry has pulled out all the stops in an effort to block strong federal policies. The industry has friends in high places in the Trump administration: Environmental Protection Agency Administrator Scott Pruitt, Energy Secretary Rick Perry, Interior Secretary Ryan Zinke, to name just a few. Just this week, Buzzfeed broke the news that Pruitt urged fossil fuel executives to apply for EPA regional administrator positions.

The defendant companies are not just responding to consumer demand. They do what they can to fix the market through undue political influence, which has forestalled the development and availability of renewable energy. And the duty of the legislative and executive branches to act does not absolve the judicial branch of responsibility.

FACT: Fossil fuel companies are investing significantly more in oil and gas exploration than in R&D for clean energy

Investments by ExxonMobil, Chevron and other oil and gas companies in low-carbon research and development are a drop in the bucket compared to their spending on oil and gas exploration and infrastructure. Even a former ExxonMobil engineer has argued that what is needed is research focused on how to supply affordable low-carbon energy while also reducing fossil fuel demand. ‘Nuff said.

FACT: Burning fossil fuels is the single most significant contributor to global warming, and scientists can increasingly quantify how much

Judge Alsup earned praise for acknowledging that the magnitude of the climate change problem is vast and urgent. Yet his description of carbon dioxide as “a gas produced by, among other things, animal and human respiration, volcanoes and, more significantly here, combustion of fossil fuels like oil and natural gas” is more than a little misleading, given that the burning of fossil fuels is the most significant contributor to global warming by far.

Scientists’ ability to quantify the damage due specifically to human-caused climate change is growing quickly. A UCS-led study published last September in the scientific journal Climatic Change for the first time links global climate changes, including the sea level rise at issue in the San Francisco and Oakland lawsuits, to the product-related emissions of specific fossil fuel producers, including the defendants. Importantly, the study also quantified the climate change impacts of emissions traced to these companies’ products from 1980 to 2010, when these major fossil fuel companies knew the risks of burning fossil fuels and not only failed to take steps to reduce those risks but also funded a concerted campaign to deceive the public and block action.

The top five investor-owned companies ranked in terms of cumulative emissions — Chevron, ExxonMobil, BP, Shell, and ConocoPhillips — are responsible for one-eighth (12.5%) of all industrial carbon emissions from 1854 to 2010.

FACT: Taxpayers are already footing the bill for climate damages

According to a recent analysis by my UCS colleagues, accelerating sea level rise in the lower 48 states is putting as many as 311,000 coastal homes, with a collective market value of about $117.5 billion today, at risk of chronic flooding within the next 30 years, the lifespan of a typical mortgage. Chronic property flooding could translate not just into eroding property values, but also into unlivable houses and falling tax revenues that fund local schools, roads and emergency services. The properties at risk by 2045 currently house roughly 550,000 people and contribute nearly $1.5 billion toward today’s property tax base.

The contribution of any single fossil fuel company to a climate impact such as sea level rise may appear small, but the costs of dealing with and preparing for these impacts are enormous and mounting—and taxpayers alone are currently on the hook.

A few inches of sea level rise might not seem dramatic, but it could be the difference between a minor event and a human and financial catastrophe. For example, scientists have found that sea level rise contributed an additional $2 billion in damage to the havoc wrought by Hurricane Sandy in New York City.

FACT: Lawsuits and paying fines aren’t the only way that fossil fuel companies can be held accountable for making climate change worse

It is critical that San Francisco, Oakland and other communities can seek compensation through our courts for the costs of climate-related damages and preparedness. And as tobacco litigation has shown, ensuring that companies pay their fair share of the costs imposed on society by their products is not the only remedy to be secured through judicial action. To be sure, the settlements of state lawsuits against Big Tobacco included billions of dollars in payments to cover health care costs. But the settlements also ordered the public release of millions of pages of previously secret internal documents; brought an end to certain advertising, promotion and marketing practices; and shut down the tobacco industry’s lobbying and junk science shops forever. The reduction in the tobacco industry’s political and economic influence helped clear the way for policies such as the World Health Organization Framework Convention on Tobacco Control and US Food and Drug Administration tobacco regulations.

Judge Alsup’s dismissal of the San Francisco and Oakland lawsuits is a setback, but the cities can appeal—and their city attorneys have signaled that the case is not over. Likewise, lawsuits by several other communities in California, Colorado, New York and Washington state are now working their way through the courts. The public demand for major fossil fuel companies to be held accountable for damage they knew their products were causing will only grow louder as climate impacts get worse.

Uniting Young Scientists: Building a National Network for Grassroots Science Policy

According to a 2014 study by the American Institutes for Research, less than half of STEM Ph.D. graduates are employed in academic careers. Unfortunately, because we pursue our degrees within academia, it is difficult to identify mentors, expand networks, or practice skills for a non-academic career during graduate school. This challenge has been recognized by the National Academies of Sciences, Engineering, and Medicine (NASEM) in their recent report, which calls for a broad range of changes in the graduate education enterprise to make the system more student-centric and better prepare students for careers that address global societal needs.

Thankfully, many early career scientists are already taking matters into their own hands. Students and postdocs are independently questioning how best to use their critical thinking skills in the real world, which should come as no surprise. Having recently dedicated ourselves to answering hard questions in science, it often feels like our duty to tackle the dearth of evidence-based policymaking that increasingly plagues our country.

In search of sustainability and support

As one of these doctoral students in pursuit of a non-academic career path, I have found the grassroots support for science communication, advocacy, and policy training to be inspiring and ever-expanding. A nationwide survey that we conducted found that of the 22 early-career science policy groups surveyed, 45% have started in the past year and a half. However, many of these groups run on the sheer willpower of their membership. Composed mostly of graduate students, 60% of these groups operate on meager annual budgets of $1,200 or less.

This is especially disappointing considering that there is significant public support for this: a Research!America survey showed that 84% of Americans believe that it is important for scientists to inform the public and policymakers about their research and its impact on society.

These student groups are essential for supporting and promoting graduate student engagement in science policy and advocacy within their communities, and are supplemented by national organizations such as the Union of Concerned Scientists and the American Association for the Advancement of Science. However, the NASEM report points out the challenges that graduate students continue to face in an uphill battle against an academic culture that lacks incentives for science advocacy and civic engagement. Research productivity and peer-reviewed publications remain the sole metrics of traditional academic success, creating reward systems that do not adequately prepare STEM graduate students to translate their knowledge into impact across an increasingly broad range of career paths.

Introducing the National Science Policy Network

On June 18, the National Science Policy Network (NSPN) was officially launched as a national network of science policy groups led by early career scientists. Our work focuses on providing training and resources that strengthen this burgeoning science policy-community and foster a network of engaged young scientists and engineers. We will be providing microgrants to support underfunded groups, collaborating with Research!America on a nonpartisan midterm election initiative, and hosting a science policy symposium in New York City this fall.

In just one week, NSPN has attracted over 100 subscribers, representing 50 different universities nationwide within the Western, Central, Eastern, and Southern Hubs. As we continue to grow, we aim to be a grassroots advocacy network for scientific expertise, critical thinking, and data-based decision-making that supports graduate student efforts to translate science and engineering from their laboratories to government.

In the current political climate, translating and amplifying the voice of science is more important than ever, but most academics remain isolated in their ivory towers. Scientific leadership’s reluctance to address internal cultural problems is not new, but recent threats to restrict the role of science in democracy have catalyzed change. This vacuum of support is being filled by local groups of scientists nationwide who are taking on the work themselves, and NSPN is here to help.

Join us at scipolnetwork.org.


Holly Mayton (@hollindaze) is a Ph.D. candidate in Chemical and Environmental Engineering with a Designated Emphasis in Public Policy at the University of California, Riverside, and is currently serving as a National Chair of the National Science Policy Network. Locally, she is helping create the Science to Policy program at UC Riverside and has been involved in the UC Global Food Initiative, the California Agriculture and Food Enterprise, several California state advisory committees on environmental science and public outreach, and the California Council on Science and Technology. Holly is broadly passionate about connecting food and water science to policy and advocacy outcomes, from the local to the international level.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.


Judge Should Not Have Deferred to Congress, Executive Branch in Fossil-Fuel Climate Case

U.S. Army photo by Michael J. Nevins

On Monday, a federal judge dismissed a lawsuit by San Francisco and Oakland against the five biggest privately owned oil companies for climate change-related damages. Why? He believes the problem is too big to be decided by the federal courts and that Congress and the administration should take care of it.

Fat chance of that happening anytime soon, and the courts are at least partly to blame.

In his ruling, US District Judge William Alsup agreed with the plaintiffs that there is a “vast [scientific] consensus that the combustion of fossil fuels has … materially increased carbon dioxide levels,” which has driven up average global temperatures and raised sea levels. Likewise, he noted that the oil companies “have allegedly long known the threat fossil fuels pose to the global climate,” but nonetheless funded public relations campaigns that “downplayed the risks” and disparaged climate scientists.

At the same time, however, Alsup insisted that environmental harms attributed to burning fossil fuels have to be balanced with the fact that “the industrial revolution and the development of our modern world has literally been fueled by oil and coal.”

“Having reaped the benefit of that historic progress,” he wrote, “would it really be fair to now ignore our own responsibility in the use of fossil fuels and place the blame for global warming on those who supplied what we demanded?”

The answer to the second part of the question is emphatically yes (and it doesn’t require ignoring our own responsibility).

The oil companies knew

Alsup is of course correct that industrialization would not have happened without fossil fuels. But he neglects to take into account the pernicious role the defendants—BP, Chevron, ConocoPhillips, ExxonMobil and Royal Dutch Shell—have played to block government action to curb carbon emissions over the last three decades. If the United States and other industrialized nations had begun the necessary transition to low- and no-carbon energy back then, the likely consequences of climate change would be significantly less dire.

Rising sea levels alone will wreak havoc along the California coast. San Francisco, Oakland and six other California jurisdictions that have filed similar climate lawsuits can expect accelerating sea level rise to threaten some 8,800 homes by 2045, representing $76 million annually in today’s local property taxes, according to a recent analysis by the Union of Concerned Scientists. By the end of the century, some 52,000 homes that currently contribute $435 million in annual property taxes will be at risk.

As Alsup pointed out in his ruling, the alarm bells about climate change began ringing in the late 1980s. Thirty years ago—on June 23, 1988, to be precise—NASA scientist James Hansen generated front page news when he warned Congress about higher temperatures and rising seas. That same year, the United Nations convened the Intergovernmental Panel on Climate Change (IPCC).

A year later, 50 US corporations and trade groups founded the Global Climate Coalition (GCC) to discredit climate science. Its charter members included none other than British Petroleum (now BP), Chevron, Exxon, Mobil and Shell.

Until it disbanded in 2002, the GCC conducted a multimillion-dollar lobbying and public relations campaign to undermine national and international efforts to address global warming. One of its fact sheets for legislators and journalists encapsulated its main talking points, disingenuously claiming that “the role of greenhouse gases in climate change is not well understood” and that “scientists differ” on the issue.

Thanks to a leaked internal GCC memo from 1995, we now know that the coalition’s own scientific and technical experts were telling its members that greenhouse gases were indeed causing global warming. “The scientific basis for the Greenhouse Effect and the potential impact of human emissions of greenhouse gases such as CO2 on climate is well established,” the document stated, “and cannot be denied.”

Exxon scientists, meanwhile, were aware of the threat posed by fossil fuels as early as 1977, according to a 2015 investigation by InsideClimate News. Nevertheless, the company purposely chose to emphasize “uncertainty” and, since it merged with Mobil in 1999, it has spent tens of millions of dollars on a climate disinformation campaign that continues to this day.

Courts need to take responsibility

Alsup concluded that the courts are not the proper venue to address climate damages. Given that the US Supreme Court has ruled that the Environmental Protection Agency has the authority to regulate greenhouse gas emissions under the Clean Air Act, Alsup contends the issue is best left to Congress and the administration to handle.

Alsup’s conclusion presents us with a Catch-22. Kicking any decision about curbing global warming emissions to the political branches of government ignores the fact that both Congress and the current administration are tightly tied to the coal, oil and gas industries. And that hand-in-glove relationship is largely due to questionable Supreme Court decisions.

The genesis of our predicament can be traced back to the early 1800s. Since then, the Supreme Court has issued a series of rulings that have granted corporations the same rights as people. More recently, in 1976, it ruled that limits on campaign spending violate the First Amendment, essentially equating money with free speech. And in the 2010 Citizens United case, the court ruled that the government cannot limit a corporation’s independent political expenditures.

These decisions have enabled the fossil fuel industry to exert undue influence over federal energy policy. Not only have coal, oil and gas companies collectively spent tens—if not hundreds—of millions of dollars over the past few decades to manufacture doubt about the reality and seriousness of climate change, they have spent considerably more on campaign contributions and lobbying to stymie efforts on Capitol Hill to combat climate change.

In the 2015-16 election cycle alone, for example, the five defendants in the San Francisco-Oakland climate case together spent $9.8 million on federal candidates and another $58.3 million to lobby Congress and the administration, according to government data collected by the Center for Responsive Politics.

Our three-branch system of government ostensibly rests on the concept of checks and balances. When Congress and the executive branch are hopelessly corrupted by petrodollars, it is incumbent upon the judiciary to compensate for this imbalance, which utterly fails to serve the public interest.

Fortunately, Judge Alsup’s ruling is not the last word. Similar climate-damage lawsuits have been filed by cities and counties in California, Colorado, New York and Washington state.

A recent press statement by Union of Concerned Scientists President Ken Kimmell puts these lawsuits into perspective.

“In almost all large-impact litigation, the courtroom doors are usually shut in the beginning, but if plaintiffs are persistent and keep knocking, the doors will open up,” said Kimmell, an attorney and former head of the Massachusetts Department of Environmental Protection. “This was true in the fights against Jim Crow and Big Tobacco, and we expect that the same tenacity will be necessary to overcome the entrenched political and economic influence of this deep-pocketed industry.”

The EPA SAB Agreed to Tell Pruitt that EPA’s Restricted Science Rule is Problematic, But Where’s the Follow-up?

The EPA’s Science Advisory Board (SAB) met earlier this month in DC to discuss a range of issues, perhaps most prominently whether any of Pruitt’s deregulatory actions from 2017 had scientific issues warranting SAB review. Also on the agenda was whether the SAB should have a chance to review the merits of the EPA’s restricted science proposal before it moved any further in the rulemaking process. At the end of the meeting, the SAB members agreed (almost unanimously) to write to Pruitt and tell him that they did indeed want to review five spring and fall 2017 regulatory agenda items, including the glider truck rule and the Clean Power Plan, as well as the restricted science proposal.

It appears that the SAB has written two letters to the administrator, sent on Thursday, June 21. These two letters cover the spring and fall deregulatory agendas and closely track what the committee discussed at its meeting. Almost one week later, notably absent from the letters is anything related to the lengthy discussions of Pruitt’s restricted science proposal.

There was some disagreement about exactly what this letter should recommend, with more seasoned SAB members arguing that the EPA should defer all action on the rule until the SAB reviews it, and several Pruitt-appointed members pushing for review in tandem with the normal comment period process. Perhaps the letter is delayed because those discussions are still being hashed out over phone calls and emails. But every moment that the SAB is not weighing in on this proposal is a missed opportunity, as the public comment window shrinks and the regulatory finish line approaches. The feedback that the SAB could be providing the EPA would touch upon issues like:

  • How data restrictions could have impacts on regulatory programs at the agency, thus affecting regulatory costs and benefits with long-term implications.
  • How much of the confidential human subject data cannot, and should not ever, be made public for legal and ethical reasons.
  • How reanalyses of data, like the Harvard Six Cities study, can be done rigorously without public access to data and models.
  • How advisory committees like the SAB and the EPA’s Clean Air Scientific Advisory Committee are already set up to do much of the peer review of studies that EPA might use in regulatory decisions.

The EPA has already skirted all input from the scientific community in crafting this harebrained proposal, which is why SAB review as soon as possible is imperative. And the first step is getting Administrator Pruitt to charge them with conducting that review, by asking through this letter that has yet to be sent.

While we wait to see what the SAB officially asks of Administrator Pruitt on this topic, you can join the scientific community and other members of the public who will be standing up for science at the public hearing in DC on July 17 by signing up here, or by submitting a comment to the EPA by August 17 here.

Midwest Transmission Operator Planning for a High-Renewables Future

Driven by clean energy policies, customer demand, and simple economics, renewable energy technologies are becoming the dominant part of our energy future. Studies consistently show that wind and solar technologies could produce far more electricity than we currently demand, but questions loom about the transmission system’s ability to enable this transition to clean energy and maximize its potential benefits.

A new study undertaken by the regional transmission operator serving much of the central United States is seeking answers to some of these questions. But navigating the complexity and uncertainty inherent in planning our electricity future is a daunting task.

The Midcontinent Independent System Operator (MISO) is a federally authorized regional transmission organization charged with maintaining reliability and operating wholesale energy markets across much of central North America. MISO has initiated a study and stakeholder process—named the Renewable Integration Impact Assessment (RIIA)—to evaluate how the current transmission system responds to increasing levels of renewable energy. MISO’s approach searches for the “inflection points” at which operating the system reliably becomes significantly more complex. If these inflection points can be identified, they can inform MISO of both when and what investments or operational changes may be necessary to maintain reliability while enabling increasing levels of renewable energy.

A unique approach to answering a common question

MISO’s RIIA study takes a different approach to exploring our clean energy future. It does not explore several issues we typically see from renewable energy studies, such as what the “optimal” mix of resources is, what the costs and benefits are of the clean energy transition, or what kind of new policies or regulations should be enacted to achieve high levels of renewable energy faster or more equitably. Many renewable energy studies also look 20 or more years into the future premised on assumptions about policies that may be enacted or how the cost and performance of various technologies may change over time. Given the uncertainty inherent in predicting the future, MISO’s study design excludes many of these typical approaches.

MISO’s RIIA study is specifically designed to minimize the uncertainty (and stakeholder disagreements) over what the future holds. The study makes no assumptions about future policy or regulatory changes. Nor will it evaluate the costs or benefits of this transition. MISO can’t eliminate all the uncertainty from this study (more on that below), but this approach helps maintain focus on identifying when investments in the transmission system (or changes to how we operate that system) may be necessary as renewable energy grows.

The figure below provides a synopsis of MISO’s proposed methodology.

MISO’s methodology layers increasing levels of renewable energy onto the current transmission system to find “inflection points” where maintaining reliability becomes increasingly complex. At those points, MISO will explore solutions that allow increasing levels of renewable energy. Source: MISO

As the figure above shows, MISO will be seeking out “inflection points” when the complexity of maintaining reliability across the system increases due to the level of renewable energy connected to the grid.

An example of an inflection point is when the system experiences significant congestion that makes it difficult to move energy from where it is generated (for example, the wind-rich areas of Iowa or the solar-rich areas of Louisiana) to where it is needed. Another may be when there is enough solar on the system to require increased flexibility during evening hours, as solar systems go offline and other resources need to ramp up. These are worthwhile questions to be asking (and seeking solutions to) now rather than waiting for issues to arise.
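
To make the idea concrete, here is a toy numerical sketch of what “searching for an inflection point” can look like. It is purely illustrative, with invented numbers and an arbitrary threshold; it is not MISO’s actual RIIA methodology, in which complexity is assessed through many distinct engineering analyses rather than a single index.

    # Toy example: flag the renewable penetration level at which a made-up
    # "reliability complexity" index starts compounding instead of growing
    # steadily. All numbers here are invented for illustration.

    penetration = [10, 20, 30, 40, 50, 60]        # percent renewable energy
    complexity = [1.0, 1.1, 1.3, 1.6, 2.4, 3.9]   # hypothetical complexity index

    # Second differences approximate curvature; a jump suggests reliability
    # challenges are compounding rather than climbing steadily.
    second_diffs = [
        (complexity[i + 1] - complexity[i]) - (complexity[i] - complexity[i - 1])
        for i in range(1, len(complexity) - 1)
    ]

    THRESHOLD = 0.3  # arbitrary cutoff for this illustration
    for level, curvature in zip(penetration[1:-1], second_diffs):
        if curvature > THRESHOLD:
            print(f"Possible inflection point near {level}% renewable penetration")
            break

With these made-up numbers, the sketch flags the 40 percent level, where the complexity index stops climbing steadily and starts compounding.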

Seeking insight in a time of rapid change

MISO’s RIIA study responds to growing recognition that the current pace of renewable energy development will drive rapid and unprecedented change across the electric system. The figure below shows how wind and solar resources have come to dominate MISO’s interconnection queue—the backlog of electricity generation projects waiting to be approved to connect to the grid.

Across the MISO system, more than 86 percent of new resources looking to connect – nearly 80,000 megawatts of new capacity (bottom right) – are wind and solar resources. Understanding how this will affect the flow of energy across the system, and how to maintain reliability under these dynamics, are the central questions being explored in MISO’s Renewable Integration Impact Assessment study. Source: MISO

Not all projects in MISO’s interconnection queue will get built, but it serves as a strong indicator of looming changes to the system that MISO must prepare for. The RIIA study will inform how MISO maintains reliability in the face of this rapidly changing portfolio of energy resources.

Can the RIIA study overcome uncertainty to be useful to near-term planning?

MISO has already made some decisions—such as not trying to project the future cost of resources or future policy and regulatory conditions—to help minimize the RIIA study’s uncertainty and focus on impacts to the current system. This will help clarify the results and identify near-term next steps. However, some critical educated guesswork about the future is still needed and MISO must be responsive to real-world changes that occur during the study process.

For example, when MISO began this process just over a year ago, the mix of wind and solar resources being developed was significantly different from what it is today, as solar continues to improve in both availability and cost-effectiveness across MISO’s system.

The mix of resources being developed across the MISO system is changing rapidly. Just four years ago, solar was relatively non-existent. Today it makes up more than half of all resources moving into the interconnection process. Being responsive to ongoing changes is critical to the RIIA study’s usefulness. Source: MISO

The figure above shows how the mix of resources being developed across the MISO system is changing, and highlights the need to be responsive to these changes. While more than half of all new resources entering the queue in 2018 are solar resources, MISO’s current assumption about the ratio of wind to solar in its RIIA study is 75 percent wind and 25 percent solar, based on data from just one year ago. This latest data raises the question of whether MISO should update its assumptions regarding the ratio of wind and solar in the renewable energy portfolios that it’s examining.

Another big uncertainty is where these renewable resources will ultimately be developed. While this can also be informed by the projects in the queue, keeping an eye out for significant shifts in expectations—and adjusting the study process accordingly—will be important.

Departures between study assumptions and the reality we’re experiencing today threaten to undermine the relevance of the RIIA study. Conversely, continuously reacting to the myriad changes that can occur across the system threatens the ability to complete the study in a timely manner, if at all. It’s a difficult balance to achieve. Being diligent and collaborating closely with stakeholders (including renewable energy developers) is crucial for ensuring robust analytics and a clear understanding of what the results can tell us.

In all, MISO’s effort to plan for expected system changes in collaboration with stakeholders is a good thing. Our clean energy future depends on the ability to accommodate increasing levels of renewable energy without threatening reliability or incurring excessive cost. The RIIA study is a step in that direction—one that will help all of us keep pace with the energy evolution going on around us.

Clean and Modern Transportation in Maryland: Wishful Thinking or a Possibility?

Photo: Famartin/Wikimedia Commons

The Maryland transportation system faces a myriad of challenges: poor air quality, rising global warming emissions, and a crumbling transportation infrastructure, to name a few. To address these issues, the state is considering strategies that would lower transportation-related emissions, bring in funding and enable the state to build a modern, clean and equitable transportation system.

Why does Maryland need to invest in a cleaner, more modern transportation system?

Transportation is the largest source of CO2 pollution in the state, responsible for almost half of statewide emissions from fossil fuel combustion. The state cannot achieve the long-term goals of the Greenhouse Gas Reduction Act (GGRA) without making significant reductions in emissions from transportation.

Figure 1 – Maryland CO2 emissions from fossil-fuel combustion for all sectors of economy, 1990-2015

Transportation is also a leading source of local air pollution, which has been shown to be the main cause of over 3,000 asthma attacks, 500 preventable deaths and $1.8 billion in combined health costs annually in the state. Communities surrounding the Port of Baltimore, such as Curtis Bay, are particularly vulnerable to the impact of transportation emissions and experience elevated rates of respiratory illness, cancer and heart disease. A study shows that in 2010 Baltimore’s rate of asthma-related hospitalizations was almost three times higher than the U.S. average, and recent data indicates that this trend has not changed.

In addition, climate change is exacerbating Maryland’s vulnerability to extreme weather events, especially along the state’s 3,000 miles of shoreline and in communities prone to flooding. Maryland is one of the states most vulnerable to sea-level rise. Climate change will also compound challenges to Maryland’s existing road and public transportation infrastructure, which already suffers from poor conditions and inadequate funding. One quarter of Maryland’s 32,037 miles of public roads are in poor condition. Creating a clean and modern transportation system is an opportunity to harden our critical infrastructure.

We can do better.

What do we need to do to get there?

The only way to meet the climate target by 2030 is to move away from fossil fuels, which means putting more electric vehicles (EVs) on our roads. A recent study estimates that EVs produce less than half the emissions of a comparable gasoline-powered car, even when the higher emissions associated with EV manufacturing are considered. Where you live determines the emissions from the electricity that powers your EV, but the study shows that in regions covering two-thirds of the U.S. population, driving an EV emits less than a 50-mile-per-gallon gasoline car.
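
For the curious, the arithmetic behind that kind of comparison is straightforward: divide the CO2 released by burning a gallon of gasoline by the car’s fuel economy, and multiply an EV’s electricity use per mile by the carbon intensity of the local grid. Here is a minimal sketch; the grid intensity and EV efficiency values are illustrative assumptions, not figures from the study.

    # Back-of-the-envelope comparison of per-mile CO2 emissions. The grid
    # intensity and EV efficiency below are illustrative assumptions.

    GASOLINE_G_CO2_PER_GALLON = 8_887  # EPA's approximate figure for one gallon burned
    EV_KWH_PER_MILE = 0.30             # assumed efficiency of a typical EV
    GRID_G_CO2_PER_KWH = 450           # assumed carbon intensity of a regional grid

    def gasoline_g_per_mile(mpg: float) -> float:
        """Tailpipe CO2 per mile for a gasoline car with the given fuel economy."""
        return GASOLINE_G_CO2_PER_GALLON / mpg

    def ev_g_per_mile() -> float:
        """Upstream CO2 per mile for an EV charged on the assumed grid."""
        return EV_KWH_PER_MILE * GRID_G_CO2_PER_KWH

    print(f"50-mpg gasoline car: {gasoline_g_per_mile(50):.0f} g CO2/mile")
    print(f"EV on assumed grid:  {ev_g_per_mile():.0f} g CO2/mile")

On these assumed inputs the EV comes out around 135 grams per mile versus roughly 178 for the 50-mpg car, and on a cleaner grid the EV’s advantage only grows.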

Electric buses and trucks can help relieve the burden of air pollution from diesel fuel. We can start by electrifying Maryland’s bus fleet, including at least 7,000 diesel school buses as well as light trucks, so pedestrians, bikers, 623,000 school children and people who live in low-income communities near highways will breathe cleaner air.

With electrification, more of the dollars spent on energy resources will remain in the region, helping to create jobs. While much of the state’s electricity is still produced from fossil fuels, the cost per mile is much lower for EVs, and Maryland’s commitment to increasing renewable power means the share of fossil fuels used in the state will fall over time.

Not just that, but electrification will save drivers money on fuel and will insulate them from the fluctuating price of gasoline. In the last decade, the price of gasoline in the state has fluctuated between a low of $1.50 per gallon and a high of $4.10 per gallon. An almost threefold swing in a household’s spending on gasoline is especially burdensome for low- and middle-income families. A study shows that for the U.S., the cost of electricity to refuel an EV using the standard rate plan is often lower than the equivalent cost of gasoline, and is always lower using a time-of-use rate. In Baltimore, the average price of electricity as vehicle fuel is between 75 cents and slightly over one dollar per gallon equivalent, lower than the lowest gasoline price of the last decade. The average fuel savings for a Baltimore EV driver was estimated to be over $600 per year.
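
That “price of electricity as vehicle fuel” figure can be reproduced with simple arithmetic: how much electricity an EV needs to drive as far as a comparable gasoline car travels on one gallon, and what that electricity costs. A minimal sketch follows; the fuel economy, EV efficiency, and electricity rate are all illustrative assumptions.

    # Sketch of a gallon-equivalent electricity price for EV charging.
    # All inputs are illustrative assumptions, not official Maryland figures.

    COMPARABLE_CAR_MPG = 28           # assumed fuel economy of a comparable gasoline car
    EV_KWH_PER_MILE = 0.30            # assumed EV efficiency
    ELECTRICITY_PRICE_PER_KWH = 0.12  # assumed residential rate, in $/kWh

    # Electricity needed to match one gallon's worth of driving, and its cost.
    kwh_per_gallon_equivalent = COMPARABLE_CAR_MPG * EV_KWH_PER_MILE
    cost = kwh_per_gallon_equivalent * ELECTRICITY_PRICE_PER_KWH
    print(f"Electricity as vehicle fuel: ${cost:.2f} per gallon equivalent")

With these inputs the result is about a dollar per gallon equivalent, in line with the Baltimore range cited above; a time-of-use rate would push it lower still.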

It is also important to make investments in public transportation and in affordable housing near public transportation, so people can move around without driving a car, saving them money and easing the burden of traffic for all Maryland residents.

With the right investments, we can have a transportation system in Maryland that is cleaner and more resilient than our current system. A new proposal under consideration in Maryland can help the state fund some of these critical investments and reduce emissions at the same time.

Cap-and-invest

One policy mechanism under consideration for transportation in Maryland and other Northeast and Mid-Atlantic states is known as cap-and-invest. This policy places a limit, or cap, on greenhouse gas emissions from polluters and requires them to purchase allowances – or rights to emit CO2 – from the state, based on how much they pollute. By limiting the number of allowances available, the state guarantees overall emission reductions. The proceeds from the auctions are then invested by the state in clean energy and transportation projects. Cap-and-invest also gives regulated parties an incentive to switch to less polluting products and processes, often minimizing consumer costs while giving those parties the flexibility to comply in the manner that best suits their circumstances.
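
To see how the pieces fit together, here is a minimal sketch of the mechanics: a cap that declines each year, allowances sold at an auction clearing price, and proceeds available for clean investments. The cap level, decline rate, and price are invented for illustration and do not reflect any actual Maryland proposal.

    # Minimal sketch of cap-and-invest mechanics with invented numbers.

    cap_tons = 10_000_000  # year-one cap on CO2 emissions, in tons (hypothetical)
    annual_decline = 0.03  # the cap shrinks 3% per year (hypothetical)
    clearing_price = 5.0   # auction clearing price, in $ per allowance (hypothetical)

    for year in range(1, 6):
        # One allowance is required per ton emitted, so auction proceeds
        # are simply the capped emissions times the clearing price.
        proceeds = cap_tons * clearing_price
        print(f"Year {year}: cap {cap_tons:,.0f} tons -> ${proceeds:,.0f} for clean investments")
        cap_tons *= 1 - annual_decline

The key design point is that the declining cap, not the price, guarantees the emission reductions; the price determines how much revenue is available to invest.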

Cap-and-invest is already working for the power sector. In 2009, Maryland and eight other Northeastern and Mid-Atlantic states collaborated to implement a successful power sector cap-and-invest program known as the Regional Greenhouse Gas Initiative (RGGI). Thanks in part to this program, Maryland’s electricity sector reduced emissions by a third between 2009 and 2015.

Up to 2017, allowance proceeds from RGGI had brought in $2.8 billion for the region. By investing in efficiency, RGGI has contributed significantly to emissions reductions and economic growth while saving consumers money. In 2015 alone, RGGI-funded projects in the region were estimated to return $2.31 billion in lifetime energy bill savings to at least 161,000 homes and 6,000 businesses. In Maryland, by September 2017, RGGI had generated $574 million in cumulative funds, which has allowed the state to make significant investments in emissions reduction, energy efficiency programs, and in reducing electricity bill costs for residents, who have saved an estimated $457 million in lifetime electricity bills.

RGGI cleaned the air in the region. In the nine Northeast and Mid-Atlantic states, RGGI helped avoid up to 830 premature deaths, averted up to 9,900 asthma attacks and saved an estimated $5.7 billion in health costs between 2009 and 2014. Neighboring jurisdictions, such as the District of Columbia, Pennsylvania, Virginia and West Virginia, also saw a decrease in mortality, respiratory and heart diseases. In Pennsylvania, for instance, the value of avoided health effects due to RGGI amounted to anywhere from $800 million to $1.8 billion in the same period.

So far, cap-and-invest covers power plants, but not emissions from transportation. Other jurisdictions, however, including California and Quebec, have expanded cap-and-invest to transportation, resulting in billions in new funding for clean transportation. This year California will spend $695 million on clean vehicle incentives, $1.2 billion on public transportation and over $700 million on affordable housing and sustainable community programs thanks to its cap-and-invest program. It has been estimated that revenue from a transportation cap-and-invest program in Maryland could be as high as $450 million per year.

Though RGGI has been successful in reducing emissions from the electricity sector, the transportation sector has been left trailing behind. In 1990, the state’s power sector was a larger emitter than the transportation sector. But the roles were quickly reversed: by 2015, transportation’s share had gone way up, while the electricity sector’s share had gone way down.

In 2009, Maryland’s General Assembly passed the Greenhouse Gas Reduction Act, which mandates that by 2020 the state must reduce its economy-wide greenhouse gas emissions to a level equivalent to 25% below 2006 emissions levels. In 2016 the GGRA was reauthorized and its goal extended to a 40% reduction by 2030.

Without a cap on emissions from gasoline and diesel, reaching our economy-wide 2030 goal is not likely, regardless of the success of the cap on electricity emissions.

What’s next for transportation cap-and-invest in Maryland?

Discussions on transportation pricing policies are under way in Maryland and other states in the region.

The Transportation & Climate Initiative (TCI), a collaboration of eleven Northeastern and Mid-Atlantic states and the District of Columbia, works to promote clean and efficient transportation in the region while taking into account the importance of individual state priorities. TCI is hosting listening sessions in several states to gather input on potential policy approaches, including cap-and-invest and other strategies, to reduce emissions and fund improvements in the region’s transportation system.

The Maryland Commission on Climate Change (MCCC) advises the Maryland Governor and General Assembly on how to reduce greenhouse gas emissions and adapt to climate change. The Mitigation Working Group of the Commission focuses on market-based and other programs to reduce emissions, and discussions on carbon pricing are under way. The MCCC holds meetings open to the public where time is set aside for public comment. Encouraging state leadership to hold listening sessions is a highly valuable initiative.

The Union of Concerned Scientists works with a broad coalition of community-based partners on emission-reduction strategies and strategies for obtaining funding for a clean, modern and equitable transportation system, and on how to best invest these funds.

State-based collaborative efforts have become imperative in this day and age, and Maryland has a significant role to play in them. The state’s commitment to clean energy and its success in developing a clean power sector have made Maryland one of the most energy-efficient states in the country. This commitment, together with active participation in regional collaboration, is a winning combination. Building a clean, modern and equitable transportation system in Maryland is within reach and is the next step.

Photo: Famartin/Wikimedia Commons

A Tale of Four Cities: How Smart Growth Can Shape the Future of the Washington, D.C. / Baltimore Region

Sorry Ben, there are now three things certain in life: death, taxes, and bumper-to-bumper traffic on I-95 from Washington, D.C. to Baltimore. Though these three things are certain today, they may not be tomorrow. While I’d love to discuss when science will allow humans to upload their consciousness to the cloud, and download themselves into a new body (aka “sleeve”), a new study has prompted me to think about the future of regional traffic as not just dependent on autonomous vehicles or better mass transit.

Researchers at the University of Maryland National Center for Smart Growth analyzed what the Washington, D.C. / Baltimore region may look like from now until 2040. The “Engaging the Future” report contrasts four possible futures against a baseline scenario in which the region adds nearly 1 million additional commuters. Under the baseline “do-nothing” scenario, commute times could quadruple despite large increases in rail ridership.

Congestion in the region, already bad, is forecast to get significantly worse. In spite of large increases in rail ridership, vehicle miles traveled and hours traveled are set to increase, which will worsen traffic – especially on highways. Source: National Center for Smart Growth

So, what can be done? The researchers played with different inputs in their model, which can all be found on page 3. For the sake of simplicity, I’ll focus on three: (1) self-driving vehicles, (2) better public transit, and (3) fuel price. Assuming growth or decline in these three factors produced the following four future scenarios for the region.

Revenge of the Nerds: Cheap fuel and autonomous vehicles incentivize driving and sprawl

This is a future of rapid economic growth driven by low fuel prices, widespread adoption of self-driving vehicles, and a retreat from government regulation in the face of such economic success. When combined, these factors increase the capacity of existing expressways, reduce the cost of driving, and make travel time more productive as commuters can watch Netflix as their car drives them to work. If most people are in self-driving cars, congestion could be reduced as cars are able to travel closer together and cause fewer crashes, allowing existing highways to accommodate more vehicles. As a result, ridership on transit plummets, emissions from transportation rise, and more farmland and forests are converted into housing.

The widespread use of autonomous vehicles increases highway capacity by 50 percent, which dramatically reduces congestion. But as residents decentralize due to new housing patterns, vehicle miles and emissions increase. Source: National Center for Smart Growth

Free for All: Self-driving cars fail to take hold, low fuel prices exacerbate sprawl as more jobs and people move into the region

This scenario assumes little government regulation and a slow but steadily growing economy led by job and population growth throughout the region. Low fuel prices mean no major investments in mass transit, but public-private partnerships are forecast to invest in new tolled highways and the construction of an additional bridge to the Eastern Shore of Maryland. In this scenario, employment and housing disperses from urban areas, and households fill the formerly protected agricultural preserves of the inner suburbs, especially in Montgomery, Prince George’s, and Baltimore Counties. Though sprawl worsens, and mass transit ridership declines, congestion and transit time improve as jobs move to the suburbs, closer to commuters.

This scenario assumes a relaxation in development restrictions, which allows new residential developments to locate in the formerly rural areas of Montgomery, Baltimore, Prince George’s and Howard Counties. Source: National Center for Smart Growth

Blue Planet: High fuel prices and strong government regulation stimulate investments in mass transit and renewable energy

This scenario assumes low levels of self-driving cars, but strong economic growth as advancements in clean technology overpower the economic drag of rising fossil fuel prices. High-tech clusters expand throughout the region, and investments in transit and renewable energy greatly decrease emissions, improve travel times, and lower regional congestion. Local governments accommodate growth by increasing residential capacity in inner suburbs, especially around the expanding transit network. The changes in travel behavior are forecast to be dramatic in this scenario. Though vehicle miles traveled increases, congestion is reduced as expanded public transit accommodates new straphangers. As a result, transportation-related emissions are greatly reduced as vehicles become electrified and trips shift from personal vehicles to public transit.

In this scenario, transit ridership increases 21 percent over the baseline; about half due to the expanded network and half due to high fuel prices. Unlike the baseline, many more transit trips originate in the cores and inner suburbs, with a substantial increase in reverse commutes to transit-accessible inner suburb locations. Source: National Center for Smart Growth

Last Call at the Oasis: As gas prices quadruple and economic growth slows, governments respond with more investment in core transit and electric vehicle infrastructure

The last scenario envisions a future defined by scarcity. Declining world oil reserves quadruple gas prices and accelerate the transition to electric vehicles, but not self-driving cars. The changing structure of the economy directs growth to the region’s city cores, and both households and jobs concentrate near transit stations in urban centers or inner suburbs. A quadrupling of gas prices would cause dramatic changes in travel behavior. Transit ridership would increase significantly, and electric vehicle sales would rise, helping slash emissions from transportation. In addition, considerably less forest and farmland would be developed in this scenario, since jobs and housing would be concentrated more toward transit hubs in city centers or inner suburbs.

When vehicle operating costs quadruple, travel behavior, and ultimately land use, change in expected ways. Households cluster in the inner suburbs, close to employment and services, and near existing and new rail transit stations. Source: National Center for Smart Growth

Travel behavior is profoundly affected by a fourfold increase in fuel costs, the lack of autonomous vehicles, and the concentration of households in suburban corridors. As a result, congestion could fall dramatically in this scenario, along with auto-related pollution. Source: National Center for Smart Growth

How policy can help shape the Washington, DC / Baltimore region

This modeling effort demonstrates that the Washington, DC / Baltimore region could grow in vastly different ways. If we are to maximize the potential of self-driving cars and electric vehicles to reduce congestion and transportation-related emissions, smart policy will be needed to help drive the adoption of these technologies even if gas prices remain low.

Policies that offset the cost of electric vehicles, incentivize the installation of public charging infrastructure, and push the generation of renewable energy are a good start – and already on the books across the country. Additional regulations to ensure autonomous vehicles are powered by renewable electricity and, to the greatest extent possible, operate as shared rides will also likely be important as self-driving technology encourages people to take a car over public transit.

Policy will also be needed to keep housing and jobs from expanding too far into agricultural preserves and forests beyond the inner ring of suburbs. Placing affordable housing near transit hubs will likely remain key to keep people using public transit, even if congestion is somewhat lessened from a widespread adoption of self-driving cars.

Lastly, it’s important to recognize that neither self-driving cars nor electric vehicles are a panacea for transportation-related emissions and congestion. Even with cleaner vehicles, if more people in the region are buying more vehicles and driving them more, then the decrease in emissions from fuel-efficient or electric vehicles could be at least partially offset by the sheer volume of new drivers. That’s why housing and regional planning policy must be taken into account when looking at the holistic future of this region – and hopefully this report informs regional planners and other policymakers as they look to expand the productivity and environmental stewardship of the region.

Ocean Agency Must Keep Its Focus on Climate Change and Sustaining Marine Ecosystems

Photo: Darla White (NOAA)

It has been a tumultuous couple of weeks for ocean aficionados like me.

The Acting Administrator for the National Oceanic and Atmospheric Administration (NOAA), Adm. Timothy Gallaudet, made a presentation to leadership at the Department of Commerce, NOAA’s home, on possible changes and priorities for the agency during this administration. The second slide was striking.

Its text clearly describes a shift away from scientific work on climate and from efforts to conserve and manage ocean and coastal resources.

Further, the presentation went on to describe strategic priorities with no mention of climate change or stewardship of resources, but a primary focus on weather forecasting, deregulation and economic development.

To me, this mission and these priorities make little sense. In an era when the climate is changing with dramatic effects on our nation and the world, how can the principal agency tasked with understanding our oceans and atmosphere not strategically address climate? We simply can’t develop the ocean economy, including increasing fishery and aquaculture production, without both conserving resources and addressing the ongoing effects of a changing climate. As we’ve seen in the past with species like cod, haddock, some tunas and even shellfish stocks, overfishing without regard to conservation crashes fish populations and harms coastal economies. As a former NOAA scientist, then regional administrator and then Deputy Director of the National Marine Fisheries Service, I know well the challenges of managing ocean resources, including fisheries and aquaculture. We have made extraordinary progress in ending overfishing, as well as conserving marine mammals and endangered species. And we can’t let up now, because maintaining functioning ecosystems is the key to productive fisheries AND aquaculture.

Following press reporting based on the presentation slides obtained by UCS, Adm. Gallaudet swiftly backtracked. His press office issued a statement saying, “The PowerPoint was intended to share new ways NOAA could augment the DOC’s [Department of Commerce] strategic plan. It was not intended to exclude NOAA’s important climate and conservation efforts, which are essential for protecting lives and the environment. Nor should this presentation be considered a final, vetted proposal.”

He then sent the following email to NOAA staff:

June 25, 2018

Last week, I gave a presentation at an internal meeting within the Department of Commerce (“DOC”) where I shared some of my thoughts on NOAA. My presentation, which was not reviewed by the Office of the Secretary prior to the meeting, was intended to share new ways NOAA could augment the DOC’s strategic plan. It was not intended to exclude NOAA’s important climate and conservation efforts, which are essential for protecting lives and the environment. Nor should this presentation be considered a final, vetted proposal.

Secretary Ross, the Department, and I support NOAA’s mission to understand and predict changes in climate, weather, oceans and coasts; to share that knowledge and information with others; and to conserve and manage coastal and marine ecosystems and resources. We are also fully aware of the congressional mandates and will continue to adhere to them.

With gratitude and respect,

RDML Tim Gallaudet, Ph.D., USN Ret.
Assistant Secretary of Commerce for Oceans and Atmosphere and
Acting Under Secretary of Commerce for Oceans and Atmosphere
National Oceanic and Atmospheric Administration

It would be a huge relief if NOAA and the Department of Commerce backed away from this misguided effort to redirect the agency. But we must be vigilant, because at the same time, the President issued an “Executive Order Regarding the Ocean Policy to Advance the Economic, Security, and Environmental Interests of the United States,” which rescinds President Obama’s Executive Order establishing a national ocean policy. That policy, which was essentially based on the work of two national commissions (I served on one of them), established principles of conservation, management and stewardship of our ocean ecosystems and resources, promoted regional and federal agency cooperation, and called for national programs to advance ocean science in concert with addressing the ongoing effects of climate change. The new order from President Trump seems to set no clear policy direction other than economic development, no matter how many times I read it. Economic development without conservation and management is simplistic, short-term thinking that will harm the ocean economy in short order.

And the President’s Office of Management and Budget proposed a reorganization plan that would remove the National Marine Fisheries Service from NOAA and merge it with the US Fish and Wildlife Service in the Department of the Interior, effectively severing the ties between marine resource management and the ocean science agency (NB: to do so would require an act of Congress). And the Executive Office of the President released a draft report for public comment entitled “Science and Technology for America’s Oceans: A Decadal Vision.”

Now, Adm. Gallaudet may have backed away from his presentation’s changes to NOAA’s mission and strategic priorities, but the ocean science plan from the White House contains those very same priorities. There is no mention of climate science, and the second goal reads, “Goal II. Promote Economic Prosperity: 1) Expand Domestic Seafood Production; 2) Explore Potential Energy Sources; 3) Assess Marine Critical Minerals; 4) Balance Economic and Ecological Benefits; 5) Promote the Blue Workforce,” with no mention of conservation and stewardship.

A plan that focuses solely on unregulated fishing and energy squanders the great progress we have made in understanding our ocean and atmospheric system and in recovering, conserving, and managing ocean ecosystems. I hope we don’t squander it, but I will be watching, and you should too.


Photo: Darla White (NOAA)

Engaged Science: 6 Tips for the Trump Era

March For Science PDX, Portland, OR, April 22, 2017. Photo: Joe Frazier Photo/CC BY 2.0 (Wikimedia)

A 2017 public opinion survey found that only about one in five American adults has a great deal of confidence in scientists. Some of the most pressing environmental challenges, including climate change, have not motivated sufficient action despite the accumulation of scientific evidence. These days, the Trump Administration routinely attacks, misrepresents, and ignores science to the detriment of the environment and our health. How can scientists improve their engagement with the public and decisionmakers to help solve these problems?

Last month, a cohort of scientists, scientists-in-training, and environmental advocates came together in Seattle, Washington, to discuss these challenges in person. The workshop, led by COMPASS and the Union of Concerned Scientists (UCS), convened participants working to address environmental and public health problems from multiple angles and with diverse skill sets, including public health, ecology, biochemistry, computer science, water policy advocacy, and community-based participatory research. COMPASS and UCS supported cohorts of Science Sentinels and Science and Democracy Fellows (including both of us) to help build a network of empowered, mutually supportive leaders who can advance the role of science in society, guide their peers, and support evidence-based decision making on environmental issues at the local level and beyond.

Based on some of our workshop discussions and a roundup of resources from the UCS Advocacy Toolkit, the American Geophysical Union’s Sharing Science project, and the Center for Public Engagement with Science and Technology at the American Association for the Advancement of Science, we’ve compiled six ways for scientists to improve their engagement with some of society’s most pressing and vexing environmental challenges:

1. Focus on connection, not explanation

Scientists may have advanced degrees and specialized training, but that is just one kind of expertise needed to inform our most pressing environmental and public health problems. Community members, including those on the front lines of the environmental justice movement, are often experts in the challenges that their own neighborhoods face. Instead of trying to explain or convince others that your ideas are valid, focus on connecting with people in a way that identifies common ground and builds mutual trust. Take an opportunity to listen to others before offering your perspective, and offer your ideas with humility and a collaborative spirit. Check out the UCS Guide on Engaging Communities for more information on this topic.

Figure: Adrienne Keller

2. Know your (specific) audience

One key for engagement is to define the audience for scientific outreach more specifically than just the “general public.” “The public” can be a difficult audience to craft a message for, because it is so large and so diverse. Try to identify a specific target person or group, if possible (e.g., a key legislator, community organizer, or local journalist). The COMPASS Message Box, which focuses on the distillation of a scientific problem or study into a handful of key ideas and results, can be a helpful tool for shaping messages that will resonate with a specific segment of the public. Listen carefully to your various audiences and craft messages that are succinct and responsive to their questions and interests.

3. Make your science relevant

Scientific results are often confined to academic meetings and subscription-access peer-reviewed journals, even though the research itself may have been taxpayer funded. Consider broadening the reach of your work and ideas by communicating through other channels, like social media sites (including Twitter), blog posts, YouTube videos, and audio podcasts. These channels offer opportunities to shed light on different aspects of your work and provide a creative outlet for sharing your talents. In these settings, you have more flexibility to contextualize your results, and doing so may even provide you with new insights to bring back to the lab bench or field site.

4. Don’t get lost in the details

Scientists are well-versed in the details of their work. While this comfort with the technicalities helps ensure research results are solid, scientists can get lost in nuanced discussions about statistics and lose sight of the bigger picture. When communicating science to the public, focus on the “so what?”, emphasizing how your research connects with your audience’s values and concerns. Using the figure below as a guide, focus on the public audience approach (on the right side) that emphasizes key results above all else. And remember, most people can only remember three to five ideas at one time, so stick to your key take-home points and make them extra “sticky” by giving your quantitative results meaningful context relevant to your specific audience.

Figure adapted from COMPASS and Escape from the Ivory Tower: A Guide to Making Your Science Matter by Nancy Baron, Island Press, 2010

5. Offer your perspective to journalists

Graphic: American Geophysical Union

This may come as a surprise, but most journalists are eager to hear from scientists, especially at your local paper. Proactively engage with journalists by introducing yourself, offering thoughts on a recent article, or inviting a journalist to your lab or field site. Ashley Ahearn, guest panelist at our workshop and award-winning public media journalist (see her stories at PRI), encourages scientists to “meet journalists halfway”; as you actively develop a trusted relationship with a local journalist, you can become a valuable and accessible source of technical expertise. In return, members of the press, who are practiced storytellers with a knack for accessible communication, can help get a scientific message across to a wide audience. These communicators are trained to elicit the main points, such as the novelty and implications of new research, from trusted technical experts.

6. Identify policy windows for effective public engagement

As the federal government under President Trump takes on an increasingly anti-science tone, you can stand up for science by weighing in as a technical expert and constituent of your elected leaders. Keep an eye on public comment periods on Regulations.gov, and write comments that speak to the technical aspects of proposed rules. One opportunity to do this right now is the open comment period for the EPA’s recent proposal to censor science, which would undermine decades of science-based policymaking that serves as the foundation for the clean air and water standards that protect our health. For more information on this proposal and how you can offer your comments, check out this blog post and how-to guide for developing public comments.

This list is just a start; we welcome your ideas for engagement on Twitter using the hashtag #SciComm. For more information on sharing your science and tips for effective public engagement, check out these resources from the Sharing Science project and Science Network (including archived webinars). For now, we’ll leave you with this helpful visual from the American Geophysical Union, which walks through some of the many options for scientific engagement.

Vijay Limaye is an environmental health scientist working as a Climate Change and Health Science Fellow at the Natural Resources Defense Council in New York City. He is broadly interested in quantifying, communicating, and mitigating the health risks associated with climate change, with a focus on the public health burden of global air pollution and extreme heat events. Prior to his role at NRDC, Vijay worked for three years as a scientist at the U.S. Environmental Protection Agency regional offices in San Francisco and Chicago, focusing on issues such as Clean Air Act regulatory implementation, risk communication, citizen science, and air-quality monitoring policy. Vijay holds a B.A. from the University of California-Berkeley and a Ph.D. in environmental epidemiology from the University of Wisconsin-Madison. For his dissertation, Vijay conducted interdisciplinary research quantifying the health impacts of climate change–triggered air pollution and heat waves for populations in the United States and India.

Adrienne Keller is a PhD student in the Evolution, Ecology and Behavior program in the Department of Biology at Indiana University, where she studies forest carbon and nutrient cycling. Adrienne holds an M.S. in Resource Conservation from the University of Montana and a B.A. in Biology and Geography from Macalester College (St. Paul, MN). In addition to her research in ecosystem ecology, Adrienne is an active member of the newly formed, grassroots organization Concerned Scientists @ IU. Prior to graduate school, Adrienne was involved in science policy work as a Program Assistant with the National Council for Science and the Environment in Washington, D.C. Adrienne has also enjoyed working with K-12 students in a variety of settings, including leading cross-cultural immersion programs for high school students to Africa and Latin America with the Student Diplomacy Corps and teaching field ecology courses in the Galapagos Islands with Ecology Project International.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Getting More Wind and Solar is 100% Possible, But Not 100% Straightforward. Here’s Why

With wind and solar prices beating the cost of fossil-fuel generation in many places, we have a great opportunity to replace and modernize our energy supply with more renewables—and we can do so reliably. The Union of Concerned Scientists congratulates grid operators who have demonstrated that replacing old generation with wind and solar does not cause reliability problems. In the United States and in Europe, grids have run without coal, and with wind at 60% of the total mix. The director of reliability assessment of the North American Electric Reliability Corporation has stated that, with planning, any level of renewables on the grid could work.

Regional record for use of renewable energy in a single hour. Chart: UCS.

Renewables and storage substitute for conventional generation

To really nail the energy transition and increase the buildout of wind and solar, renewables and storage will have to substitute for conventional generation in increasingly technical ways.

In fact, several grid practices are vitally important for growth of large-scale renewables. They include:

  • expanded transmission,
  • increased operational flexibility (for example: incorporating renewable forecasts with existing schedules), and
  • increased coordination with neighboring utility areas through centralized dispatch or consolidation.

Operators making steady progress with these practices have hit renewable energy production records.
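To make the second of those practices concrete, here is a minimal sketch, with entirely made-up numbers, of what incorporating renewable forecasts into existing schedules can look like: the operator computes the expected net load (demand minus forecast wind and solar) so that conventional units are scheduled only for the remainder.

    # Minimal sketch: fold a day-ahead renewable forecast into the schedule.
    # All numbers are illustrative, not from any real utility.

    hourly_demand_mw  = [950, 900, 880, 940, 1100, 1250]  # forecast load
    wind_forecast_mw  = [300, 320, 340, 310, 250, 200]    # day-ahead wind
    solar_forecast_mw = [0, 0, 50, 180, 320, 400]         # day-ahead solar

    for hour, (load, wind, solar) in enumerate(
            zip(hourly_demand_mw, wind_forecast_mw, solar_forecast_mw)):
        net_load_mw = load - wind - solar  # what conventional units must cover
        print(f"hour {hour}: schedule {net_load_mw} MW of conventional generation")

The better the forecast, the less conventional capacity has to be held back “just in case”—which is exactly the operational flexibility described above.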

Value beyond wind and solar contributions today

The number one product from wind and solar today is energy. The wind blows, cheap energy flows. The sun shines, cheap energy results.

The grids that host lots of renewables demonstrate that variability is not a show stopper. The economics of power contracts, renewable energy credits, and production tax credits all reward maximized energy production.

The challenges appear when demand is not so high and renewables are more abundant. The grid still requires a physical balance of supply and demand, so in those times grid prices are low or negative, set by the marginal cost of the next unit. Very low prices can signal curtailment risk and discourage buyers and sellers from adding more renewables.
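A toy merit-order dispatch makes those price mechanics visible. In this sketch (all offers invented for illustration), supply offers are stacked from cheapest to most expensive, and the clearing price is the offer of the last unit needed to meet demand; when demand is low and zero- or negative-cost renewables cover it, the price falls to zero or below.

    # Toy merit-order dispatch: the clearing price is the offer of the
    # last (marginal) unit needed to meet demand. Offers are invented;
    # wind bids negative to mimic production-tax-credit economics.

    offers = [  # (name, capacity_mw, offer_usd_per_mwh)
        ("wind",   400, -15.0),
        ("solar",  300,   0.0),
        ("gas_cc", 500,  30.0),
        ("gas_ct", 200,  55.0),
    ]

    def clearing_price(demand_mw):
        served = 0
        for name, capacity, price in sorted(offers, key=lambda o: o[2]):
            served += capacity
            if served >= demand_mw:
                return price  # this unit sets the price
        raise ValueError("not enough supply to meet demand")

    print(clearing_price(1300))  # high demand: peaker sets the price, 55.0
    print(clearing_price(600))   # low demand, plentiful renewables: 0.0
    print(clearing_price(350))   # very low demand: wind is marginal, -15.0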

UCS analyzed several scenarios with over 50% of annual energy from renewables to find out how to reduce predicted curtailment. Our examination identified practices that can lower the curtailment of wind and solar as renewable energy becomes a larger part of the energy mix.

Market prices for wind and solar beating fossil fuel prices demonstrate technology advances.

Adding more wind and solar, or adding more gas?

When studies and decisions consider new energy supplies, they start with the present power system. Discussing the value and impact of a new plant investment, assuming nothing else changes, is a necessary early step.

But what happens next is very important. Any new supply (gas, wind, solar, coal, or nuclear) has integration and transmission needs, which are managed with a range of strategies. Understanding when a new plant will operate, how much transmission is needed, and whether there will be exports to neighboring utility areas—those are all central considerations in finding the value of the new plant.

Some solutions, like building new transmission to deliver from supply-rich areas to population centers with demand, require time and money. Limiting over-supply by dispatch and turning down more expensive supplies is expected and normal but can reach the point where too much of a good thing becomes its own challenge. A lot of new wind in an area with plenty of hydro and existing wind, for example, needs transmission and export options if there aren’t any fossil-fuel units to turn down.

What is the role of fossil fuels in oversupply and curtailment?

Whenever demand is not at its highest, some generation is idle. When grid operators believe that flexibility and ancillary services are available only from fossil units, they keep fossil generation running, even if that crowds out renewable generation.

To provide this flexible reserve from a gas generator, the unit must be turned on and run at or above its minimum level: generally 35% of generator capability for a combustion turbine and 70% for a combined cycle plant. Because that flexibility is only available with the unit producing at or above those levels, running combined cycle units at 70% will crowd out renewables, causing more curtailment. This has been verified in Hawaii and California, as well as replicated in studies.
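The arithmetic is straightforward. Here is a back-of-envelope sketch, with invented numbers, of how a must-run combined cycle unit crowds out available renewables:

    # A combined cycle unit kept online for reserves must run at its
    # minimum level, shrinking the room left for renewables.
    # All numbers are illustrative.

    demand_mw       = 1000
    renewables_mw   = 800   # wind + solar available this hour
    cc_capacity_mw  = 500   # combined cycle held for flexible reserve
    cc_min_fraction = 0.70  # minimum run level cited above

    cc_min_output_mw = cc_capacity_mw * cc_min_fraction    # 350 MW must run
    room_mw = demand_mw - cc_min_output_mw                 # 650 MW left over
    curtailed_mw = max(0, renewables_mw - room_mw)         # 150 MW wasted

    print(f"must-run gas: {cc_min_output_mw:.0f} MW")
    print(f"renewables delivered: {min(renewables_mw, room_mw):.0f} MW")
    print(f"renewables curtailed: {curtailed_mw:.0f} MW")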

How does this affect the future growth of wind and solar?

Expectations of curtailment will discourage both the buyers and sellers of future renewable generation. When existing contract structures focus on maximum energy production, the value proposition is to sell more commodity into increasingly well-supplied markets. In these cases, both supply and demand interests are bypassing the opportunity to operate renewable resources for ancillary services and reserves.

Where a utility has more insight and the ability to adapt its reserve practices, more techniques can be developed to make greater use of the renewables.

As more wind and solar are built, we will see high penetrations of renewables coinciding with relatively low demand, and therefore lower prices, during more hours. These are the times when the ability to obtain ancillary or essential services from renewable generation is most important and most beneficial in pushing gas offline. This is also when the risk of curtailment is greatest.

What’s holding back the solutions we can implement?

It’s not an issue of technology. Storage and renewable energy technologies can provide essential services, ancillary services, or reserves. These capabilities in wind and solar have been demonstrated by technology providers, illustrated by industry experts, and even narrated by the California ISO to its Board. The trajectory of advanced storage on the grid, providing reserves and services around the world, is described in these slides.

Where do we go from here?

The contracts and revenue structures used today are the obstacle. Bilateral agreements that move buyers and sellers to a different contract structure would make the difference.

Examples from the industry offer alternatives. Contracts for conventional generation function without assuming all revenue is based on production. Contracts for energy storage are emerging that pay for capacity and performance, with revenues separated from total hours of utilization. In expanding the role of wind and solar in the energy supply, the revenue models used by these other technologies, which provide services beyond commodity energy, will be useful.

For folks who want to take this gradually, perhaps start with a contract that splits the payments during the year: in the months with curtailment risk, capacity payments make sense; the rest of the year, use energy payments to maximize production.
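A minimal sketch of that split structure, with months, prices, and volumes all hypothetical: capacity payments decouple revenue from output in the risky months, so dispatch-down no longer costs the seller anything, while energy payments still reward maximum production the rest of the year.

    # Sketch of a split contract: capacity payments in months with
    # curtailment risk, energy payments otherwise. All figures are
    # hypothetical, chosen only to show the mechanics.

    CURTAILMENT_MONTHS = {3, 4, 5}  # e.g., sunny, low-demand spring
    CAPACITY_PAYMENT_USD = 500_000  # flat monthly payment for availability
    ENERGY_PRICE_USD_MWH = 25.0     # per-MWh price in energy months

    def monthly_revenue(month, delivered_mwh):
        if month in CURTAILMENT_MONTHS:
            # Paid for capability, not output: curtailment no longer
            # cuts revenue, so the seller can offer reserves instead.
            return CAPACITY_PAYMENT_USD
        return delivered_mwh * ENERGY_PRICE_USD_MWH

    print(monthly_revenue(4, 15_000))  # April, heavily curtailed: 500000
    print(monthly_revenue(8, 25_000))  # August, full output: 625000.0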

As the grid supply changes and wind and solar become a larger fraction of the supply, the buyers and sellers of renewable energy will want to maintain the highest value for renewables installations. A key strategy for pushing fossil energy out of the dispatch is to make the fossil generators redundant and unnecessary. When the fossil units are being used for ancillary services and wind or solar is curtailed, let’s make the problem become the solution: cut the fossil generation, use the curtailed renewables to provide those services, and thereby increase the demand for more wind and solar.

Dangerous Air Alert: New Analysis Shows How the Trump Administration Could Hide the Health Risks of Bad Air Days

We all check the weather forecast for sun, rain, UV, allergies, and other information that might affect us as we spend more time outside in the summer. That includes alerts on bad air days, when air pollution levels are high enough to be potentially dangerous, especially for children, those with respiratory concerns like asthma, and the elderly.

Indeed, there is a nice little numbered, color-coded scale for air quality that warns us when extra caution is needed. Ever wonder where that comes from?

The standards used to determine air quality refer to the average amount of a pollutant in the air. Keeping pollution below a level determined to be bad for your health is an obviously good idea—as is knowing when the air is bad or unhealthy. Bad air day alerts flag the days that exceed the standard, telling us to watch out!
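The alert logic itself is a simple threshold test, which is exactly why the number chosen for the standard matters so much. A toy sketch, using hypothetical daily peak 8-hour ozone readings in parts per billion (ppb), shows how raising the threshold silences warnings even though the air is unchanged:

    # Toy threshold alert: a day is flagged when its peak 8-hour ozone
    # average exceeds the standard. Readings are hypothetical.

    readings_ppb = [62, 68, 71, 74, 77, 83]

    def alert_days(standard_ppb):
        return [r for r in readings_ppb if r > standard_ppb]

    print(alert_days(70))  # [71, 74, 77, 83] -> four warnings
    print(alert_days(75))  # [77, 83]         -> same air, two warnings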

But now, changes underway at the EPA may make it less likely you’ll see a bad air day warning—even when the air is still unhealthy to breathe. That’s because the Trump administration is planning to reconsider the standard for ozone, even though the science clearly shows that weakening it would cause harm.

We determined how many fewer bad air alerts you’d get in 19 different metropolitan areas if the administration is successful in its efforts.

Ozone is dangerous stuff

Ozone is a critical pollutant in those bad air days the weather forecasters tell us about.

In response to clear scientific evidence that ozone causes harm, especially to children and those with respiratory ailments, the national standard for ozone was lowered in 1997 from 120 to 80 parts per billion (ppb).

Then in 2014, EPA proposed strengthening the standard again to between 65 and 70 ppb. The agency received 430,000 public comments on the proposal, with the scientific evidence clearly pointing to the need for a stronger standard. In 2015, the EPA compromised and set the standard at 70 ppb.

That move was opposed by several states, mostly those with large oil and gas industries, and the US Chamber of Commerce and other business groups. In 2017 EPA Administrator Scott Pruitt, who had opposed the standard on behalf of Oklahoma, announced he would delay implementation for the 70 ppb standard pending a new review. Sixteen states objected and Pruitt allowed the standard to go into effect but remained intent on re-reviewing it.

On June 21, the US House of Representatives Committee on Science, Space, and Technology held a hearing on regulating ozone pollution. The committee chairman’s opening statement makes it clear that some still seem to believe that protecting public health and having a vibrant economy are incompatible—his statement was full of language about the need to roll back the ozone standard from 70 ppb.

And unfortunately, the Trump administration is taking a lot of actions that undermine the progress we have made on cleaning up the air we breathe, as my colleagues Drs. Gretchen Goldman and Juan Declet-Barreto have written.

It’s clear that the EPA seems bound and determined to re-review the basis for the 70 ppb ozone standard, and it is likely to argue the standard is too stringent. Indeed, some of the agency’s leading science advisors, including the chair of the Science Advisory Board, Dr. Michael Honeycutt, have even argued, contrary to public health science, that the health impacts of ozone aren’t that bad.

Hiding the health risks of bad air days

If industry and its allies have their way and the standard were raised to, say, 75 ppb, as they have previously argued, what do we know about the impacts on public health?

We did a simple analysis to answer the question, “How many days in major cities would be considered ‘safe’ under a new standard even when they weren’t according to our current standard?” Put another way, if you followed the weather warnings of bad air days to limit your kids’ time outdoors, how many days would you believe the air to be healthy when it really wasn’t, because the standard had changed?

The numbers for major cities should worry us all. Here’s our look at 19 major metropolitan areas across the country—and how many fewer bad air alerts they would have received since 2015 with a weaker ozone standard in place. In multiple places, nearly a month’s worth of days would have been unhealthy without warning, an indicator of the potential impact that weakening the ozone standard would have over the coming years.

Map: How many fewer days would have been classified as bad air days in each city, even though the air was still unhealthy to breathe (2015–present). City-by-city counts are listed below.

The number of “bad air” ozone days was calculated using data downloaded from the EPA’s website listing daily Air Quality Index (AQI) values. Ozone AQI values are divided into six categories based on the EPA’s ozone standards and their effects on human health. For a selection of major metropolitan areas, including several where we and our partners and supporters live and work, we calculated the number of days from 2015 to the present that would have been labeled “moderate” instead of “unhealthy for sensitive groups” under a weakened ozone standard (75 parts per billion), and would thereby have escaped the public’s notice.
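Here is a rough sketch of that counting step, assuming a CSV of daily peak 8-hour ozone values per city pulled from the EPA’s AQI data; the file and column names are hypothetical, and the band test simplifies the EPA’s full breakpoint tables:

    # Count days flagged "unhealthy for sensitive groups" under the
    # 70 ppb standard (>70 ppb) that a weakened 75 ppb standard would
    # relabel "moderate" (<=75 ppb). File and columns are hypothetical.

    import pandas as pd

    daily = pd.read_csv("daily_ozone.csv")  # columns: city, date, ozone_ppb

    hidden = daily[(daily["ozone_ppb"] > 70) & (daily["ozone_ppb"] <= 75)]

    print(hidden.groupby("city").size().sort_values(ascending=False))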

From high to low, the number of affected days in each city is: Los Angeles (91), Phoenix (63), Las Vegas (45), Dallas (39), New York (37), Atlanta (28), Chicago (28), Houston (28), Pittsburgh (24), Philadelphia (22), Cincinnati (21), St. Louis (21), Cleveland (19), Washington, DC (19), Detroit (15), Indianapolis (15), Kansas City (11), Boston (10), and Miami (7).

What does that mean in terms of overall health impacts?

According to the EPA’s own estimates, weakening the standard is conservatively expected, after 2025, to result annually in:

  • 280,000 lost school days
  • 390,000 asthma exacerbations
  • 440 to 880 premature deaths

Economically, the agency estimates that weakening the standard would result in losses of $2.9 to $5.9 billion annually after 2025. Clearly this does not benefit anyone in the long run.

This is just one example of the many ways the Trump administration is working to harm our health—and it shows yet again how important it is for all of us to be vigilant, and to call out and comment when we see it happening. You can learn more ways to become engaged in our Action Center.
