UCS Blog - The Equation (text only)

The Midwest’s Food System is Failing. Here’s Why.

Photo: dvs/CC BY 2.0 (Flickr)

If you’ve perused the new UCS 50-State Food System Scorecard, you’ve probably noticed a seeming contradiction. As shown on the map below, the heavily agricultural states in the middle of the country aren’t exactly knocking it out of the park when it comes to the overall health and sustainability of their food and farming systems. On the contrary, most of the leading farm states of the Midwest reside in the basement of our overall ranking.

OVERALL STATE FOOD SYSTEM RANKINGS

So what’s that about? A couple of reasons stand out to me.

First, much of what the Midwest grows today isn’t really food (much less healthy food).

It’s funny. But not really.

It’s true. While we often hear that the region’s farmers are feeding America and the world, in fact much of the Midwest’s farm output today consists of just two crops: corn and soybeans. There are various reasons for that, including some problematic food and farm policies, but that’s the reality.

Take the state of Indiana, for example. When I arrived there in 1992 for graduate school (go Hoosiers!), I bought the postcard at right. That year, Indiana farmers had planted 6.1 million acres of corn, followed by 4.55 million acres of soybeans. Together, the two crops covered more than two-thirds of the state’s total farm acres that year.
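That two-thirds figure is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the crop acreages come from the text, but the total farm acreage is an assumed round number for illustration only (the scorecard cites 14.7 million Indiana farm acres for 2012, and the 1992 total was likely somewhat higher):

```python
# Rough check of the claim that corn and soybeans together covered
# more than two-thirds of Indiana's farm acres in 1992.
corn_acres = 6.1e6   # from the text: 6.1 million acres of corn
soy_acres = 4.55e6   # from the text: 4.55 million acres of soybeans
total_farm_acres = 15.6e6  # ASSUMED total for illustration, not from the text

share = (corn_acres + soy_acres) / total_farm_acres
print(f"Corn + soy share of farm acres: {share:.0%}")  # -> 68%
print(f"More than two-thirds? {share > 2/3}")          # -> True
```

Under that assumed total, the two crops cover roughly 68 percent of the state's farmland, consistent with the "more than two-thirds" claim.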

The situation remains much the same today, except that the crops have switched places: this year, Indiana farmers planted 6.2 million acres of soybeans and “just” 5.1 million acres of corn. Nationwide, soybean acreage will top corn in 2018 for the first time in 35 years.

Regardless of whether corn or soy reigns supreme, the fact is that most of it isn’t destined for our plates. Today, much of the corn goes into our gas tanks. The chart below shows how total US corn production tracked the commodity’s use for ethanol from 1986 to 2016:

Reprinted from the US Department of Energy’s Alternative Fuels Data Center, https://www.afdc.energy.gov/data/10339.

The two dominant Midwest crops also feed livestock to produce meat in industrial feedlots, and they become ingredients for heavily processed foods. A 2013 Scientific American essay summarized the problem with corn:

Although U.S. corn is a highly productive crop, with typical yields between 140 and 160 bushels per acre, the resulting delivery of food by the corn system is far lower. Today’s corn crop is mainly used for biofuels (roughly 40 percent of U.S. corn is used for ethanol) and as animal feed (roughly 36 percent of U.S. corn, plus distillers grains left over from ethanol production, is fed to cattle, pigs and chickens). Much of the rest is exported.  Only a tiny fraction of the national corn crop is directly used for food for Americans, much of that for high-fructose corn syrup.

All this is a big part of why, when UCS assessed the extent to which each US state is producing food that can contribute to healthy diets—using measures including percentage of cropland in fruits and vegetables, percentage of cropland in the top three crops (where a higher number means lower diversity), percentage of principal crop acres used for major animal feed and fuel crops, and meat production and large feeding operations per farm acres—we arrived at this map:

RANKINGS BY FOOD PRODUCED

As you can see, the bottom of our scorecard’s “food produced” ranking is dominated by Midwestern states. This includes the nation’s top corn-producing states—Iowa (#50) and Illinois (#48), which together account for about one-third of the entire US crop. It also includes my one-time home, Indiana (#49), where just 0.2 percent of the state’s 14.7 million farm acres was dedicated to vegetables, fruits/nuts, and berries in 2012.

Now let’s switch gears to look at another reason the Midwest performs so poorly overall in our scorecard.

Today’s Midwest agriculture tends to work against nature, not with it.

In addition to the fact that the Midwest currently produces primarily non-food and processed food crops, there’s also a big problem with the way it typically produces those commodities. Again, for a number of reasons—including the shape of federal farm subsidies—the agricultural landscape in states such as Iowa, Illinois, and Indiana is dominated by monoculture (a single crop planted year after year) or a slightly better two-crop rotation (you guessed it, corn and soybeans). These oversimplified farm ecosystems, combined with the common practice of plowing (aka tilling) the soil before each planting, degrade the soil and require large applications of fertilizer, much of which runs off farm fields to pollute lakes and streams. Lack of crop diversity also leads to more insect pests, increasing the need for pesticides. Moreover, as corn is increasingly grown in dry pockets of the Midwest such as Kansas and Nebraska, it requires ever-larger quantities of irrigation water. Finally, the whole system relies heavily on fossil fuels to run tilling, planting, spraying, and harvesting equipment.

No wonder that whether we look at resource reliance (including use of commercial fertilizers and chemical pesticides, irrigation, and fuel use) or, conversely, implementation of more sustainable practices (reduced tillage, cover crops, and organic practices, among others), most Midwest states once again lag.

RANKINGS BY RESOURCE RELIANCE

RANKINGS BY USE OF CONSERVATION PRACTICES

But Midwestern farmers want to change the map.

To sum up: in general, the Midwest is using up a variety of limited resources and farming in ways that degrade its soil and water, while falling far short of producing the variety of foods we need for healthy diets. Not a great system. But there are hopeful signs that the region may be starting to change course.

For example, in Iowa, more and more farmers are expanding their crop rotations to add oats or other small grains, a practice that research has shown regenerates soils, improves soil health, and delivers cleaner water, while also increasing productivity and maintaining profits. Diversifying crops in the field can also help to diversify our food supply and improve nutrition.

Back in my alma mater state of Indiana, farmers planted 970,000 acres of cover crops in 2017—making these soil protectors the third-most planted crop in the state. And in a surprising turn of events just last week, Ohio’s Republican governor signed an executive order that will require farmers in eight Northwest Ohio watersheds to take steps to curb runoff that contributes to a recurring problem of toxic algae in Lake Erie that hurts recreation and poisons Toledo’s drinking water.

A recent UCS poll provides additional evidence that farmers across the region are looking for change. Earlier this year, we asked more than 2,800 farmers across the partisan divide in seven states (Iowa, Illinois, Kansas, Michigan, Ohio, Pennsylvania, and Wisconsin) about federal farm policies that today incentivize the Midwest agricultural status quo. Nearly three-quarters of respondents indicated they are looking for a farm bill that prioritizes soil and water conservation, while 69 percent supported policies (like farm-to-school supports) that help farmers grow more real food for local consumption. More than 70 percent even said they’d be more likely to back a candidate for public office who favors such priorities.

Speaking of the farm bill, things are coming to a head in Congress this summer over that $1 trillion legislative package that affects all aspects of our food system. As the clock ticks toward a September 30 deadline, the shape of the next farm bill is in question, with drastically different proposals passed by the House and the Senate. Critically important programs—including investments that could help farmers in the Midwest and elsewhere produce more healthy food and farm more sustainably—are at risk.

WHAT YOU CAN DO:

Leaders from the House and Senate need to come together to hash out their differences and agree on a compromise before the current farm bill expires. As they negotiate behind closed doors this summer, urge them to prioritize proven, science-based policies and programs that will alleviate hunger, improve nutrition, sustain our land, soil, and water, and help farmers prosper. Add your name to our petition to farm bill negotiators today!

 

The Endangered Species Act is Itself Endangered

The endangered margay. Photo: Proyecto Asis/Flickr

In the last two weeks, both the Senate and House have introduced bills proposing damaging amendments to the Endangered Species Act (ESA), the leading piece of science-based legislation used to protect and recover biodiversity in the United States. Notably, Senator John Barrasso, chairman of the Senate Committee on Environment and Public Works (EPW) and a long-time critic of the Act, released a discussion draft of the bill he’s been working on, titled “the Endangered Species Act Amendments of 2018.” The proposed changes would introduce more routes for political interference under the guise of increased transparency, while relegating science to an afterthought instead of the basis upon which Endangered Species Act decisions are made. An EPW hearing is scheduled for tomorrow morning, where representatives from Wyoming, Colorado, and Virginia will testify before the committee on the proposed changes to the Act.

Here are some of the most concerning pieces of the misguided Barrasso proposal and what you need to know:

Section 109: State feedback regarding United States Fish and Wildlife Service employees

This section requires state agencies that work with the Fish and Wildlife Service (FWS) on species conservation, management, and recovery, or on other activities related to implementing the ESA, to provide annual performance feedback to the FWS Director on how responsive and effective individual FWS employees are to state and local authorities and other stakeholders. This is nothing more than an intimidation tactic that could lead to scientists either being punished for saying things others don’t want to hear or self-censoring for fear of putting their jobs in jeopardy. It opens the door for states hostile to conservation work to give unfairly negative feedback, or simply to bring allegations against employees to undermine their work, with no mechanism for the federal public servants to respond or refute them. Ultimately, this limits the ability of FWS scientists to independently assess the science and make evidence-based recommendations to protect imperiled species, rendering the Endangered Species Act less effective.

Section 301: Policy relating to best scientific and commercial data available

This section gives a green light to the politicization of the science-based determination of whether a species needs protections. It establishes a policy under which the Secretary of the Interior, not a scientific expert, could assign greater weight to some data. The goal of this section is to automatically give State, Tribal, and local information greater weight regardless of its scope or quality. Of course, such data is currently considered, but it should not be given undue weight. In the event the Secretary finds the State, Tribal, or local data inconsistent with the “best scientific and commercial data available,” he or she would be required to provide a written explanation to the State, Tribal, or local government as well as to Congress, and to include it in the administrative record. This could discourage the agency from saying that the information is weak because of the political cost of doing so.

Section 302: Transparency of information

In an effort to slow the species listing process, this section would require that all raw data underlying a listing decision be released. Furthermore, any state or local information used in a listing decision would have to be approved by that state or local government before publication. Again, this could lead to FWS or the states censoring the scientific information used to determine whether a species needs protections. And it would increase the procedural requirements for assembling the scientific information, slowing the process.

This section is a deliberate misinterpretation of the process we have now and will succeed only in making the Endangered Species Act process more difficult. It has been drafted under the false premise that FWS does not already heavily involve or communicate with all stakeholders, including state, local, and tribal governments.  And it implies with no justification that the federal agencies are “hiding something,” which further politicizes the process.

The Endangered Species Act has prevented 99% of species listed under the law from going extinct. The decisions on whether species need protection are based solely on the best available science. Giving greater authority to states that often lack the resources, political will, and national perspective to protect species is, to put it simply, a bad idea. Statutes like the Endangered Species Act are in place to set a national commitment, in this case for saving endangered or threatened wildlife from extinction by focusing first on science. But the changes proposed by Senator Barrasso would politicize the process and add undue procedural burdens, putting wildlife at risk for short-term political gains.

As both the House and Senate try to rush through changes to the Endangered Species Act, call your member of Congress and tell them that a law meant to protect our precious wildlife resources and habitats should not be politicized. These endangered species, and all of our natural resources, depend on it.


The EPA Should Not Restrict The Science They Use To Protect Us

EPA office building with agency flag

On Tuesday morning, the Environmental Protection Agency is holding its only hearing on its proposed rule that would restrict the science the agency is allowed to consider in developing health and safety protections. My colleagues and I have written extensively about this proposal. On Tuesday, I will have the opportunity to speak directly to the agency about it. I will have five minutes. Here is what I intend to say:

“Good morning. I am Dr. Andrew Rosenberg, Director of the Center for Science and Democracy at the Union of Concerned Scientists. We advocate for the role of science in public policy. I am here today to ask that you rescind this proposed rule because it would only restrict EPA’s ability to use the best available science to fulfill its mission of protecting public health and the environment, while doing nothing to improve transparency in decision-making.

First and foremost, this proposal is fatally flawed because it provides almost no justification or analysis of the impacts of the proposed change in policy. There is no cost benefit analysis of the rule with respect to the agency and external researchers, nor how it would affect EPA’s mission-critical work. Additionally, the proposal would effectively prevent the EPA from using many kinds of scientific studies vital to its decision-making. This includes, but is not limited to, studies that rely on personal health data, confidential business information, intellectual property, or older studies where the authors or data sources may not be accessible. Without the ability to use this scientific information, EPA would be unable to meet its mission and statutory obligations. This proposal would make it significantly harder for EPA to use the best available science to protect the public, including from:

  • Harmful emissions of hazardous air pollutants, particulate matter and ozone
  • Exposure to dangerous chemicals in commerce
  • Drinking water contaminated with toxic chemicals such as PFAS or lead

Further, the Congressional Budget Office (CBO) has calculated that such restrictions would substantially increase costs and burdens for an agency already experiencing budget cuts, reorganizations, and understaffing, thus undermining EPA’s ability to make decisions based on science.

The proposed rule could also prevent the agency from addressing the impacts of dangerous chemicals at low concentrations, where direct measurements are very difficult. This would leave Americans unprotected even when there is clear indication of harm to human health.

I have over 30 years of experience in government service, academia, and non-profit leadership. I have authored or reviewed hundreds of peer-reviewed scientific papers. As part of my government service, I worked as a scientist and in a policy position at a regulatory agency, and in universities as a faculty member and dean. I understand how agencies use science in policymaking, how research at universities is conducted, and how these entities incorporate best practices of transparency into their scientific work. As a frequent peer reviewer, I do not review the raw data for studies, since that would tell me little. I review the research questions, the methods, the summarized data, and the results and conclusions in order to assess the quality of the work. EPA’s proposed rule would do nothing to improve transparency for scientists, policymakers, or the public. Crafting the rule without consulting the scientific community is a fatal error for this proposal. Even the agency’s own Science Advisory Board has noted the need to consult with scientists in any further development of this proposal.

A further fatal flaw is that the proposed rule would replace scientific evidence with political judgment. The rule would grant the EPA administrator broad authority to exempt individual studies or entire decisions from its provisions. Decisions on what science to rely on should be made by the agency’s scientific experts based on established criteria for best available science.

Five minutes is not enough time to cover all of the problems with this proposal. At best, this proposed rule is a misguided attempt at transparency. At worst, it is a backdoor attempt to prevent EPA from protecting public health.

UCS supports real transparency reforms. We support scientific integrity policies that prevent political interference in scientific analyses and reporting. We do not believe researchers should be put in the absurd position of choosing between protecting study participants’ privacy and informing the EPA’s efforts to protect public health and safety.

On behalf of the Union of Concerned Scientists and our 500,000 supporters I urge the EPA not to move forward with this rulemaking and to continue to allow the agency’s scientists and policy analysts to use the best science available to inform their work.”

House of Representatives Boosts Massachusetts Clean Energy; What’s Next?

Photo: John Rogers

The Massachusetts House of Representatives is moving on clean energy, and that’s really important. Here’s what’s noteworthy about yesterday’s votes, and what should happen next.

The house speaks

Yesterday the house took up a package of bills that have the potential to move clean energy forward for Massachusetts and the region.

  1. Renewable energy – The house unanimously approved an increase to the state’s renewable portfolio standard (RPS), to boost it from its current requirement on utilities of 25% renewables by 2030 to 35% by 2030, and drive clean energy for Massachusetts households and businesses. An amendment from one of the state’s most vocal offshore wind champions, Rep. Patricia Haddad, would have the state look at upping its offshore wind requirement (passed in 2016 and already producing important results) from 1,600 megawatts by 2030 to 3,200 megawatts by 2035.
  2. Energy efficiency – The house also passed bills that would help the #1-in-the-nation Bay State up its energy efficiency game even further. One bill would deepen efficiency efforts in general, and another would update appliance efficiency standards to keep driving innovation and cutting pollution—and save Massachusetts consumers hundreds of millions of dollars annually.
  3. Energy storage – Another bill passed by the house aims to “improve [electricity] grid resiliency through energy storage,” boosting the state’s investment in storage innovation, and requiring Massachusetts utilities to assess and improve their electricity transmission and distribution systems, including through consideration of “non wires alternatives” like energy storage.

These actions are important. In our bicameral system, nothing happens in the legislature unless both the house and senate agree on it, so the house boost is welcome.

This wouldn’t have happened without the house leadership, and we owe credit, too, to a sign-on letter led by long-time house climate champion Rep. Frank Smizik, which garnered support from more than half of the representatives.

And we’re not done.

More clean energy, closer now (Photo: Erika Spanger-Siegfried/UCS)

What’s next: Solar, senate, soon

In terms of next steps, the nearest term to-do on clean energy for the house is to pass something on solar, as called for in the Smizik letter. And not just anything, but a bill that removes the barriers that are standing in the way of solar development in various parts of the state, clarifies the legislative intent on fixed charges that the state’s utilities seem to have misunderstood, and boosts solar opportunities for low-income households.

Then we need the house and the senate, which passed its own clean energy package last month, to hammer out the differences between their bills.

The final package should include a strong RPS increase; removal of barriers to solar for low-income customers, customers as a whole, and our solar industry; energy efficiency’s next act; a push for energy storage; and, given carbon pollution, a boost for transportation electrification.

This all can happen before the legislative session ends on July 31, and it needs to. To get Massachusetts as quickly as possible to its clean energy future, for our clean energy economy and clean energy jobs, for cutting pollution and addressing climate change, we need leadership from our representatives and their counterparts in the senate. Yesterday was an important next step.


Intimidation, Disinformation, the Formula Industry and the Next Dietary Guidelines

Photo: Bradley Gordon/Flickr

It’s nearly time for the federal government to update its Dietary Guidelines for the public, and this time around the recommendations will include legally mandated dietary guidance for pregnant women, infants, and toddlers (from birth to age 24 months). With that in mind, my colleagues and I were troubled to read of a dust-up over infant formula that occurred at the World Health Organization this past spring.

According to attendees of the World Health Assembly in Geneva, the United States advocated for industry positions as it negotiated a draft resolution on infant and young child feeding, threatening countries with trade retaliation if they introduced the resolution as written. This led Ecuador, which had originally drafted the resolution, to pull out from introducing it. Fortunately, Russia stepped in to reintroduce it, and member countries worked together to pass a version with strong language in support of breastfeeding over breast milk substitutes. However, the final version was missing some important provisions, including one that would give member countries the ability to ask the WHO director-general for support in the “implementation, mobilization of financial resources, monitoring and assessment” of the code, and in its legal and regulatory enforcement, for countries seeking to halt the “inappropriate promotion of foods for infants and children.”

This type of inappropriate interference from the infant formula industry, and the willingness of the US to aggressively push industry positions by employing threats of trade restrictions, does not bode well for what lies ahead for the Dietary Guidelines, the process for which kicked off this year. As with all science-based processes in federal policymaking, there is an opportunity for undue influence to obscure the facts in order to achieve outcomes that maintain the status quo. And undue industry influence is no stranger to this process. For example, the final 2015 guidelines failed to incorporate the Dietary Guidelines Advisory Committee’s (DGAC’s) evidence-based recommendation that food system sustainability be incorporated into the guidelines, after big food industry players, most notably the meat industry, opposed the scientific conclusion. Already, the Infant Nutrition Council of America has been actively engaged in the start of the 2020 Dietary Guidelines process and has lobbied the USDA and HHS on the issue this year. While it makes sense that they’re weighing in, there is no room for inappropriate influence or false characterization of the science.

The formula industry’s long, sordid history spreading misinformation

Three companies dominate the infant formula market: Nestle, Abbott Laboratories, and Mead Johnson. They are members of the Infant Nutrition Council of America, the trade association representing the infant formula industry. Infant formula and baby food manufacturers have a long history of pushing back against science-based policies that would limit their ability to make health claims on their products or market them to certain demographics. As a result, we’ve seen delays to evidence-based added sugar labels, missed opportunities to tighten the language on health claims in children’s foods, and even the toning down of language in government breastfeeding campaigns.

The infant formula industry was using this same disinformation playbook decades before the recent WHO proceedings. In 1977, a massive boycott of major formula maker Nestle urged participants not to buy Nestle products until the company stopped misleading advertising that favored bottle-feeding over breastfeeding. The company then ardently fought against the WHO/UNICEF Code of Marketing of Breast-Milk Substitutes, which, once passed in 1981, prevented formula companies from targeting mothers and health care providers with promotions and health claims on packaging. When it passed, 118 countries voted to approve it. The United States was absent from that list, presumably because of industry sway.

Breaking down the science on breastfeeding

Leading scientific authorities on maternal and children’s health at the American Academy of Pediatrics, the American Public Health Association, and the American College of Obstetricians and Gynecologists all promote exclusive breastfeeding for the first six months of life as the preferred method of infant feeding, owing to the health benefits for both mother and child. The literature on breastfeeding has revealed its association with a variety of beneficial health outcomes, including decreased risk of asthma, obesity, type 1 and 2 diabetes, sudden infant death syndrome, and respiratory tract infections for the infant, and decreased risk of type 2 diabetes and breast and ovarian cancers for the mother. Not only is it healthful, it is also cost-effective. A 2013 Lancet series on maternal and child nutrition estimated that universal breastfeeding would prevent the deaths of over 800,000 children and 20,000 mothers, saving $300 billion globally each year. According to researchers at Harvard Medical School, in the United States alone, if 90% of families breastfed exclusively for six months, it would save $13 billion per year in healthcare costs and prevent 911 deaths.

It’s imperative that moms are supported in breastfeeding as an option; some moms are unable to breastfeed for a variety of reasons, and for them formula is the best alternative. Having breast milk substitutes available is crucial, but spreading misleading information about the benefits of formula over breastfeeding, and marketing accordingly to certain demographic groups, is completely irresponsible.

Despite what President Trump and others might argue about the need for infant formula for poor women in developing countries, the data has shown that it may actually be more feasible for women to produce healthy breast milk than to have access to clean water to mix with powdered infant formula to feed their infants. A 2018 National Bureau of Economic Research study found that the availability of formula actually increased infant mortality by 9.4 per 1,000 births and estimated that, as a result, 66,000 infants died in low- and middle-income countries just in 1981.

The 2020 Dietary Guidelines must preserve scientific integrity

UCS submitted comments to HHS and USDA in April on the Dietary Guidelines process urging the agencies to “maintain a high degree of integrity, autonomy, and transparency to ensure that the guidelines represent the best available science and avoid any bias that could work against the interests of public health.” In other words, the US government cannot allow the makers of infant formula to pressure them into weaker dietary guidelines that go against the best available science. Ultimately, we need access to accurate information so that we can make dietary decisions that help us achieve optimal health through nutrition, and we are counting on our government to rely on evidence, not industry talking points on matters of our children’s health. We will continue to monitor this process as the Dietary Guidelines Advisory Committee is formed in the coming months to ensure that scientific integrity at the agencies is upheld.

 


Congress Must Extend and Reform the National Flood Insurance Program

The National Flood Insurance Program (NFIP) is up for re-authorization by the end of July. As flood risks grow around the nation, it’s time for Congress to reform and update this vital 50-year-old program to better protect people and property. Without appropriate action, a warming climate coupled with rapid development in floodplains will raise the human and economic toll of flood disasters while taxpayer dollars are squandered on risky, business-as-usual investments.

Why the NFIP is so important

Last year’s devastating hurricane season brought unprecedented flooding to Texas, Florida, and Puerto Rico. This year, we’ve already seen terrible floods across the nation: in the Midwest, in Ellicott City, MD, in California, and in many more places. The NFIP is critical to getting people back on their feet after these types of disasters. And now Congress must pass reforms to help ensure the program also works to limit harm going forward.

In previous blog posts here and here, I’ve explained how the NFIP is more than just an insurance program; it’s intended to be a floodplain management and flood risk mitigation program. And today, with just over 5 million flood insurance policies in force, it’s the single largest source of flood insurance for homeowners and small businesses, making it vital to the economic well-being of communities.

Why reforms to the NFIP are essential

Unfortunately, over the years, Congress has failed to make adequate investments in accurate flood risk maps. That means that many Federal Emergency Management Agency (FEMA) flood risk maps are seriously outdated and even the updated ones don’t reflect future conditions such as projections of sea level rise. It has also underfunded and failed to incentivize measures to encourage homeowners and communities to reduce their flood risks.

Outdated maps, subsidized flood insurance premiums, and repeated payouts for business-as-usual rebuilding in floodplains after disasters have dulled communities’ awareness of their flood risks and blunted incentives to reduce those risks and limit development in areas prone to flooding.

The NFIP was originally conceived as a program that would help homeowners access affordable flood insurance coverage (at a time when the private sector was increasingly unable to provide this service) and reduce future flood risks by incentivizing risk-mitigation measures and discouraging development in floodplains.  It was never designed to cope with the types of extreme flood disasters the nation has experienced recently, relying as it does on affordable insurance premiums and modest Congressional appropriations for its budget.

A series of major storms—including Hurricanes Katrina, Rita, Sandy, Harvey, Irma and Maria—have had a dire effect on the program’s finances, forcing it to borrow ever-increasing amounts from the US Treasury. Last year, $16 billion of the NFIP’s debt to the Treasury was forgiven, the first time this has happened. The program’s debt stands at about $20.5 billion now, although claims from last year’s hurricane season are still not fully resolved.

Meanwhile flood risks are growing in many places around the nation. A recent report from the Union of Concerned Scientists finds that, in just the next 30 years, hundreds of thousands of coastal homes and commercial properties worth billions of dollars are at risk from chronic flooding worsened by sea level rise. In many inland areas, heavy rainfall events are also on the rise due to climate change, contributing to growing flood risks in non-coastal communities.

Another recent study found that the total US population exposed to serious flooding is significantly higher than previously estimated. According to the study: “Nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps).”

Both along the coasts and in inland floodplains, growing development in flood-prone areas is exacerbating exposure to flood risk by putting more people and property in harm’s way and reducing the ability of our landscapes to naturally absorb water.

All these challenges together are threatening the viability of the NFIP in its current form. But with the right reforms, the NFIP can play a vital role in making our nation more flood-resilient. What’s more, Congress can ensure that taxpayer dollars invested through the program are spent wisely to limit the costs of future disasters.

How Congress can fix the NFIP

These five reforms to the NFIP would go a long way to making the program more effective, equitable and science-based, while ensuring taxpayer dollars are well spent:

  • Updating flood risk maps nationwide using the latest technology and reflecting the latest science, consistent with the recommendations of the Technical Mapping Advisory Council. Congress will also need to appropriate sufficient funds to make this possible.
  • Phasing in risk-based insurance premiums and expanding the number of people carrying insurance to ensure adequate coverage for the growing number of homes exposed to flood risk, and to put the program on a more financially and actuarially sound footing.
  • Addressing affordability considerations for low- and moderate-income households through targeted vouchers, rebates, grants and low-interest loans for flood mitigation measures. FEMA’s recently-issued affordability framework provides some useful guidance, as do reports from the National Research Council.
  • Providing more resources for homeowners and communities to invest in reducing their flood risks ahead of disasters, including expanding funding for voluntary home buyout programs especially in places that flood repeatedly. Budgets for FEMA’s pre-disaster mitigation program and flood mitigation assistance programs should also be expanded.
  • Ensuring that a well-regulated private sector flood insurance market complements the NFIP without undermining it, including mandating that private insurers contribute to flood mapping fees and provide coverage at least as broad as NFIP policies.

Bills before Congress

There are several bills currently under congressional consideration, including three in the Senate—the Cassidy-Gillibrand bill (S.1313, the Flood Insurance Affordability and Sustainability Act of 2017), the Sustainable, Affordable, Fair and Efficient National Flood Insurance Program Reauthorization Act (SAFE NFIP) of 2017, co-sponsored by a bipartisan group of senators, and the Crapo-Brown bill (the National Flood Insurance Reauthorization Act of 2017)—and one in the House (the 21st Century Flood Reform Act).

More details on the bills’ provisions are here.

None of these bills on its own delivers the full set of reforms needed, and there are clearly deep differences between the House and Senate versions.

Of particular concern are attempts in the House bill to promote private flood insurance at the expense of the NFIP, rather than ensuring that the private insurance market and the NFIP work side by side to increase the number of people with robust insurance coverage.

Efforts to move toward risk-based insurance premiums must be accompanied by strong affordability provisions for low- and fixed-income households, as well as enhanced resources for flood mitigation measures. Without these provisions, those who can least cope with the impacts of flooding will be unable to afford insurance or to take steps to reduce their risks. There is bipartisan support for better flood risk maps, but Congress must commit to adequate budgets for FEMA to carry out this important work.

Reasonable people on both sides of the aisle should recognize that communities need help coping with growing flood risks, and a robust, reformed NFIP must be an important part of the solution.

Legislation requiring the US Government Accountability Office (GAO) to study the issue of voluntary home buyouts is also pending and should be passed.

Time to stop punting on much-needed reforms

Since the end of the last fiscal year in September 2017, the NFIP has had six short-term re-authorizations—the latest of which ends on July 31. Each time, Congress has failed to wrestle with much-needed reforms. The version of the Farm Bill that recently passed the Senate included a provision for a “straight re-authorization” extending the NFIP for six months without any reforms. It is unclear whether the House will adopt a similar proposal.

Congress must stop punting on much-needed reforms to the NFIP so that the program can serve the nation well in the decades ahead. Communities on the frontlines of worsening flood risks need help now and they don’t have unlimited time to wait as Congress dithers.

 

Monsanto Drags IARC Into the Depths of Its Disinformation Campaign on Glyphosate

Photo: Kennydu69/CC BY-SA 3.0 (Wikimedia)

Industry lobbyists have learned that a tried-and-true way to delay or block unwanted policy proposals is to attack the science supporting those policies and the integrity of the institutions that conducted it. We’ve seen this time and time again in the plays of the disinformation playbook.

Language from the House of Representatives’ draft HHS fiscal year 2019 appropriations bill.

One of these examples is playing out right now. Monsanto and the American Chemistry Council have launched a full-throttle attack on an international scientific body, the International Agency for Research on Cancer (IARC), after it issued a review of the scientific literature in 2015 concluding that the herbicide glyphosate is a probable carcinogen. The latest development in this years-long effort? A rider on the House version of the HHS appropriations bill that would prevent the National Institutes of Health from lending any financial support to IARC unless IARC agrees to reforms that have been called for by Lamar Smith and the House Science Committee at the behest of the chemical industry.

So why all the fuss about IARC and its glyphosate review?

IARC is an arm of the World Health Organization and funded by 24 governments, and predominantly by the NIH National Cancer Institute. It has been reviewing the evidence on potentially carcinogenic agents for over four decades and has been continually improving its process to maintain rigor, objectivity, and transparency.

Enter glyphosate. Glyphosate is the active ingredient in Monsanto’s best-selling weedkiller, Roundup, and is used on the majority of commodity crops in the United States because it is effective at controlling a variety of weed types. Any change in the safety determination of this chemical would shake up the messaging that the company has used for years. Monsanto got to work quickly using several plays in the disinformation playbook to control the science and the narrative.

Monsanto’s campaign to tarnish IARC’s credibility

IARC’s monograph volume 112, published in March 2015, evaluated glyphosate and four other pesticides based on a review of the published, peer-reviewed scientific literature and classified glyphosate as a “probable carcinogen.” Monsanto had begun a complex campaign to challenge the monograph and IARC itself even before the document came out, having been tipped off to its conclusions months beforehand by a former EPA employee. Documents released in 2017 revealed that, as part of the plan, Monsanto would “get someone like Jerry Rice (ex-IARC) to publish paper on IARC: how it was formed, how it works, hasn’t evolved over time, they are archaic and not needed now.” It would try to form “crop protection advisory groups” and commission scientific papers on animal carcinogenicity for which the “majority of writing can be done by Monsanto” to keep costs down. Monsanto even ghostwrote at least one opinion piece about IARC that was published in Forbes.

In early 2017, the American Chemistry Council (of which Monsanto is a member) started an organization called the Campaign for Accuracy in Public Health Research, aimed at setting the record straight on cancer determinations for certain items, including glyphosate, red meat, and cell phones, by promoting “credible, unbiased, and transparent science as the basis for public policy decisions.” Its website hosts several pieces attacking IARC’s process. The campaign appears to be almost a direct response to IARC’s 2015 classification of glyphosate as a probable carcinogen.

Not only was an assault launched on the institution, but the scientists at the helm of IARC and those who composed the glyphosate workgroup have been harassed and had their integrity challenged. The Energy and Environment Legal Institute (E and E Legal), a conservative advocacy group known for abusing FOIA, filed a series of open records requests asking IARC panelists for deliberative documents about the glyphosate monograph. IARC has told the scientists not to release the documents, asserting that it owns the materials and seeking to defend panelists’ right to debate evidence openly and critically without subjecting those deliberations to public scrutiny.

The House of Representatives Science Committee, led by the fossil fuel and chemical industries’ favorite champion Lamar Smith, has sent multiple letters to IARC Director Christopher Wild questioning the integrity of the glyphosate workgroup. Wild has responded (in November 2017 and January 2018), defending both the participating scientists and the institution and describing its process as upholding the “highest principles of transparency, independence, and scientific integrity.”

This whole campaign is eerily similar to the Sugar Association’s effort to derail a World Health Organization (WHO) report that recommended a 10 percent limit on calorie intake from added sugars back in 2003. The report, produced by the WHO and the Food and Agriculture Organization (FAO) in consultation with 30 health experts, reviewed the scientific literature and concluded that added sugars “threaten the nutritional quality of diets” and that limiting sugar intake would be “likely to contribute to reducing the risk of unhealthy weight gain.”

In a letter to the WHO, the president and chief executive officer (CEO) of the Sugar Association demanded that the report be removed from WHO websites, arguing that “taxpayer dollars should not be used to support misguided, non-science-based reports.” The letter also threatened the suspension of US funding to the WHO, warning, “We will exercise every avenue available to expose the dubious nature of [the report] including asking Congressional appropriators to challenge future funding” to the WHO. In addition to attacking the WHO directly, the Sugar Association, along with six other industry trade associations, wrote a letter to HHS Secretary Tommy Thompson asking for his “personal intervention” in removing the WHO/FAO report from the WHO website and challenging the report’s recommended sugar intake limit.

Unfortunately, this effort was effective in limiting the report’s influence on health policy. The World Health Assembly—the WHO’s decision-making body and the world’s highest health-policy-setting entity—issued a global health strategy on diet and health the following year, and the strategy contained no reference to the comprehensive WHO/FAO report.

IARC must be protected

We need more independent bodies conducting scientific reviews of the chemicals we are exposed to on a daily basis, not fewer. And we certainly need to hang on to the institutions that currently provide this much-needed service. More than one hundred scientists and health professionals from US and international institutions published a paper in 2015 evaluating IARC’s work over the past 40 years and outlining its role in identifying carcinogenic substances and informing important public health policy decisions. They push back against recent criticisms, writing, “We are concerned…that the criticisms expressed by a vocal minority regarding the evaluations of a few agents may promote the denigration of a process that has served the public and public health well for many decades for reasons that are not supported by data.” They further write, “disagreement with the conclusions in an IARC Monograph for an individual agent is not evidence for a failed or biased approach.” Indeed, Monsanto has no grounds to question the integrity of an entire institution just because its findings are inconvenient.

This most recent attempt to use the appropriations process to cut funding to a scientific body is a glaring example of how the disinformation playbook can be employed in subtle ways with dramatic consequences. Funding for our agencies should not be bogged down by ideological and political riders that threaten science-based policymaking and the future of international science institutions. The language requiring NIH to restrict IARC funding if certain terms aren’t met should be stripped from the HHS funding bill, and IARC should continue to receive US funding to support its important work reviewing the cancer risks of environmental contaminants to inform safety thresholds across the globe.

VW Settlement: A Needed Jolt for Electric Trucks and Buses, But More Is Needed

It has been nearly three years since the Volkswagen diesel scandal first broke. Since then, a handful of settlements have been reached, one of which provides states funding to offset the extra nitrogen oxide (NOx) pollution emitted by defective Volkswagens.

A dozen states have recently finalized such funding plans and others are taking public comment on draft plans. These plans offset a majority of the pollution by providing financial incentives for the purchase of clean trucks and buses.

And rightfully so. Trucks and buses make up a small fraction of vehicles on the road (7 percent) but a disproportionately large share of emissions. In fifteen states, heavy-duty vehicles are the largest source of NOx emissions from the transportation sector, despite being significantly outnumbered by cars.

For many states, the Volkswagen settlement likely represents their largest single investment in clean technologies for heavy-duty vehicles. Combined with the allure of a scandal, there’s a deserved buzz about these spending plans.

As news around the Volkswagen settlements continues, there are two important things to keep in mind: (1) this settlement provides only a fraction of the incentive funding we need to spur the deployment of electric trucks and buses, and (2) if history is our guide, incentives are only part of the equation. Solutions to global warming and air pollution ultimately rest on large-scale market shifts in response to plans, commitments, and standards, such as vehicle fuel efficiency standards and renewable electricity standards.

Righting the wrongs of the Volkswagen diesel scandal

If you haven’t followed the Volkswagen diesel scandal, here’s a quick recap: In 2015, Volkswagen (and its subsidiary Audi) admitted to intentionally cheating on emissions tests affecting 580,000 diesel cars sold in the United States since 2009. The cars’ emissions of NOx (a precursor to ground level ozone, aka smog) are a head-shaking 10 to 40 times higher than what’s allowed under law.

Settlements were reached between the California Air Resources Board (which led the investigation against Volkswagen), the US EPA, and Volkswagen requiring the company to (1) buy back or fix the polluting vehicles (estimated at $10 billion); (2) invest in charging infrastructure and consumer education for electric vehicles ($2 billion); and (3) provide funding to states and tribes to offset the extra pollution emitted by the cars ($2.9 billion).*

States are taking public comment on plans to offset pollution from Volkswagens

Actions eligible to offset pollution include replacing old trucks, buses, and freight equipment. Up to 15 percent of a state’s plan can also be used for electric vehicle charging infrastructure and hydrogen fueling stations.

States have discretion as to which types of vehicles and equipment to invest in, whether zero-emission battery and fuel cell technologies or combustion technologies. Importantly, plans must consider how the investments can benefit communities that bear a disproportionate share of air pollution.

A dozen states have already approved Volkswagen mitigation plans

Plans from Wyoming, Ohio, Connecticut, Pennsylvania, Maine, Utah, and Wisconsin remain broad, with many types of trucks and buses eligible for funding. In these cases, important decisions will come as the funding is awarded to specific applicants.

Other states’ plans have provided more details. Georgia’s focuses exclusively on electric shuttle buses at Hartsfield-Jackson International Airport and new buses for the XpressGA commuter service. Minnesota’s plan uses a phased approach, evaluating its funding priorities over time. Arizona’s plan focuses exclusively on public fleet vehicles, with most funding going towards school buses. Oregon’s plan only allows funding for school buses but, unfortunately, caps funding at $50,000 per vehicle. This amount is likely not enough to encourage school districts to buy electric buses, which are the best option for children’s health.

California recently approved the largest Volkswagen mitigation plan

As the state with the largest number of defective Volkswagens, California will receive the largest amount of funding to offset the vehicles’ pollution ($423 million). California’s recently approved plan provides the strongest signal among states’ plans for electrification, directing $300 million toward zero-emission buses, trucks, and equipment. More than 50 percent of California’s plan will benefit low-income or disadvantaged communities.

California’s plan strikes an appropriate balance, with significant funding going towards the cleanest (zero-emission) technologies and a more measured amount ($60 million) for combustion vehicles and equipment in categories where zero-emission technology is less developed. This combination of investments is expected to more than offset the Volkswagen pollution. UCS joined many other groups across the state in supporting California’s plan.

Table showing allocations of investments in California's Volkswagen environmental mitigation funding plan

Putting the Volkswagen settlement into perspective

As large a windfall as the $2.9 billion Volkswagen settlement is, it won’t be enough to meet our clean air and climate goals. In fact, its primary intention is to offset just the emissions from Volkswagen cars that exceeded the legal limit. But to meet our air quality and climate goals, we have to reduce far more pollution than came from 580,000 Volkswagens.

State budgets: less flashy, but equally important investments

Last week, an even larger commitment to clean vehicles continued with passage of California’s annual budget and allocation of the state’s cap and trade revenues. State budgets don’t have the same intrigue or news hook of an emissions scandal, but represent opportunities for the sustained investments needed to achieve our climate and air quality goals.

While the recently approved budget for low carbon transportation ($467 million) is lower than the current year’s funding ($560 million), California has quietly invested $1.2 billion in clean vehicles over the last five years. These investments are much larger than the state’s share of the Volkswagen environmental mitigation settlement ($423 million), which will be spread out over the next few years.

California’s funding for low carbon transportation has supported everything from electric car rebates (on top of the federal tax credit) to vouchers for electric trucks and buses. Demand for the incentive funding has often exceeded the supply, indicating consumers and fleet owners are more than ready to adopt clean vehicles.

Electric truck and bus support is limited beyond California

While fourteen states provide purchase incentives for electric cars, only New York offers incentives comparable to California’s for electric trucks and buses, but from a smaller overall pot of funding ($19 million in New York vs. $180 million in California this year).

Utah and Colorado offer a tax credit for heavy-duty electric vehicles, but the credits are capped at $20,000, which doesn’t offset much of the additional cost of an electric truck. Georgia used to have a similar tax credit, but it expired.

For comparison, California and New York’s rebates are roughly $100,000 per truck or bus, depending on the size and type of the vehicle. And the rebate structure is much better for fleets, allowing the savings to be had upfront, rather than waiting for a tax credit several months later.

Federal support for heavy-duty electric vehicles has also been limited to a relatively small amount of funding for transit buses and airport shuttle buses. This is in contrast to the $7,500 federal tax credit for electric cars, which has been critical to uptake of these vehicles. Electric trucks and buses need similar incentives to spur widespread adoption.

The Volkswagen settlement could be a catalyst

While investments from the Volkswagen settlement are only a start in reaching the number of electric trucks and buses we need on the roads, they may prove critical in demonstrating the market readiness and benefits of these vehicles to justify additional investments. The availability of electric trucks and buses is increasing rapidly and public policy must keep up with these advances.

* Two other settlements – for $4.3 billion – addressed Volkswagen’s criminal and civil penalties for cheating on emissions tests and lying about cheating.

Three Revolutions and the Future of Cars: An Interview with Dr. Dan Sperling

There are a number of benefits we can expect to see with the introduction of autonomous vehicles (AVs), including more convenient transportation. One possible consequence resulting from this would be an increase in the number of miles that people drive, creating more vehicle pollution. To avoid this outcome, experts like Dr. Dan Sperling* from the University of California, Davis, are stressing the need to incentivize low-carbon vehicles (like electric cars) and an increased number of passengers per trip (sometimes called sharing or pooling).

My colleague Abby Figueroa sat down with Dr. Sperling to discuss the future of transportation and his book Three Revolutions: Steering Automated, Shared, and Electric Vehicles to a Better Future.

I extracted some key excerpts from the interview. You can listen to the complete interview here:

Abby Figueroa (AF):  So you have a book that you’ve written recently, “Three Revolutions,” where you talk about what needs to happen next in transportation. Let’s talk about those three revolutions. Let’s start with the first one, electric vehicles. What’s going on with electric vehicles these days?

Dr. Daniel Sperling, Distinguished Professor of Civil Engineering and Environmental Science and Policy, and founding Director of the Institute of Transportation Studies at the University of California, Davis (ITS-Davis).

Dan Sperling (DS): Well, electric vehicles is a fascinating topic that I’ve spent many years on. And now as was mentioned earlier, I’m a board member for the California Air Resources Board. So California is fighting with the Trump administration over electric vehicle rules but electric vehicles are here to…not only here to stay, they’re going to dominate. There’s almost no question about it. Every car company in the world has made a major investment. They’ve got the technology, they’ve got the supply chains, they’re really just waiting for policy to really push them and consumers to start buying them. But they’re ready to go. And they’ve got the technology. So it’s really a question of how intent are we as a society in making it happen. Certainly in California, we’re really committed and we’re going to see massive introduction of electric vehicles in the coming years.

[…]

AF: So electric vehicles is the first revolution that needs to happen in transportation so that we can start reaping the benefits of reduced carbon emissions and better safety and less pollution. The second revolution you talk about in your book is automation, self-driving cars. Tell us a little bit about what’s going on in that world right now. How close are we to self-driving cars becoming a reality?

DS: Well, automation also is inevitable. It’s definitely going to happen, there’s almost no question. In this case, not just the automotive industry, but many other related companies, all the high-tech software companies, Silicon Valley companies, Google, are all making huge investments. So automation is definitely going to happen. In fact, our cars already are partly automated. Today, you can get some cars that will drive themselves on freeways right now, the Tesla, Audi, Cadillac, Mercedes.

[…]

AF: The car companies are racing forward with the technology. And the legislators and the cities are racing to try to keep up with the policies. And I think with reason people are excited and some folks are feeling more cautious and wary of it all. What’s the future looking like once we have these automation, these self-driving cars on the roads? How does that change our commute and the way we get around our communities?

DS: Well, the automated vehicles could play out in two different ways. They could be just basically superimposed on our current transportation system. In other words, we now go out and we buy our own car so now we would just go out and buy our own automated car. And so it would be the same except that it would be automated. If that were the case, that is what leads to what we sometimes call, the hell scenario…

AF: The dream or the nightmare that you called it in your book…

DS: In my book I call it, The Nightmare Scenario. And that’s because if you have an automated car, you can spend time in that car doing anything you want. You can eat, sleep, tweet, text, it can be your office. It can be your hotel room. And so you’re going to be much more willing to take long trips because you don’t mind so much being in the car. And it won’t be just being in the car more, cars will be empty part of the time. You go to a meeting, you don’t know quite when you’re gonna get out, you don’t wanna pay for parking, you just have the car circle around the block. You know, we refer to single-occupant vehicles, we’re going to have zero occupant vehicles, you know, zombie cars.

AF: That would be the nightmare scenario. That’s worse than the parking lots full of cars. It’s just cars roaming on the road with no one in them.

DS: So the other way it can play out, and that’s what we call the Heaven scenario, the dream scenario, is that these vehicles are used mostly or even totally as a mobility service, as a pooling service, meaning you take Lyft line or Uber pool and some other micro-transit companies like Via or Chariot. And you automate it and now you get rid of your cars, you don’t own cars anymore. And you just hit that button, car comes, takes you where you wanna go.

AF: Is there someone in the car with us?

DS: There’s no one in the car. And the cost is really cheap because you don’t have the driver, the automation won’t cost that much and the car will get really cheap because it’s being used so efficiently. Right now, our cars, they sit 95% of the time on average. Now we’re gonna use it 12 hours, 15 hours, 18 hours.

AF: Much more efficient.

DS: Much more efficient, so we won’t need as many. And because people are gonna pool in it, you know, there are multiple people in these cars. And these cars might not be cars like we know them now, they could get a little bigger, be more like a van, small vans. You know, probably there’ll be a differentiation of service, some people will want a more exclusive service and pay more, but the point of this is, that if we do have this pooling, that is by far the best strategy we can imagine to create a sustainable transportation system.

Because it’s cheaper, it requires less road space, less parking space, it provides more accessibility to more people, low income, physically disadvantaged, disabled.

[…]

AF: So of the three revolutions, electrification, automation/self-driving, and pooling, which one or which combination of those three are the ones that can have the best impact on our carbon emissions, the best positive climate impact?

DS: Well, if we had all electric vehicles, that would probably be the best for just reducing greenhouse gases, because there you can get, as we decarbonize our electricity system, we’re talking about a 80%, 90% reduction in greenhouse gases.

AF: And transportation is the leading cause or source of emission right now. So that’s the huge…

DS: In California, it’s over 40% of the total and nationally it’s over 30%. That’s right. So electric vehicles, if you just looked at it carbon, then electric vehicles is necessary. It’s kind of like given you have to do that. The rest of this, the pooling combined with the automation can help us reduce vehicle use. So then we can knock off another 20%, 30%, 40%, 50%.

AF: So electrification makes cars cleaner. And automation and pooling takes cars off the road.

DS: Yes.

AF: So those two things combined will help our carbon emissions again.

DS: Yeah, maybe a better way of saying it is it reduces vehicle miles traveled. It reduces vehicle use. So we’ll have less vehicles around because there’s more people in each vehicle.

AF: And they’re being more efficient. The cars aren’t parked 95% of the time.

DS: Exactly.

AF: Got it. So all the three revolutions really are interconnected, if we are to get to this dream scenario?

DS: Yes.

 

* Dr. Daniel Sperling is Distinguished Professor of Civil Engineering and Environmental Science and Policy, and founding Director of the Institute of Transportation Studies at the University of California, Davis (ITS-Davis). He holds the transportation seat on the California Air Resources Board and served as Chair of the Transportation Research Board of the National Academies in 2015-16. Among his many prizes is the 2013 Blue Planet Prize from the Asahi Glass Foundation for being “a pioneer in opening up new fields of study to create more efficient, low-carbon, and environmentally beneficial transportation systems.” He served twice as lead author for the IPCC (sharing the 2007 Nobel Peace Prize), has testified seven times to the US Congress, and has authored or co-authored over 250 technical papers and 12 books, including Three Revolutions: Steering Automated, Shared, and Electric Vehicles to a Better Future (Island Press, 2018). He is widely cited in leading newspapers, has been interviewed many times on NPR, including Science Friday, Talk of the Nation, and Fresh Air, and in 2009 was featured on The Daily Show with Jon Stewart.

Grendelkhan/Wikimedia Commons

An Open Letter to the Massachusetts House Leadership: Time for Climate and Energy Action

Credit: Tony Hisgett

Honorable Robert DeLeo, Speaker of the House; Honorable Ronald Mariano, Majority Leader; Honorable Patricia Haddad, Speaker pro Tempore; Massachusetts State House, Boston, Massachusetts

Dear Speaker DeLeo, Leader Mariano, and Speaker pro Tempore Haddad,

I know this is a busy time for you, but I was hoping for a few minutes of your attention.

I’m directing this note to the three of you because you’re particularly well positioned, as the #1, #2, and #3 in the House of Representatives, to make a difference on some really big opportunities (and needs) having to do with climate and clean energy. I’m also reaching out because, as a new analysis on tidal flooding projections from my colleagues here at the Union of Concerned Scientists shows, your hometowns stand to lose more than most from a stay-the-course mentality on addressing carbon pollution. The connection between your leadership and limiting climate impacts should be plenty clear.

Seawalls do the trick against king tides, but only up to a point (Credit: MyCoast.org)

What floods may come

First, on the new analysis: The study, Underwater: Rising Seas, Chronic Floods, and the Implications for US Coastal Real Estate, combines data on accelerating sea level rise (due mostly to climate change) with data on property values. It looks at high tide flooding, and specifically at properties at risk of “chronic inundation,” meaning flooding from high tides at least 26 times a year. And it figures out the overall financial value of our homes and businesses at risk in coastal communities.

The Underwater results for Massachusetts are “quite sobering.” As soon as 2045—just around the temporal corner—chronic inundation from high tides threatens some 7,000 homes, worth a total of more than $4 billion today. For the Bay State communities themselves, that’s some $37 million (in today’s dollars) in annual property tax revenues from those homes at risk. Commercial properties add another $1 billion to the total at risk.

While you each have statewide responsibilities given your leadership positions, you don’t even have to look beyond your own towns to see some “sobering” numbers of your own:

  • In Winthrop, Mr. Speaker, just by 2045, at-risk houses have values currently totaling $160 million, with associated at-risk property tax revenues of more than $2.3 million per year. Add in Revere, and the totals are $535 million in value and $7.7 million in revenues, all at risk—higher, even, than Boston’s.
  • In Quincy alone, Leader Mariano, 2045 could see threats to homes valued at $327 million, and threats to tax revenues of $4.6 million annually.
  • For you, Madam Speaker pro Tempore, the value of Swansea, Dighton, Somerset, and Taunton homes at risk by 2045 adds up to $7.6 million.
  • Revere and Quincy have the dubious distinction of capturing two of the top three slots for number of homes at risk in Massachusetts, at 1,105 and 659, respectively (and Winthrop comes in at #6, with 440).

The longer-term picture, for coastal communities in the country as a whole and for Massachusetts specifically, is much more startling, particularly under scenarios with more climate change (based on higher emissions of heat-trapping gases like CO2).

Those regular floods are more than a nuisance, and homeowners, businesses, and communities will have to react. As my colleague Erika Spanger-Siegfried, a coauthor of the new UCS analysis, has said, different communities will be affected differently:

Some may see sharp adjustments to their housing market in the not-too-distant future; some could see a slow, steady decline in home values; and others could potentially invest in protective measures to keep impacts at bay for a few more decades.

All of those options hit the wallets of the homeowners and business people, hurt the finances of the affected communities, and affect people who depend on those areas for their livelihoods. Not reacting isn’t an option.

Keep Winthrop strong (credit: Flickr.com/acme401)

Leadership past and future

That, then, forces us to consider what we’re doing to change that longer-term picture. And that in turn brings us around to your positions as leaders of a key chamber of the legislature.

Last session, under your leadership, and with the senate, Massachusetts passed some pretty impressive stuff in the clean energy space. Your passion for offshore wind in particular, Rep. Haddad, gave us a nation-leading offshore wind target that the state is moving quickly to implement. And there was that strong requirement for long-term contracts for power from hydro or wind facilities, and more.

But you know that some stuff got left on the table, or in need of fixing. You set in motion a strong push for renewable energy, but the final version of the 2016 energy diversity bill failed to include the pull of the renewable portfolio standard that should have been paired with it. Rep. Haddad’s “Act to increase renewable energy” (H4575) looks to correct that.

Our state’s strong solar industry got a brief boost in 2016 legislation, but the thousands of hard-working Massachusetts solar workers and companies—and Massachusetts customers—quickly overran the new target that legislation had put in place. Plus the “fix” made it even harder for low-income households to get hold of solar’s direct benefits, by cutting the value of community solar. H4577 would help to address part of that, particularly if it includes amendments borrowed from other bills to fix access issues.

And other pieces make even more sense now, as technologies and markets have evolved (think energy storage and electric vehicles, for example).

That brings us to the present day. The Massachusetts climate/energy to-do list won’t be a surprise, since you and your colleagues have been hearing about it, including via thousands of messages from UCS supporters. It includes:

  • Strengthening the RPS, to levels that other states have figured out constitute the necessary leadership (think 50% by 2030).
  • Getting solar growing again, with special attention to lower-income would-be customers.
  • Investing in energy storage, to strengthen our electricity grid and position us to deal with the peak demand times—and to keep the dirtiest power plants firmly in the OFF position.
  • Pushing energy efficiency to the next level, so that Massachusetts homes and businesses can do more with less.
  • Keeping electric vehicles moving and accelerating, so that we’re tackling transportation emissions—now our #1 source of carbon pollution—head on.

There’s more to it than that, but these are the pieces that are in front of you right now, or in front of the House Ways and Means Committee. And, as you well know, given the need to work things out with your counterparts in the senate, this is the week for action.

Connecting the dots

It’s not hard to connect the dots between the recent UCS analysis and your actions over the next few days. Indeed, it’s hard not to connect the dots.

Under your leadership, we can choose a path that ramps up Massachusetts’s contribution to addressing the climate change that is affecting your communities, your neighbors, your constituents; a path that drives job creation and innovation; a path that addresses the pollution that hits vulnerable communities the hardest.

Or we can opt instead to wait and see what we’ve got. Let Massachusetts’s solar industry limp along and hope that other jobs await those who lose theirs. Accept power plant pollution and its inequitable distribution because that’s the way it has always been. Roll with the tide—literally—when it hits again, and again.

But let’s face it: that second one really isn’t a credible option. The challenges—and opportunities—mean that these are times that call for innovation, equity, and ambition.

And you have the motivation, and the power, to make it happen. So that when high-tide flooding hits even on sunny days, or other impacts become more and more apparent, and your constituents are looking for answers, you’ll have those answers. Not just about adapting to those challenges, but about hitting them head on, putting all the pieces in place to make sure that we, right here in Massachusetts—and in Winthrop, Quincy, and Somerset—are doing our part, and then some, to contribute to global efforts to limit climate change.

The bills on clean energy, energy storage, and clean transportation before Ways and Means and before the full house need your support to get over the finish line, in the strongest forms possible.

So thank you for your leadership. We’re counting on it, including over the next few days.

Across the United States, Local Food Investments Link Harvest to Health

Earlier this month, we took a deep, data-driven dive into the state of food and farming across the US with the release of our 50-State Food System Scorecard. Although the country as a whole isn’t exactly the poster child for healthy and sustainable food systems (far from it), there’s a lot of variability in what’s happening at farms, grocery stores, and dinner tables from one state to the next—and we’re here to learn from it.

Of course, we couldn’t assess the food system without taking a good, hard look at how it impacts its end users: us. The map below shows how states stack up when it comes to diet and health outcomes.

But our food system is complex, and understanding how all of its various parts are connected—for example, mathematically demonstrating how a diet-related disease like hypertension might be linked to something like land use—isn’t easy. There are a lot of factors that influence what, how, and why we eat what we do, and the path from farm to fork is long and winding. That means what a state’s farmers are doing doesn’t seem likely to strongly drive the state’s diet-related health outcomes. But being part of the same system, these two things do have a relationship (status: it’s complicated), and the wide array of data we’ve analyzed just might help us see it more clearly.

The diet and health outcomes map includes indicators related to food security, dietary intake, and diet-related chronic disease. See more at www.ucsusa.org/food-system-scorecard.

A general food rule: What happens in your state doesn’t stay in your state

For the most part, we wouldn’t expect to see a strong relationship between the types of food a state produces and the types of food its population consumes—much less any diet-related health outcomes. As I mentioned, there are a lot of things that factor into our dietary decisions, and dozens more that determine how they’ll impact our health in the long run. Plus, much of the food produced in any given state usually doesn’t stay there for long. Take the state of Washington, for example. It produces about 6.7 billion pounds of apples per year—nearly 20 billion apples. That’s enough for every adult, child, and infant in the whole state to eat an apple for breakfast, lunch, and dinner six days out of the week. (Fun to picture, but definitely not happening.) Instead, Washington exports nearly a third of its apples to countries around the world and ships a whole lot more to other states nationwide.
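The “nearly 20 billion apples” figure above is a simple unit conversion from pounds; here is a quick back-of-envelope sketch, assuming an average apple weighs about a third of a pound (a value not stated in the post):

```python
# Back-of-envelope conversion from pounds of apples to apple count.
# The average apple weight is an assumption; the production figure
# comes from the post.
POUNDS_PRODUCED = 6.7e9   # ~6.7 billion pounds of apples per year
LBS_PER_APPLE = 1 / 3     # assumed average weight of one apple

total_apples = POUNDS_PRODUCED / LBS_PER_APPLE
print(f"{total_apples / 1e9:.1f} billion apples")  # ~20 billion
```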

Could local food be changing the game?

However, the local food movement is making small shifts in the way our food system works. There are now nearly 9,000 farmers markets in the US, and facilities like food hubs and cooperatives are making it easier for farmers to join forces to supply food to local institutions like schools, hospitals, and universities. What’s more, many federal programs are working to help make these foods more affordable and accessible to everyone. Many markets now accept benefits from nutrition programs like SNAP or WIC, often offering incentives for fresh fruit and vegetable purchases. In addition to being good for farmers and low-income families, these programs offer data that can help us better understand the connections between farm, food, and health.

Photo: US Department of Agriculture/CC BY SA 2.0

As I mentioned, we wouldn’t necessarily expect food production and diet-related health outcomes to be strongly related, especially at the scale we looked at. (Statisticians measure this kind of relationship as “correlation”: the stronger the correlation, the more consistently two variables move together. And as any good statistician will tell you, correlation does not imply causation.) Data we evaluated from all 50 states show that these variables display some correlation, but it’s nothing to write home about.*

However, when you look at food production, food investments, and local food infrastructure together, it turns out that they’re much more strongly correlated with diet and health outcomes than food production alone. Meaning, when you take into account what a state grows, along with things like food hubs, farmers markets, and investments of federal funds to get more healthy food onto people’s plates, you start to see a clearer connection to diet and health outcomes in that state. Could this mean the local food movement will meet some of the lofty expectations we’ve set for it, like improving public health by getting more fresh produce to people?

We shouldn’t get ahead of ourselves—it’s possible, and likely, that there’s another variable we didn’t look at that could be partly responsible for driving both. (This is typically called a “confounding variable.” See the classic example of murder rates and ice cream sales for a good explainer of this term.) In our case, a confounding variable could be something like the effectiveness of a state’s government—a well-resourced and high-functioning state government could potentially contribute to higher rankings for all the variables in question.

But it’s worth a second look. Scorecard aside, we’ve heard plenty of anecdotes that suggest these local food programs are working for farmers and families, and there’s a growing amount of evidence to back them up. And intuitively, it makes some sense that achieving better diets and health would require both a healthier food supply and the means to get that food to the people who need it most. Is local food a silver bullet? Definitely not. But if we’re ever going to achieve a food system that is truly sustainable, equitable, and health-promoting, investments in local and regional food systems and in healthy food access will likely be at least one piece of the puzzle.

Say, what else connects farming to food and health?

You guessed it—the farm bill. If you’ve followed this year’s reauthorization of this massive piece of food and farm legislation, you might know that it includes everything from agriculture research that helps farmers to nutrition programs like SNAP (the largest nutrition assistance program, with a correspondingly large target on its back). But it also includes a lot of “tiny but mighty” programs that could help connect the dots between healthy food production and healthy populations.

The ideologically motivated House farm bill, which would heap additional work requirements onto SNAP participants and would reduce or eliminate benefits for millions, passed on June 21—and leaves many of these small local food programs in the dust. The Senate bill passed a week later and, by contrast, makes much-needed investments in a range of science-based food and farm programs. In addition to maintaining the core function and structure of SNAP, the Senate bill also includes many of the local food infrastructure programs that factored into our food system scorecard—like the Food Insecurity Nutrition Incentive program (FINI), the Healthy Food Financing Initiative (HFFI), the Farmers Market and Local Food Promotion Program (as part of the newly created Local Agriculture Marketing Program), and more.

Want to see these programs fully funded in the next farm bill? So do we.

House and Senate negotiators are likely to begin meeting this month to try to merge these two drastically different bills into one that everyone can live with. That process will be challenging, and it will need to be informed by people like you.

If you’re as invested as we are in the future of our food and farming systems, now is the time to act. Sign our petition to House and Senate negotiators today. 

*Spearman correlation coefficients and associated two-tailed probabilities:

Food produced; diet and health outcomes (r = .39, p < .005)

Food produced, food infrastructure, and food investments indicators, averaged and ranked; diet and health outcomes (r = .54, p < .001)

Food infrastructure and food investments indicators, averaged and ranked; diet and health outcomes (r = .40, p < .005)
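For the curious, the coefficients above are Spearman rank correlations, which measure how consistently two sets of ranks move together. A minimal sketch of the calculation using the standard no-ties formula, with made-up ranks for ten hypothetical states (not the actual Scorecard data):

```python
def spearman_r(x_ranks, y_ranks):
    """Spearman rank correlation for two equal-length lists of ranks
    with no ties: r_s = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x_ranks)
    d2 = sum((x - y) ** 2 for x, y in zip(x_ranks, y_ranks))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up ranks for ten hypothetical states (NOT the Scorecard data):
production_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
health_rank = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

print(round(spearman_r(production_rank, health_rank), 2))  # 0.94
```

A coefficient of 1 means the two rankings agree exactly, -1 means they are exactly reversed, and values near 0 mean little consistent relationship.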

Court Says Agency Can’t Indefinitely Delay Implementation of Obama-era Rules

Photo: Allen and Allen (allenandallen.com)/Flickr

Here is a beacon of good news to temporarily brighten your dark and stormy social media feeds. The U.S. Court of Appeals for the Second Circuit struck down an attempt by the Trump Administration to indefinitely delay a rule that was set to increase the fines automakers must pay for failing to meet fuel economy targets. Though Elaine Chao and the Department of Transportation have already begun a rulemaking to roll back the fine increase that was finalized during the last days of the Obama Administration, at least they now cannot indefinitely delay the effectiveness of the rule while they go through the rigmarole of rolling it back.

How did we get here: A brief history of CAFE fines and penalties

Get your waders on and join me in the weeds of the Corporate Average Fuel Economy (CAFE) standard, a regulation administered by the National Highway Traffic Safety Administration (NHTSA) and not to be confused with, though inextricably linked to, the EPA rules that govern tailpipe emissions.

CAFE was created by the Energy Policy and Conservation Act (EPCA), which required NHTSA to set a penalty for automakers who fail to meet any federal fuel economy target. When EPCA became law in 1975, the CAFE penalty was set at $5.00 per tenth of an mpg per vehicle sold. Though Congress passed a law in 1990 requiring federal agencies to adjust civil penalties for inflation, NHTSA updated these fines just once, in 1997, to just $5.50. This 50-cent increase was far short of what the fine should have been if it were truly adjusted for inflation. According to the Department of Labor, $5.00 in January 1975 actually had the same buying power as $15.27 in January 1997.
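That Department of Labor comparison is a standard consumer price index adjustment: scale the 1975 amount by the ratio of the two index values. A quick sketch, assuming CPI-U index values of roughly 52.1 for January 1975 and 159.1 for January 1997 (the post itself gives only the resulting dollar figures):

```python
# CPI-based inflation adjustment of the original $5.00 CAFE penalty.
# The index values below are assumptions drawn from standard CPI-U tables.
CPI_JAN_1975 = 52.1
CPI_JAN_1997 = 159.1

penalty_1975 = 5.00
adjusted = penalty_1975 * CPI_JAN_1997 / CPI_JAN_1975
print(f"${adjusted:.2f}")  # ~$15.27, versus the $5.50 NHTSA actually set
```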

In 2015 Congress updated its Inflation Adjustment Act to prevent agencies like NHTSA from setting artificially low penalties. In response, NHTSA (under President Obama) recalculated CAFE fines based on a formula laid out by Congress and set a new penalty of $14.00 per tenth of an mpg per vehicle. This recalculation was slated to go into effect for model year 2019 vehicles, and was finalized on December 28, 2016 – just 2 working days before President Trump and his cadre of regulation-slashing cabinet members and agency administrators took office.

Here is what President Trump is trying to do to reduce CAFE fines and penalties

In late January 2017, NHTSA published a series of rulemakings that delayed the effective date of the penalty increase – first for 60 days, then for 90 more days, then another 14 days, and finally, in July 2017, indefinitely while the agency reconsidered the penalty increase via a separate rulemaking. The decision to indefinitely delay the penalty increase was then challenged in the Second Circuit by a host of advocacy organizations and several state attorneys general. The two major trade groups representing automakers also joined the suit on the side of the government, which argued that federal agencies have an inherent authority to indefinitely delay a rule while it is being reconsidered. A panel of three judges (two of whom were appointed by a Republican president) disagreed.

This ruling may do nothing to incentivize automakers to improve fuel economy

I know this is a good news post, but I do need to sprinkle in some bad news. The bad news is that NHTSA is moving forward with another rulemaking that would flatline the CAFE penalty at $5.50 per tenth of an mpg per vehicle – effectively rolling back the penalty increase that the Obama Administration worked to put in place. In that rulemaking, NHTSA argues that the CAFE penalty is not a “civil monetary penalty” as defined in the law requiring agencies to adjust penalties for inflation, and therefore does not need to be adjusted. I can’t wait for a court to get involved in these semantics if (when) this rule is challenged.

But this ruling could help other courts rule against the Trump Administration

On the one hand, the Second Circuit ruling on the indefinite delay rule may do nothing to incentivize automakers to meet fuel economy targets, since the penalty for missing mpg targets will likely be flatlined anyway. On the other hand, the finding that federal agencies cannot indefinitely delay a rule while it is pending reconsideration is a holding that could be applicable in other ongoing lawsuits and may become a major thorn in the side of President Trump’s rollback agenda – especially if other federal circuits agree with the Second Circuit here.

For example, UCS is a plaintiff in a lawsuit challenging an attempt by the EPA to effectively indefinitely delay standards designed to prevent accidents at facilities that use or store hazardous chemicals. This case resides in the D.C. Court of Appeals, which may now look to the Second Circuit in determining whether EPA exceeded its statutory authority when indefinitely delaying a rule. You better believe our attorneys at Earthjustice sent the Second Circuit opinion on the CAFE penalties to the D.C. Court of Appeals, which is expected to issue a final ruling on the Trump Administration’s decision to delay the effectiveness of chemical safety standards in the next several months.

More broadly, the Trump Administration’s M.O. is to first delay the effective date of Obama-era rules – sometimes for months, other times indefinitely – and then roll them back. If the courts agree that an indefinite delay of a rule is inappropriate, this Second Circuit decision becomes a strong piece of jurisprudence for future challenges to President Trump’s attempts to erode public health and economic protections across all federal agencies.

 

Explaining Land Use Implications of Autonomous Vehicles: Meet Dr. Jonathan Levine

Aerial view of urban sprawl in Nevada. Photo by USDA NRCS

Autonomous vehicles (AVs) will change more than our streets, and over time could change the structure of cities, towns and neighborhoods.  As explained in our policy brief Maximizing the Benefits of Self-Driving Vehicles, “self-driving vehicles could increase the use of personal vehicles, exacerbating sprawl, congestion, and pollution. Alternatively, the use of self-driving vehicles predominately for shared rides could reduce the need for parking and expansion of roads, creating the potential to repurpose public space for uses such as businesses, green space, and walking and bicycling infrastructure.”

Meet Jonathan Levine, a Professor of Urban and Regional Planning at the University of Michigan whose research focuses on the intersection of transportation and land-use policy.

How AVs and other changes in transportation affect sprawl will depend on policies regarding land use. Why is land use policy important in realizing a positive role for AVs in a clean transportation future? Meet Jonathan Levine*, a Professor of Urban and Regional Planning at the Taubman College of Architecture and Urban Planning at the University of Michigan in Ann Arbor. Dr. Levine’s research centers on the potential and rationales for policy reform in transportation and land use. He is also interested in the design of institutions for emerging transportation systems – which may be based in large measure on self-driving electric vehicles – to serve metropolitan-accessibility goals.

I had the opportunity to meet Professor Levine when both of us served on the Policy and Social Justice Panel at the U.S. Department of Transportation Center for Connected and Automated Transportation’s Global Symposium on Connected and Automated Transportation and Infrastructure in March 2018. I asked him about the importance of land use considerations as autonomous vehicles become more prevalent on American roadways.

RCE: There is a lot of speculation not as to if, but when self-driving vehicle technology is coming.  People are talking about what changes this technology will bring to vehicles and our transportation choices, but your work takes a broader view, including land use.  Can you explain briefly what you mean by land use, and why this is an important element of transportation systems?

JL: Land use is the question of what happens and where across a metropolitan area.  Land-use patterns can be concentrated or spread out, centralized or decentralized, and mixed- or single-use.  The purpose of transportation is not movement per se, but access, or the ability to reach destinations.  Thinking of it in this way, an assessment of the quality of service provided by transportation systems must consider both the speed of movement and the location of destinations.

RCE: What are the potential pitfalls if we do not address land use concerns?

JL: In general, if we do not address land use, access to transportation will ultimately be impeded for consumers and constituents. Two examples of this impediment are parking and zoning. In many cities, when a new residential or commercial building is constructed, a minimum number of parking spots must be attached. This parking requirement increases housing costs in the area. Furthermore, when zoning laws encourage low-density development, that density is eventually capped and cannot increase.  Both of these pre-existing land-use policies would impede the development of a customer and constituent base.

RCE: What are some of the benefits we could see if we get changes in land use right? How do AVs play into making these easier or harder to achieve?

JL: Policy is important in achieving positive outcomes regarding land use. Too often in history, leaps in transportation technology have led to increases in the footprint of cities. What AVs could potentially do is encourage infill development in cities, reducing their outward expansion and making their per-capita environmental footprints smaller. The benefits are not restricted to cities; employing AVs to operate in coordination with public transit to encourage transit-oriented development can make suburbs more attractive to live in.

RCE: Here in the Washington DC metro area, we have what some call the “East-West divide,” where much of the region’s wealth and opportunity is concentrated in the west, and much of the poverty and social burden in the east. Improving transportation connections between east and west is one way to bridge the gap, but your work points in a different direction.  Can improving accessibility to destinations address challenges like the East-West divide?

JL: AVs could potentially address this access challenge in the area. They can be flexible and adaptable and need to work with sharing to reduce costs.  The accessibility approach would focus on areas of low transit accessibility and high proportions of people who are unable to rely on the private car for many of their trips.  With transit-AV coordination, shared AVs can help fill in these accessibility gaps.  Coordination might come in the form of congestion pricing or other access controls such as high-occupancy-vehicle lanes in heavily traveled transit-rich corridors, regulations or incentives spurring AVs to fill in the gaps, and extension of transit subsidies to shared AVs under certain circumstances.

RCE: What should community leaders keep in mind as they prepare for AVs to re-shape our cities? Do you have any recommendations they should advocate for to achieve the best outcomes?

JL: The future of AVs in cities and regions is not just a matter for technology and business-model development.  Policy at the state and especially municipal level will shape AV futures for better or for worse.  In many cases, the relevant policies are holdovers from an earlier auto era; in this sense planning for AVs is already underway without community leaders being aware.  But these holdover policies need reform to position cities and regions for a desirable AV future.

In addition, community leaders should recognize that the data currently being produced by on-demand mobility are immensely valuable.  They should seek leverage to gain access to that data for planning for integrating shared mobility in the short term and AVs in the longer term.

 

* Dr. Jonathan Levine is a Professor of Urban and Regional Planning at the Taubman College of Architecture and Urban Planning at the University of Michigan. His current work focuses on the transformation of the transportation and land-use planning paradigm from a mobility to an accessibility basis. Dr. Levine was recognized along with his colleagues with the 2010 Chester Rapkin Award for best paper in the Journal of Planning Education and Research, and in 2001 the Association of Collegiate Schools of Planning and U.S. Department of Housing and Urban Development awarded him the Excellence in Urban Policy Scholarship Award. He is the author of Zoned Out: Regulation, Markets, and Choices in Transportation and Metropolitan Land Use (Resources for the Future 2006) and The Accessibility Shift: Transforming Transportation and Land-Use Planning (forthcoming, 2019).

The Time Has Come for Stronger Investment in Water Infrastructure – Especially for Underserved Communities

Photo: US Marines

When news of the Flint water crisis made headlines, 21 million people across the country relied on water systems that violated health standards. Low-income communities, minority populations, and rural towns disproportionately face barriers to safe water. Drinking water challenges are complex: failing infrastructure, polluted water sources, and low-capacity utility management are all part of the issue. Declining investment in water infrastructure over the last several decades has exacerbated the problem. Access to safe water is essential for human health and well-being. Without serious investment in our water infrastructure we will continue to put communities at risk. As a country, we must support existing funding sources for water infrastructure, develop new and innovative funding mechanisms for long-term solutions, and more effectively prioritize the water needs of underserved communities. Furthermore, we must support the science that helps us understand the nature and extent of these water challenges.

To be clear, the U.S. as a whole has very good water quality. The 21 million Americans without safe drinking water make up 6% of the country’s population. But this low percentage means nothing to those who can’t turn on their tap to quench their thirst, take a shower, or cook their food. A host of federal programs help reduce the number of communities without safe water. The EPA’s Drinking Water State Revolving Fund (DWSRF) and Water Infrastructure Finance and Innovation Act (WIFIA), the USDA’s Rural Development Water Program, and HUD’s Community Development Block Grants provide essential funding and low-interest loans to fill gaps in state and local resources. These projects not only replace dilapidated pipes and pumps, they also provide trainings for utility operators, support partnerships to consolidate resources, and hire experts to identify the causes of contamination. These opportunities are crucial in rural municipalities, where water utility operators are commonly residents who volunteer their time.

Figure 1: Spatial clusters (hot spots) of health-based violations, 1982-2015. Hot spots of health based violations by county. Higher Z-scores indicate a higher number of health violations as compared to the average. Source: Maura Allaire et al. PNAS doi:10.1073/pnas.1719805115. Copyright: National Academy of Sciences, Engineering, and Medicine.

But how do we know if the communities that need the most help are getting it? In the case of the Drinking Water State Revolving Fund, states are required to prioritize systems with the highest health risk and the greatest financial need. An EPA database helps states identify which drinking water systems have the highest number of Safe Drinking Water Act (SDWA) violations, but it does not track whether these communities are considered low-income or disadvantaged.

For communities that can’t afford to take on the debt of a low-interest loan like the ones provided through the DWSRF and WIFIA, grants offer a debt-free alternative. The USDA’s Rural Development Water Program offers about ten types of grants for rural and small communities and tribes. The 2018 Congressional Budget also included a new EPA grant solely for addressing the water needs of disadvantaged communities.

Despite these efforts, some communities still lack safe water. From 1982 to 2015, the number of drinking water violations actually increased. It is unclear what proportion of this increase is due to stricter safety regulations, more polluted waterways, degrading infrastructure, operating errors, or a combination of these factors. What we do know is that most of the health violations link back to pathogens, and that the communities with the most violations are low-income communities and/or communities of color. A study from the American Water Works Association concluded that “in communities with higher populations of black and Hispanic individuals, SDWA health violations are more common…it is in the poorest of communities that race and ethnicity seem to matter most in determining drinking water quality.” Housing density is also a factor: a study published in the Proceedings of the National Academy of Sciences showed that urban and suburban areas tended to have fewer violations than rural areas.

Figure 2: Total violations per water system by housing density category and income group. Violations represent the portion of water system-year observations with violations. Low-income counties have median household income below 75% of national median household income. Source: Maura Allaire et al. PNAS doi:10.1073/pnas.1719805115. Copyright: National Academy of Sciences, Engineering, and Medicine

On top of increasing violations, investment in water infrastructure has decreased. An analysis from the Value of Water Campaign shows that combined federal investment in drinking water and wastewater infrastructure has declined from 63 percent of total capital spending to 9 percent since 1977. State and local governments have also decreased their capital spending on water infrastructure in recent years. The EPA estimates we need to invest $472.6 billion in our drinking water infrastructure over the next 20 years. The majority of this need can be attributed to rehabilitating, upgrading, and replacing existing infrastructure.

Federal investment in water infrastructure must continue and grow. Federal funds for infrastructure do more than build new systems and replace pipes; they support management and maintenance to achieve long-term goals. Communities all over the country struggle to have safe water. There are people working hard to address these issues, but more work is needed. Everyone has a role to play by supporting politicians who prioritize the needs of our failing water systems and the communities that rely on them. We must also support the science that has enabled us to better understand the nature and extent of these water challenges and their disproportionate impact on underserved communities. Safe water must no longer be a luxury.

Sara holds a Master’s in Environmental Management specializing in Water Resources Science and Management from the Yale School of Forestry and Environmental Studies. She is passionate about many aspects of water resources management. Currently Sara is working to reduce the loss of coastal wetlands as an ORISE Research Participant at the EPA.

Photo: US Marines

President Trump’s Supreme Court Pick: What’s at Stake for Science and the Environment?

Photo: Lorie Shaull/Wikimedia Commons

Battle lines over President Trump’s nominee for a new US Supreme Court justice are now being drawn, as they should be, over crucial issues such as a woman’s right to choose, health care, immigration, civil rights, and criminal justice. In past nomination fights, little attention has been paid to the court’s role in shaping environmental law and science-based regulation. But it would be a major mistake to overlook these issues now. The Supreme Court has an enormous impact on how US environmental laws are interpreted and enforced, and a new justice could tip the balance against science-based rules on climate change, clean air, and clean water.

This threat is especially potent now because the current court is composed of four conservative and four liberal justices who typically vote in their respective blocs, with retiring Justice Anthony Kennedy in the middle. Mr. Trump’s nominee is highly likely to align with the conservative bloc, and therefore to create a five-justice majority that could take the court in a sharply rightward direction for decades to come. To get a sense of how much hangs in the balance for the environment, consider three cases decided in the past decade on 5-4 (or 5-3) votes in which Justice Kennedy sided with the majority.

EPA’s duty to address global warming

In 2007, the US Supreme Court issued a decision in Massachusetts v. EPA that many consider the most important environmental decision in its history. The court ruled that the term “pollutant” in the Clean Air Act included the heat-trapping gases that cause global warming. This ruling, which sounds obvious now, was momentous then; it required EPA to make a determination about whether these heat trapping gases threatened health and the environment, and if so, to regulate them under the Clean Air Act. The ruling was the legal foundation for the bulk of the climate action plan issued by President Obama in 2013, and the key regulations to implement that plan (limits on carbon dioxide from power plants, controls on methane leaks from oil and gas operations, and EPA fuel economy standards for cars and trucks). The ruling enabled President Obama to offer an ambitious US emissions reduction pledge to the world which, in turn, made possible the Paris climate agreement.

This case was decided on a 5-4 vote, with Justice Kennedy joining four liberal justices. Justice Kennedy’s “swing vote” was therefore a linchpin of the federal government’s necessary push to address climate change.

Three of the four dissenters to that ruling (Roberts, Alito and Thomas) are still on the court, and the fourth dissenter (Scalia) has been replaced by the like-minded Neil Gorsuch. If President Trump picks a Supreme Court nominee aligned with the four dissenters, as seems highly likely, that decision—and with it EPA’s authority to address climate change—stands at risk, either of being overruled directly, or chipped away at via subsequent court decisions. In other words, a newly constituted court could damage the federal government’s fledgling efforts to address climate change at least as seriously as the EPA under Scott Pruitt tried to do—and that is saying a lot.

The role of science in water protection

In the key 2006 Supreme Court case Rapanos v. United States, a landowner was sued by the federal government for filling a wetland, but contended that the government did not have jurisdiction over his land under the Clean Water Act. The case raised a recurring question—does the Clean Water Act apply only to standing bodies of water such as rivers, perennial streams, ponds and lakes, or does it also protect upstream wetlands and intermittent tributaries? The court’s decision was complex and confusing, as four conservative justices opted for a restrictive test for federal jurisdiction and four liberal justices supported a more expansive test. Justice Kennedy issued a concurring opinion that eschewed the jurisdictional line that the four conservative justices promoted, noting that wetlands and intermittent tributaries can have significant effects on downstream water bodies. His opinion was a paean to good science; he reasoned that to exclude these lands would conflict with the overall purpose of the Clean Water Act. As he wrote:

Important public interests are served by the Clean Water Act in general and by the protection of wetlands in particular. To give just one example…nutrient-rich runoff from the Mississippi River has created a hypoxic, or oxygen-depleted, “dead zone” in the Gulf of Mexico that at times approaches the size of Massachusetts and New Jersey [and] scientific evidence indicates that wetlands play a critical role in controlling and filtering runoff.

In Justice Kennedy’s view, upstream wetlands and tributaries could be regulated, if they had a “significant nexus” to the downstream waters. Ultimately, science, and not arbitrary lines, would determine the issue of jurisdiction.

Unfortunately, the question of federal jurisdiction has not been settled. The Obama administration issued a rule that tried to clarify the question, but that rule was put on hold by the courts and is slated for repeal. So, more litigation is likely, possibly before the Supreme Court, and the question is this: Will a replacement justice demonstrate the same respect for science when considering the issue? If not, we could be left with a highly restrictive interpretation of the Clean Water Act that does not do justice to the complex science involved and fails to ensure clean water.

Drawing the line on governmental compensation for environmental regulations

The Constitution provides that government may not take private property unless there is a lawful purpose and the government pays compensation to the landowner. The provision was put in place to prevent physical seizures of property, but it has long been understood that sometimes a government regulation can be deemed a “taking” if it “goes too far” by leaving the landowner with no viable use of the property.

This was the issue the Supreme Court tackled in the 2017 case Murr v. Wisconsin. In the case, a landowner who owned two adjacent riverfront lots claimed that environmental restrictions prevented him from developing the lots and wanted the government to compensate him for “taking” one of the lots, even though the landowner could combine the two undersized lots into one larger, buildable one.

In that case, the court decided that it did not need to treat the two lots as separate, but instead would look at the value of the property with the lots combined. The court then ruled that the state had not “taken” the landowner’s property, because the owner still had viable use of it by combining the two lots.

Here again, the court split in a 5-3 decision, with Justices Alito, Thomas and Roberts dissenting, and Gorsuch not participating (presumably because he joined the court too late to do so). The case is important because, for those who favor radical deregulation, the takings clause could be a potent weapon when applied expansively. As the former Supreme Court justice Oliver Wendell Holmes once said: “Government hardly could go on if, to some extent, values incident to property could not be diminished without paying for every such change in the general law.”

What’s next?

These three cases illustrate the importance of the Supreme Court in environmental law, the court’s deep division on ideological grounds, and the key role Justice Kennedy’s independent vote has played. A new conservative justice is highly likely to tip this very delicate balance in ways that threaten continued progress on climate, clean air, and clean water. In addition to undermining the fragile decisions in the cases above, the court will likely rule on many new cases of major environmental import. In the next term, for example, the court will take up the authority of the Fish and Wildlife Service to designate “critical habitat” areas on private land to protect endangered species. Further down the road, if the Trump administration follows through on its threat to try to take away the right of California and other states to establish their own global warming emissions standards for cars and trucks, no doubt the court will be asked to weigh in on this crucial question.

Given how much is at stake, the public debate over the next nominee needs to include these issues. Just as nominees should be thoroughly questioned on a woman’s right to choose and civil rights issues, the nominee’s record on matters of science and environmental regulation deserves careful scrutiny. Senators should be prepared to ask probing questions, such as whether the nominee considers Mass. v. EPA to be “settled law” and therefore disfavored from being overruled under the doctrine of stare decisis. More generally, a robust discussion about whether the nominee accepts the scientific consensus on climate science, and whether and how a judge should consider scientific evidence in statutory interpretation, is needed. If this scrutiny reveals the nominee to be hostile to science-based regulation, this should establish a bright line which senators should refuse to cross.

 

Photo: Lorie Shaull

How Dangerous is New EPA Chief Andrew Wheeler? Very. Here’s Why.

Photo: Senate EPW

With Scott Pruitt’s resignation as administrator of the Environmental Protection Agency amid a slew of ethics scandals, environmentalists who long campaigned for his ouster should be careful what they wish for.

That is because the acting administrator of the EPA is now Andrew Wheeler, formerly the agency’s second-in-command. Nominated by President Trump and narrowly confirmed in April by the Senate, Wheeler came into the job with a record that is the polar opposite of the EPA’s stated mission “to protect human health and the environment.”

Andrew Wheeler: Coal lobbyist

Andrew Wheeler comes to the top EPA post as an unabashed inside man for major polluters on Capitol Hill. Wheeler lobbied for coal giant Murray Energy, serving as a captain in that company’s bitter war against President Obama’s efforts to cut global warming emissions and enact more stringent clean air and clean water rules.

When Pruitt sued the EPA 14 times as Oklahoma attorney general between 2011 and 2017 on behalf of polluting industries, a top petitioner and co-petitioner in half those cases was coal giant Murray Energy. Wheeler was its lobbyist from 2009 until last year.

Notably, Wheeler accompanied Murray Energy’s CEO, Robert Murray, to the now-notorious meeting last year with Energy Secretary Rick Perry, the one in which Murray handed Perry a 16-point action plan ostensibly designed to “help in getting America’s coal miners back to work.” That plan ultimately became the framework of a proposal by Perry to bail out struggling coal and nuclear power plants (Wheeler was also a nuclear industry lobbyist).

That particular proposal was shot down by federal regulators, but with Pruitt’s help, the Trump administration has made inroads on most of that plan’s 16 points, with devastating consequences to the environment—including the US pullout from the Paris climate accords, the rejection of Obama’s Clean Power Plan, and slashing the staff of the EPA down to a level not seen since the 1980s attacks on the agency by President Reagan.

Wheeler has denied helping Murray draw up that document, but he certainly shares its sentiments, telling a coal conference in 2016, “We’ve never seen one industry under siege by so many different regulations from so many different federal agencies at one time. This is unprecedented. Nobody has ever faced this in the history of the regulatory agenda.”

Andrew Wheeler: Longtime Inhofe aide

If it weren’t enough that a top coal lobbyist is now at the helm of the agency charged with protecting the nation’s environmental health, it bears noting that Wheeler’s vigorous lobbying career came after serving as a longtime aide to the Senate’s most vocal climate change denier, Oklahoma’s James Inhofe.

After the Trump administration announced Wheeler’s nomination to the agency in April, Inhofe hailed Wheeler as a “close friend.” That closeness was evident last year when Wheeler held a fundraiser for Inhofe, as well as for Senator John Barrasso of Wyoming, chair of the Senate Environment and Public Works committee, which advanced Wheeler’s nomination by a party-line 11-10 vote. The Intercept online news service reported that Wheeler held the fundraisers even after press accounts revealed that he was under consideration to be Pruitt’s second in command.

Up until now, Wheeler has largely managed to escape the harsh scrutiny that has forced the withdrawal of some Trump appointees—such as Michael Dourson, whose close ties to industry doomed his nomination to oversee chemical safety at EPA, or Kathleen Hartnett White, who spectacularly flamed out with her blatant skepticism about the sources of climate change, once calling carbon dioxide, a key greenhouse gas, the “gas of life.”

In contrast to these colleagues, Wheeler has so far stuck to dry, carefully brief statements that climate change is real, while agreeing with Trump’s pullout from global climate change accords. He even tried to play the good Boy Scout. After Tom Carper of Delaware recited Scouting’s commitment to conservation, Wheeler said, “I agree with you that we have a responsibility in the stewardship of the planet to leave it in better shape than we found it for our children, grandchildren, and nephews.”

Wheeler’s long track record of lobbying suggests precisely the opposite. But Pruitt’s reign was so mercifully short that many of his efforts to roll back critical vehicle emissions standards and the Clean Power Plan, and end full scrutiny of toxic chemicals common in household products, were only in beginning stages. When Wheeler was a lobbyist behind the scenes, it was easy for him to help industry erode the EPA’s science-based mission of protecting public health and the environment.

As the face of an EPA roiling with disillusion and dissent among its scientists, he will not find it so easy to do the bidding of his former masters. This is his chance to act like an administrator for the people, not an abdicator on behalf of industry.

Note: This post is adapted from an earlier version that appeared April 6, 2018, when Andrew Wheeler was nominated to be deputy administrator for the Environmental Protection Agency.

Utilities Should Invest in Electric Vehicle Infrastructure

Photo: SanJoaquinRTD/Wikimedia Commons

For more than a century, our cars and trucks have been fueled almost exclusively by oil. Today, electric vehicles (EVs) give us the potential to power our vehicles with a diverse set of energy sources, including clean and renewable energy. But to make that happen, we need to build the infrastructure that can keep our vehicles fueled and make owning an electric vehicle as convenient as a conventional car.

Across the country, many utilities are stepping up to build the EV infrastructure that we need. Some recent investments include:

  • The California Public Utilities Commission recently approved $738 million in electric vehicle infrastructure proposed by PG&E, SCE, and SDG&E, including hundreds of millions for charging heavy-duty vehicles such as buses and trucks.
  • Utilities in Maryland have recently proposed a $104 million investment in charging infrastructure that would create 24,000 charging stations across the state.
  • The Massachusetts Department of Public Utilities recently approved a $45 million investment by Eversource. A comparable investment by Massachusetts’ other major utility National Grid is still pending in front of the DPU.
  • Ohio has recently approved a $10 million pilot for electric vehicle charging stations.

These investments raise important public policy questions. What electric vehicle infrastructure is most important to speed up adoption? How should we design electricity rates to maximize the value of electric vehicles to ratepayers and the grid? How can our infrastructure best support all types of electric vehicles, including heavy duty electric vehicles such as trucks and buses? How can we use infrastructure to support electrification of shared vehicle fleets?

Today, the Union of Concerned Scientists is releasing a fact sheet outlining 10 principles that we see as particularly important to guide utility investment in electric vehicle infrastructure. In this fact sheet, we argue that utility investment in electric vehicle charging infrastructure is important public policy and ultimately a good deal for ratepayers.

Why should utilities invest in electric vehicle infrastructure?

Electric vehicles (EVs) represent both an enormous opportunity and a significant challenge for our utilities. Converting our vehicle fleet to electricity could add as much as 1,000 terawatt-hours of demand to our electric grid, an increase of about 25 percent over current levels. If managed correctly, this large and flexible load could significantly increase the efficiency of our electric system, which would benefit not only EV drivers but all ratepayers by lowering costs.
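A back-of-the-envelope sketch shows how a figure on the order of 1,000 terawatt-hours arises. The fleet size, annual mileage, EV efficiency, and current-generation figures below are round illustrative assumptions, not numbers from the fact sheet:

```python
# Rough check of the added EV load (all inputs are illustrative assumptions):
# ~250 million light-duty vehicles, ~12,000 miles/year each, and an average
# EV efficiency of ~0.33 kWh/mile.
vehicles = 250e6
miles_per_year = 12_000
kwh_per_mile = 0.33

added_twh = vehicles * miles_per_year * kwh_per_mile / 1e9  # ≈ 990 TWh/year

# Assumed rough figure for current annual US electricity generation.
current_generation_twh = 4_000

print(f"Added demand: ~{added_twh:.0f} TWh/year")
print(f"Increase over current generation: ~{added_twh / current_generation_twh:.0%}")
```

With these assumptions, the added demand comes out near 1,000 TWh, roughly a quarter of current generation, consistent with the estimate above.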

In the long run, widespread deployment of EVs could also be a source of energy storage, filling a critical need as our electricity system moves away from fossil fuels toward intermittent sources of power, such as wind and solar. Without proper management of EV charging, however, the additional power needed to fuel EVs could require significant new capacity, increasing pollution and imposing additional costs on ratepayers.

Building more EV infrastructure will help more people and businesses make the switch to electric vehicles, saving money and reducing emissions. Consumer studies have consistently found that inadequate access to charging infrastructure remains one of the most pressing obstacles to EV adoption. We have had over a hundred years to build the massive infrastructure necessary to support our gasoline and diesel vehicles. Creating an EV charging network that can compete with our oil infrastructure will require tens of thousands of new charging stations.

What principles should guide utility investments?
  • Provide chargers where people live and work. Most EV charging happens at home, and as affordable, long-range EVs are becoming available, overnight home charging can provide drivers with all the charge they need on most days. So providing universal access to home charging is a top priority. Workplace charging can be a valuable perk that can spur adoption through personal and professional networks.
  • Create a network of high-speed chargers along highways. While most charging will happen at home, a network of fast chargers along highways—capable of recharging an EV in 30 minutes or less—will be a critical component of our infrastructure, allowing EV drivers to access charging for road trips and emergency uses.
  • Maximize benefits to ratepayers and the grid. EVs can provide significant benefits to ratepayers and improve the efficiency of the electric grid if electric vehicle charging occurs during times of low demand or high production of renewable energy. Utilities should create policies that encourage drivers to charge their vehicles during these off-peak hours.
  • Establish fair electricity rates for EV charging. EV charging rates should be fair and transparent, and should provide value to EV drivers. High demand charges can make it difficult to create a viable business model for high-speed charging stations, which can be particularly important for electrification of heavy-duty and shared vehicles.
  • Support electrification of trucks and buses. Heavy-duty vehicles such as trucks and buses are major contributors to global warming pollution as well as to local air pollution, such as emissions of NOx and particulate matter that cause significant health problems. Investments in charging infrastructure and station equipment can help make these technologies cost effective for fleet managers and transit agencies.
  • Support electrification of new mobility services. Ride hailing services such as Uber and Lyft play an increasing role in our transportation system and must be electrified. Utilities should work with these companies and others to ensure that they have the charging infrastructure and rate design that they need to move to EVs.
  • Ensure low-income communities benefit from electrification. Integration of EVs into ride- and car-sharing networks, installation of more charging stations in apartment buildings, and electrification of transit and freight vehicles can help ensure that low-income residents benefit from the transition to electric transportation.
  • Create an open and competitive market for EV charging. Utilities should work with the auto industry and suppliers of charging equipment to ensure that we retain a competitive market for EV charging that encourages innovation and consumer choice and provides EV drivers with a consistent, high quality experience.

Taken together, universal access to residential charging, widespread availability of workplace charging, and high-speed chargers along critical transportation corridors can make driving an EV cheaper, cleaner, and more convenient than any other car. And encouraging smart charging and integration with renewables can ensure that the transition to EVs makes our grid stronger and more efficient, saving ratepayers millions in the process.

We encourage utilities and agencies to move forward with ambitious projects to build out EV infrastructure and create the clean transportation system that we need.

Photo: SanJoaquinRTD

Keep Your Paws Off: Three Ways Congress is Preying on Endangered Species Protections

The endangered marbled murrelet. Photo: R. Lowe/USFWS

It seems there is a doggedly persistent contingent of lawmakers in Congress whose life goals include defunding, weakening, ignoring, and overhauling endangered species protections. Their tactics are varied: sidelining science in favor of industry interests, attaching harmful riders to “must-pass” spending bills, and introducing legislation whose insidious intentions are masked by semantics. Here is a quick rundown of current endangered species attacks:

  • Last week, the Union of Concerned Scientists sent a letter to the House Conference Committee for the National Defense Authorization Act (NDAA) asking them to keep Utah Representative Rob Bishop’s anti-science rider out of the NDAA for Fiscal Year 2019. The amendment arbitrarily blocks federal Endangered Species Act (ESA) protections for the endangered or threatened American burying beetle, sage grouse, and lesser prairie chicken. In this case, decisions to assign protective measures to vulnerable wildlife are determined at the behest of short-term political interests (i.e. oil and gas development), thereby violating the science-based process by which the ESA successfully operates.
  • This past Monday, Senate Environment and Public Works Committee Chairman Senator John Barrasso introduced draft legislation to “strengthen” and “modernize” the Endangered Species Act. It moves to allow states greater authority over endangered species decisions, including listing, delisting, species recovery plans, and habitat conservation. Why is this a bad move? State resource constraints, insufficient laws, lack of political will, and final veto power over scientific decisions are among the most notable concerns. Considering that Senator Barrasso had the support of the Western Governors’ Association, it isn’t a stretch to be worried about states taking concerted efforts to dismiss species protections in the name of development.
  • The House Interior and Environment and House Energy & Water appropriations bills for Fiscal Year 2019 both contain poison-pill riders that would prohibit the listing of the imperiled greater sage grouse and remove protections for red wolves and Mexican gray wolves.

The Fish and Wildlife Service has prevented the extinction of 99 percent of the species listed since the Endangered Species Act’s enactment in 1973. Despite the law’s many successes over the years, there are those who have trouble seeing past their own immediate interests. These attacks on the Endangered Species Act are not new, but they are as urgent as ever. Please tell your members of Congress to oppose any anti-science riders affecting endangered species. If you are a scientist, consider joining almost 1,500 other scientists in signing on to our letter to Congress.

I would like to acknowledge and thank my colleague Amy Gutierrez, legislative associate for the Center, for her legislative research and input. 

Photo: US Fish and Wildlife Service

Black Lung Resurgence: Without Action, Taxpayers Will Foot the Medical Bills

Photo: Peabody Energy/Wikimedia Commons

I’ve written previously about my family’s experience with black lung and how the disease is making a frightening resurgence. A bit like a miner’s headlamp in the darkness, two recent federal reports and several federal scientific studies shine a light on the disease and its implications—and policymakers should take notice.

Critical benefits to miners and their families

Congress set up the Black Lung Disability Trust Fund in 1978 to provide benefits to coal miners who have become permanently disabled or terminally ill due to coal workers’ pneumoconiosis, or black lung, as well as their surviving dependents. The Trust Fund still protects miners and their families when no liable company can be identified or held responsible. This might happen if a miner had multiple employers, or if the responsible company went out of business. The U.S. Department of Labor, which manages the Trust Fund, estimates that in FY 2017, 64 percent of beneficiaries were paid from the Trust Fund, totaling $184 million in benefits. The Trust Fund provides critical benefits to miners and their families in cases where mining companies can’t or won’t pay.

The Trust Fund is financed primarily through a per-ton excise tax on coal produced and sold domestically. The original legislation set the tax at 50 cents per ton of underground-mined coal and 25 cents per ton of surface-mined coal (but limited to 2 percent of the sales price). Unfortunately, Trust Fund expenditures have consistently exceeded revenues, despite several actions by Congress to put the Trust Fund on solid financial footing. In other words, to meet obligations in any given year, administrators are forced to borrow from the U.S. Treasury. Moreover, in 1986 Congress set the levels of the excise tax at $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal (up to a limit of 4.4 percent of the sales price)—but at the end of this year, the tax levels will revert to their original 1978 values. For these reasons, Congress requested a review of the Trust Fund’s finances and future solvency from the Government Accountability Office (GAO), an independent, nonpartisan agency that works for Congress to assess federal spending of taxpayer money.
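The interaction between the flat per-ton rate and the sales-price cap can be sketched in a few lines. The rates and caps come from the figures above; the sample coal prices are hypothetical:

```python
def coal_excise_tax(price_per_ton: float, underground: bool,
                    rate_underground: float = 1.10, rate_surface: float = 0.55,
                    cap_fraction: float = 0.044) -> float:
    """Excise tax owed per ton: the flat rate, capped at a fraction of the sales price.

    Defaults reflect the post-1986 rates ($1.10/$0.55, 4.4% cap); pass
    0.50/0.25 and 0.02 to model the 1978 levels the tax will revert to.
    """
    rate = rate_underground if underground else rate_surface
    return min(rate, cap_fraction * price_per_ton)

# Hypothetical sales prices for illustration:
print(coal_excise_tax(40.0, underground=True))   # cap (4.4% of $40 = $1.76) not binding -> $1.10
print(coal_excise_tax(20.0, underground=True))   # cap binds: 4.4% of $20 = $0.88
print(coal_excise_tax(40.0, underground=True,    # 1978 levels
                      rate_underground=0.50, cap_fraction=0.02))
```

The sketch makes the revenue risk concrete: when coal prices fall, the percentage cap, not the per-ton rate, determines what the Trust Fund collects.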

GAO offers a wake-up call

The GAO concluded its report and released its findings last month—and the results should serve as a wake-up call to Congress. The chart below shows the impact on the Trust Fund of having to borrow year after year to make up for the shortfall in excise tax revenue relative to benefits payments, that is, the accumulation of outstanding debt.

This front-page chart of the GAO report shows that if the excise tax decreases to 1978 levels (according to current law) at the end of 2018, the Trust Fund’s debt will exceed $15 billion by mid-century.

GAO looked at the impact of a few different policy choices, including adjustments to the excise tax rate and debt forgiveness, both of which Congress has used in previous changes to the Trust Fund. In 2008, for example, about $6.5 billion in debt was forgiven (hence the large decrease in debt in the chart above). Unfortunately, that didn’t solve the Trust Fund’s solvency problem, because subsequent coal excise tax revenue was less than expected, thanks to the 2008 recession followed by declining coal production resulting primarily from increased competition with natural gas.

GAO calculated how much money would need to be appropriated by Congress to balance the Trust Fund by 2050 under various assumptions for the excise tax. The chart below summarizes the results succinctly: Increasing the current excise tax by 25 percent would require no debt forgiveness, but allowing the current tax to expire would require $7.8 billion of taxpayer money to balance the Trust Fund by 2050.

Figure 10 from the GAO report (p.30), showing the scale of the problem of outstanding debt in the Trust Fund. Analysts calculated the level of debt forgiveness needed to balance the Trust Fund by 2050, assuming that Congress makes a single lump sum payment in 2019 to pay down the debt. In other words, the bottom bar means that, if Congress allows the current tax rate to expire but also forgives $7.8 billion in existing debt in 2019, then by 2050 the Trust Fund would be balanced (meaning that the remaining debt would have been repaid and annual payments would equal annual revenues).
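The arithmetic behind these projections is straightforward: whenever annual benefit payments exceed annual excise-tax revenue, the shortfall (plus interest on the accumulated Treasury borrowing) compounds into ever-larger debt. The sketch below illustrates that mechanism only; the dollar amounts, interest rate, and time horizon are invented placeholders, not figures from the GAO model.

```python
# Illustrative sketch of debt accumulation in a trust fund that must
# borrow from the Treasury to cover annual shortfalls. All inputs are
# hypothetical placeholders, NOT figures from the GAO analysis.

def project_debt(initial_debt, annual_revenue, annual_payments,
                 interest_rate, years):
    """Roll the debt forward: interest accrues on the outstanding
    balance, then the year's shortfall (payments minus revenue)
    is added."""
    debt = initial_debt
    for _ in range(years):
        debt *= 1 + interest_rate                  # interest on Treasury debt
        debt += annual_payments - annual_revenue   # annual shortfall (or surplus)
    return debt

# Hypothetical scenarios, in billions of dollars, over 2018-2050:
keep_current_tax = project_debt(4.0, 0.24, 0.25, 0.02, 32)
let_tax_expire = project_debt(4.0, 0.12, 0.25, 0.02, 32)  # revenue roughly halves

# Even a modest drop in annual revenue compounds into a much larger
# debt by mid-century.
```

This is why small changes to the excise tax rate produce such large differences in mid-century debt: the shortfall is paid for with borrowed money that itself accrues interest.
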

Assumptions matter

As with any projection of what might happen in the future, the results depend on the assumptions made by the analyst. GAO conducted a credible and sound analysis—based on reasonable, defensible, middle-of-the-road assumptions—to assess the solvency of the Trust Fund. Key drivers are projected revenues expected from future coal production and projected expenditures for future beneficiaries.

Of course, neither of these things is known with much certainty. Worse, there are compelling reasons to believe that the Trust Fund’s insolvency could run much deeper than projected:

  • For one thing, coal production could be lower than GAO assumed, meaning less revenue from the excise tax. GAO used the U.S. Energy Information Administration’s reference case, which shows coal production essentially flat through 2050. But that is likely an optimistic assumption: if natural gas prices remain low, or if more renewable sources of energy come online as expected thanks to continuing cost declines, coal production could continue its decade-long decline for the foreseeable future. And despite current federal politics, there is momentum for deep decarbonization to address the climate crisis.
  • Even more alarming, the emerging crisis of new black lung cases in Appalachia is not included in the analysis. GAO assumed that the number of new black lung cases shrinks 5.8 percent per year, based on historical data on the number of new Trust Fund beneficiaries. That means the total number of beneficiaries would continue to grow, but at a slower pace than in the recent past. Given the very recent surge in black lung cases, combined with the fact that the disease can’t be detected in the lungs until after about a decade of exposure, this assumption is unlikely to hold.
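To see what a -5.8 percent growth rate in new cases implies, here is a back-of-envelope calculation (the starting cohort of 100 new beneficiaries per year is a hypothetical placeholder, not a GAO figure): each year’s cohort of new beneficiaries is smaller than the last, yet the cumulative total keeps climbing.

```python
# Back-of-envelope illustration of a -5.8% annual growth rate in NEW
# beneficiaries: each yearly cohort shrinks, but the cumulative count
# still rises. The starting cohort size of 100 is hypothetical.
rate = -0.058
cohort = 100.0        # hypothetical new beneficiaries in year 1
cumulative = []
running_total = 0.0
for year in range(10):
    running_total += cohort
    cumulative.append(running_total)
    cohort *= 1 + rate            # next year's cohort is 5.8% smaller

# cumulative rises every year, but each annual increment is smaller
# than the one before it.
```

A surge in new cases would flip the sign of that growth rate, and the cumulative beneficiary count (and hence the Trust Fund’s obligations) would accelerate instead.
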
NIOSH and NAS weigh in on science and solutions

Black lung is completely preventable, and as a result of federal standards limiting miners’ exposure to coal dust, the disease had become rare by the late 1990s. However, as NPR has reported (here, here, here, here, and here), in just the last few years Central Appalachia has seen a surge in new cases of complicated black lung, an advanced form of the disease. National Institute for Occupational Safety and Health (NIOSH) investigators found 60 new cases of the disease at a single radiology clinic in Kentucky in just 18 months. By comparison, NIOSH’s monitoring program detected only 31 cases nationally from 1990 to 1999. NIOSH researchers also identified 416 new cases in Central Appalachia from 2013 to 2017. NPR’s ongoing investigation puts the number of new cases in Appalachia since 2010 at around 2,000, roughly 20 times official government statistics.

What’s responsible for the spike in reported cases of black lung? For one thing, the national monitoring program has historically had a low participation rate, and while the resurgence of the disease shows up in the national monitoring data, the cluster identified in Kentucky was discovered separately. And because it takes years for the disease to manifest in a miner’s lungs, it is difficult to connect the disease to specific exposures or mining practices. NIOSH researchers suggest that changes in mining practices may be exposing miners to greater amounts of silica dust as they cut through rock formations to access thin or deep coal seams.

On the heels of the GAO report and the NIOSH investigations, the National Academies of Sciences, Engineering, and Medicine (NAS) released an independent report looking at coal industry approaches to monitoring and sampling the coal dust levels that miners are exposed to. The NAS report concludes that compliance with federal regulations limiting the exposure of miners to coal dust has reduced lung diseases over the last 30 years, but that compliance has failed to achieve “the ultimate goal of the Coal Mine Health and Safety Act of 1969”—eradicating coal dust exposure diseases such as black lung. The NAS goes on to say, “To continue progress toward reaching this goal, a fundamental shift is needed in the way that coal mine operators approach [coal dust] control, and thus sampling and monitoring.” The report recommends a systematic investigation of how changes in mining operations may have increased exposure to silica dust, the development of better monitoring devices (especially for silica), and increased participation in the NIOSH monitoring program.

Congress must act—and fast

The good news is that there is the start of a solution to the funding of black lung benefits already in sight: the RECLAIM Act. If enacted, RECLAIM would free up $1 billion in existing money from the Abandoned Mine Lands (AML) fund to put people to work cleaning up degraded mine lands and spurring local economic development in communities that need it most. How is this separate fund and separate problem connected to black lung benefits?

In short, Congressional budget rules require that any new spending of taxpayer money be offset by budget cuts or additional revenue elsewhere. RECLAIM’s champion, Rep. Hal Rogers (R-KY), identified extending the coal excise tax at current levels for an additional ten years as the “offset” for the $1 billion in spending from the AML fund. It doesn’t matter that these two initiatives are—and will remain—separate programs with their own funding streams.

But the two issues are intertwined—the surge in new cases of black lung is happening in the same region where communities are struggling to deal with the legacy of past mining operations and simultaneously trying to chart a new economic future. Addressing all these issues simultaneously is the sort of win-win-win policy solution that doesn’t come around too often.

The astute reader will notice, however, that extending the coal excise tax for ten years is insufficient to address the Trust Fund’s long-term solvency problem, as the charts above demonstrate. Passing the RECLAIM Act is therefore merely the first step; legislators must also consider actually increasing the coal excise tax. That would ensure that the responsible parties—that is, coal companies—pay for the damages inflicted on real people and real families, instead of leaving taxpayers holding the bag. And with black lung set to reach epidemic levels in the coming years, Congress must act now to strengthen the fiscal health of the Trust Fund, to protect the health and well-being of miners and their families in the face of an uncertain future.

UPDATE (5 July 2018): The original version of this post misstated the year when the current coal excise tax was established. The current coal excise tax of $1.10/$0.55 per ton was established in 1986 and extended at current levels in 2008.

Photo: Peabody Energy

Climate Change is the Fastest Growing Threat to World Heritage

Aerial view of the Great Barrier Reef, Australia. Photo: Lock the Gate Alliance (Flickr)

Nineteen extraordinary places were added to UNESCO’s World Heritage list this week, including Buddhist temples in South Korea, the forests and wetlands that form the ancestral home of the Anishinaabeg people in Canada, and the ancient port city of Qalhat in Oman. But amid all the congratulations and good feeling that come with adding sites to the list of the world’s most important places, there was little or no serious talk about the implications of climate change. Last year, the 21-nation World Heritage Committee, the Convention’s governing body, raised the alarm about climate change, called for stronger efforts to implement the Paris Agreement and increase the resilience of World Heritage properties, and promised to revise its own decade-old climate policy. In Bahrain, however, the issue received short shrift, making it vital that the Committee place climate change squarely on the agenda at its next meeting in 2019.

Climate threats were not anticipated when the Convention was signed in 1972

Added to the World Heritage list in 2018, Pimachiowin Aki in Canada, part of the ancestral lands of the Anishinaabeg people. Photo: Bastian Bertzky/IUCN

Adopted at the General Conference of UNESCO in 1972, the World Heritage Convention’s core mission is to protect and conserve the world’s most important natural and cultural heritage. Back in 1972, there was no hint that climate change would become the systemic threat to World Heritage sites that it has since proved to be. To be inscribed on the World Heritage List, a site must demonstrate Outstanding Universal Value (OUV) under at least one of ten criteria. For example, in the US, the Statue of Liberty is listed under two criteria, as a “masterpiece of the human spirit” and as a “symbol of ideals such as liberty, peace, human rights…”. Yellowstone National Park is listed under four criteria, including its scenic splendor, unparalleled geothermal activity, intact large landscape, and role as a refuge for wildlife.

If a site should come under threat from, for example, mining, deforestation or urban development, it can be added to the List of World Heritage in Danger, with the possibility of being de-listed if the problems are not addressed. This year, Kenya’s Lake Turkana was added to the Danger List, because of an immediate threat from upstream development of the Gibe III Dam in Ethiopia.

Climate change is a major threat to the OUV of many World Heritage properties, but the Danger List does not seem an appropriate tool for addressing the issue, as no single state party can address the threat on its own. Neither does the nomination process for new World Heritage sites require any assessment of whether the OUV may be degraded as a result of climate change. It seems absurd that site nomination dossiers, which are extremely detailed, take years to complete, and must include comprehensive management strategies, are under no obligation to include even the most basic assessment of climate vulnerability. Consequently, UCS is working with partners to identify ways to better respond to climate risks within the World Heritage system.

Climate change is the fastest growing threat to World Heritage

At a workshop in Bahrain last week, we asked a group of natural and cultural World Heritage site managers from around the globe whether they were experiencing climate impacts at the sites where they work: 21 of 22 said yes, and 16 of the 22 described actions they are taking to monitor or respond to climate change. That makes sense, because we know from the IPCC (Intergovernmental Panel on Climate Change) and a host of country- and site-level studies that the impacts of climate change are everywhere. But it also drives home the point that this issue is not getting the attention it needs at the higher levels of the Convention. Climate impacts are clearly being under-reported by states parties under the official mechanisms of the Convention, the State of Conservation (SOC) reports. And IUCN’s World Heritage Outlook 2 report, published in 2017, identified climate change as the biggest potential threat to natural World Heritage and estimated that one in four sites is already being impacted. That, too, must be an underestimate: virtually all properties are likely being impacted in some way. The key questions are how severe the threat to each site’s OUV is, and over what time-scale.

UCS, with UNESCO and the United Nations Environment Programme (UNEP), has published 31 representative case studies of World Heritage properties being impacted by climate change, including Yellowstone National Park and the Galapagos Islands. In Bahrain, we heard many new stories about how climate change is affecting World Heritage properties, including the immediate risk of flooding and erosion to the Islands of Gorée and Saint-Louis in Senegal, vulnerability to changes in rainfall patterns at Petra in Jordan, and the potential loss of cave paintings and petroglyphs in Tasmania. The historic city of George Town in Penang, Malaysia, suffered unprecedented damage from a typhoon in 2017, the kind of extreme storm the area has rarely had to face in the past.

Map showing highest level of heat stress for the 29 World Heritage reefs during the third global coral bleaching event, Image: NOAA Coral Reef Watch/UNESCO

Although a 2014 independent analysis of long-term sea level rise vulnerability identified 136 of 700 cultural World Heritage sites as at risk, the only group of World Heritage properties for which a comprehensive scientific assessment of climate risk has been undertaken is the coral reefs. There are 29 World Heritage reefs, including Australia’s Great Barrier Reef, the Belize Barrier Reef, and Papahānaumokuākea in the Hawaiian archipelago. According to UNESCO’s 2017 analysis (for which Scott Heron and Mark Eakin, both of NOAA, were coordinating lead authors, along with Fanny Douvere of the World Heritage Centre), coral in 21 of the 29 properties (72%) has experienced severe or repeated heat stress during the past three years. Projecting impacts into the future under the IPCC’s RCP 8.5 scenario, with a global average temperature increase of 4.3°C by 2100, twice-per-decade severe bleaching would occur at 25 of the World Heritage reefs by 2040.

Why we need a Climate Vulnerability Index for World Heritage

What is needed is a simple, standardized methodology for top-line rapid assessment of climate vulnerability that would work for all World Heritage sites, whether listed for natural, cultural or mixed values. Such a tool would enable the World Heritage Committee to determine which World Heritage properties are most immediately at risk from climate change, where the problems will likely be in the future, and where resources are most urgently needed for more detailed assessment and monitoring, and to undertake resilience and adaptation activities. The methodology needs to be repeatable so that periodic reviews can be undertaken.
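As a thought experiment, a rapid top-line screen of this kind could be as simple as scoring each site on the standard vulnerability components (exposure, sensitivity, adaptive capacity) and ranking the results. The scoring formula, the 0-3 scales, and the example sites below are all invented for illustration; they are not the actual CVI methodology, which has yet to be finalized.

```python
# Hypothetical sketch of a top-line climate vulnerability screen.
# The formula, 0-3 scales, and example sites are invented for
# illustration and do NOT represent the actual CVI proposal.

def vulnerability_score(exposure, sensitivity, adaptive_capacity):
    """Higher exposure and sensitivity raise vulnerability;
    adaptive capacity lowers it. All inputs on a 0-3 scale."""
    return exposure * sensitivity - adaptive_capacity

sites = {  # (exposure, sensitivity, adaptive capacity) -- hypothetical
    "low-lying coastal archaeology": (3, 3, 1),
    "high mountain ecosystem": (2, 3, 2),
    "inland stone monument": (1, 1, 2),
}

# Rank sites from most to least vulnerable, flagging where deeper
# national-level assessment is most urgent.
ranked = sorted(sites, key=lambda s: vulnerability_score(*sites[s]),
                reverse=True)
```

The appeal of even a crude index like this is that it is repeatable: re-scoring the same sites at each periodic review would show whether vulnerability is rising or falling over time.
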

Island of Saint-Louis, Sénégal – a World Heritage site at immediate threat from sea level rise. Photo: Dominique Roger/UNESCO

To meet this need, a Climate Vulnerability Index (CVI) for World Heritage properties has been proposed. If adopted by the World Heritage Committee, it has the potential to influence responses to climate change at the world’s most important natural and cultural heritage sites. The concept emerged at a 2017 expert meeting on the Baltic island of Vilm, Germany, in which UCS participated, and was proposed in the meeting’s outcome document. The meeting was called in response to a decision at the World Heritage Committee in Krakow earlier in 2017 to prioritize climate action and resilience, to investigate the implications for the OUV of World Heritage sites, and to revise the Convention’s decade-old climate policy.

At the Bahrain meeting of the World Heritage Committee, the CVI concept was presented at a side event organized by two of the Committee’s three official advisory bodies, IUCN and ICOMOS (the International Council on Monuments and Sites), in which UCS participated, and at a meeting of the ICOMOS Climate Change & Heritage Working Group co-organized by UCS at the National Museum of Bahrain. The CVI idea is gaining traction. Its value to the Committee is that it could quickly identify thematic groups of properties at risk, such as Arctic sites, coastal archaeology, or high mountain ecosystems, then provide for a deeper dive into all sites within a threatened category, flagging individual sites in need of urgent action or further assessment at the national level. Critical to the success of the CVI is that it can be applied to both natural and cultural sites, so that a methodology that works for coral reefs can also work for earthen architecture or cave paintings.

Outside of the side events and the workshops of the advisory bodies and NGOs, where it was a bigger topic than ever before, climate change was hardly mentioned in the plenary sessions of the World Heritage Committee. Only two Committee members, Trinidad & Tobago and Australia, substantively raised the issue; the latter offered an amendment to the Bahrain decision document, adopted without objection, requiring the revised climate policy to be presented at the 43rd Committee meeting in Azerbaijan in 2019. Now there is a window of opportunity for civil society to influence the policy revision, and for the vulnerability index concept to move forward. It’s an opportunity that, if taken, could shape how the World Heritage Convention deals with climate change for decades to come.
