Combined UCS Blogs

What is the Cost of One Meter of Sea Level Rise?

UCS Blog - The Equation (text only) -

The opening line of our recent Scientific Reports article reads “Global climate change drives sea level rise, increasing the frequency of coastal flooding.” Some may read this as plain fact. Others may not.

Undeniable and accelerating

A century of data from tide gauges and, more recently, satellites has demonstrated an unequivocal rise in global sea level (~8-10 inches in the past century). Although regional sea level varies on a multitude of time scales due to oceanographic processes like El Niño and vertical land motion (e.g., land subsidence or uplift), the overall trend of rising sea levels is both undeniable and accelerating. Nevertheless, variability breeds doubt. Saying that global warming is a hoax because it’s cold outside is like saying sea level rise doesn’t exist because it’s low tide.

Global sea level is currently rising at about 3.4 mm/year, making it a relatively slow process. For instance, tides typically change sea level by 0.5-1.0 m every 12 hours, a rate that is ~100,000 times faster than global mean sea level rise.
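As a rough check on that comparison, here is a minimal back-of-envelope sketch in Python, using the approximate rates quoted above (the tidal range and rise rate are illustrative round numbers, not measured data); it lands at the same order of magnitude as the ~100,000-times figure:

# Compare the pace of tides with the pace of global mean sea level rise.
# Both rates are the approximate figures quoted above, treated as illustrative.
slr_mm_per_year = 3.4            # global mean sea level rise, ~3.4 mm/year
tide_change_mm = 750.0           # mid-range tidal change, ~0.75 m every 12 hours
hours_per_year = 24 * 365.25

# Express the tidal change as an equivalent annual rate and compare.
tide_mm_per_year = tide_change_mm * (hours_per_year / 12)
print(round(tide_mm_per_year / slr_mm_per_year))   # roughly 160,000 times faster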

It’s almost as if sea-level rise were slow enough for us to do something about it…

The civil engineering challenge of the 21st century

At the end of a recent news article by New Scientist, Anders Levermann, a climate scientist for the Potsdam Institute for Climate Impact Research, said “No one has to be afraid of sea level rise, if you’re not stupid. It’s low enough that we can respond. It’s nothing to be surprised about, unless you have an administration that says it’s not happening. Then you have to be afraid, because it’s a serious danger.”

Levermann’s quote captures the challenge of blowing the whistle on the dangers of climate change. We know that sea level rise is a problem; we know what’s causing it (increased concentrations of heat-trapping gases like CO2 leading to the thermal expansion of sea water and the melting of land-based ice); we know how to solve the problem (reduce carbon emissions and cap global temperatures); yet, in spite of the warnings, the current administration recently chose to back out of a global initiative to address the problem.

Arguing that the Paris agreement is “unfair” to the American economy to the exclusive benefit of other countries is extremely shortsighted. This perspective serves to kick the climate-change can down the road for the next generation to pick up. This perspective, if it dominates US decision making moving forward, sets us up for the worst-case scenarios of sea-level rise (more than two meters by 2100). Worse yet, this perspective may take us beyond the time horizon in which a straightforward solution may be found, leaving geoengineering solutions as our last-and-only resort.

If the Paris agreement is unfair to the American economy, imagine how unfair 2.0+ m of sea-level rise would be. We should seriously question the administration’s focus on improving national infrastructure without considering arguably the greatest threat to it. Sea-level rise will be one of the greatest civil engineering challenges of the 21st century, if not THE greatest.

An astronomically high dollar figure

As a thought experiment, try to quantify the economic value of one meter of sea level rise. Low-lying coastal regions support 30% of the global population and, most likely, a comparable percentage of the global economy. Even if each meter of sea level rise only affected a small percentage of this wealth and economic productivity, it would still represent an astronomically high dollar figure.
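As a very rough, hedged illustration of that thought experiment, here is a minimal sketch in Python. The global GDP figure and the “fraction affected per meter” are illustrative assumptions I am supplying; only the 30% coastal share comes from the paragraph above:

# Back-of-envelope exposure estimate; every parameter here is an assumption.
global_gdp_usd = 75e12                 # rough recent global GDP, ~$75 trillion (assumed)
coastal_share = 0.30                   # low-lying coasts support ~30% of people/economy (from the text)
affected_fraction_per_meter = 0.05     # purely illustrative: 5% of coastal wealth exposed per meter

exposure = global_gdp_usd * coastal_share * affected_fraction_per_meter
print(f"~${exposure / 1e12:.1f} trillion of annual economic activity exposed per meter")

Even with deliberately small percentages, the result lands in the trillions of dollars, which is the point of the thought experiment.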

Although managed retreat from the coastline is considered a viable option for climate change adaptation, I don’t see a realistic option where we relocate major coastal cities such as New York City, Boston, New Orleans, Miami, Seattle, San Francisco, or Los Angeles.

What will convince the powers-that-be that unabated sea level rise is an unacceptable outcome of climate change? Historically, the answer to this question is disasters of epic proportions.

Hurricane Sandy precipitated large-scale adaptation planning efforts in New York City. Nuisance flooding in Miami has led to a number of on-going infrastructure improvements. The Dutch coast is being engineered to withstand once-in-10,000-year storms. Fortunately, most nations and US states, particularly coastal states like Hawaii and California, will abide by the Paris agreement.

This administration doesn’t seem to care about the science of climate change, but it does seem to care about economic winners and losers. Would quantifying the impacts of climate change in terms of American jobs and taxpayer dollars convince the administration to change their view of the Paris agreement?

Impossible to ignore

In the executive summary of the 2014 Risky Business report, Michael Bloomberg writes, “With the oceans rising and the climate changing, the Risky Business report details the costs of inaction in ways that are easy to understand in dollars and cents—and impossible to ignore.” This report finds that the clearest and most economically significant risks of climate change include:

  • Climate-driven changes in agricultural production and energy demand
  • The impact of higher temperatures on labor productivity and public health
  • Damage to coastal property and infrastructure from rising sea levels and increased storm surge

For example, the report finds that in the US by 2050 more than $106 billion worth of existing coastal property could be below sea level. Furthermore, a study in Nature Climate Change found that future flood losses in major coastal cities around the world may exceed $1 trillion per year as a consequence of sea level rise by 2050.

The science and economics of climate change are clear.

So why do politicians keep telling us that it’s not happening and that doing something about it would be bad for the economy?

New Interactive Map Highlights Effects of Sea Level Rise, Shows Areas of Chronic Flooding by Community

UCS Blog - The Equation (text only) -

Last week, the Union of Concerned Scientists released a report showing sea level rise could bring disruptive levels of flooding to nearly 670 coastal communities in the United States by the end of the century. Along with the report, UCS published an interactive map tool that lets you explore when and where chronic flooding–defined as 26 floods per year or more–will force communities to make hard choices. It also highlights the importance of acting quickly to curtail our carbon emissions and using the coming years wisely.

Here are a few ways to use this tool:
  1. Explore the expansion of chronically inundated areas

    Sea level rise will expand the zone that floods 26 times per year or more. Within the “Chronic Inundation Area” tab, you can see how that zone expands over time for any coastal area in the lower 48 and for two different sea level rise scenarios (moderate and fast).

    Explore the spread of chronically inundated areas nationwide as sea level rises.

     

  2. Explore which communities join the ranks of the chronically inundated

    We define a chronically inundated community as one where 10% or more of the usable land is flooding 26 times per year or more. With a fast sea level rise scenario, about half of all oceanfront communities in the lower 48 would qualify as chronically inundated. Check out the “Communities at Risk” tab to see if your community is one of them. (A short sketch of these definitions in code follows this list.)

    Explore communities where chronic flooding encompasses 10% or more of usable land area.

     

  3. Visualize the power of our emissions choices

    Drastically reducing global carbon emissions with the aim of capping future warming to less than 2 degrees Celsius above pre-industrial levels–the primary goal of the Paris Agreement–could prevent chronic inundation in hundreds of U.S. communities. Explore the “Our Climate Choices” tab to see the communities that would benefit from swift emissions reductions.

    Explore how slowing the pace of sea level rise could prevent chronic inundation in hundreds of US communities.

     

  4. Learn how to use this time wisely

    Our country must use the limited window of time before chronic inundation sets in for hundreds of communities, and plan and prepare with a science-based approach that prioritizes equitable outcomes. Explore our “Preparing for Impacts” tab and consider the federal and state-level policies and resources that can help communities understand their risks, assess their choices, and implement adaptation plans. This tab captures how we can use the diminishing response time wisely.

    Explore federal and state-level resources for communities coping with sea level rise.
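The thresholds behind the map are simple enough to state in code. Here is a minimal sketch of the definitions quoted in items 1 and 2 above (an illustration only, not the UCS analysis code):

# UCS definitions as described above: a spot is chronically inundated if it floods
# 26 or more times per year; a community qualifies once 10% or more of its usable
# land reaches that flood frequency.
FLOODS_PER_YEAR_THRESHOLD = 26
USABLE_LAND_FRACTION_THRESHOLD = 0.10

def location_is_chronically_inundated(floods_per_year: float) -> bool:
    return floods_per_year >= FLOODS_PER_YEAR_THRESHOLD

def community_is_chronically_inundated(flooded_usable_land_km2: float,
                                       total_usable_land_km2: float) -> bool:
    flooded_fraction = flooded_usable_land_km2 / total_usable_land_km2
    return flooded_fraction >= USABLE_LAND_FRACTION_THRESHOLD

# Example: 12 of a community's 100 square kilometers of usable land flood 26+ times a year.
print(community_is_chronically_inundated(12.0, 100.0))   # True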

Improving the map based on data and feedback

We hope that communities are able to use this tool to better understand the risks they face as sea level rises. We welcome your feedback and will be periodically updating the map as new data and new information comes to light.

Climate Change Just Got a Bipartisan Vote in the House of Representatives

UCS Blog - The Equation (text only) -

On rare occasions, transformative political change emerges with a dramatic flourish, sometimes through elections (Reagan in 1980, Obama in 2008) or key mass mobilizations (the March on Washington in 1963), or even court cases (the Massachusetts Supreme Judicial Court decision declaring the exclusion of same-sex couples from marriage unconstitutional).

But most of the time, transformations happen slowly, step by arduous step, along a path that may be hard to follow and can only be discerned clearly in hindsight.

I believe that we are on such a path when it comes to Republican members of Congress acknowledging climate science and ultimately the need to act. I see some encouraging indications that rank and file Republican members of Congress are heading in the right direction.

In February 2016, Democratic Congressman Ted Deutch and Republican Congressman Carlos Curbelo launched the Climate Solutions Caucus, whose mission is “to educate members on economically-viable options to reduce climate risk and to explore bipartisan policy options that address the impacts, causes, and challenges of our changing climate.” Its ranks have now swelled to 48 members, 24 Republicans and 24 Democrats.

Last week, this group flexed its muscle. At issue was UCS-backed language in the National Defense Authorization Act (NDAA). The provision, authored by Democratic Congressman Jim Langevin, would require the Pentagon to report on the vulnerabilities of military installations and combatant commander requirements resulting from climate change over the next 20 years. The provision also states as follows:

Climate change is a direct threat to the national security of the United States and is impacting stability in areas of the world where the United States armed forces are operating today, and where strategic implications for future conflicts exist.

Republican leadership led an aggressive effort to strip the language from the NDAA on the House floor through an amendment offered by Representative Perry (R-PA). But in the end, 46 Republican members (including all but one member of the Climate Solutions Caucus) voted against it, and fortunately the amendment was not adopted.

We are hopeful this important provision will be included in the final NDAA bill that passes the Senate and then goes to President Trump for his signature. He probably won’t like this language, but it seems doubtful that he will veto a military spending bill.

Implications

One shouldn’t read too much into this. The amendment is largely symbolic, and the only thing it requires is that the defense department conduct a study on climate change and national security. There is a long way to go from a vote such as this one to the enactment of actual policies to cut the greenhouse gas emissions that are the primary cause of climate change.

But, it is an important stepping stone. If this bill becomes law, a bipartisan congressional finding that climate change endangers national security becomes the law of the land. Among other things, this should offer a strong rebuttal to those who sow doubt about climate science.

It is also a validation of a strategy that UCS has employed for many years—to highlight the impacts of climate change in fresh new ways that resonate with conservative values. This was the thinking behind our National Landmarks at Risk report, which shows how iconic American landmarks are threatened by climate change.

This was also our strategy behind our recent report which highlights the vulnerability of coastal military bases to sea level rise. This report was cited and relied upon by Congressman Langevin in his advocacy for the amendment.

UCS will work to make sure that this language is included in the final bill, and we will continue to find other ways to cultivate bipartisan support for addressing climate change. There will be far more difficult votes ahead than this one. But for now, I want to thank the Republican members of Congress for this important vote, and make sure our members and supporters know that our efforts, and those of so many others, to work with Republicans and Democrats and bring the best science to their attention are paying off.

Build the Wall and Blame the Poor: Checking Rep. King’s Statements on Food Stamps

UCS Blog - The Equation (text only) -

If you read “Steve King” and think of novelist Stephen King, don’t worry too much about it.

Iowa Representative Steve King dabbled in fear and fiction himself in an interview with CNN last Wednesday, suggesting that a US-Mexico border wall be funded with dollars from Planned Parenthood and the food stamp program.

Photo: CC BY SA/Gage Skidmore

This particular idea was new, but the sentiments King expressed about the Supplemental Nutrition Assistance Program (SNAP) and the people who use it, less so. With 2018 farm bill talks underway, misconceptions about the program and who it serves have manifested with increasing frequency, and setting the record straight about these misconceptions is more important than ever. Policymakers like King, who is a member of both the House Committee on Agriculture and its Nutrition Subcommittee, hold the fate of 21 million SNAP households in their hands, and it’s critical that they’re relying on complete and correct information to make decisions about this program.

Here’s a quick deconstruction of what was said—and what needs to be said—about Americans who use food stamps.

“And the rest of [the funding beyond Planned Parenthood cuts] could come out of food stamps and the entitlements that are being spread out for people that haven’t worked in three generations.”

The idea that food stamp users are “freeloaders” is perhaps one of the most common and least accurate. The truth is, most SNAP participants who can work, do work. USDA data shows that about two-thirds of SNAP participants are children, elderly, or disabled; 22 percent work full time, are caretakers, or participate in a training program; and only 14 percent are working less than 30 hours per week, are unemployed, or are registered for work. Moreover, among households with adults who are able to work, over three-quarters of adults held a job in the year before or after receiving SNAP—meaning the program is effectively helping families fill temporary gaps in employment. King’s constituents are no exception: in his own congressional district, over half of all households receiving SNAP included a person who worked within the past 12 months, and over a third included two or more people who worked within the past 12 months.

“I would just say let’s limit it to that — anybody who wants to have food stamps, it’s up to the school lunch program, that’s fine.”

The national school lunch program provides one meal per day to eligible children. Kids who receive free or reduced price lunch are also eligible to receive breakfast at school through the program, but only about half do. Even fewer kids receive free meals in the summer: less than ten percent of kids who receive free or reduced price lunch at school get free lunches when they’re out of school. This means that, for millions of families, SNAP benefits are critical to filling in the gaps so kids can eat. In fact, USDA data shows that more than 4 in 10 SNAP users are kids. Again, these patterns hold true in King’s district: over half the households that rely on SNAP benefits include children.

“We have seen this go from 19 million people on, now, the SNAP program, up to 47 million people on the SNAP program.”

True. In 1987, average participation in SNAP was around 19 million. In 2013, it peaked at 47 million, and dropped to around 44 million by 2016. The increase over this time period is attributable, at least in part, to changes in program enrollment and benefit rules between 2007 and 2011 and greater participation among eligible populations. However, participation data also demonstrates SNAP’s effective response to economic recession and growth. For example, there was an increase in 2008 as the recession caused more families to fall below the poverty line, and in 2014, for the first time since 2007, participation and total costs began to steadily decrease in the wake of economic recovery. Congressional Budget Office estimates predict that by 2027, the percentage of the population receiving SNAP will return close to the levels seen in 2007.

“We built the program because to solve the problem of malnutrition in America, and now we have a problem of obesity.”

It is undeniable that rising rates of obesity are a significant public health threat. But obesity is an incredibly complex phenomenon, the pathophysiology of which involves myriad social, cultural and biological factors. It is a different type of malnutrition, and we will not solve it simply by taking food away from those who can’t afford it. If we want to focus on increasing the nutritional quality of foods eaten by SNAP recipients, we can look to programs that have been successful in shifting dietary patterns to promote greater fruit and vegetable intake, using strategies such as behavioral economics or incentive programs. Truth be told, most of us—SNAP users or not—would benefit from consuming more nutrient-dense foods like fruits and vegetables.

“I’m sure that all of them didn’t need it.”

Without a doubt. And this could be said of nearly any federal assistance program. But the goal of the federal safety net is not to tailor programs to the specific needs of each person or family—this would be nearly impossible, and the more precise a system gets, the more regulation is required and the greater the administrative burden and financial strain becomes. The goal of federal assistance programs like SNAP is to do the most good for the greatest amount of people, within a system that most effectively allocates a limited amount of resources. And I’d venture to say that a program designed to lift millions of Americans out of poverty—with one of the lowest fraud rates of any federal program, an economic multiplier effect of $1.80 for every $1 spent in benefits, and an ability to reduce food insecurity rates by a full 30 percent—comes closer to hitting its mark than a wall.

Once Deemed Too Small to Be Counted, Rooftop Solar Is Now Racing Up the Charts

UCS Blog - The Equation (text only) -

Sometimes, the littlest of things can point to the biggest of leaps.

In December 2015, the US Energy Information Administration (EIA) announced a major milestone in the life and times of small-scale solar: the agency would start acknowledging the resource by state in its regular monthly generation and capacity report.

Just imagine that, though. Across the country, enough rooftops had started wearing enough solar hats as to potentially shift the profile of states’ electricity use and needs. A day for the clean energy technology scrapbooks, indeed.

And now, a year and a half later, let last week mark another: EIA stated that it will no longer simply be tallying the resource in its rear-view mirror—the agency will also begin looking out into the future and forecasting just how much small-scale solar it thinks will soon be added to the mix.

From ignored to counted to accounted for, all within a few quick spins around the Sun. They sure do grow up fast.

Getting a handle on small-scale solar

To get at why these milestones are so meaningful, we first need to be clear on what we’re talking about. Here, we’re looking at small-scale solar photovoltaics (PV), also known as rooftop, distributed, behind-the-meter, or customer-sited PV. These resources are typically located on the distribution system at or near a customer’s site of electricity consumption, and can be on rooftops, but aren’t always.

Small-scale solar is also, well, small (at least relative to large-scale solar). EIA uses a ceiling of 1 megawatt (MW) for its tracking, but these types of installations are often much smaller, including residential systems, which are commonly on the order of about 5 kilowatts (kW), or 0.005 MW.

And then there’s this: all that electricity being generated by behind-the-meter resources? It’s usually either partially or entirely “invisible” to the utility.

Enter the EIA.

When what happens behind the meter stays behind the meter

At the outset, the invisibility of these resources doesn’t matter much. By itself, one rooftop system isn’t going to generate all that much electricity, and one rooftop system isn’t going to change how much electricity the utility needs to provide. But as more and more of these small systems are installed, together they can actually start to make a real dent in major system loads.

The result? These little rooftop panels can start to move the planning dial.

EIA’s first action above—estimating the presence and contributions of small-scale solar—helps to shed light on just how much these resources are starting to contribute to the system. Now in a lot of places, it isn’t that much…yet. But thanks to EIA’s second action, there will also now be information to help ensure that policymakers and electricity providers sufficiently account for small-scale solar’s future contributions, too.

Let’s take a look:

Credit: EIA.

See the light yellow section? That’s small-scale solar. Sure, it might look a bit like a pat of butter compared to wind in the early years, but it is certainly growing, and it is certainly not negligible. Because that pat—well, EIA estimates that it totaled 19,467 gigawatthours (GWh) of generation in 2016.

To put that number in context, small-scale solar’s generation totaled more electricity than was consumed in 2015 by the residential sectors of half the states in this country. Well worth taking into account, indeed.

And when we look ahead? Well, the future looks bright and getting brighter for this young solar star. Opportunities for these installations abound, and in this week’s Short-Term Energy Outlook, EIA forecasts clear skies and solar on the rise:

Credit: EIA.

Celebrating the little things for the milestones that they are

So here: a few small announcements from EIA, signaling a few giant leaps for rooftop solar. This PV resource is an incredibly important driver of momentum in the clean energy space, but without information on just how much it’s growing, its benefits and contributions can be undervalued. By shining a light on the progress that has taken place to date—and the progress that is to come—EIA is able to provide vital insights on the significance of the transition underway.

Wayne National Forest/Creative Commons (Flickr)

How the Oregon Rebate for Electric Cars Works

UCS Blog - The Equation (text only) -

If you’re an Oregonian and thinking about an electric car, you may want to wait a bit as a bill is about to be signed into law that will establish a rebate of up to $2,500 for electric vehicles sold in the state. This rebate can be had in addition to the $7,500 federal tax credit for EVs, which means Oregonians can get up to $10,000 off an electric vehicle!

The bill also establishes an additional rebate of up to $2,500 for low to moderate income Oregon residents, who can then collectively save up to $12,500 on a qualifying electric vehicle. The rebate program will go into effect in early October 2017.

Which electric vehicles qualify for the rebate

A qualifying vehicle for the new Oregon rebate must:

  • Have a base manufacturer’s suggested retail price of less than $50,000
  • Be covered by a manufacturer’s express warranty on the vehicle drive train, including the battery pack, for at least 24 months from the date of purchase
  • Be either a battery electric vehicle OR a plug-in hybrid vehicle that has at least 10 miles of EPA-rated all-electric range and warranty of at least 15 years and 150,000 miles on emission control components.
    1. $2,500 goes to vehicles with battery capacities above 10 kWh.
    2. $1,500 goes to vehicles with a battery capacity of 10 kWh or less.
  • Be a new vehicle, or used only as a dealership floor model or test-drive vehicle
  • The rebate will apply to new electric vehicles that are purchased or leased, with a minimum 24-month lease term. (A rough sketch of how these rules combine is shown below.)
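The criteria above combine into a simple calculation. Here is a minimal sketch in Python of how the pieces might fit together; the helper names are hypothetical, it omits the warranty details, and the federal credit is “up to” $7,500 depending on the buyer’s tax situation:

# Hypothetical sketch of the Oregon rebate rules summarized above (not DEQ's implementation).
def oregon_standard_rebate(msrp: float, battery_kwh: float,
                           electric_range_miles: float, is_new: bool) -> int:
    """Base state rebate for a purchased or leased (24+ month) electric vehicle."""
    if msrp >= 50_000 or not is_new or electric_range_miles < 10:
        return 0
    return 2_500 if battery_kwh > 10 else 1_500

def total_incentive(msrp: float, battery_kwh: float, electric_range_miles: float,
                    is_new: bool, charge_ahead_eligible: bool = False,
                    federal_credit: int = 7_500) -> int:
    state = oregon_standard_rebate(msrp, battery_kwh, electric_range_miles, is_new)
    charge_ahead = 2_500 if charge_ahead_eligible else 0   # income-qualified add-on (see below)
    return state + charge_ahead + federal_credit

# Example: a new $35,000 battery-electric car with a 60 kWh pack.
print(total_incentive(35_000, 60, 200, True))                               # 10000
print(total_incentive(35_000, 60, 200, True, charge_ahead_eligible=True))   # 12500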

How the electric vehicle rebate will be given

  • Send in your rebate application within 6 months of buying the vehicle or starting the vehicle lease.
  • You may need to send it to the Oregon Department of Environmental Quality, or a third party non-profit. The application details have not yet been released.
  • The state will attempt to issue the rebate within 60 days of receiving the application (the bill’s language is “attempt”).

Additional rebates for low-income Oregonians (aka charge ahead rebate)

Ideally, EV rebate programs should provide additional financial assistance to low-income drivers. Low-income households typically spend more on transportation than higher earners, and transportation can comprise up to 30 percent of low-income household budgets. So, being able to save on transportation fuel and vehicle maintenance by choosing an electric vehicle can mean even more to low-income households in Oregon and beyond.

Fueling an electric vehicle in Oregon is like paying the equivalent of $0.97 for a gallon of gasoline. In addition, battery electric vehicles have fewer moving parts and don’t require oil changes, so electric vehicle maintenance costs have been estimated to be 35 percent lower than for comparable gasoline vehicles. (The $0.97 figure is an “eGallon” comparison based on EIA’s state-by-state residential electricity and gasoline prices; find out more at www.energy.gov/eGallon.)
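For context, the eGallon figure can be roughly reconstructed. Here is a sketch assuming the published eGallon approach (fuel economy of a comparable gasoline car, times EV electricity use per mile, times the residential electricity price), with illustrative inputs rather than DOE’s exact values:

# Rough "eGallon"-style estimate: the cost of electricity to drive an EV as far as a
# comparable gasoline car travels on one gallon. All inputs are illustrative assumptions.
comparable_car_mpg = 28.0          # assumed fuel economy of comparable gasoline cars
ev_kwh_per_mile = 0.33             # assumed EV electricity consumption
electricity_usd_per_kwh = 0.105    # assumed Oregon residential electricity price

egallon = comparable_car_mpg * ev_kwh_per_mile * electricity_usd_per_kwh
print(f"~${egallon:.2f} per eGallon")   # lands near the $0.97 figure quoted above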

How the Oregon charge ahead rebate works

To qualify for the charge ahead rebate, you must:
  • Have a household income less than or equal to 80 percent of the area median income (low income) or between 80 and 120 percent of area median income (moderate income).
    1. Area median income is defined by the Oregon Housing and Community Services Department and is tied to the closest metropolitan area in Oregon.
  • Live in an area of Oregon that has elevated concentrations of air contaminants commonly attributed to motor vehicle emissions.
  • Retire or scrap a gas-powered vehicle that has an engine that is at least 20 years old AND replace that vehicle with an electric vehicle.
  • The electric vehicle can be used or new.
  • Send in an application to the Oregon Department of Environmental Quality or a third party non-profit. Details are still being worked out.
  • Get up to an additional $2,500 in rebate off the electric vehicle.

How the Oregon electric vehicle rebate is funded

These rebates are being established as part of a broader transportation package, so the funding mechanisms in the bill are being levied not only for electric vehicles but also for maintaining Oregon’s roads, bridges, and tunnels and other transportation projects.

Beginning in 2020, electric vehicles will be subject to higher title and registration fees in Oregon, expected to total about $110.

Oregon will also pay for road work with a gas tax increase of 4 cents per gallon, rising incrementally to 10 cents by 2024. The bill also imposes a $16 vehicle registration fee, a 0.1 percent payroll tax, and a 0.5 percent sales tax on new vehicles.

The bill additionally allows Oregon to introduce rush-hour congestion roadway tolls. Cyclists aren’t off the hook, either. Adult bicycles (defined as bikes with wheels at least 26 inches in diameter) over $200 will be subject to a $15 excise tax. These funds will go toward grants for bicycle and pedestrian transportation projects.

Overall, the electric vehicle rebate fund will be at least $12 million annually, though other monies, like donations, can be deposited into the fund too. $12 million is enough cash for 4,800 full $2,500 rebates each year.

Oregon residents bought 1,969 new pure EVs and 1,506 new PHEVs in 2016, so there’s still a good amount of room for this rebate to help grow the Oregon electric vehicle market. Overall, this is a wonderful program that will both help increase electric vehicle sales in Oregon and help expand the benefits of driving on electricity to those who need it the most.

A Quick Guide to the Energy Debates

UCS Blog - The Equation (text only) -

There’s an energy transition happening with major implications for how we use and produce electricity. But not everyone agrees on which direction the transition should take us. The ensuing debate reveals deeply-held views about markets, the role of government, and the place for state policies in a federal system.

UCS has regularly profiled the transition to clean energy, which is led by state choices and rapid growth in renewable energy, energy efficiency, and vehicle electrification. Wind and solar innovations have made these sources very competitive; as coal plants have grown older and new cleaner plants continue to be built, the mix of energy has changed.

With gas now exceeding coal, and monthly renewable generation passing nuclear, the debate has heated up. Here’s a quick rundown on the views of the actors involved.

Consumer interested in clean energy. Photo: Toxics Action Center.

What the markets say

In the electricity markets run by PJM, NYISO, and ISO-New England (covering roughly the region from Chicago to Virginia, and up to Maine), there is wide understanding that “Cheap gas is coal’s fiercest enemy.” There, the debate is how to deal with state policies that contribute revenues to nuclear plants, foremost, as well as renewables.

These grid operators—and the stakeholders with billions of dollars of revenues from their markets—have long-running efforts to refine price signals and participation rules to ensure competition. The Federal Energy Regulatory Commission (FERC) supervises these markets, and has a similar long-term commitment to seeing these markets succeed.

When it comes to environmental policies, and the notion of environmental externalities (e.g. the costs of pollution), the economists speak for these grid organizations. These markets have accepted the Regional Greenhouse Gas Initiative (RGGI), which adds a modest price on carbon allowances, and would be ready and able to include a more impactful carbon price. But the current circumstances, where states have selected various means to correct for externalities, make the market purists upset.

Some subsidies are more equal than others?

Renewable support is the law in more than 29 states—and fossil fuels receive tens of billions of dollars in subsidies. UCS argued at FERC and in the PJM stakeholder process that market advocates lack any consistent justification for discriminating between subsidies. At best, they have said “we can live with some, but not too much, subsidy.” No comparison has been offered showing the impacts on market prices of one set of subsidies compared with another.

EIA chart showing the changing energy mix for electricity production.

Others in the debate, from opposite ends of the commercial spectrum, warn that the grid operators should seek alignment with the environmental and diversity goals expressed by consumers and policy makers. Representing consumers and local government, American Municipal Power, based in Columbus, Ohio with members in 9 states from Delaware to Michigan, and electric co-operatives in NRECA, call for respect and recognition for decisions made outside the federally-supervised markets. At the same time, Exelon, owner of nuclear plants across the eastern US, aligns itself with the state support of renewable energy now that similar state policies have surfaced for existing nuclear plants.

FERC, the arbiter of this debate, expressed sincere hope that the parties will settle this themselves, so that the agency will not have to, as Exelon put it, “require states to forgo their sovereign power to make their own environmental policy as the price of admission to the federal wholesale markets.”

Review so far

Let’s try to summarize: the market folks see gas beating coal and nuclear on economics. The nuclear folks want state policies to support existing nuclear plants. States and consumer-owned utilities seek to keep federally-supervised markets from overriding democratically-decided choices.

Enter the DOE

Secretary of Energy Rick Perry, who as governor of Texas oversaw the greatest expansion of wind energy in the US, seeks to support coal with a forthcoming Department of Energy “baseload” study. From all indications, this initiative is meant to:

1) defeat the market where gas has out-competed coal;
2) trample the consumer and voter choices for renewables; and
3) reverse the trend of lower energy costs from innovation by requiring more payments to the oldest and most expensive generators.

Unfortunately, the April 14 memo from Perry ordering this study mixes flawed assertions about reliability with assumptions about economics. Organizations across the political spectrum have labored to explain that maintaining coal plants, and even the label of “baseload” generation, are economic concepts from another time.

When the debate continues, keep these facts in mind:

  • Coal provides less than 1% of electricity in New York and the 6-state New England grid.
  • The same is true in Washington, Oregon, and California.
  • At times, wind and solar have generated 50 to 60 percent or more of total electricity demand in some parts of the country, including Texas, while maintaining and even improving reliability.
  • In May, wind, solar, geothermal and biopower supplied a record 67 percent of electricity needs in California’s power pool, and more than 80 percent when you include hydropower.
  • In 2016, wind power provided more than 30 percent of Iowa’s and South Dakota’s annual electricity generation, and more than 15 percent in nine states.

With an energy transition clearly underway, some strange debates are breaking out. Like in so many things, perhaps the only consistent way to sort out the positions and policies is to follow the money.

Photo: Chris Hunkeler/CC BY-SA (Flickr)

Cooper: Nuclear Plant Operated 89 Days with Key Safety System Impaired

UCS Blog - All Things Nuclear (text only) -

The Nebraska Public Power District’s Cooper Nuclear Station, about 23 miles south of Nebraska City, has one boiling water reactor that began operating in the mid-1970s, adding about 800 megawatts of electricity to the power grid. Workers shut down the reactor on September 24, 2016, to enter a scheduled refueling outage. What happened during and after that outage eventually led to an NRC special inspection.

Following the outage, workers reconnected the plant to the electrical grid on November 8, 2016, to begin its 30th operating cycle. During the outage, workers closed two valves that are normally open while the reactor operates. Later during the outage, workers were directed to re-open the valves and they completed paperwork indicating the valves had been opened. But a quarterly check on February 5, 2017, revealed that both of the valves remained closed. The closed valves impaired a key safety system for 89 days until the mis-positioned valves were discovered and opened. The NRC dispatched a special inspection team to the site on March 1, 2017, to look into the causes and consequences of the improperly closed valves.

The Event

Workers shut down the reactor on September 24, 2016. The drywell head and reactor vessel head were removed to allow access to the fuel in the reactor core. By September 28, the water level had been increased to more than 21 feet above the flange where the reactor vessel head is bolted to the lower portion of the vessel. Flooding this volume—called the reactor cavity or refueling well—permits spent fuel bundles to be removed while still underwater, protecting workers from the radiation.

With the reactor shut down and so much water inventory available, the full array of emergency core cooling systems required when the reactor operates was reduced to a minimal amount. The reduction of systems required to remain in service facilitates maintenance and testing of out-of-service components.

In the late afternoon of September 29, workers removed Loop A of the Residual Heat Removal (RHR) system from service for maintenance. The RHR system is like a nuclear Swiss Army knife—it can supply cooling water for the reactor core, containment building, and suppression pool and it can provide makeup water to the reactor vessel and suppression pool. Cross-connections enable the RHR system to perform so many diverse functions. Workers open and close valves to transition from one RHR mode of operation to another.

As indicated in Figure 1, the RHR system at Cooper consisted of two subsystems called Loop A and Loop B. The two subsystems provide redundancy—only one loop need function for the necessary cooling or makeup job to be accomplished successfully.

Fig. 1 (Source: Nebraska Public Power District, Individual Plant Examination (1993))

RHR Loop A features two motor-driven pumps (labeled P-A and P-C in the figure) that can draw water from the Condensate Storage Tank (CST), suppression chamber, or reactor vessel. The pump(s) send the water through, or around, a heat exchanger (labeled HX-A). When passing through the heat exchanger, heat is conducted through the metal tube walls to be carried away by the Service Water (SW) system. The water can be sent to the reactor vessel, sprayed inside the containment building, or sent to the suppression chamber. RHR Loop B is essentially identical.

Work packages for maintenance activities include steps, when applicable, to open electrical breakers (de-energizing components to protect workers from electrical shock) and to close valves (so that isolated sections of piping can be drained of water and valves or pumps removed or replaced). The instructions for the RHR Loop A maintenance begun on September 29 included closing valves V-58 and V-60. These are valves that can only be opened and closed manually using handwheels. Valve V-58 is in the minimum flow line for RHR Pump A while V-60 is in the minimum flow line for RHR Pump C. These two minimum flow lines connect downstream of these manual valves and then this common line connects to a larger pipe going to the suppression chamber.

Motor-operated valve MOV-M016A in the common line automatically opens when either RHR Pump A or C is running and the pump’s flow rate is less than 2,731 gallons per minute. The large RHR pumps generate considerable heat when they are running. The minimum flow line arrangement ensures that there’s sufficient water flow through the pumps to prevent them from being damaged by overheating. MOV-M016A automatically closes when pump flow rises above 2,731 gallons per minute to prevent cooling flow or makeup flow from being diverted.
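The minimum flow arrangement amounts to a simple interlock. Here is a minimal sketch of that logic in Python (an illustration of the behavior described above, not plant control-system code):

# MOV-M016A behavior as described above: open the minimum flow path when an RHR pump
# is running below the setpoint (so the pump doesn't overheat on its own churning),
# and close it above the setpoint (so cooling or makeup flow isn't diverted).
MIN_FLOW_SETPOINT_GPM = 2_731

def minimum_flow_valve_should_be_open(pump_running: bool, pump_flow_gpm: float) -> bool:
    return pump_running and pump_flow_gpm < MIN_FLOW_SETPOINT_GPM

print(minimum_flow_valve_should_be_open(True, 500))     # True: recirculate to protect the pump
print(minimum_flow_valve_should_be_open(True, 4_000))   # False: send full flow where it's needed

With manual valves V-58 and V-60 padlocked shut, that recirculation path was blocked regardless of what MOV-M016A did, which is why the mis-positioning mattered.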

The maintenance on RHR Loop A was completed by October 7. The work instructions directed operators to reopen valves V-58 and V-60 and then seal the valves in the opened position. For these valves, sealing involved installing a chain and padlock around the handwheel so the valve could not be repositioned. The valves were sealed, but mistakenly in the closed rather than opened position. Another operator independently verified that this step in the work instruction had been completed, but failed to notice that the valves were sealed in the wrong position.

At that time during the refueling outage, RHR Loop A was not required to be operable. All of the fuel had been offloaded from the reactor core into the spent fuel pool. On October 19, workers began transferring fuel bundles back into the reactor core.

On October 20, operators declared RHR Loop A operable. Due to the closed valves in the minimum flow lines, RHR Loop A was actually inoperable, but that misalignment was not known at the time.

The plant was connected to the electrical grid on November 8 to end the refueling outage and begin the next operating cycle.

Between November 23 and 29, workers audited all sealed valves in the plant per a procedure required to be performed every quarter. Workers confirmed that valves V-58 and V-60 were sealed, but failed to notice that the valves were sealed closed instead of opened.

On February 5, 2017, workers were once again performing the quarterly audit of all sealed valves. This time, they noticed that valves V-58 and V-60 were not opened as required. They corrected the error and notified the NRC of the discovery.

The Consequences

Valves V-58 and V-60 had been improperly closed for 89 days, 12 hours, and 49 minutes. During that period, the pumps in RHR Loop A had been operated 15 times for various tests. The longest time that any pump was operated without its minimum flow line available was determined to be 2 minutes and 18 seconds. Collectively, the pumps in RHR Loop A operated for a total of 21 minutes and 28 seconds with flow less than 2,731 gallons per minute.

Running the pumps at less than “minimum” flow introduced the potential for their having been damaged by overheating. Workers undertook several steps to determine whether damage had occurred. Considerable data is collected during periodic testing of the RHR pumps (as suggested by the fact it was known that the longest a pump ran without its minimum flow line was 2 minutes and 18 seconds). Workers reviewed data such as differential pressures and vibration levels from tests over the prior two years and found that current pump performance was unchanged from performance prior to the fall 2016 refueling outage.

Workers also calculated how long an RHR pump could operate before becoming damaged. They estimated that time to be 32 minutes. To double-check their work, a consulting firm was hired to independently answer the same question. The consultant concluded that it would take an hour for an RHR pump to become damaged. (The 28-minute difference between the two calculations was likely due to the workers onsite making conservative assumptions that the more detailed analysis was able to reduce. But it’s a difference without distinction—both calculations yield ample margin to the total time the RHR pumps ran.)
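Putting the quoted times side by side (all figures come from the paragraphs above):

# Margin between actual low-flow run time and the estimated time to pump damage.
longest_single_run_s = 2 * 60 + 18          # 2 minutes 18 seconds (longest single run)
total_low_flow_run_s = 21 * 60 + 28         # 21 minutes 28 seconds (all 15 runs combined)
onsite_estimate_s = 32 * 60                 # plant staff: ~32 minutes to damage
consultant_estimate_s = 60 * 60             # consultant: ~1 hour to damage

print(onsite_estimate_s - total_low_flow_run_s)        # ~632 s to spare vs. the conservative estimate
print(consultant_estimate_s - total_low_flow_run_s)    # ~2,312 s to spare vs. the consultant's estimate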

The testing and analysis clearly indicate that the RHR pumps were not damaged by their operating during the 89-plus days their minimum flow lines were unavailable.

The Potential Consequences  

The RHR system can perform a variety of safety functions. If the largest pipe connected to the reactor vessel were to rupture, the two pumps in either RHR loop are designed to provide more than sufficient makeup flow to refill the reactor vessel before the reactor core overheats.

The RHR system has high capacity, low head pumps. This means the pumps supply a lot of water (many thousands of gallons each minute) but at a low pressure. The RHR pumps deliver water at roughly one-third of the normal operating pressure inside the reactor vessel. When small or medium-sized pipes rupture, cooling water drains out, but the reactor vessel pressure takes longer to drop below the point where the RHR pumps can supply makeup flow. During such an accident, the RHR pumps will automatically start but will send water through the minimum flow lines until reactor vessel pressure drops low enough. The closure of valves V-58 and V-60 could have resulted in RHR Pumps A and C being disabled by overheating about an hour into an accident.

Had RHR Pumps B and D remained available, their loss would have been inconsequential. Had RHR Pumps B and D been unavailable (such as due to failure of the emergency diesel generator that supplies them electricity), the headline could have been far worse.

NRC Sanctions

The NRC’s special inspection team identified the following two apparent violations of regulatory requirements, both classified as Green in the agency’s Green, White, Yellow and Red classification system:

  • Exceeding the allowed outage time in the operating license for RHR Loop A being inoperable. The operating license permitted Cooper to run for up to 7 days with one RHR loop unavailable, but the reactor operated far longer than that period with the mis-positioned valves.
  • Failure to implement an adequate procedure to control equipment. Workers used a procedure every quarter to check sealed valves. But the guidance in that procedure was not clear enough to ensure workers verified both that a valve was sealed and that it was in the correct position.

UCS Perspective

This near-miss illustrates the virtues, and limitations, of the defense-in-depth approach to nuclear safety.

The maintenance procedure directed operators to re-open valves V-58 and V-60 when the work on RHR Loop A was completed.

While quite explicit, that procedure step alone was not deemed reliable enough. So, the maintenance procedure required a second operator to independently verify that the valves had been re-opened.

While the backup measure was also explicit, it was not considered an absolute check. So, another procedure required each sealed valve to be verified every quarter.

It would have been good had the first quarterly check identified the mis-positioned valves.

It would have been better had the independent verifier found the mis-positioned valves.

It would have been best had the operator re-opened the valves as instructed.

But because no single barrier is 100% reliable, multiple barriers are employed. In this case, the third barrier detected and corrected a problem before it could contribute to a really bad day at the nuclear plant.

Defense-in-depth also accounts for the NRC’s levying two Green findings instead of imposing harsher sanctions. The RHR system performs many safety roles in mitigating accidents. The mis-positioned valves impaired, but did not incapacitate, one of two RHR loops. That impairment could have prevented one RHR loop from successfully performing its necessary safety function during some, but not all, credible accident scenarios. Even had the impairment taken RHR Loop A out of the game, other players on the Emergency Core Cooling System team at Cooper could have stepped in.

Had the mis-positioned valves left Cooper with a shorter list of “what ifs” that needed to line up to cause disaster or with significantly fewer options available to mitigate an accident, the NRC’s sanctions would have been more severe. The Green findings are sufficient in this case to remind Cooper’s owner, and other nuclear plant owners, of the importance of complying with safety regulations.

Accidents certainly reveal lessons that can be learned to lessen the chances of another accident. Near-misses like this one also reveal lessons of equal value, but at a cheaper price.

How President Trump’s Proposed Budget Cuts Would Harm Early Career Scientists

UCS Blog - The Equation (text only) -

Kaila Colyott is coming close to graduation as a Ph.D. candidate at the University of Kansas, but she’s not finishing with the same enthusiasm for her career prospects that she began graduate school with.  At the beginning, she wasn’t particularly worried about getting a job after graduation. “I was a first generation student coming from a largely uneducated background. I was pretty stoked about doing science, and I was told that more education would help me land a job in the future.”

She wasn’t ever informed that, under the current market, academics graduate with significant debt and without a promise of a job. Ms. Colyott said that she became more concerned about her job prospects as she learned more about the job market in academia, “I became more concerned over time as I witnessed academics hustling for money all the time.”

President Trump’s proposed across-the-board cuts to scientific research and training would make this problem worse. While they are out of step with what Congress wants and have yet to be realized, they would have tremendous impacts on our nation’s scientific capacity and ability to enact science-based policy.

Incentives for scientific careers are dwindling

According to a National Science Foundation (NSF) report, American universities awarded 54,070 PhDs in 2014, yet 40% of those newly minted doctorates had no job lined up after graduation. There is a connection between funding for scientific research and job opportunities for early career scientists: cuts to the former can slash the hopes and dreams of the latter.

I am fortunate to work in science policy because of the funding and training opportunities afforded to me as an early career scientist. As a graduate student, I received two fellowships from NSF, one of which allowed me to take on an internship in the White House’s Office of Science and Technology Policy, and I later held a post-doctoral fellowship through the Department of Energy’s Oak Ridge Institute for Science and Education. Together, these opportunities gave me robust training in both science and policy analysis.

Such training also offered me a career path outside of academia, made necessary by a limited number of tenure-track positions available at universities. Thus, I view this type of funding and training as essential for early career scientists as many will need to seek career paths outside of academia given the job market. Yet, President Trump’s proposed budget cuts signal to me that these same opportunities may not be afforded to a younger generation of scientists—such a signal is concerning.

Government funding of science is essential to early career scientists

There was a time when most PhD-level scientists would enter into a tenure-track position at a university after graduation. Today, even the most accomplished students pursuing a PhD have a particularly difficult time landing a tenure-track position because there are very few jobs and competition is stiff.

This creates the need for other options. Among those graduating with a PhD in 2014 who had indicated they had a post-graduate commitment, 39% were entering into a post-doc position (a temporary research position most common in the sciences). While many of these exist in research labs at universities across the country, there are also many post-doc positions available in the federal government.

Such opportunities can expose early career scientists to the process by which science informs policy making in government while still allowing them to conduct research. This allows early career scientists the chance to increase both their interest and efficacy in science policy. Additionally, agencies such as the NSF and the National Institutes of Health (NIH) offer graduate students and post-docs fellowships and grants that allow them to build skills in forming their own research ideas and writing grant proposals.

Opportunities for early career scientists to obtain government fellowships or grants in the sciences may decrease under the Trump administration, if the administration’s budget cuts are actualized. For example, President Trump has proposed to cut NSF’s budget by 11 percent. As NSF struggled with its 2018 budget request to meet the 11% cut, the agency decided it would need to cut half of the annual number of prestigious graduate research fellowships offered to PhD students.

Such a cut would significantly reduce the availability of fellowships for biologists and environmental scientists, especially since the NSF biology directorate announced in June that it would cease its funding of Doctoral Dissertation Improvement Grants (DDIGs) due to the time needed to manage the program. Additionally, many other fellowships and grants have been proposed to be cut or re-structured such as the STAR grant program at EPA, the National Space Grant and Fellowship Program at NASA, and the Sea Grant program at NOAA.

These programs have led to many scientific advances that have reduced the costs of regulations, protected public health and the environment, and saved lives. For example, the STAR grant program at EPA implemented several major pollution focused initiatives such as the Particulate Matter Centers, Clean Air Research Centers, and Air, Climate and Energy Centers, which have together produced substantial research showing air pollution can decrease human life expectancy. A recent report by the National Academies of Sciences noted that this research likely saved lives and reduced healthcare costs, having helped to inform policies that improved air quality nationwide.

While many of the extreme proposed cuts to science funding will likely not come to fruition, given bipartisan support of scientific funding in Congress, even small cuts to government programs that offer funding for science could really impact job prospects for early career scientists. And the uncertainty created by these suggested cuts will discourage young people, especially those from disadvantaged backgrounds, from pursuing scientific careers at all.

Neil Ganem, a School of Medicine assistant professor of pharmacology and experimental therapeutics at Boston University, described how the realization of these cuts would have a negative economic effect. “It would mean that hospitals, medical schools, and universities would immediately institute hiring freezes. Consequently, postdocs, unable to find academic jobs, would start accumulating and be forced into other career paths, many, I’m sure, outside of science. Jobs would be lost. The reality is that if principal investigators don’t get grants, then they don’t hire technicians, laboratory staff, and students; they also don’t buy [lab supplies] or services from local companies. There is a real trickle-down economic effect.” Indeed, such cuts could be devastating to post-docs and their families, especially in the case a post-doc was offered a position only to see that funding pulled at the last minute.

Early career scientists are paying attention

When asked if Trump’s proposed budget cuts to basic research made her more concerned about her job prospects, Colyott said that they were one of many factors, but she expressed greater concern for the generation of young scientists below her. “These cuts make me concerned about younger scientists who won’t have the same resources that I had at my disposal—like NSF’s Graduate Research Fellowships or the DDIGs. Having the ability to propose my own ideas and receive funding for them built a lot of confidence in me such that I felt I could continue to do science.”

Colyott has been very active in science outreach as a graduate student and is very passionate about this field, and intends to seek a job in outreach after graduation to get first generation students like her interested in science. However, she is now worried about encouraging young students into academia. “Why would I want to encourage others to enter science when I already am nervous myself about my own job prospects?”

Even if President Trump’s egregious cuts to scientific funding do not come true, they most certainly send a signal to scientists, especially young scientists, that their skills are not valued. This message can be particularly disheartening to students attempting to gain a career in science, which may dissuade them from entering the field.

So, I have my own message for these younger scientists. I see you, I hear you, and I completely understand your fears about your job prospects. You deserve a chance to advance our understanding on scientific topics that are vital to better humanity. Your scientific research is valued and it is important, and there is a huge community of others who believe the very same thing. Science is collaborative by nature—I assure you, we all will work together to lift you up and make sure your voice is heard.

One of The Largest Icebergs on Record Just Broke off Antarctica. Now What?

UCS Blog - The Equation (text only) -

An iceberg, among the largest on record (since satellites started tracking in 1978), broke off the Larsen-C ice shelf along the Antarctic Peninsula. The iceberg has an area greater than that of Delaware and a volume twice that of Lake Erie. What were the origins of this event, and now what?
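Those size comparisons can be sanity-checked with rough numbers. Here is a minimal sketch assuming an ice density of about 0.917 tonnes per cubic meter and a commonly cited Lake Erie volume of roughly 480 cubic kilometers (reference values I am supplying, not figures from this post; the trillion-ton mass appears in the facts below):

# Rough sanity check of the iceberg size comparisons; reference values are assumptions.
iceberg_mass_tonnes = 1.0e12        # "more than a trillion tons" (see the facts below)
ice_density_t_per_m3 = 0.917        # typical density of glacial ice (assumed)
lake_erie_volume_km3 = 480.0        # commonly cited volume of Lake Erie (assumed)

iceberg_volume_km3 = iceberg_mass_tonnes / ice_density_t_per_m3 / 1e9   # m^3 -> km^3
print(round(iceberg_volume_km3))                              # ~1,090 km^3
print(round(iceberg_volume_km3 / lake_erie_volume_km3, 1))    # ~2.3, i.e., roughly twice Lake Erie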

Origins of the gigantic iceberg

Terms for cold regions of Earth and their contributions to global sea level rise, as of the 2013 publication of the Intergovernmental Panel on Climate Change Fifth Assessment Report (Working Group 1). Source: IPCC AR5 WG1 Figure 4-25.

In order to understand the present and future implications, we can quickly run through some facts regarding the origins of this gargantuan iceberg.  As we do this, it’s helpful to get a refresher on terms and recent trends for sea level rise contributions from cold regions  (see figure from the IPCC AR5 WG1 Figure 4-25).

Glaciers outside of Greenland and Antarctica were the largest ice contributors to global sea level rise between 1993 and 2009, though Antarctica and Greenland increased their contributions over the latter part of this period.

Now, a quick look at the iceberg, and how it formed:

What?: The iceberg, likely to be named A68, weighs more than a trillion tons.

Where?: This iceberg used to be part of the floating Larsen C Ice Shelf located along that part of Antarctica that looks like a skinny finger pointing toward South America.

When?: The iceberg broke away sometime between July 10 and July 12, 2017 (the uncertainty is due to the gap between repeat passes by satellites). Despite the current predominance of polar darkness in the region, several satellites detected the event with specialized instruments: NASA’s Aqua MODIS, NASA and NOAA’s Suomi VIIRS, and the European Space Agency’s Sentinel-1 satellites.

Why?:  It is natural for floating ice shelves to break off – or to “calve” – icebergs, as was captured in this unforgettable time lapse video clip from the film Chasing Ice.  The Larsen C ice shelf is a type that is fed by land-based ice – called glaciers – on the Antarctic Peninsula. The shelf size depends on the supply of ice from the glaciers and snow minus the loss of ice from calving and melting.

While calving is entirely natural, scientists are investigating other factors that could have played a role in the size and the timing of this event.  An ice shelf can melt and thin if the surface air temperature or ocean waters beneath an ice shelf warm above the freezing point.  The Antarctic Peninsula has experienced surface temperature warming over recent decades that is unprecedented over the last two millennia in the region.

Now what?

Larsen B ice shelf demise (NASA MODIS image by Ted Scambos, National Snow and Ice Data Center, University of Colorado, Boulder). Source: https://nsidc.org/news/newsroom/larsen_B/index.html

Immediate Risks: Not much in terms of global sea level rise, since the ice shelf was already floating. This is similar to the familiar demonstration of ice cubes melting in a cup of water: the water level stays the same. If iceberg A68 had instead suddenly calved from land-based ice, it would have contributed to global sea level, as Gavin Schmidt (NASA) has noted.
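For a rough sense of scale (a back-of-envelope sketch of my own, not a figure from the post), here is what roughly a trillion tons of land-based ice would mean for global mean sea level if it were added to the ocean; the ocean-area value is an approximation:

```python
# Back-of-envelope estimate: sea level rise if ~1 trillion metric tons of
# land-based ice were added to the ocean (hypothetical; A68 was already afloat).
iceberg_mass_kg = 1.0e12 * 1000.0   # ~1 trillion metric tons, in kilograms
ocean_area_m2 = 3.61e14             # global ocean surface area, ~361 million km^2
water_density_kg_m3 = 1000.0        # approximate density of meltwater

added_volume_m3 = iceberg_mass_kg / water_density_kg_m3
rise_mm = added_volume_m3 / ocean_area_m2 * 1000.0
print(f"Hypothetical global sea level rise: {rise_mm:.1f} mm")  # roughly 3 mm
```

A few millimeters is small on its own; the larger concern, discussed below, is the land-based ice the shelf helps hold back.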

The iceberg could also pose a navigation hazard for ships. Iceberg A68 can drift for years and, based on typical iceberg tracks for this region, would likely move to lower latitudes where more ships would have to avoid navigating too close. For now, few ships head that far south during the Antarctic winter, and those that do are likely more concerned about the large waves of the seas surrounding Antarctica. These waters have essentially unlimited fetch, where strong winds can generate some of the largest waves in the world, a hazard embedded in the nautical terms seafarers use for these southern latitudes: the “roaring forties” and “furious fifties.”

Near-term risks:  Scientists will closely track developments to see if the Larsen C ice shelf rebounds or follows the fate of nearby and lower latitude ice shelves that have disintegrated (Larsen A and Larsen B) over the past two decades.

Scientists will track processes observed during the Larsen B disintegration, such as meltwater ponding, changes in snow accumulation and loss, and meltwater penetrating deep into the ice shelf through cracks, which can accelerate ice loss.

To better understand the risks, we also need critical information, currently difficult to obtain, regarding ocean temperatures underneath the Larsen C ice shelf.   Warmer ocean waters lapping at the new fresh edge of the Larsen C ice shelf and penetrating deeper underneath could increase the risks for Larsen C shelf thinning and potential disintegration.

Long-term risks: Ice shelves buttress glaciers. If ice shelves are no longer there to buttress the glaciers and “put the brakes on” the flow of ice from land, these glaciers could accelerate and contribute directly to sea level rise. Multiple studies documented glacier flow rates many times greater after the complete disintegration of the Larsen B ice shelf.

“Glacier-ice shelf interactions: In a stable glacier-ice shelf system, the glacier’s downhill movement is offset by the buoyant force of the water on the front of the shelf. Warmer temperatures destabilize this system by lubricating the glacier’s base and creating melt ponds that eventually carve through the shelf. Once the ice shelf retreats to the grounding line, the buoyant force that used to offset glacier flow becomes negligible, and the glacier picks up speed on its way to the sea.”  Image by Ted Scambos and Michon Scott, National Snow and Ice Data Center, University of Colorado, Boulder. Source: NSIDC

If a similar sequence of events were to occur with the Larsen C ice shelf, coastal planners will need to know the scale of the potential risk and how quickly it could unfold. The Larsen C ice shelf is fed by glaciers on the narrow Antarctic Peninsula, which together hold an estimated 1 cm of potential contribution to future global sea level.

The pace and timing are big questions for scientists to monitor and to project using models that incorporate the processes observed. Such projections could improve sea level rise estimates for a world with the Paris Agreement fully implemented (i.e., global temperature rise limited to no more than 2 degrees Celsius above pre-industrial levels) versus higher-emissions scenarios. A good resource on current estimates of when chronic inundation could set in for many U.S. coastal communities is the new UCS report released yesterday and the accompanying peer-reviewed publication, Dahl et al., 2017.

Image credits: IPCC 2013 WG1 Figure 4-25; Ted Scambos, NSIDC; NASA.

Northern Plains Drought Shows (Again) that Failing to Plan for Disasters = Planning to Fail

UCS Blog - The Equation (text only) -

As the dog days of summer wear on, the northern plains are really feeling the heat. Hot, dry weather has quickly turned into the nation’s worst current drought in Montana and the Dakotas, and drought conditions are slowly creeping south and east into the heart of the Corn Belt. Another year and another drought presents yet another opportunity to consider how smart public policies could make farmers and rural communities more resilient to these recurring events.

Let’s start with what’s happening on the ground: Throughout the spring and early summer, much of the western United States has been dry, receiving less than half of normal rainfall levels. And the hardest hit is North Dakota. As of last week, 94 percent of the state was experiencing some level of abnormally dry conditions or drought, with over a quarter of the state in severe or extreme drought (a situation that only occurs 3 to 5 percent of the time, or once every 20 to 30 years).

Throughout the spring and early summer, drought conditions have worsened across the Dakotas and Montana, stressing crops and livestock.
Image: http://droughtmonitor.unl.edu/

But this drought is not just about a dry spring. Experts believe the problem started last fall, when first freeze dates were several weeks later than usual, creating a “bonus” growing period for crops like winter wheat and pasture grasses, which drew more water from the soil. This is an important pattern for agriculture to watch, as recent temperature trends point to greater warming in the winter.

Bad news for wheat farmers (and bread eaters)

The timing of the drought is particularly damaging to this region’s farm landscape, which centers around grasslands for grazing livestock, along with a mix of crops including wheat, corn, soy, and alfalfa.

Spring wheat has been especially hard hit—experts believe this is the worst crop in several decades in a region that produces more than 80 percent of the country’s spring wheat. (Here’s a great map of the wheat varieties grown across the country, which makes it easy to see that the bread and pasta products we count on come from Montana and the Dakotas).

As grasses wither, cattle ranchers have only bad options

More than 60 percent of the region’s pasture grasses are also in poor or very poor condition, leaving cattle without enough to eat. Given the forecast of high temperatures ahead, and dry conditions creeping into parts of the Corn Belt (at a time of year when corn is particularly sensitive to heat and drought), it is shaping up to be a difficult season for farmers and ranchers across the region.

So it’s appropriate that the Secretary of Agriculture issued a disaster proclamation in late June, allowing affected regions to apply for emergency loans. But another of the Secretary’s solutions for ranchers with hungry livestock could exacerbate a different problem: authorizing “emergency grazing” (and, just this week, “emergency haying”) on grasslands and wetlands designated off-limits to agriculture.

Short-term emergencies can hurt our ability to plan for the long-term

The Conservation Reserve Program (CRP), created by the 1985 Farm Bill, pays landowners a rental fee to keep environmentally sensitive lands out of agricultural production, generally for 10-15 years. It also serves to protect well-managed grazing lands as well as to provide additional acres for grazing during emergencies such as drought.

Instead of planting crops on these acres, farmers plant a variety of native grasses and tree species well suited to provide flood protection, wildlife and pollinator habitat, and erosion prevention. In 2016, almost 24 million acres across the United States (an area roughly the size of Indiana) were enrolled in CRP. This included 1.5 million acres in North Dakota, which represents approximately 4 percent of the state’s agricultural land.

While this might sound like a lot, CRP numbers across the country are down, and in fact North Dakota has lost half of its CRP acreage since 2007. This is due in part to Congress imposing caps on the overall acreage allowed in the program, but in larger part to historically high commodity prices over the same period and increased demand for corn-based ethanol.

The loss of CRP acreage over the last decade demonstrates high concentrations of land conversion in the Northern Plains, nearly overlapping with the current drought. Image: USDA Farm Service Agency

Research on crop trends tells a complicated story about how effective this program is at protecting these sensitive lands in the long term. The data show that grasslands, notably CRP acreage, are being lost rapidly across the United States. CRP acreage often comes back into crop production when leases expire (see examples of this excellent research here, here and finally here, which notes that CRP lands often become corn or soy fields), potentially erasing the environmental benefits these lands provided while they were set aside.

At the same time, with negotiations toward a new Farm Bill underway, some ranchers and lawmakers are looking for even more “flexibility” in the CRP program. Some have expressed concerns about the amount of land capped for CRP. Some feel that CRP rental rates are too high, tying up the limited suitable land that young farmers need to get started, while others believe there are not enough new contracts accepted (for things like wildlife habitat) because of caps.

The bottom line is that it is critical to have emergency plans in place to protect producers in cases of drought. Emergency livestock grazing on CRP acreage is one solution to help prevent ranchers from selling off their herds (such sell-offs are already being reported). But, if CRP acreage continues to decline, what will happen when the next drought occurs, or if this drought turns into a multi-year disaster? And what will happen if floods hit the region next year, and the grasslands that could help protect against that emergency aren’t there?

Unfortunately, short-term emergencies can hurt our ability to plan for the long term, and the trend toward losing CRP and grasslands is one example of this. It is no simple task for policy to find solutions that support short-term needs while encouraging long-term risk reduction.

Agroecology helps farmers protect land while still farming it

But there’s another way to achieve conservation goals that doesn’t depend upon setting land aside. A number of existing farm bill programs encourage farmers to use practices on working lands that build healthier soils to retain more water, buffering fields from both drought and flood events. Increasing investment and strengthening elements of these programs is an effective way to help farmers and ranchers build long-term resilience.

Recent research from USDA scientists in the Northern Plains highlights climate change impacts and adaptation options for the region, and their proposed solutions sound much like the agroecological practices UCS advocates for: increased cropping intensity and cover crops to protect the soil, more perennial forages, integrated crop and livestock systems, as well as economic approaches that support such diversification and the extension and education services needed to bring research to farmers.

As I wrote last year, drought experts recognize that proactive planning is critical: thinking ahead about how disasters can best be managed through activities such as rainfall monitoring, grazing plans, and water management. Here we are again with another drought, and climate projections tell us that things are likely to get worse. This year, as a new Farm Bill is negotiated, we have an opportunity to think long-term and make the investments needed to better manage future droughts.

 

As Coal Stumbles, Wind Power Takes Off in Wyoming

UCS Blog - The Equation (text only) -

After several years of mostly sitting on the sidelines, Wyoming is re-entering the wind power race in a big way. Rocky Mountain Power recently announced plans to invest $3.5 billion in new wind and transmission over the next three years. This development—combined with the long-awaited start of construction on what could be the nation’s largest wind project—will put Wyoming among the wind power leaders in the region. That’s welcome news for a state economy looking to rebound from the effects of the declining coal industry.

Capitalizing on untapped potential

Wyoming has some of the best wind resources in the country. The state ranks fifth nationally in total technical potential, but no other state has stronger Class 6 and 7 wind resources (considered the best of the best). And yet, wind development has remained largely stagnant in Wyoming since 2010.

In the last seven years, just one 80-megawatt wind project came online in Wyoming as the wind industry boomed elsewhere—more than doubling the installed US wind capacity to 84,000 megawatts.

Fortunately, it appears that Wyoming is ready to once again join the wind power bonanza, bringing a much-needed economic boost along with it. On June 29th, Rocky Mountain Power—Wyoming’s largest power provider—filed a request with regulators for approval to make major new investments in wind power and transmission. The plan includes upgrading the company’s existing wind turbines and adding up to 1,100 MW of new wind projects by 2020, nearly doubling the state’s current wind capacity.

In addition to the $3.5 billion in new investments, Rocky Mountain Power estimates that the plan will support up to 1,600 construction jobs and generate as much as $15 million annually in wind and property tax revenues (on top of the $120 million in construction-related tax revenue) to help support vital public services. What’s more—thanks to the economic competitiveness of wind power—these investments will save consumers money, according to the utility.

Rocky Mountain Power isn’t the only company making a big investment in Wyoming’s rich wind resources. After more than a decade in development, the Power Company of Wyoming (PCW) has begun initial construction on the first phase of the two-phase Chokecherry and Sierra Madre wind project, which will ultimately add 3,000 MW of wind capacity in Carbon County. The $5 billion project is expected to support 114 permanent jobs when completed, and hundreds more during the 3-year construction period. PCW also projects that over the first 20 years of operation, the massive project will spur about $780 million in total tax revenues for local and state coffers.

Diversifying Wyoming’s economy with wind

When completed, these two new wind investments will catapult Wyoming to the upper tier of leaders in wind development in the west and nationally. And combined with Wyoming’s existing wind capacity, the total annual output from all wind projects could supply nearly all of Wyoming’s electricity needs, if all the generation was consumed in state. That’s not likely to happen though, as much of the generation from the Chokecherry and Sierra Madre project is expected to be exported to other western states with much greater energy demands.

Still, the wind industry is now riding a major new wave of clean energy momentum in a state better known for its coal production.

Coal mining is a major contributor to Wyoming’s economy, as more than 40 percent of all coal produced in the US comes from the state’s Powder River Basin. But coal production has fallen in recent years as more and more coal plants retire and the nation transitions to cleaner, more affordable sources of power. In 2016, Wyoming coal production dropped by 20 percent compared with the previous year, hitting a nearly 20-year low. That resulted in hundreds of layoffs and confounded the state’s efforts to climb out of a long-term economic slump.  And while production has rebounded some this year, many analysts project the slide to continue over the long-term.

Of course, Wyoming’s recent wind power investments and their substantial benefits alone can’t replace all its losses from the coal industry’s decline. But a growing wind industry can offset some of the damage and play an important role in diversifying Wyoming’s fossil-fuel dependent economy. In fact, Goldwind Americas, the US affiliate of a large Chinese wind turbine manufacturer, recently launched a free training program for unemployed coal miners in Wyoming who want to become wind turbine technicians.

A growing wind industry can also provide a whole new export market for the state as more and more utilities, corporations, institutions and individual consumers throughout the west want access to a clean, affordable, reliable and carbon-free power supply.

Sustaining the momentum

As the wind industry tries to build on its gains in Wyoming, what’s not clear today is whether the state legislature will help foster more growth or stand in the way. In the past year, clean energy opponents in the Wyoming legislature have made several attempts to stymie development, including by significantly increasing an existing modest tax on wind production (Wyoming is the only state in the country that taxes wind production) and penalizing utilities that supply wind and solar to Wyoming consumers. Ultimately, wiser minds prevailed and these efforts were soundly defeated.

That’s good news for all residents of Wyoming. Wind power has the potential to boost the economy and provide consumers with clean and affordable power. Now that the wind industry has returned to Wyoming, the state should do everything it can to keep it there.

Photo: Flickr, Wyoming_Jackrabbit

The San Francisco Bay Area Faces Sea Level Rise and Chronic Inundation

UCS Blog - The Equation (text only) -

Looking across the San Francisco Bay at the city’s rapidly rising skyscrapers, it’s easy to see why Ingrid Ballman and her husband chose to move to the town of Alameda from San Francisco after their son was born. With streets lined with single-family bungalows painted in a rainbow of pastel colors and restaurant patios full of senior citizens watching pelicans hunt offshore, Alameda is a world away from the gigabits-per-second pace of life across the bay.

Children playing at Alameda’s Crown Memorial State Beach along San Francisco Bay. An idyllic place to play, the beach is described by the California Department of Parks and Recreation as “a great achievement of landscaping and engineering,” a description that applies to much of the Bay Area’s waterfront.

“I had a little boy and it’s a very nice place to raise a child–very family-oriented, the schools are great. And we didn’t think much about any other location than Alameda,” Ballman says. Alameda has been, by Bay Area standards, relatively affordable, though with median home  prices there more than doubling in the last 15 years, this is becoming less the case.

After Ballman and her husband bought their home she began to think more about the island’s future. “At some point,” she says carefully, “it really became clear that we had picked one of the worst locations” in the Bay Area.

A hotspot of environmental risk

The City of Alameda is located on two islands…sort of. Alameda Island, the larger of the two, is truly an island, but it only became so in 1902 when swamps along its southeastern tip were dredged and the Oakland Estuary was created. Bay Farm Island, the smaller of the two, used to be an island, but reclamation of the surrounding marshes has turned it into a peninsula that is connected to the mainland. In the 1950s, Americans flocked to suburbs in search of the American Dream of a house with a white picket fence and 2.5 children, and Alameda Island, home to a naval base and with little space for new housing, responded by filling in marshes, creating 350 additional acres. Bay Farm Island was also expanded with fill to extend the island farther out into the bay.

The filling of areas of San Francisco Bay was common until the late 1960s, when the Bay Conservation and Development Commission was founded.

Many Bay Area communities are built on what used to be marsh land. These low-lying areas are particularly susceptible to sea level rise and coastal flooding. Alameda  Island is circled in red, Bay Farm Island is just to the south.

While many former wetland areas are slated for restoration, many others now house neighborhoods, businesses, and schools, and are among the Bay Area’s more affordable places to live. The median rent for an apartment in parts of San Mateo and Alameda Counties where fill has been extensive can be half what it is in San Francisco’s bedrock-rooted neighborhoods.

When Bay Area residents think about natural hazards, many of us think first of earthquakes. In Alameda, Ballman notes, the underlying geology makes the parts of the island that are built on fill highly susceptible to liquefaction during earthquakes. It is precisely this same geology that places communities built on former wetlands in the crosshairs of a growing environmental problem: chronic flooding due to sea level rise.

Chronic inundation in the Bay Area

Ballman studies a map I brought showing the extent of chronic inundation in Alameda under an intermediate sea level rise scenario that projects about 4 feet of sea level rise by the end of the century. The map is a snapshot from UCS’s latest national-scale analysis of community-level exposure to sea level rise.

“Right here is my son’s school,” she says, pointing to a 12-acre parcel of land that’s almost completely inundated on my map. Under this intermediate scenario, the school buildings are safe; it’s mostly the athletic fields that flood frequently.

I didn’t bring along a map of chronic inundation under a high sea level rise scenario–about 6.5 feet of sea level rise by 2100–for Ballman to react to, but with that faster rate of sea level rise, her son’s school buildings would flood, on average, every other week by the end of the century. While this scenario seems far off, it’s within the lifetime of Ingrid’s son. And problems may well start sooner.

Seas are rising more slowly on the West Coast than on much of the East and Gulf Coasts, which means that most California communities will have more time to plan their response to sea level rise than many communities along the Atlantic coast. Indeed, by 2060, when the East and Gulf Coasts have a combined 270 to 360 communities where 10% or more of the usable land is chronically inundated, the West Coast has only 2 or 3. Given how densely populated the Bay Area is, however, even small changes in the reach of the tides can affect many people.

As early as 2035, under an intermediate sea level rise scenario, neighborhoods all around the Bay Area–on Bay Farm Island, Alameda, Redwood Shores, Sunnyvale, Alviso, Corte Madera, and Larkspur–would experience flooding 26 times per year or more, UCS’s threshold for chronic inundation. By 2060, the number of affected neighborhoods grows to include Oakland, Milpitas, Palo Alto, East Palo Alto, and others along the corridor between San Francisco and Silicon Valley.

By 2100, the map of chronically inundated areas around the Bay nearly mirrors the map of the areas that were historically wetlands.

By 2100, with an intermediate sea level rise scenario, many Bay Area neighborhoods would experience flooding 26 times or more per year. Many of these chronically inundated areas were originally tidal wetlands.

Affordable housing in Alameda

Like many Bay Area communities, Alameda has struggled to keep up with the demand for housing–particularly housing that is affordable to low- and middle-income families–as the population of the region has grown. In the past 10-15 years, large stretches of the northwestern shore of the island have been developed with apartment and condo complexes.

Driving by the latest developments and glancing down at my map of future chronic inundation zones, I was struck by the overlap. With a high scenario, neighborhoods only 10-15 years old would be flooding regularly by 2060. The main thoroughfares surrounding some of the latest developments would flood by the end of the century.

While the addition of housing units in the Bay Area is needed to alleviate the region’s growing housing crisis, one has to wonder how long the homes being built today will be viable places to live. None of this is lost on Ballman who states, simply, “There are hundreds of people moving to places that are going to be underwater.”

Many of Alameda’s newly developed neighborhoods would face frequent flooding in the second half of the century with intermediate or high rates of sea level rise.

“Some of the more affordable places to live,” says Andy Gunther of the Bay Area Ecosystems Climate Consortium, “are the places that are most vulnerable to sea level rise, including Pinole, East Palo Alto, and West Oakland.” Many of these communities that are highly exposed to sea level rise are low-income communities of color that are already suffering from a lack of investment. These communities have fewer resources at their disposal to cope with issues like chronic flooding.

Bay Area action on sea level rise

How neighborhoods–from the most affordable to the most expensive–throughout the Bay Area fare in the face of rising seas will depend, in part, on local, state, and federal policies designed to address climate resilience. A good first step would be to halt development in places that are projected to be chronically inundated within our lifetimes.

For Bay Area and other Pacific Coast communities that will experience chronic inundation in the coming decades, there is a silver lining: For many, there is time to plan for a threat that is several decades away, compared to communities on the Atlantic Coast that have only 20 or 30 years. And California is known for its environmental leadership, which has led to what Gunther calls an “incredible patchwork” of sea level rise adaptation measures.

Here are some of the many pieces of this growing patchwork quilt of adaptation measures:

  • Last year, with the passage of AB 2800, California Governor Jerry Brown created the Climate-Safe Infrastructure Working Group, which seeks to integrate a range of possible climate scenarios into infrastructure design and planning.
  • The City of San Francisco has developed guidance for planning the city with sea level rise in mind.
  • With funding from the Environmental Protection Agency (EPA), the Novato Watershed Program is harnessing natural processes to reduce flood risks along Novato Creek.
  • The San Francisco Estuary Institute (SFEI) is working to understand the natural history of San Francisquito Creek, near Palo Alto and East Palo Alto, in order to develop flood control structures and restoration goals that are functional and sustainable.
  • The Santa Clara Valley Water District is scheduled to begin work this summer to improve drainage in Sunnyvale’s flood-prone east and west channels and reduce flood risks for 1,600 homes. The district is also addressing tidal flooding in cooperation with the US Army Corps of Engineers.
  • As part of its efforts to address sea level rise, San Mateo County installed virtual reality viewers along its shoreline to engage the public in a discussion of how sea level rise would affect their community.
  • At the regional level, the Bay Conservation and Development Commission is collaborating with the National Oceanic and Atmospheric Administration (NOAA) and other local, state, and federal agencies on the Adapting to Rising Tides project, which provides information, tools, and guidance for organizations seeking solutions to the challenges of climate change.
  • The Bay Area’s Resilient by Design challenge brings together engineers, community members, and designers to jointly develop solutions to the consequences of sea level rise.

In South San Francisco Bay, a number of shoreline protection projects have been proposed or are underway.

A regional response to sea level rise

Gunther notes that “We’re still struggling with what to do, but the state, cities, counties, and special districts are all engaged” on the issue of sea level rise. With hundreds of coastal communities nationwide facing chronic flooding that, in the coming decades, will necessitate transformative changes to the way we live along the coast, regional coordination, while challenging, will be critical. Otherwise, communities with fewer resources to adapt to rising seas risk getting left behind.

“There’s a regional response to sea level rise that’s emerging,” says Gunther, and the recently passed ballot measure AA may be among the first indicators of that regional response.

In 2016, voters from the nine counties surrounding San Francisco Bay approved measure AA, which focuses on restoring the bay’s wetlands. Gunther says that this $500+ million effort could prove to be “one of the most visionary flood protection efforts of our time.” The passage of Measure AA was particularly notable in that it constituted a mandate from not one community or one county, but all nine counties in the Bay Area.

Toward a sustainable Bay Area

Waves of people have rushed in and out of the Bay Area for over 150 years, seeking fortunes here, then moving on as industries change. The stunning landscape leaves an indelible mark on all of us, just as we have left a mark on it, forever altering the shoreline and ecosystems of the bay.

For those of us, like Ingrid Ballman and like me, who have made our homes and are watching our children grow here, the reality that we cannot feasibly protect every home, every stretch of the bay’s vast coastline, is sobering. All around the bay, incredible efforts are underway to make Bay Area communities safer, more flood-resilient places to live. Harnessing that energy at the regional and state levels, and continuing to advocate for strong federal resilience-building frameworks, has the potential to make the Bay Area a place we can continue to live for a long time, and a leader in the century of sea level rise adaptation that our nation is entering.

Spanish Translation (En español)

Image credits: Pengrin/Flickr; San Francisco Bay Joint Venture; Union of Concerned Scientists/Kristy Dahl; San Francisco Estuary Institute and the Bay Area Ecosystems Climate Change Consortium.

El área de la bahía de San Francisco enfrenta aumento del nivel del mar e inundación crónica

UCS Blog - The Equation (text only) -

Cuando desde el lado de la bahía uno ve los rascacielos de San Francisco multiplicarse a paso frenético, es fácil entender por qué Ingrid Ballman y su esposo eligieron mudarse de la ciudad hacia Alameda después del nacimiento de su hijo. Con búngalos unifamiliares pintados en un arco iris de colores pastel y restaurantes con patios en donde adultos mayores pasan el tiempo mirando a los pelícanos pescar, Alameda es un mundo de diferencia entre el ritmo de ‘gigabits’ por segundo de San Francisco y la vida del otro lado de la bahía.

Niños jugando en la playa estatal Crown Memorial de Alameda, en la bahía de San Francisco. Un lugar de ensueño para jugar, el Departamento de Parques y Recreación del Estado de California describe la playa como “un gran logro de paisajismo e ingeniería”, descripción que se aplica a la mayor parte de la costa del área de la bahía.

“Tuve un niño y es un lugar agradable para criarlo, muy orientado a la familia, las escuelas son buenas. No pensamos mucho en ningún otro lugar más que Alameda”, dice Ballman. Alameda ha sido, para los estándares del área de la bahía, relativamente económica aunque con el promedio de los precios de las casas, que han subido más del doble en 15 años, esto es cada vez menos el caso.

Después de que Ballman y su esposo compraron su casa, ella comenzó a pensar más en el futuro de la isla. “En algún momento”, dice cuidadosamente, “realmente quedó claro que habíamos escogido una de las peores ubicaciones” del área de la bahía de San Francisco.

Un punto estratégico de riesgo

La ciudad de Alameda está situada en dos islas…más o menos. La isla de Alameda, la más grande de las dos, es verdaderamente una isla, lo ha sido desde el 1902 cuando unos pantanos a lo largo de la punta sureste fueron dragados para crear el estuario de Oakland.

La isla de Bay Farm, la más pequeña de las dos, solía ser una isla, pero la recuperación de los humedales la convirtió en una península conectada a tierra firme. En los años cincuenta, cuando familias enteras migraron a los suburbios en busca del sueño americano (una casa con una cerca blanca y 2 hijos y medio), la isla de Alameda, con su base naval, contaba con poco espacio para construir nuevas viviendas.

La solución al influjo de población fue rellenar los humedales para crear 350 acres adicionales. La isla de Bay Farm también usó rellenos para ampliar la isla hacia la bahía.

El relleno de áreas de la bahía de San Francisco fue común hasta finales de los años sesenta, cuando fue fundada la Comisión para la Conservación y Desarrollo de la Bahía.

Muchas comunidades del área de la bahía están construidas sobre antiguos humedales. Estas áreas bajas son particularmente susceptibles al aumento del nivel del mar y a la inundación costera.

Mientras que existen programas para recuperar humedales en muchas áreas, muchas otras zonas de relleno son hoy barrios establecidos, con negocios y escuelas, que son más económicos para vivir que otras zonas. La renta promedio de un apartamento en partes de los condados de San Mateo y Alameda, donde el relleno ha sido extensivo, puede valer la mitad que en los barrios de tierra firme de San Francisco.

Cuando los residentes del área de la bahía consideramos los peligros ambientales, muchos de nosotros pensamos primero en los terremotos. En Alameda, hace notar Ballman, el terreno geológico hace que las partes de la isla que fueron rellenadas sean altamente susceptibles a licuefacción (o pérdida de la firmeza del suelo) cuando hay terremotos. Es precisamente esta misma geología la que pone a estas comunidades, que fueron construidas sobre antiguos humedales, en la mira de un creciente problema ambiental: inundaciones crónicas ocasionadas por el aumento del nivel del mar.

Inundación crónica en el área de la bahía

Ballman estudia el mapa que traje que muestra la extensión de las inundaciones crónicas en Alameda en el año 2100 teniendo en cuenta un escenario intermedio en el aumento del nivel del mar que proyecta un incremento de 4 pies comparado al nivel actual. El mapa es una muestra del último análisis, a escala nacional, de UCS que muestra los riesgos que enfrentan las comunidades del país con el aumento del nivel del mar.

“Aquí está la escuela de mi hijo”, dice apuntando hacia una parcela de 12 acres de tierra que aparece casi completamente inundada en mi mapa. Con este escenario intermedio los edificios de la escuela están a salvo y son principalmente los campos deportivos los que se inundan frecuentemente.

En esta ocasión, no traje un mapa de inundaciones crónicas con un escenario alto que proyecta un aumento de 6.5 pies del nivel del mar para finales de siglo para que Ballman lo viera. Ese mapa muestra que si no logramos reducir las emisiones, y continuamos al mismo paso de aumento del nivel del mar, para finales de siglo los edificios de la escuela de su hijo se inundarán, en promedio, cada dos semanas. Aunque este escenario parece lejano, el hijo de Íngrid vivirá para ese entonces. Más aún, estos impactos podrían adelantarse.

El nivel del mar está aumentando más lentamente en la costa oeste que en la mayor parte de las costas este y del Golfo, lo cual significa que la mayoría de las comunidades californianas tendrán más tiempo para planear su respuesta ante el aumento del nivel del mar.

Ciertamente, para el año 2060, cuando las costas del este y del Golfo cuenten con 270 a 360 comunidades donde el 10% o más de la tierra utilizable se inunda crónicamente, la costa del oeste solamente tendrá 2 o 3. Dado que el área de la bahía está densamente poblada, sin embargo, aún pequeños cambios en el alcance de las mareas podría afectar a mucha gente.

Tan pronto como el año 2035, teniendo en cuenta un escenario intermedio del aumento del nivel del mar, los barrios de la isla Bay Farm, Alameda, Redwood Shores, Sunnyvale, Alviso, Corte Madera y Larkspur vivirán inundaciones 26 veces al año o más (este es el umbral que ha definido UCS para catalogar las áreas que sufren inundaciones crónicas).

Para el año 2060, el número de barrios afectados ascendería hasta incluir Palo Alto, East Palo Alto y otras zonas a lo largo del corredor entre San Francisco y Silicon Valley.

Para el año 2100, el mapa de áreas crónicamente inundadas alrededor de la bahía es muy parecido al mapa de áreas que previamente fueron humedales.

Para el año 2100, en un escenario intermedio del aumento del nivel de mar, muchos vecindarios del área de la bahía experimentarían inundaciones 26 veces o más al año. Muchas de estas áreas crónicamente inundadas fueron originalmente humedales de mareas.

Vivienda asequible en Alameda

Como en muchas otras comunidades del área de la bahía, Alameda ha luchado para mantenerse a la altura de la demanda de la vivienda, particularmente la vivienda asequible para familias de ingresos bajos y medios, ante el crecimiento de la población en la región.

En los últimos 10 a 15 años, en grandes extensiones de la costa del noroeste de la isla se han desarrollado complejos de apartamentos y condominios. Conduciendo por las últimas construcciones y echando un vistazo a mi mapa de futuras zonas de inundaciones crónicas, me sentí perpleja por la superposición.

En un escenario alto, los barrios construidos solamente hace 10 o 15 años se inundarán regularmente para el año 2060. En este mismo escenario, para final de siglo, las vías principales que rodean algunos de las últimas construcciones se inundarán.

A pesar de que es necesario construir más unidades residenciales en el área de la bahía para aliviar la creciente crisis de vivienda, uno se pregunta, ¿cuánto tiempo serán lugares viables para vivir las casas hoy en construcción? Ballman entiende la magnitud del problema y dice simplemente, “hay cientos de personas mudándose a lugares que estarán bajo el agua”.

Muchos de los barrios recién construidos en Alameda enfrentarían frecuentes inundaciones en la segunda mitad del siglo con índices intermedios o altos de aumento del nivel de mar.

“Algunos de los lugares más accesibles para vivir”, dice Andy Gunther de Bay Area Ecosystems Climate Consortium, “son los lugares más vulnerables al aumento del nivel del mar, incluyendo Pinole, East Palo Alto y West Oakland”. Muchas de estas comunidades son comunidades de bajos recursos pertenecientes a minorías étnicas y raciales, que tienen que lidiar con la falta de inversión en sus barrios, y quienes, por lo tanto, tendrán menos recursos para enfrentar el aumento del nivel del mar.

Medidas adoptadas por el área de la bahía de San Francisco con miras al aumento del nivel del mar

Cómo les vaya a los barrios del área de la bahía, tanto a los más baratos como a los más caros, ante el aumento del nivel del mar dependerá en parte de las políticas locales, estatales y federales diseñadas para enfrentar el cambio climático. Un buen primer paso sería detener las construcciones en lugares en los que se proyecta que estarán crónicamente inundados en el transcurso de nuestras vidas.

Pero para las comunidades de la bahía hay buenas noticias en medio de la adversidad: muchas tienen décadas para planear como enfrentarán los cambios venideros mientras las comunidades del Golfo y de la costa Atlántica tan solo cuentan con 20 o 30 años para tomar estas decisiones. California es conocido por su liderazgo en el medioambiente, que ha conducido a lo que Gunther llama “un increíble mosaico” de medidas de adaptación ante el aumento del nivel del mar.

Aquí tenemos algunas de las muchas piezas del creciente trabajo del “mosaico” de medidas de adaptación:

  • El año pasado, cuando la ley 2800 fue aprobada, el Gobernador de California Jerry Brown creó el Climate-Safe Infrastructure Working Group que busca integrar un rango de posibles escenarios climáticos al diseño y planeación de infraestructura.
  • La ciudad de San Francisco desarrolló guías para planear la ciudad pensando en el aumento del nivel del mar.
  • Con subsidio de la Agencia de Protección al Medioambiente (EPA, por sus siglas en inglés) el Novato Watershed Program está aprovechando los procesos naturales para reducir los riesgos de inundaciones a lo largo de Novato Creek.
  • El Instituto del Estuario de San Francisco (SFEI, por sus siglas en inglés) está trabajando para entender la historia natural de San Francisquito Creek, cerca de Palo Alto, y de East Palo Alto con la finalidad de desarrollar estructuras de control de inundaciones y metas de restauración funcionales y sostenibles.
  • El Santa Clara Valley Water District está programado para empezar a trabajar este verano para mejorar el drenaje de los canales del este y el oeste de Sunnyvale propensos a inundaciones y a reducir los riesgos de inundaciones en 1,600 hogares. El distrito también está abordando los problemas de inundaciones por mareas en cooperación con el Cuerpo de Ingenieros del Ejército de los Estados Unidos.
  • Como parte de sus esfuerzos para afrontar el aumento del nivel del mar, el condado de San Mateo instaló visores de realidad virtual a lo largo de las orillas del mar para involucrar al público en una discusión sobre cómo el aumento del nivel del mar afectaría su comunidad.
  • A nivel regional, la Bay Conservation and Development Commission colabora con la Administración Nacional Oceánica y Atmosférica (NOAA, por sus siglas en inglés) y otras agencias locales, estatales y federales en el proyecto Adapting to Rising Tides, que proporciona información, herramientas y orientación para organizaciones que buscan soluciones a los retos que trae el cambio climático.
  • La competencia Resilient by Design para el área de la bahía reúne a ingenieros, miembros de la comunidad y diseñadores quienes conjuntamente diseñan soluciones para enfrentar las consecuencias del aumento del nivel del mar.

En el sur de la bahía de San Francisco se han propuesto o están en marcha un número de proyectos para la protección de la costa. Fuentes: Instituto del Estuario de San Francisco y Consorcio de Cambio Climático de los Ecosistemas del Área de la Bahía.

Respuesta regional ante el aumento del nivel del mar

Gunther menciona que en el tema del aumento del nivel del mar “aún estamos lidiando con lo que hay que hacer, pero todas las ciudades, estados, condados y distritos especiales están comprometidos”. Con cientos de comunidades costeras en el país enfrentando inundaciones crónicas que, en las próximas décadas, exigirán cambios transformadores en nuestra forma de vida costera, la coordinación regional, junto con el compromiso estatal y federal, será crítica para abordar los difíciles retos por venir. De otra forma, las comunidades con menos recursos para adaptarse a los riesgos del aumento del nivel del mar corren el riesgo de quedarse atrás.

“Está emergiendo una respuesta regional ante el aumento del nivel del mar”, dice Gunther, y la medida AA recientemente aprobada por votación puede estar entre los primeros indicadores de respuesta regional. En el año 2016, los votantes de los nueve condados que rodean la bahía de San Francisco aprobaron la medida AA, que se enfoca en la restauración de los humedales de la bahía.

Gunther dice que estos esfuerzos de más de $500 millones de dólares podrían probar ser “uno de los esfuerzos más visionarios de protección a las inundaciones de nuestros tiempos”. La aprobación de la medida AA fue particularmente notable porque constituyó un mandato no de una comunidad o un condado, sino de nueve condados de la bahía.

Hacia un área de la bahía sostenible

Por más de 150 años, oleadas de personas han venido a la bahía buscando fortuna, y luego han partido con el cambio de las industrias. El maravilloso paisaje deja una marca imborrable en todos nosotros, así como nosotros hemos dejado una marca en él, alterando por siempre la costa y los ecosistemas de la bahía.

Para aquellos de nosotros, como Ingrid Ballman y como yo, quienes hemos echado raíces y estamos viendo crecer a nuestros hijos aquí, la realidad de que no podemos de forma viable proteger cada casa ni cada tramo de la vasta costa de la bahía da que pensar.

A través de toda la bahía van en camino esfuerzos increíbles para hacer que las comunidades sean lugares más seguros y más resistentes para vivir. Aprovechar esa energía a niveles regional y estatal y continuar haciendo cabildeo para solidificar fuertes marcos de resistencia federales, tiene el potencial de hacer del área de la bahía un lugar sostenible y un líder en el nuevo siglo de la adaptación al aumento del nivel del mar al que está entrando nuestra nación.

 

Créditos de imágenes: Pengrin/Flickr; San Francisco Bay Joint Venture; Union of Concerned Scientists/Kristy Dahl; San Francisco Estuary Institute y Bay Area Ecosystems Climate Change Consortium.

Turkey Point: Fire and Explosion at the Nuclear Plant

UCS Blog - All Things Nuclear (text only) -

The Florida Power & Light Company’s Turkey Point Nuclear Generating Station about 20 miles south of Miami has two Westinghouse pressurized water reactors that began operating in the early 1970s. Built next to two fossil-fired generating units, Units 3 and 4 each add about 875 megawatts of nuclear-generated electricity to the power grid.

Both reactors hummed along at full power on the morning of Saturday, March 18, 2017, when problems arose.

The Event

At 11:07 am, a high energy arc flash (HEAF) in Cubicle 3AA06 of safety-related Bus 3A ignited a fire and caused an explosion. The explosion inside the small concrete-walled room (called Switchgear Room 3A) injured a worker and blew open Fire Door D070-3 into the adjacent room housing the safety-related Bus 3B (called Switchgear Room 3B).

A second later, the Unit 3 reactor automatically tripped when Reactor Coolant Pump 3A stopped running. This motor-driven pump received its electrical power from Bus 3A. The HEAF event damaged Bus 3A, causing the reactor coolant pump to trip on under-voltage (i.e., less than the desired 4,160 volts). The pump’s trip triggered the insertion of all control rods into the reactor core, terminating the nuclear chain reaction.

Another second later, Reactor Coolant Pumps 3B and 3C also stopped running. These motor-driven pumps received electricity from Bus 3B. The HEAF event should have been isolated to Switchgear Room 3A, but the force of the explosion blew open the connecting fire door, allowing Bus 3B to also be affected. Reactor Coolant Pumps 3B and 3C tripped on under-frequency (i.e., alternating current at well below the desired 60 cycles per second). Each Turkey Point unit has three Reactor Coolant Pumps that force the flow of water through the reactor core, out the reactor vessel to the steam generators where heat gets transferred to a secondary loop of water, and then back to the reactor vessel. With all three pumps turned off, the reactor core would be cooled by natural circulation. Natural circulation can remove small amounts of heat, but not larger amounts; hence, the reactor automatically shuts down when even one of its three Reactor Coolant Pumps is not running.
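To make the two trip conditions concrete, here is a minimal, illustrative sketch of how under-voltage and under-frequency checks on a pump’s supply bus might be expressed; the thresholds are hypothetical round numbers I chose for illustration, not Turkey Point’s actual protection setpoints:

```python
# Illustrative sketch of under-voltage / under-frequency pump trip checks.
# Thresholds are hypothetical; they are not Turkey Point's actual setpoints.
NOMINAL_VOLTS = 4160.0   # desired bus voltage
NOMINAL_HZ = 60.0        # desired alternating-current frequency

def undervoltage(bus_volts, fraction=0.75):
    """True if the bus voltage has fallen well below nominal."""
    return bus_volts < fraction * NOMINAL_VOLTS

def underfrequency(bus_hz, fraction=0.95):
    """True if the AC frequency has fallen well below 60 cycles per second."""
    return bus_hz < fraction * NOMINAL_HZ

def reactor_trip_required(pump_bus_readings):
    """A reactor trip is called for if any reactor coolant pump's bus shows
    under-voltage or under-frequency (i.e., that pump stops running)."""
    return any(undervoltage(v) or underfrequency(hz) for v, hz in pump_bus_readings)

# Example: one bus faulted (low voltage); the other pumps' supply at degraded frequency.
print(reactor_trip_required([(500.0, 60.0), (4160.0, 55.0), (4160.0, 55.0)]))  # True
```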

Shortly before 11:09 am, the operators in the control room received word about a fire in Switchgear Room 3A and the injured worker. The operators dispatched the plant’s fire brigade to the area. At 11:19 am, the operators declared an emergency due to a “Fire or Explosion Affecting the Operability of Plant Systems Required to Establish or Maintain Safe Shutdown.”

At 11:30 am, the fire brigade reported to the control room operators that there was no fire in either Switchgear Room 3A or 3B.

Complication #1

Figure 1 shows the Switchgear Building at the right end of the Unit 3 turbine building. Switchgear Rooms 3A and 3B are located adjacent to each other within the Switchgear Building. The safety-related buses inside these rooms take 4,160-volt electricity from the main generator, the offsite power grid, or an EDG and supply it to safety equipment needed to protect workers and the public from transients and accidents. Buses 3A and 3B are fully redundant; either can power enough safety equipment to mitigate accidents.

Fig. 1 (Source: Nuclear Regulatory Commission)

To guard against a single fire disabling both Bus 3A and Bus 3B despite their proximity, each switchgear room is designed as a 3-hour fire barrier. The floor, walls, and ceiling of each room are made from reinforced concrete. The opening between the rooms has a normally closed door with a 3-hour fire resistance rating.

Current regulatory requirements do not require the rooms to have blast-resistant fire doors unless the doors are within 3 feet of a potential explosive hazard. (I could give you three guesses why all the values are 3’s, but a correct guess would divulge one-third of nuclear power’s secrets.) Cubicle 3AA06, which experienced the HEAF event, was 14.5 feet from the door.

Fire Door D070-3, presumably unaware that it was well outside the 3-foot danger zone, was blown open by the HEAF event. The opened door created the potential for a single fire to disable Buses 3A and 3B, plunging the site into a station blackout. Fukushima reminded the world why it is best to stay out of the station blackout pool.

Complication #2

The HEAF event activated all eleven fire detectors in Switchgear Room 3A and activated both of the very early warning fire detectors in Switchgear Room 3B. Activation of these detectors sounded alarms at Fire Alarm Control Panel 3C286, which the operators acknowledged. These detectors comprise part of the plant’s fire detection and suppression systems intended to extinguish fires before they cause enough damage to undermine nuclear safety margins.

But workers failed to reset the detectors and restore them to service until 62 hours later. Bus 3B provided the only source of electricity to safety equipment after Bus 3A was damaged by the HEAF event. The plant’s fire protection program required that Switchgear Room 3B be protected by the full array of fire detectors or by a continuous fire watch (i.e., workers assigned to the area to immediately report signs of smoke or fire to the control room). The fire detectors were out of service for 62 hours after the HEAF event, and the continuous fire watches were put in place late.

Workers were in Switchgear Room 3B for nearly four hours after the HEAF event performing tasks like smoke removal. But after they left the area, a continuous fire watch was not posted until 1:15 pm on March 19, the day following the HEAF event. And those fire-watch workers were placed in Switchgear Room 3A, not in Switchgear Room 3B, which housed the bus that needed to be protected.

Had a fire started in Switchgear Room 3B, neither the installed fire detectors nor the human fire detectors would have alerted control room operators. The lights going out on Broadway, or whatever they call the main avenue at Turkey Point, might have been their first indication.

Complication #3

At 12:30 pm on March 18, workers informed the control room operators that the HEAF event damaged Bus 3A such that it could not be re-energized until repairs were completed. Bus 3A provided power to Reactor Coolant Pump 3A and to other safety equipment like the ventilation fan for the room containing Emergency Diesel Generator (EDG) 3A. Due to the loss of power to the room’s ventilation fan, the operators immediately declared EDG 3A inoperable.

EDGs 3A and 3B are the onsite backup sources of electrical power for safety equipment. When the reactor is operating, the equipment is powered by electricity produced by the main generator as shown by the green line in Figure 2. When the reactor is not operating, electricity from the offsite power grid flows in through transformers and Bus 3A to the equipment as indicated by the blue line in Figure 2. When under-voltage or under-frequency is detected on their respective bus, EDG 3A and 3B will automatically start and connect to the bus to supply electricity for the equipment as shown by the red line in Figure 2.

Fig. 2 (Source: Nuclear Regulatory Commission with colors added by UCS)

Very shortly after the HEAF event, EDG 3A automatically started due to under-voltage on Bus 3A. But protective relays detected a fault on Bus 3A and prevented electrical breakers from closing to connect EDG 3A to Bus 3A. EDG 3A was operating, but disconnected from Bus 3A, when the operators declared it inoperable at 12:30 pm due to loss of the ventilation fan for its room.
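The start-and-connect sequence described above can be summarized in a simplified sketch (my own illustration, not the plant’s actual relay scheme): an EDG auto-starts on bus under-voltage, but its output breaker is only allowed to close if no fault is detected on the bus it would feed.

```python
# Simplified, hypothetical sketch of EDG auto-start and breaker-closure permissives;
# not the actual Turkey Point relay logic.
def edg_response(bus_undervoltage: bool, bus_fault_detected: bool):
    """Return (edg_running, edg_connected) for one emergency diesel generator."""
    edg_running = bus_undervoltage                           # EDG auto-starts on under-voltage
    edg_connected = edg_running and not bus_fault_detected   # a bus fault blocks breaker closure
    return edg_running, edg_connected

# March 18 condition for EDG 3A: Bus 3A de-energized (under-voltage) but faulted.
print(edg_response(bus_undervoltage=True, bus_fault_detected=True))  # (True, False)
```

That “running but disconnected” state is exactly the condition the operators then had to weigh against the loss of the room’s ventilation fan.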

But the operators allowed the “inoperable” EDG 3A to continue operating until 1:32 pm. Given that (a) its ventilation fan was not functioning, and (b) it was not even connected to Bus 3A, they should not have allowed this inoperable EDG to keep running for over an hour.

Complication #4

A few hours before the HEAF event on Unit 3, workers removed High Head Safety Injection (HHSI) pumps 4A and 4B from service for maintenance. The HHSI pumps are designed to transfer makeup water from the Refueling Water Storage Tank (RWST) to the reactor vessel during accidents that drain cooling water from the vessel. Each unit has two HHSI pumps; only one HHSI pump needs to function in order to provide adequate reactor cooling until the pressure inside the reactor vessel drops low enough to permit the Low Head Safety Injection pumps to take over.

The day before, workers had found a small leak from a test line downstream of the common pipe for the recirculation lines of HHSI Pumps 4A and 4B (circled in orange in Figure 3). The repair work was estimated to take 18 hours. Both pumps had to be isolated in order for workers to repair the leaking section.

Pipes cross-connect the HHSI systems for Units 3 and 4 such that HHSI Pumps 3A and 3B (circled in purple in Figure 3) could supply makeup cooling water to the Unit 4 reactor vessel when HHSI Pumps 4A and 4B were removed from service. The operating license allowed Unit 4 to continue running for up to 72 hours in this configuration.

Fig. 3 (Source: Nuclear Regulatory Commission with colors added by UCS)

Before removing HHSI Pumps 4A and 4B from service, operators took steps to protect HHSI Pumps 3A and 3B by further restricting access to the rooms housing them and posting caution signs at the electrical breakers supplying electricity to these motor-driven pumps.

But operators did not protect Buses 3A and 3B that provide power to HHSI Pumps 3A and 3B respectively. Instead, they authorized work to be performed in Switchgear Room 3A that caused the HEAF event.

The owner uses a computer program to characterize the risk of actual and proposed plant operating configurations. Workers can enter components that are broken and/or out of service for maintenance, and the program bins the associated risk into one of three color bands: green, yellow, and red, in order of increasing risk. With only HHSI Pumps 4A and 4B out of service, the program determined the risk for Units 3 and 4 to be in the green range. After the HEAF event disabled HHSI Pump 3A, the program determined that the risk for Unit 4 increased to nearly the green/yellow threshold while the risk for Unit 3 moved solidly into the red band.
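As a loose analogy only (the details and numbers in the owner’s risk model are not public; the metric and thresholds below are invented for illustration), such a tool maps an estimated risk value for the current equipment line-up into color bands:

```python
# Loose, hypothetical analogy to a configuration risk-binning tool.
# The metric and thresholds are invented for illustration only.
def risk_color(estimated_risk, green_max=1e-5, yellow_max=1e-4):
    """Bin an estimated risk metric into green/yellow/red bands (increasing risk)."""
    if estimated_risk <= green_max:
        return "green"
    if estimated_risk <= yellow_max:
        return "yellow"
    return "red"

# Example: the band worsens as the estimated risk grows.
print(risk_color(4e-6))   # -> green
print(risk_color(9e-5))   # -> yellow
print(risk_color(5e-4))   # -> red
```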

The Cause(s)

On the morning of Saturday, March 18, 2017, workers were wrapping a fire-retardant material called Thermo-Lag around electrical cabling in the room housing Bus 3A. Meshing made from carbon fibers was installed to connect sections of Thermo-Lag around the cabling for a tight fit. To minimize the amount of debris created in the room, workers cut the Thermo-Lag material to the desired lengths at a location outside the room about 15 feet away. But they cut and trimmed the carbon fiber mesh to size inside the room.

Bus 3A is essentially the nuclear-sized equivalent of a home’s breaker panel. Open the panel and one can open a breaker to stop the flow of electricity through that electrical circuit within the house. Bus 3A is a large metal cabinet. The cabinet is made up of many cubicles housing the electrical breakers controlling the supply of electricity to the bus and the flow of electricity to components powered by the bus. Because energized electrical cables and components emit heat, the metal doors of the cubicles often have louvers to let hot air escape.

The louvers also allow dust and small airborne debris (like pieces of carbon fiber) to enter the cubicles. The violence of the HEAF event (a.k.a. the explosion) destroyed some of the evidence at the scene, but carbon fiber pieces were found inside the cubicle where the HEAF occurred. The carbon fiber was conductive, meaning that it could transport electrical current. Carbon fiber pieces inside the cubicle, according to the NRC, “may have played a significant factor in the resulting bus failure.”

Further evidence inside the cubicle revealed that the bolts for the connection of the “C” phase to the bottom of the panel had been installed backwards. These backwards bolts were the spot where high-energy electrical current flashed over, or arced, to the metal cabinet.

As odd as it seems, installing fire retardant materials intended to lessen the chances that a single fire compromises both electrical safety systems started a fire that compromised both electrical safety systems.

The Precursor Events (and LEAF)

On February 2, 2017, three electrical breakers unexpectedly tripped open while workers were cleaning up after removing and replacing thermal insulation in the new electrical equipment room.

On February 8, 2017, “A loud bang and possible flash were reported to have occurred” in the new electrical equipment room as workers were cutting and installing Thermo-Lag. Two electrical breakers unexpectedly tripped open. The equipment involved used 480 volts or less, making this a low energy arc fault (LEAF) event.

NRC Sanctions

The NRC dispatched a special inspection team to investigate the causes and corrective actions of this HEAF event. The NRC team identified the following apparent violations of regulatory requirements that the agency is processing to determine the associated severity levels of any applicable sanctions:

  • Failure to establish proper fire detection capability in the area following the HEAF event.
  • Failure to properly manage risk by allowing HHSI Pumps 4A and 4B to be removed from service and then allowing work inside the room housing Bus 3A.
  • Failure to implement effective Foreign Material Exclusion measures inside the room housing Bus 3A that enabled conductive particles to enter energized cubicles.
  • Failure to provide adequate design control in that equipment installed inside Cubicle 3AA06 did not conform to vendor drawings or engineering calculations.

UCS Perspective

This event illustrates both the lessons learned and the lessons unlearned from the fire at the Browns Ferry Nuclear Plant in Alabama that happened almost exactly 42 years earlier. The lesson learned was that a single fire could disable primary safety systems and their backups.

The NRC adopted regulations in 1980 intended to lessen the chances that one fire could wreak so much damage. The NRC found in the late 1990s that most of the nation’s nuclear power reactors, including those at Browns Ferry, did not comply with these fire protection regulations. The NRC amended its regulations in 2004 giving plant owners an alternative means for managing the fire hazard risk. Workers were installing fire protection devices at Turkey Point in March 2017 seeking to achieve compliance with the 2004 regulations because the plant never complied with the 1980 regulations.

The unlearned lesson involved sheer and utter failures to take steps after small miscues to prevent a bigger miscue from happening. The fire at Browns Ferry was started by a worker using a lit candle to check for air leaking around sealed wall penetrations. The candle’s flame ignited the highly flammable sealant material. The fire ultimately damaged cables for all the emergency core cooling systems on Unit 1 and most of those systems on Unit 2. Candles had routinely been used at Browns Ferry and other nuclear power plants to check for air leaks. Small fires had been started, but had always been extinguished before causing much damage. So, the unsafe and unsound practice was continued until it very nearly caused two reactors to melt down. Then and only then did the nuclear industry change to a method that did not stick open flames next to highly flammable materials to see if air flow caused the flames to flicker.

Workers at Turkey Point were installing fire retardant materials around cabling. They cut some material in the vicinity of its application. On two occasions in February 2017, small debris caused electrical breakers to trip open unexpectedly. But they continued the unsafe and unsound practice until it caused a fire and explosion the following month that injured a worker and risked putting the reactor into a station blackout event. Then and only then did the plant owner find a better way to cut and install the material. That must have been one of the easiest searches in nuclear history.

The NRC – Ahead of this HEAF Curveball

The NRC and its international regulatory counterparts have been concerned about HEAF events in recent years. During the past two annual Regulatory Information Conferences (RICs), the NRC conducted sessions about fire protection research that covered HEAF. For example, the 2016 RIC included presentations from the Japanese and American regulators about HEAF. These presentations included videos of HEAF events conducted under lab conditions. The 2017 RIC included presentations about HEAF by the German and American regulators. Ironically, the HEAF event at Turkey Point occurred just a few days after the 2017 RIC session.

HEAF events were not fully appreciated when regulations were developed and plants were designed and built. The cooperative international research efforts are defining HEAF events faster than could be accomplished by any country alone. The research is defining factors that affect the chances and consequences of HEAF events. For example, the research indicates that aluminum (such as in the cable trays holding the energized electrical cables) can be ignited during a HEAF event, significantly adding to the magnitude and duration of the event.

As HEAF research has defined risk factors, the NRC has been working with nuclear industry representatives to better understand the role these factors may play across the US fleet of reactors. For example, the NRC recently obtained an inventory of aluminum usage around high-voltage electrical equipment.

The NRC needs to understand HEAF factors as fully as practical before it can determine if additional measures are needed to manage the risk. The NRC is also collecting information about potential HEAF vulnerabilities. Collectively, these efforts should enable the NRC to identify any nuclear safety problems posed by HEAF events and to implement a triaged plan that resolves the biggest vulnerabilities sooner rather than later.

New World Heritage Sites Already Under Threat From Climate Change

UCS Blog - The Equation (text only) -

At least four of the new World Heritage sites designated by UNESCO at the annual meeting of the World Heritage Committee this week are under serious threat from climate change.

In all, 21 new sites were added to the World Heritage list, and although most are not immediately vulnerable to climate change, probably all are already experiencing local climatic shifts, and most will be significantly impacted within a few decades unless action is taken soon to reduce heat-trapping emissions globally. Climate change is a fast-growing problem for World Heritage and one that the World Heritage Committee needs to take much more seriously than it currently is.

Climate is the biggest global threat to World Heritage

In 2014, the International Union for the Conservation of Nature (IUCN) identified climate change as the biggest potential threat to natural World Heritage sites and a study by the Potsdam Institute for Climate Impact Research and the University of Innsbruck in Austria found 136 of 700 cultural World Heritage sites to be at long-term risk from sea level rise. In 2016, a joint UCS, UNESCO, UNEP report concluded that “climate change is fast becoming one of the most significant risks for World Heritage worldwide”. This year, UNESCO launched two new reports highlighting the dramatic climate threat to coral reefs in World Heritage sites, and to sites in the Arctic.

The World Heritage Committee needs to address climate change

There is a dilemma here. The World Heritage Convention is a remarkable international instrument that was set up to identify and protect both natural and cultural sites of “outstanding universal value” for future generations. However, when the convention was adopted in 1972, the threat of global climate change was nowhere on political or scientific radar screens, and so the mechanisms of the treaty were geared to addressing local threats such as water pollution, mining and quarrying, infrastructure development, and land use change.

The convention hasn’t yet effectively responded to modern climate change risks. If a World Heritage site is threatened by coal mining, tourism pressure or suburbanization, it can be placed on the list of sites in danger, and then the responsibility lies with the host country to implement management actions reducing the threat. But no site has yet been placed on that list because of climate change.

Meanwhile, places at serious risk from climate change are still being added as new World Heritage sites. UCS plans to work with UNESCO’s two primary international non-profit technical advisors, IUCN and ICOMOS (International Council on Monuments and Sites) to address this issue at next year’s World Heritage Committee meeting.

Four newly designated World Heritage sites vulnerable

Here are the four newly designated sites already being impacted by climate change:

Lake District, United Kingdom

The Lake District. Photo: Adam Markham

A spectacular landscape of glaciated valleys and lakes, this region was the cradle of the English Romantic movement led by the poets William Wordsworth and Samuel Taylor Coleridge, and home to the authors Beatrix Potter and John Ruskin. Its agro-pastoral landscape dotted with hill farms and stone walls is the result of hundreds of years of sheep farming, and the Lake District is now one of Britain’s most popular tourism destinations.

Unfortunately, the area is already experiencing warmer, wetter winters and more intense extreme weather events. Disastrous floods in 2009 washed away old bridges and footpaths, and unprecedented drought in 2010-12 affected water supply and water quality in lakes and rivers. Conservation managers predict that species at the edge of their ranges in the Lake District, including cold-water fish such as the Arctic char, could become locally extinct, peat habitats may dry out, woodland species composition will change and invasive alien species like Japanese knotweed will proliferate in response to changing conditions.

Kujataa, Greenland (Denmark)

Ruined Norse buildings at Kujataa. Photo: UNESCO/Garðar Guðmundsson

Kujataa in southern Greenland holds archaeological evidence of the earliest introduction of farming to the Arctic by Norse settlers from Iceland, as well as traces of the earlier hunter-gatherers who preceded them.

Today, it’s an exceptionally well preserved cultural landscape of fields and pastures from medieval times through to the 18th Century, representing a combination of Norse and Inuit subsistence farming and sea mammal hunting. However, in common with the rest of Greenland, the area is experiencing a rapidly warming climate.

Coastal erosion exacerbated by sea level rise and more intense storms can damage historic monuments and archaeology. Elsewhere in Greenland, warming temperatures have been shown to hasten decomposition of organic material at archaeological sites, including wood, fabrics and animal skins – a growing problem throughout the Arctic. Warming at Kujataa is also expected to increase the growth of shrubby vegetation and alter agricultural cycles, potentially necessitating changes in cropping strategies by local farmers.

Landscapes of Dauria, Mongolia & Russian Federation

Daurien steppe wetlands. Photo: UNESCO/O.Kirilyu

This new transboundary World Heritage site covers a huge area of undisturbed steppe, and is a globally important ecoregion. Home to nomadic Mongolian herders who have used the grasslands for over 3,000 years, the Daurian steppes are also rich in biodiversity. They are important for millions of migratory birds and home to almost all the world’s Mongolian gazelle population as well as threatened species such as the red-crowned crane and swan goose.

According to a climate impacts assessment by IUCN, the mean annual temperature of the region has already risen by 2°C and further climate change is expected to bring longer and more severe droughts, reducing grassland productivity and changing wetlands dramatically in what is already a landscape of cyclical weather extremes. Desertification and wildfires worsened by climate change are adding further environmental pressures.

‡Khomani Cultural Landscape, Republic of South Africa

‡Khomani San cultural heritage has at last been recognized. Photo: UNESCO/Francois Odendaal Productions

The ‡Khomani San (or Kalahari bushmen) are the indigenous first people of the Kalahari Desert, but they were forced from their land when the Kalahari Gemsbok National Park (now part of the Kgalagadi Transfrontier Park) was created in 1931. The displacement led to dispersion of the ‡Khomani San people through South Africa, Namibia and Botswana and almost killed off many traditional cultural practices as well as ancient languages such as N|u.

After apartheid ended, the San were successful in settling a land claim and the new World Heritage site, which coincides with the boundaries of the national park, recognizes their thousands of years of traditional use of this land, their close connection to its natural systems and their right to co-manage the preserve.

Unfortunately, climate change presents a new challenge. The Intergovernmental Panel on Climate Change (IPCC) has projected accelerated warming and a drying trend for this area of southern Africa; in recent decades, conversion of grassland into savanna, with more un-vegetated soil, has been reported, and increased desertification is a growing threat. Kalahari Gemsbok National Park is the fastest warming park in South Africa, and scientists have recorded a rise in mean maximum annual temperature there of nearly 2°C since 1960.

 

President Trump’s Budget Leaves Workers Behind

UCS Blog - The Equation (text only) -

Budgets reflect priorities; they also reflect values. And the Trump Administration has signaled where it stands loud and clear via its agency appointments (Scott Pruitt, need we say more?) and its FY18 budget proposals. We have already said plenty about what the proposed cuts to the EPA budget mean for public health and the environment.

A recap is available here, here, here, and here. Many others are also ringing that alarm bell (here, here, here).

Less in the public eye is the Administration’s budget proposals for agencies that protect another critical resource—our nation’s workforce! We do have some indication of where Congress and the Administration stand on worker health and safety (here, here)—and it’s not reassuring.

Trump budget puts worker health on chopping block

Let’s cut to the chase. President Trump’s FY18 budget proposals are not good for working people; these are our loved ones, our families’ breadwinners. They are also essential contributors to powering our economy…you know, making America great.

Here’s a quick snapshot of the cuts our President has proposed for our primary worker health and safety agencies—the agencies that safeguard and protect our nation’s workforce:

  • Occupational Safety and Health Administration (OSHA). $9.5 million budget cut; staffing cuts in enforcement program; elimination of safety and health training grants for workers. OSHA was created by Congress to “assure safe and healthful working conditions for working men and women.” It is our nation’s bulwark in protecting workers by setting and enforcing standards and providing training, outreach, education and assistance to employers and workers. At current budget levels, OSHA can only inspect every workplace in the United States once every 159 years.
  • National Institute for Occupational Safety and Health (NIOSH). An astounding 40% budget cut. NIOSH is our nation’s primary federal agency responsible for conducting research, transferring that knowledge to employers and workers, and making recommendations for the prevention of work-related illness and injury. These draconian cuts will essentially eliminate academic programs that train occupational health and safety professionals (occupational medicine physicians and nurses, industrial hygienists, workplace safety specialists) who serve both employers and workers. They will also eliminate extramural research programs that conduct, translate, or evaluate research, as well as state surveillance programs for occupational lead poisoning, silicosis, and other diseases.
  • Mine Safety and Health Administration (MSHA). $3 million cut to the agency’s budget on top of a previous $8 million cut. This will reduce the number of safety inspections in U.S. coal mines by nearly 25%. MSHA was established in 1977 to prevent death, illness, and injury from mining and to promote safe and healthful workplaces for U.S. miners. (The first federal mine safety statute was passed in 1891.)
Some context

My reflections on this year’s Worker Memorial Day pretty much capture it. But here’s a quick summary:

  • In 2015, 4,836 U.S. workers died from work-related injuries, the highest number since 2008. That’s about 13 people every day! In the United States!
  • Deaths from work-related occupational disease—like silicosis, coal workers’ pneumoconiosis (black lung), occupational cancer, etc.—are not well captured in data surveillance systems. It is estimated that another 50,000-60,000 died from occupational diseases—an astounding number. And, for many, their deaths come years after suffering debilitating and painful symptoms.
  • And then there are the nonfatal injuries and illnesses. Employers reported approximately 2.9 million of them in private industry workers in 2015; another 752,600 injury and illness cases were reported among the approximately 18.4 million state and local government workers.
  • There were nine fatalities and 1,260 reportable cases of workplace injury in the US coal mining industry in 2016.
Speak out opportunity this week

The House subcommittee on Labor–HHS Appropriations has scheduled the markup on the FY 2018 Labor–HHS funding bill for Thursday, July 13, 2017. This is the bill that funds OSHA, MSHA, and NIOSH, as well as the National Institute for Environmental Health Sciences (NIEHS) and the National Labor Relations Board (NLRB). Now is the time to give the appropriators an earful on these proposed cuts—cuts that seriously endanger workers’ safety and health, essentially leaving them behind. Reach out to members of the House Appropriations subcommittee and full committee and urge them to oppose these cuts to our worker health and safety agencies. Also urge them to oppose any “poison pill riders” that would block or delay the implementation of worker protection rules.

Here’s a list of members of the Labor–HHS subcommittee. Members of the full Appropriations Committee are listed here.

How the Senate Healthcare Bill Bolsters the Tanning Industry’s Misinformation Campaign

UCS Blog - The Equation (text only) -

The American Suntanning Association (ASA) and the Indoor Tanning Association (ITA) are trade organizations representing the interests of indoor tanning manufacturers, suppliers, and salon owners. The product that these trade organizations sell to customers is artificial UV radiation. The ASA has called itself a “science-first organization” and spouts off so-called scientific information on its website, TanningTruth.com, designed to correct “misinformation” about the harms of indoor tanning.

One problem, though: the science doesn’t support their position. In May 2013, several scientific researchers wrote in JAMA Dermatology about the ASA’s biased scientific agenda and how its unscientific messages can negatively impact the public: “Clinicians should be aware of this new counter-information campaign by the [indoor tanning] industry and continue to inform their patients about the risks of [indoor tanning] and the existence of potentially misleading information from the ASA and other organizations. Scientists and clinicians have a duty to remain cognizant of such issues and to voice concerns when agenda-based research is presented in order to “defend and promote” a product with potentially devastating health consequences.”

Like the tobacco, sugar, and fossil fuel industries before it, the indoor tanning industry is refusing to accept its ethical responsibility to inform customers of the harms of its products. Instead, it is actively working to create uncertainty around the science and to question the integrity of the governmental and scientific institutions that are acting in the public’s best interest.

The indoor tanning industry seeks to become “great again”

As President Trump took office, the ASA began using the derivative slogan “Make Indoor Tanning Great Again” in the industry’s monthly magazine, SmartTan. By “great again,” the industry means repealing the Affordable Care Act’s (ACA) inclusion of a 10% tax on indoor tanning services. In its advertisement, the ASA urges readers to join the movement and add to its effort of “building relationships with key policymakers and educating the federal government about our industry’s scientifically supported position.” In the June issue of SmartTan, in an article titled “Axing the Tan Tax,” the author brags that three full-time ASA lobbyists over the course of four years have convinced Congressional leadership (including current HHS chief Tom Price and Vice President Mike Pence) that the American tanning industry has been treated unfairly and that there is science supporting the benefits of non-burning UV exposure.

I’ll let The American Academy of Dermatology (AAD) take this one.

The AAD recommends that individuals obtain vitamin D from a healthy diet, not from unprotected exposure to ultraviolet radiation, because there is “no scientifically validated, safe threshold of UV exposure from the sun or indoor tanning devices that allows for maximal vitamin D synthesis without increasing skin cancer risk.” Anyway, even if there were benefits of UV exposure, the business model of these salons depends on retaining customers throughout the year, not just during the busy season. And more trips to the salon mean an increased risk of burns and unsafe exposure.

The tanning tax has a twofold goal: to help reduce skin cancer risk, especially in young adults, and to raise funds to help pay for implementation of the Act. The science on the association between indoor tanning and increased risk of skin cancer supported the case for the inclusion of this policy measure in President Obama’s healthcare bill. That science can be summed up by the headline on the Centers for Disease Control and Prevention (CDC) page addressing the topic: “indoor tanning is not safe.”

The Surgeon General issued a call to action to prevent skin cancer in 2014, which warned of the risks of indoor tanning. Citing evidence from years of international research on the relationship between indoor tanning and skin cancer, the International Agency for Research on Cancer (IARC), affiliated with the World Health Organization, has placed this type of UV radiation in its most dangerous cancer category for humans, alongside offenders such as radon, asbestos, cigarettes, plutonium, and solar UVR. Incidence of one of the most dangerous types of skin cancer, melanoma, has been rising over the past 30 years. And people who started tanning before age 35 have an increased risk of developing melanoma. Because indoor tanning is popular among adolescents and is a risky behavior to start while young, restricting indoor tanning would improve public health outcomes.

The FDA has proposed a rule restricting indoor tanning bed use to adults over 18 and requiring users to sign a risk acknowledgement certification stating that they understand the risks of indoor tanning. Taxes are also an effective way to curb demand; they have helped decrease cigarette smoking and, more recently, sugar-sweetened beverage consumption. However, the tanning industry has joined the ranks of the tobacco and sugar industries to fuel a misinformation campaign designed to sow doubt about the body of science showing harm and to delay or quash policies that hurt its bottom line.

Shining a light on the tanning industry’s misinformation campaign

The tanning industry has long fought any regulation or control of its messaging and has funneled money to members of Congress to help in these efforts.

Burt Bonn, immediate past president of the ASA, told Smart Tan magazine in February 2017 that “[t]he science has been in our favor from the very beginning…Our opponents have relied on just a few out of hundreds of studies on the risks of UV light to make their case. Nearly all of those studies have been debunked.” He continued, “I think the science is already at a point that it ought to be embarrassing to have someone in the medical profession advise complete and total sunlight abstinence or suggest that a tanning bed operated in a professional tanning salon is a major issue.” Embarrassing? Tell that to the American Academy of Dermatology, the American Academy of Pediatrics, and the American Medical Association, which together represent thousands of experts in their fields and which have recommended that all adults reduce UV exposure and that children under 18 eliminate UV exposure altogether.

The tanning industry has been on fire in its repeated attempts to misrepresent the science on UV exposure in multiple venues. In 2010, the Federal Trade Commission (FTC) charged that the ITA made misleading representations in its advertising and marketing for indoor tanning, including falsely claiming that indoor tanning poses no risk to health, including no risk of skin cancer. The 2010 administrative order from the FTC prohibits the ITA from making representations that imply that indoor tanning does not increase the risk of skin cancer. In March of 2017, the FTC wrote to the ITA informing it that the claims on its website’s FAQ page that “indoor tanning [was] more responsible than outdoor tanning” and that “melanoma was not associated with UV exposure from tanning bed[s]” were not allowed.

Apparently, the failure to remove that language since 2010 was an oversight by the ITA, but those non-scientific bits of information had persisted on its website, and had been picked up and used on third-party websites, for years! Not only are the indoor tanning industry trade associations still distributing their own unscientific materials to convince users of their products’ safety, but they have been using these same arguments to meddle with public messaging and federal policy at the CDC and FDA and in their lobbying for tanning tax relief since 2011.

According to the tanning industry’s January 2015 issue of Smart Tan, the ASA’s legal and lobbying teams succeeded in getting the CDC to remove claims of a 75% increase in melanoma risk from sunbed use from its website, and “the ASA legal team is following appropriate CDC protocol to challenge even more language that we believe is not supported by sound science.” ASA’s Burt Bonn told the industry magazine that the previous leadership of the CDC and the Surgeon General were unwilling to consider the appropriate scientific evidence regarding indoor tanning and seems hopeful that the new directors will be more “open-minded.” He continued, “The federal government is currently treating tanning like smoking, when there’s no science to support that ridiculous comparison. The consumer advocacy campaigns need to stop…The science is overwhelmingly supportive of sunlight and human health, and we currently have an administration that seems to be driven more by politics than current science. But now they are gone and change is coming.” It should not surprise us that the Administration that coined the moronic oxymoron “alternative facts” would be supportive of an industry wed to the abusive use of such things.

The ASA and ITA’s 2016 comment to the FDA opposed its proposed rule that would restrict the sale, distribution, and use of indoor tanning beds to minors on the basis that the science that the agency relied upon is outdated and doesn’t support the “onerous requirements” of the FDA’s rule.

Section 118 of the Better Care Reconciliation Act of 2017 would repeal the 10% excise tax on indoor tanning services established by the 2010 Affordable Care Act. According to the Joint Committee on Taxation, its repeal would reduce revenues by approximately $622 million over ten years and would drastically reduce funding to implement the Affordable Care Act.

There have also been several failed Congressional attempts to repeal the ACA tanning tax, lobbied for by none other than the ITA. In 2015 and 2017, Representative George Holding introduced a bill to repeal the tanning tax after receiving over $6,000 from the Indoor Tanning Association. Representative Michael Grimm had introduced a similar bill in 2011 and 2014 and received over $8,000 in 2012. And now, the indoor tanning industry is cheering the introduction of the provision within the draft Senate healthcare bill, the Better Care Reconciliation Act of 2017, that would repeal the tax.

The truth about indoor tanning risks must inform federal policy

In spite of the tanning industry’s best efforts so far, the government’s messaging on the risks of indoor tanning and the policy measures instituted to reduce its use are working. CDC data from the Youth Risk Behavior Surveillance System found that the share of high school students who used an indoor tanning device decreased by more than half since the ACA and the tanning tax went into effect, from 15.6% in 2009 to 7.3% in 2015. In light of the great body of scientific evidence demonstrating the risks of UV exposure, we should be doing even more to educate and protect teenagers and adults alike from the harms of UV exposure, rather than rolling back important policies aimed at accomplishing just that.

The Senate healthcare bill ignores the science on health impacts of indoor tanning and caves to the tanning industry and their misinformation campaign. It is currently opposed by nearly all scientific voices that work in healthcare. The American Academy of Dermatology opposes the repeal of the tanning tax provision, and the full bill is opposed by the American Medical Association, the American Hospital Association, the Federation of American Hospitals, the American Academy of Pediatrics, the American College of Physicians, the American Academy of Family Physicians, the Association of American Medical Colleges, and the list goes on.

We have the right to make decisions based on accurate scientific information, not information cherry-picked and screened by an industry that stands to profit from our ignorance. We should put the Indoor Tanning Association and the American Suntanning Association in the hot seat (and not the kind you find in a tanning salon) for perpetuating falsehoods about their products that could harm consumers’ health. This goes for the current draft Senate healthcare bill and any other future measures that would limit the amount of information available to consumers about the risks of indoor tanning or otherwise compromise policy solutions aimed at keeping us safe.

Photo: Marco Vertch/CC BY 2.0 (Wikimedia)

Historic Treaty Makes Nuclear Weapons Illegal

UCS Blog - All Things Nuclear (text only) -

Remember this day, July 7, 2017. Today, history was made at the United Nations: the nuclear status quo was put on notice as most of the world stood up and said simply, “Enough.”

(Source: United Nations)

Just hours ago, 122 nations and a dedicated group of global campaigners successfully adopted a legally binding international treaty prohibiting nuclear weapons and making it illegal “to develop, test, produce, manufacture, otherwise acquire, possess or stockpile nuclear weapons or other nuclear explosive devices.” Nuclear weapons now join biological and chemical weapons, land mines, and cluster munitions as weapons explicitly and completely banned under international law.

Our heartfelt gratitude to all who worked tirelessly to make this moment possible, including the International Campaign to Abolish Nuclear Weapons (ICAN), Reaching Critical Will, the governments of Norway, Mexico and Austria (which hosted the three international conferences on the Humanitarian Consequences of Nuclear Weapons that inspired this effort), and so many other nations, civil society organizations, scientists, doctors & other public health professionals and global citizens/activists.

This is a powerful expression of conscience and principle on behalf of humanity from 63 percent of the 193 UN member states—one anchored in the simple truth that nuclear weapons are illegitimate instruments of security. ICAN lays out the imperative quite well:

“Nuclear weapons are the most destructive, inhumane and indiscriminate weapons ever created. Both in the scale of the devastation they cause, and in their uniquely persistent, spreading, genetically damaging radioactive fallout, they are unlike any other weapons. They are a threat to human survival.”

Challenging the status quo is at the heart of most successful mass movements for social and planetary progress. Those who benefit from the status quo never give up easily. Movements to end slavery, give women the right to vote, establish marriage equality in the United States and other examples of momentous social change were first bitterly opposed and derided by opponents as naïve, wrong, out of touch, costly, unachievable, etc.

Nuclear weapons are no different. The United States, Russia and other nuclear-armed and nuclear “umbrella” states chose not to participate in these ban treaty negotiations, and dismissed them outright. Indeed, senior officials in the Obama administration spent years doing verbal gymnastics to align the rhetoric of the president who stood up in Prague pledging to work toward “the peace and security of a world free of nuclear weapons” with outright hostility to the ban treaty. To no one’s surprise, the Trump administration has embraced the Obama administration’s plans to perpetuate the nuclear status quo and has forfeited any role or leadership in this critical discussion.

And don’t even get me started about all of the Washington insiders who believe nuclear deterrence will never fail and we can rely on the sound judgement of a small number of people (most of them men) to prevent global nuclear catastrophe.

The ban treaty effort is meant to provide renewed energy and momentum to the moribund global nuclear disarmament process. It is intended to be a prod to the nuclear-armed signatories of the Nuclear Non-Proliferation Treaty (NPT), which have largely ignored their obligation to pursue nuclear disarmament. It will help revive the NPT and the UN’s Conference on Disarmament, not replace them.

Indeed, most of the world has run out of patience and today they spoke loudly. The treaty will be open for signature in September and one can only hope that this is a true turning point in our effort to save humanity from these most horrible of all weapons.

Tesla Model 3 vs. Chevy Bolt? What You Need to Know Before Buying an Electric Car

UCS Blog - The Equation (text only) -

It’s 90 degrees here in our nation’s capital but it might feel like the winter holiday season to those who reserved a Tesla Model 3. Expected to have a 215-mile range and a sticker price of $35,000 (or $27,500 after the federal tax credit), the Model 3 will compete with the similarly spec’d Chevy Bolt for the prize of cornering the early majority of electric vehicle owners.

No other automaker has a relatively affordable, 200-mile-plus range electric vehicle on the market yet (the next-generation Nissan Leaf will compete too), and one or both of these vehicles may be a pivotal point in the modern shift to electrics. Assuming you’re already sold on the benefits of driving on electricity, here are a few tips to consider if you’re prepping for an electric vehicle.

#1 Prepare your home charging

There are two main options for charging an electric vehicle at home: (1) 120V charging from an ordinary home outlet and (2) 240V charging from either an upgraded home circuit or an existing circuit for a heavy electric appliance like a clothes dryer.

There is also DC fast charging, but that is only applicable to charging on the go and is described in more detail below. Before deciding on how to charge, talk with a couple of licensed electricians to better understand your home’s electrical capacity. Mr. Electric appears to win the Google SEO for “electrician for electric vehicle,” so maybe head there for a start. A rough sketch of how long each charging level takes to recover a day’s driving follows the level descriptions below.

Electric Vehicle Charging Level 1 (120 volts) – about 4-6 miles of range per hour of charge

  • Uses an ordinary wall outlet just like a toaster.
  • Typically won’t require modifications to electric panels or home wiring.
  • Confirm that your home’s electrical circuits are at least 15 or 20-amp, single pole by consulting with a licensed electrician.
  • Slow, but can get the job done if you don’t drive much on a daily basis. If you only need 20 miles of range, for example, getting 20 miles of charge each night is not a problem. For road trips, most EVs can use the faster charging options described below, which make charging pit stops pretty quick.

Electric Vehicle Charging Level 2 (240 volts) – about 10-25 miles of range per hour of charge

  • Installation costs vary, but here’s a 30-amp charger from Amazon that is highly rated and costs around $900, including installation, and here’s one that includes an algorithm to minimize charging emissions and costs.
  • Will likely require a new dedicated circuit from the electric panel to a wall location near the EV parking spot.
  • Consult with a licensed electrician to verify that your home’s electrical panel can accommodate a dedicated two-pole, 30- to 50-amp circuit breaker.

Electric Vehicle Charging Level 3 (aka DC fast charging) (400 volts) – Not for home use, but can charge battery up to 80 percent in about 30 minutes

  • The fastest charging method available, but prohibitively expensive for home use.
  • Some vehicles can get an 80 percent full charge in as little as 30 minutes, depending on the electric vehicle type.
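To see how these rates translate into real life, here is the rough charge-time arithmetic mentioned above, as a minimal sketch. The per-hour rates are midpoints of the ranges quoted for Levels 1 and 2, and the daily mileage is an assumed example; actual charging speed depends on your vehicle’s onboard charger and battery.

# Rough estimate (Python) of how long Level 1 and Level 2 charging take to
# recover a day's driving. The per-hour rates are midpoints of the ranges
# quoted above, and the daily mileage is an assumed example.

MILES_OF_RANGE_PER_HOUR = {
    "Level 1 (120 V)": 5,   # roughly the middle of 4-6 miles per hour
    "Level 2 (240 V)": 17,  # roughly the middle of 10-25 miles per hour
}

DAILY_MILES = 40  # assumed daily driving

for level, rate in MILES_OF_RANGE_PER_HOUR.items():
    hours = DAILY_MILES / rate
    print(f"{level}: about {hours:.1f} hours to recover {DAILY_MILES} miles")

# Prints:
# Level 1 (120 V): about 8.0 hours to recover 40 miles
# Level 2 (240 V): about 2.4 hours to recover 40 miles

In other words, an ordinary overnight Level 1 charge can cover a typical commute, which is why the slowest option is often good enough.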
#2 File your tax credit(s)

Purchasing an electric vehicle should qualify you for a federal tax credit of up to $7,500. Here is all the information and the form to fill out when you file your taxes. You’d better file quickly, though, because the federal tax credit begins to phase out once a manufacturer sells 200,000 qualifying vehicles. Some manufacturers, including Nissan and Chevrolet, are forecast to hit the 200,000 cap as early as 2018. If Tesla delivers on its 400,000 Model 3 pre-orders, not every Model 3 owner will be able to take advantage of the full $7,500 savings, so act fast!

Also check this map to see what additional state incentives you may qualify for.
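As a quick illustration of how the incentives stack up, here is a minimal sketch of the arithmetic. The sticker price matches the expected Model 3 base price mentioned above; the state rebate is a hypothetical figure, and claiming the full federal credit assumes you have enough tax liability and that the manufacturer has not yet hit the phase-out.

# Illustrative arithmetic (Python) for the effective purchase price after
# incentives. The state rebate is a hypothetical example; actual amounts
# depend on your state, your tax liability, and the manufacturer's
# phase-out status.

STICKER_PRICE = 35_000   # expected Model 3 base price
FEDERAL_CREDIT = 7_500   # maximum federal tax credit
STATE_REBATE = 2_000     # hypothetical state incentive

effective_price = STICKER_PRICE - FEDERAL_CREDIT - STATE_REBATE
print(f"Effective price: ${effective_price:,}")  # Effective price: $25,500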

#3 Locate public charging stations

Tesla has a network of fast charging stations exclusively for Tesla owners, but there are thousands of public charging stations that any electric vehicle driver can use on the go too. You may be surprised to find chargers near your workplace, school, or other frequent destination. Check out this Department of Energy station locator, or this map from PlugShare. The Department of Transportation has also designated several charging corridors that should be getting even more EV chargers.

#4 Contact your utility

Give your utility a heads up that you are getting an electric vehicle, and inquire about any promotional plans for vehicle charging. Some utilities have flexible “time-of-use” rates, meaning that they will charge you less when you plug a vehicle in during off-peak times (typically overnight). Your utility might also have its own electric vehicle incentives, like a rebate on installation or charger costs, or even a pilot project on smart charging where you can get paid to plug in your vehicle.
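To see why a time-of-use plan can matter, here is a minimal sketch comparing daily charging cost on a flat rate versus an off-peak rate. The rates, the vehicle efficiency, and the daily mileage are all assumed example values; check your own utility’s tariff for the real numbers.

# Illustrative comparison (Python) of daily charging cost on a flat
# residential rate versus an off-peak time-of-use (TOU) rate.
# All figures are assumed examples.

KWH_PER_MILE = 0.30          # assumed electric vehicle efficiency
DAILY_MILES = 40             # assumed daily driving

FLAT_RATE = 0.13             # $/kWh, assumed flat rate
OFF_PEAK_TOU_RATE = 0.07     # $/kWh, assumed overnight off-peak rate

daily_kwh = DAILY_MILES * KWH_PER_MILE
print(f"Flat rate:    ${daily_kwh * FLAT_RATE:.2f} per day")
print(f"Off-peak TOU: ${daily_kwh * OFF_PEAK_TOU_RATE:.2f} per day")

# Prints:
# Flat rate:    $1.56 per day
# Off-peak TOU: $0.84 per day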

#5 Say goodbye to internal combustion engines, forever!

Driving on electricity is not only cheaper and cleaner than driving on gasoline, it’s also a total blast. Prepare to never want to go back to gasoline-powered vehicles as you cruise on the smooth, silent power of electricity.
