Combined UCS Blogs

North Korea’s May 21 Missile Launch

UCS Blog - All Things Nuclear (text only) -

A week after the test launch of an intermediate-range Hwasong-12 missile, North Korea has tested a medium-range missile. From press reports, this appears to be a Pukguksong-2 missile, the land-based version of the submarine-launched missile it is developing, and the second successful test of this version of the missile.

South Korean sources reported this test had a range of 500 km (300 miles) and reached an altitude of 560 km (350 miles). If accurate, this trajectory is essentially the same as that of the previous test of the Pukguksong-2 in February (Fig. 1). Flown on a standard trajectory with the same payload, this missile would have a range of about 1,250 km (780 miles). If this test was conducted with a very light payload, as North Korea is believed to have done in past tests, the actual range with a warhead could be significantly shorter.

Fig. 1: The red curve is reportedly the trajectory followed on this test. The black curve (MET = minimum-energy trajectory) is the same missile on a maximum-range trajectory.
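As a rough illustration of how a range estimate like this can be made (a simplified sketch, not necessarily the method behind the figures above), a vacuum, flat-Earth ballistic model can back out an approximate burnout speed from the reported apogee and ground range of the lofted shot, and then ask how far that speed would carry the missile on a maximum-range trajectory. Because it ignores drag, Earth curvature, and rotation, this crude estimate comes out somewhat below the roughly 1,250 km figure quoted above.

```python
# Rough, illustrative estimate only: a vacuum, flat-Earth ballistic model.
# It ignores drag, Earth curvature, and rotation, so it will not exactly
# reproduce the ~1,250 km figure cited above, but it shows the basic logic
# of inferring range capability from a lofted test.

import math

g = 9.81           # m/s^2
R_lofted = 500e3   # reported ground range of the lofted test (m)
H_lofted = 560e3   # reported apogee of the lofted test (m)

# For a vacuum trajectory launched at speed v and angle theta:
#   range  R = v^2 * sin(2*theta) / g
#   apogee H = v^2 * sin(theta)^2 / (2*g)
# Dividing the two gives tan(theta) = 4*H / R.
theta = math.atan2(4.0 * H_lofted, R_lofted)

# Recover the burnout speed from the apogee equation.
v = math.sqrt(2.0 * g * H_lofted) / math.sin(theta)

# Maximum range for the same speed occurs at a ~45 degree launch angle.
R_max = v**2 / g

print(f"inferred launch angle: {math.degrees(theta):.1f} deg")
print(f"inferred burnout speed: {v/1000:.2f} km/s")
print(f"flat-Earth maximum range: {R_max/1000:.0f} km")
```

With the reported 500 km range and 560 km apogee, this sketch gives a burnout speed of roughly 3.4 km/s and a flat-Earth maximum range just under 1,200 km, in the same ballpark as the estimate above.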

The Pukguksong-2 uses solid fuel rather than liquid fuel like most of North Korea’s missiles. For military purposes, solid-fueled missiles have the advantage that they are stored with their fuel already loaded and can be launched quickly after being moved to the launch site. Large liquid-fueled missiles instead must be moved without fuel and then fueled once they are in place at the launch site. That process can take an hour or so, and the truck carrying the missile must be accompanied by a number of trucks containing the fuel. So it is easier to spot a liquid-fueled missile before launch, and there is more time available to attack it.

However, it is easier to build liquid-fueled missiles, so that is typically where countries begin. North Korea obtained liquid-fuel technology from the Soviet Union in the 1980s and built its program up from there. It is still in the early stages of developing solid-fueled missiles.

Building large solid missiles is difficult. If you look at other countries that have built long-range solid missiles, e.g., France and China, it took them several decades to get from building a medium-range solid missile, as North Korea now has, to building a solid ICBM. So this is not something that will happen soon, but with time North Korea will be able to do it.

Infrastructure Spending Is Coming. Climate Change Tells Us to Spend Wisely

UCS Blog - The Equation (text only) -

The news of new federal infrastructure proposals landed in a timely fashion with this year’s Infrastructure Week, including a bill introduced by House Democrats (the LIFT America Act, HR 2479) and another expected shortly from the Trump administration. For years now, the American Society of Civil Engineers has graded U.S. infrastructure at near failing (D+). With the hashtag #TimetoBuild, Infrastructure Week participants are urging policymakers to “invest in projects, technologies, and policies necessary to make America competitive, prosperous, and safe.”

We must build for the future

Conversations in Washington, D.C. and across the country over the coming weeks and months are sure to focus on what projects to build. But first we need to ask: what future are we building for? Will it be a version based on assumptions and needs similar to those we experience today, or a future radically shaped by climate change? (Changing demographics and technologies will undoubtedly shape this future as well.)

It’s imperative that this changing climate future be incorporated into how we design and plan infrastructure projects, especially as we consider investing billions of taxpayer dollars in much-needed enhancements to our transportation, energy, and water systems.

Climate change will shape our future

A vehicle remained stranded in the floodwater of Highway 37 on Jan. 24, 2017. Photo: Marin Independent Journal.

Engineers and planners know that, ideally, long-lived infrastructure must be built to serve needs over decades and withstand the ravages of time—including the effects of harsh weather and extended use—and with a margin of safety to account for unanticipated risks.

Much of our current infrastructure was built assuming that past trends for climate and weather were good predictors of the future. One example near where I currently live is the approach to the new Bay Bridge in Oakland, California, which was designed and built without consideration of sea level rise and will be permanently under water with 3 feet of sea level rise, a likely scenario by the end of this century. Currently, more than 270,000 vehicles travel each day on this bridge between San Francisco and the East Bay.

Another near my hometown in New Jersey is LaGuardia Airport in Queens, NY, which accommodated 30 million passengers in 2016. One study shows that if seas rise another 3 feet, it could be permanently inundated; the PATH and Hoboken Terminal are at risk as well.

Instead, we must look forward to what climate models and forecasts tell us will be the “new normal”: higher temperatures, more frequent and intense extreme weather events like droughts and flooding, larger wildfires, and accelerated sea level rise. This version of the future will further stress our already strained roads, bridges, and water and energy systems, as well as the natural or green infrastructure systems that can play a key role in limiting these climate impacts (e.g., flood protection). As a result, their ability to reliably and safely provide the critical services that our economy, public safety, and welfare depend on is threatened.

The reality is we are not yet systematically planning, designing and building our infrastructure with climate projections in mind.

Recent events as a preview

We can look at recent events for a preview of some of the infrastructure challenges we may face with more frequency and severity in the future because of a changing climate. (These events themselves are not necessarily the direct result of climate change but studies do show that climate change is making certain extreme events more likely, like the 2016 Louisiana floods). For example:

  • In September 2015, the Butte and Valley Fires destroyed more than one thousand structures and damaged hundreds of power lines and poles, leaving thousands of Californians without power.
  • Earlier this year, more than 188,000 residents downstream of Oroville Dam were ordered to evacuate as water releases in response to heavy rains and runoff damaged both the concrete spillway and a never-before-used earthen emergency spillway, threatening the dam.
  • Winter storms also resulted in extreme precipitation that devastated California’s roads, highways, and bridges with flooding, landslides, and erosion, resulting in roughly $860 million in repairs.

View of the Valley Fire, which destroyed nearly 77,000 acres in Northern California from Sept. 12, 2015 to Oct. 15, 2015. Photo: U.S. Coast Guard.

Similar events have been occurring all over the country, including recent highway closures from flooding along the Mississippi River. Other failures are documented in a Union of Concerned Scientists’ blog series “Planning Failures: The Costly Risks of Ignoring Climate Change,” and a report on the climate risks to our electricity systems.

Will the infrastructure we start building today still function and meet our needs in a future affected by climate change? Maybe. But unlikely, if we don’t plan differently.

Will our taxpayer investments be sound and will business continuity and public safety be assured if we don’t integrate climate risk into our infrastructure decisions? No.

If we make significant federal infrastructure investments over the next few years without designing in protections against more extreme climate forces, we risk spending much more of our limited public resources on repair, maintenance, and rebuilding down the line–a massively expensive proposition.

Building for our climate future

UCS has recently joined and started to amplify a small but growing conversation about what exactly climate-resilient infrastructure entails. This conversation includes several of the Steering Committee Members and Sponsors of Infrastructure Week, including the Brookings Institution, the American Society of Civil Engineers, AECOM, WSP, and HNTB. The LIFT America Act also includes some funding dedicated to preparing infrastructure for the impacts of climate change.

For example, last year, UCS sponsored a bill, AB 2800 (Quirk), that Governor Brown signed into law, to establish the Climate-Safe Infrastructure Working Group. It brings together climate scientists, state professional engineers, architects and others to engage in a nuts-and-bolts conversation about how to better integrate climate impacts into infrastructure design, examining topics like key barriers, important information needs, and the best design approach for a range of future climate scenarios.

UCS also successfully advocated for the California State Water Resources Control Board to adopt a resolution to embed climate science into all of its existing work: permits, plans, policies, and decisions.

A few principles for climate resilient infrastructure

At UCS, we have also been thinking about key principles to ensure that infrastructure can withstand climate shocks and stresses in order to minimize disruptions to the systems we depend on (and the communities that depend on them), protect safety, and rebound quickly. Our report, “Towards Climate Resilience: A Framework and Principles for Science-Based Adaptation”, outlines fifteen key principles for science-based adaptation.

We sought input from a panel of experts, including engineers, investors, emergency managers, climate scientists, transportation planners, water and energy utilities, and environmental justice organizations, at a recent UCS convening in Oakland, California focused on how we can start to advance policies and programs that will result in infrastructure that can withstand climate impacts.

The following principles draw largely from these sources. They are aspirational and not exhaustive, and will continue to evolve. To be climate-resilient, new and upgraded infrastructure should be built with these criteria in mind:

  • Scientifically sound: Infrastructure decisions should be consistent with the best-available climate science and what we know about impacts on human and natural systems (e.g., flexible and adaptive approaches, robust decisions, systems thinking, and planning for the appropriate magnitude and timing of change).
  • Socially just: New or upgraded infrastructure projects must empower communities to thrive, and ensure vulnerable groups can manage the climate risks they’ll face and share equitably in the benefits and costs of action. The historic under-investment in infrastructure in low-income and communities of color must be addressed.
  • Fiscally sensible: Planning should consider the costs of not adapting to climate change (e.g., failure to deliver services or costs of emergency repairs and maintenance) as well as the fiscal and other benefits of action (e.g., one dollar spent preparing infrastructure can save four dollars in recovery; investments in enhancing and protecting natural infrastructure that accommodates sea level rise, absorbs stormwater runoff, and creates parks and recreation areas).
  • Ambitiously commonsense: Infrastructure projects should avoid maladaptation, or actions that unintentionally increase vulnerabilities and reduce capacity to adapt, and provide multiple benefits. It should also protect what people cherish, and reflect a long-term vision consistent with society’s values.
  • Aligned with climate goals: Since aggressive emissions reductions are essential to slowing the rate at which climate risks become more severe and more common, and since we still need to prepare for projected climate risks, infrastructure projects should align with and complement long-term climate goals – both mitigation and adaptation.

Americans want action for a safer, more climate resilient future

A 2015 study found that the majority of Americans are worried about global warming, with more than 40% believing it will harm them personally. As we engage in discussions around how to revitalize our economy, create jobs, and protect public safety by investing in infrastructure, climate change is telling us to plan and spend wisely.

From the current federal proposals to the recently enacted California transportation package, SB 1 ($52 billion) and hundreds of millions more in state and federal emergency funds for water and flood-protection, there is a lot at stake: taxpayer dollars, public safety and welfare, and economic prosperity. We would be smart to heed this familiar old adage when it comes to accounting for climate risks in these infrastructure projects: a failure to plan is a plan to fail.

No Rest for the Sea-weary: Science in the Service of Continually Improving Ocean Management

UCS Blog - The Equation (text only) -

Marine reserves, or no-fishing zones, are increasing throughout the world. Their goals are variable and numerous, often a mix of conserving our ocean’s biodiversity and supporting the ability to fish for seafood outside reserves for generations to come. California is one location that has seen the recent implementation of marine reserves, where the California Marine Life Protection Act led to the establishment of one of the world’s largest networks of marine reserves.

A number of scientific efforts have informed the design of marine reserves throughout the world and in California. Mathematical models were central to these research efforts as they let scientists and managers do simulated “experiments” of how different reserve locations, sizes, and distances from each other affect how well reserves might achieve their goals.

While a PhD student in the early 2000s, I began my scientific career as one of many contributing to these efforts. In the process, a key lesson I learned was the value of pursuing partnerships with government agencies such as NOAA Fisheries to ensure that the science I was doing was relevant to managers’ questions, an approach that has become central to my research ever since.

Map of the California Marine Protected Areas; courtesy of California Department of Fish and Wildlife

A transition from design to testing

Now, with many marine reserves in place, both managers and scientists are turning to the question of whether they are working. On average (but not always), marine reserves harbor larger fish and larger population sizes for fished species, as well as greater total biomass and diversity, compared both to before reserves were in place and to areas outside reserves. However, answering a more nuanced question—for a given reserve system, is it working as expected?—can help managers engage in “adaptive management”: using the comparison of expectations to data to identify any shortfalls and adjust management or scientific understanding where needed to better achieve the original goals.

Mathematical models are crucial to calculating expectations and therefore to answering this question. The original models used to answer marine reserve design questions focused on responses that might occur after multiple decades. Now models must focus on predicting what types of changes might be detectable over the 5-15 year time frame of reserve evaluation. Helping to develop such modeling tools as part of a larger collaboration, with colleagues Alan Hastings and Louis Botsford at UC Davis and Will White at the University of North Carolina, is the focus of my latest research on marine reserves in an ongoing project that started shortly after I arrived as a professor at UC Davis.
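The models in this project are considerably more detailed (age-structured and calibrated to particular species and fishing histories), but a toy calculation conveys the basic question: if fishing mortality is removed when a reserve is created, how much of a biomass increase should be expected within the 5-15 year evaluation window? The sketch below uses a simple logistic population model with entirely hypothetical parameter values.

```python
# A minimal, hypothetical sketch (not the project's actual models): a
# logistic population with fishing mortality that is removed when a
# reserve is established, to ask how much biomass change might appear
# within the 5-15 year evaluation window. All parameters are made up.

def project_biomass(years, r=0.2, K=1.0, F=0.15, reserve_year=20):
    """Discrete-time logistic growth with harvest rate F before the
    reserve is created and F = 0 afterwards. Units are arbitrary."""
    B = 0.5 * K                      # hypothetical starting biomass
    trajectory = []
    for t in range(years):
        harvest = F * B if t < reserve_year else 0.0
        B = B + r * B * (1.0 - B / K) - harvest
        trajectory.append(B)
    return trajectory

traj = project_biomass(years=35)
before = traj[19]                    # biomass just before reserve creation
for lag in (5, 10, 15):
    after = traj[19 + lag]
    print(f"{lag:2d} years after reserve: biomass up {100*(after/before - 1):.0f}%")
```

Changing the growth rate r or the pre-reserve fishing rate F changes how quickly a detectable response emerges, which is exactly the kind of dependence on fish characteristics and fishing history described below.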

To date we have developed new models to investigate how short-term expectations in marine reserves depend on fish characteristics and fishing history. Now we have a new partnership with California’s Department of Fish and Wildlife, the responsible management agency for California’s marine reserves, to collaboratively apply these tools to our statewide reserve system. This application will help rigorously test how effective California’s marine reserves are, and therefore help with continually improving management to support both the nutrition and recreation that Californians derive from the sea. In addition, it will let California serve as a leading example of model-based adaptive management that could be applied to marine reserves throughout the world.

The role of federal funding

The cabezon is just one type of fish protected from fishing in California’s marine reserves. Photo credit: Wikimedia Commons.

Our project on models applied to adaptive management started with funding in 2010–2014 from NOAA SeaGrant, a funding source uniquely suited to support research that can help improve ocean and fisheries management. With this support, we could be forward-looking about developing the modeling tools that the State of California now needs. NOAA SeaGrant would be eliminated under the current administration’s budget proposal.

My other experience with NOAA SeaGrant is through a graduate student fellowship program that has funded PhD students in my (and my colleagues’) lab group to do a variety of marine reserve and fisheries research projects. This fellowship funds joint mentorship by NOAA Fisheries and academic scientists towards student research projects relevant to managing our nation’s fisheries. Along with allowing these students to bring cutting-edge mathematical approaches that they learn at UC Davis to collaborations with their NOAA Fisheries mentors, this funding gives students the invaluable experience I had as a PhD student in learning how to develop partnerships with government agencies that spur research relevant to management needs. Both developing such partnerships and training students in these approaches are crucial elements to making sure that new scientific advancements are put to use. This small amount of money goes a long way towards creating future leaders who will continue to help improve the management of our ocean resources.

 

Marissa Baskett is currently an Associate Professor in the Department of Environmental Science and Policy at the University of California, Davis.  Her research and teaching focus on conservation biology and the use of mathematical models in ecology.  She received a B.S. in Biological Sciences at Stanford University and both an M.A. and Ph.D. in Ecology and Evolutionary Biology at Princeton University, and she is an Ecological Society of America Early Career Fellow.  

The views expressed in this post solely represent the opinions of Marissa Baskett and do not necessarily represent the views of UC Davis or any of her funders or partners.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

New Study on Smart Charging Connects EVs & The Grid

UCS Blog - The Equation (text only) -

We know that electric vehicles (EVs) tend to be more environmentally friendly than gasoline cars. We also know that a future dominated by EVs poses a problem—what happens if everyone charges their cars at the same time (e.g., when they get home from work)?

Fortunately, there’s an answer: smart charging. That’s the topic of a report I co-authored, released today.

As a flexible load, EVs could help utilities balance supply and demand, enabling the grid to accommodate a larger fraction of variable renewable energy such as wind and solar. As well, the charging systems can help utilities and grid operators identify and fix a range of problems. The vehicles can be something new, not simply an electricity demand that “just happens,” but an integral component of grid modernization.

Where the timing and power of the EV charging automatically adjust to meet drivers’ needs and grid needs, adding EVs can reduce total energy system costs and pollution.

This idea has been around since the mid-1990s, with pilots going back at least to 2001. It has been the focus of many recent papers, including notable work from the Smart Electric Power Alliance, the Rocky Mountain Institute, the International Council on Clean Transportation, the Natural Resources Defense Council, the National Renewable Energy Laboratory, Synapse Energy Economics, and many more.

Over the past two years, I’ve read hundreds of papers, talked to dozens of experts, and convened a pair of conferences on electric vehicles and the grid. I am pleased to release a report of my findings at www.ucsusa.org/smartcharging.

Conclusions, but not the end

This is a wide-ranging and fast-moving field of research with new developments constantly. As well, many well-regarded experts have divergent views on certain topics. Still, a few common themes emerged.

  • Smart charging is viable today. However, not all of the use cases have high market value in all regions. Demand response, for example, is valuable in regions with rapid load growth, but is less valuable in regions where electricity demand has plateaued.
  • The needs of transportation users take priority. Automakers, utilities, charging providers, and regulators all stress the overriding importance of respecting the needs of transportation users. No stakeholder wants to inconvenience drivers by having their vehicles uncharged when needed.
  • Time-of-use pricing is a near-term option for integrating electric vehicles with the grid. Using price signals to align charging with grid needs on an hourly basis—a straightforward implementation of smart charging—can offer significant benefits to renewable energy utilization.
  • Utilities need a plan to use the data. The sophisticated electronics built into an EV or a charger can measure power quality and demand on the electric grid. But without the capabilities to gather and analyze this data, utilities cannot use it to improve their operations.

The report also outlines a number of near-term recommendations, such as encouraging workplace charging, rethinking demand charges, and asking the right questions in pilot projects.

Defining “smart”

One important recommendation is that “smart” charging algorithms should consider pollution impacts. This emerged from the analytical modeling that UCS conducted in this research.

Basic applications of “smart charging” lower electric system costs by reducing peak demand and shifting charging to off-peak periods, reducing the need for new power plants and lowering consumer costs. But in some regions that have lagged in the transition to cleaner electricity supplies, “baseload” power can be dirtier than peak power. Our model of managed charging shifted power demand by the hour, without regard to lowering emissions or the full range of services that smart charging performs today (like demand response or frequency regulation), let alone adding energy back with two-way vehicle-to-grid operation.
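As a minimal sketch of what hour-by-hour managed charging looks like (using hypothetical prices and emissions rates, not the model or data behind the report), the example below assigns a vehicle’s overnight charging need to the best-scoring hours. Scoring hours by price alone pushes charging onto cheap off-peak power, which in this made-up region is also the dirtiest; scoring by emissions picks different hours at higher cost.

```python
# A minimal, hypothetical sketch of hourly "managed charging" (not the
# report's actual model): a car needing a fixed amount of energy overnight
# is charged in whichever available hours score best. Scoring by price
# alone can land charging on cheap-but-dirty off-peak power; scoring by
# emissions picks different hours. All numbers are illustrative.

# Plugged in from 6 pm to 7 am; made-up hourly prices and CO2 intensities.
hours = list(range(18, 24)) + list(range(0, 7))
price = {h: (0.28 if 18 <= h <= 21 else 0.10) for h in hours}   # $/kWh
co2   = {h: (0.35 if 18 <= h <= 21 else 0.55) for h in hours}   # kg CO2/kWh

def schedule(energy_kwh, charger_kw, score):
    """Greedy schedule: fill the best-scoring hours first."""
    plan = {}
    for h in sorted(hours, key=score):
        if energy_kwh <= 0:
            break
        kwh = min(charger_kw, energy_kwh)
        plan[h] = kwh
        energy_kwh -= kwh
    return plan

def totals(plan):
    cost = sum(kwh * price[h] for h, kwh in plan.items())
    emissions = sum(kwh * co2[h] for h, kwh in plan.items())
    return cost, emissions

for label, score in [("cost-only", lambda h: price[h]),
                     ("emissions-aware", lambda h: co2[h])]:
    cost, emissions = totals(schedule(energy_kwh=30, charger_kw=6.6, score=score))
    print(f"{label:16s} cost ${cost:.2f}, CO2 {emissions:.1f} kg")
```

In this made-up example, the cost-only schedule is cheapest but produces the most emissions, which is the trade-off described next: cost-minimizing off-peak charging can modestly increase pollution where off-peak power is dirty, unless the charging algorithm also sees an emissions signal.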

The model illustrated that encouraging off-peak charging without attention to emissions might, at a national scale, slightly increase pollution compared to unmanaged charging. Both charging strategies would reduce pollution compared to relying on internal-combustion vehicles, and the managed case would have lower system costs.

This is not a prediction, but one possible outcome under certain circumstances—a possibility also noted by NREL and by other research teams. It is a consequence of off-peak power that is cheap but dirty, and of a model that does not yet properly represent the full capabilities of smart charging. Charging when renewables are greatest, or employing policies that assign a cost to pollution, would change this outcome.

Fortunately, even before we have such policies, we have existing systems that can selectively charge when the greenest power is “on the margin.” This technology and other systems are discussed in the report.

The broader context

Smart charging of electric vehicles has a key role to play in the grid modernization initiatives happening around the country. EVs can be a flexible load that communicates with the grid, incorporates energy storage, benefits from time-varying rates, and participates in ancillary services markets, representing many of the innovations that can improve the economic and environmental performance of our electricity system.

Photo: Steve Fecht/General Motors

There’s an Elephant in the Room, and It Smells Like Natural Gas

UCS Blog - The Equation (text only) -

A curious thing happened in the aftermath of President Trump attempting to sign away the past eight years of work on climate and clean energy: the public face of progress didn’t flinch. From north to south and east to west, utilities and businesses and states and cities swore their decarbonization compasses were unswerving; yes, they said, we’re still closing coal plants, and yes, yes!, we’re still building ever more wind and solar—it just makes sense.

But here’s why all the subsequent commentary reiterating the inevitability of coal’s decline and cheering the unsinkable strength of renewables’ rise was right in facts, but incomplete in message:

Coal is closing. Renewables are rising. But right now, we need to be talking about natural gas.

We’re fine without a map…

President Trump accompanied his signature on the Executive Order on Energy Independence with a vow that the order would put the coal industry “back to work.” But shortly thereafter, even those in the business reported they weren’t banking on a turn-around. Coal plants just keep shutting down:

This map shows coal units that have retired just between 2007 and 2016—many more have been announced for closure in the near future.

At the same time, renewable resources have been absolutely blowing the wheels off expectations and projections, with costs plummeting and deployment surging. The renewable energy transformation is just that—a power sector transformation—and it certainly appears there’s no going back:

Wind and solar capacity has been growing rapidly since the early 2000s.

Now when you put these two trajectories together, you end up with an electric power sector that has, in recent years, steadily reduced its carbon dioxide emissions:

Three positive charts, and three tremendous reasons to cheer (which we do a lot, and won’t soon stop—clean energy momentum is real and it’s rolling). The problem is, these charts only capture part of the energy sector story.

What’s missing? Natural gas. Or, what is now the largest—and still growing—source of carbon emissions in the electric power sector.

…Until we finally realize we’re lost

There are two phases to conversations about reducing climate change emissions. In Phase 1, we acknowledge that a problem exists, we recognize we’re a big reason for that problem, and we take action to initiate change. With the exception of just a few of the most powerful people in our government (oh, them), we seem to have Phase 1 pretty well in hand. Cue the stories about the triumphant resilience of our climate resolve.

The trouble is Phase 2.

In Phase 2, we move to specifics. Namely, specifics about what the waypoints are, and by when we need to reach them. This is the conversation that produces glum replies—and it’s the source of those weighty, distraught affairs scattered among the buoyant takes on the recent executive order—because the truth is:

  • We know what the waypoints are,
  • We know by when we need to reach them, and
  • We know that currently, we’re not on track.

Without a map, we’re left feeling good about the (real and true) broad-brush successes of our trajectory—emissions reductions from the retirement of coal plants; technology and economic improvements accelerating the deployment of renewables—but we have no means by which to measure the adequacy of our decarbonization timeline.

As a result, we put ourselves at grave risk of failing to catch the insufficiency of any path we’re on. And right now? That risk has the potential to become reality as our nation, propelled by the anti-regulatory, pro-fossil policies of the Trump administration, lurches toward a wholesale capitulation to natural gas.

Natural gas and climate change

Last year, carbon dioxide emissions from coal-fired power plants fell 8.6 percent. But take a look at the right-hand panel in the graph below. See what’s not going down? Emissions from natural gas. In fact, carbon dioxide emissions from natural gas overtook coal emissions last year, even omitting the additional climate impacts from methane released during natural gas production and distribution.

Bridge fuel? Not so much.

There’s no sign of the trend stopping, either. Natural gas plants have been popping up all across the country, and new plants keep getting proposed—natural gas generators now comprise more than 40 percent of all electric generating capacity in the US.

Natural gas plants are located all across the country, and new projects keep getting proposed.

And all those natural gas plants mean even more gas pipelines. According to project tracking by S&P Global Market Intelligence, an additional 70 million Dth/d of gas pipeline capacity has been proposed to come online by the early 2020s (subscription). That is a lot of gas, and would require the commitment of a lot of investment dollars.

When plants are built, pipelines are laid, and dollars are committed, it becomes incredibly hard to convince regulators to force utilities to let it all go.

Still, that’s what the markets—and the climate—will demand. As a result, ratepayers may be on the hook for generators’ bad bets.

The thing is, we know today the external costs of these investments, and the tremendous risks of our growing overreliance on natural gas. So why do these assets keep getting built?

Because many of our regulators, utilities, and investors are working without a map.

Now there are a growing number of states stepping up where the federal government has faltered, and beginning to make thoughtful energy decisions based on specific visions of long-term decarbonization goals, like in California, the RGGI states, and as recently as this week, Virginia. Further, an increasing number of insightful and rigorous theoretical maps are being developed, like the US Mid-Century Strategy for Deep Decarbonization, amongst many others (UCS included).

But for the vast majority of the country, the main maps upon which decarbonization pathways were beginning to be based—the Clean Power Plan and the Paris Climate Agreement—are both at immediate risk of losing their status as guiding lights here in the US, sitting as they are beneath the looming specter of the Trump administration’s war on facts.

Plotting a course to a better tomorrow

So where to from here? Ultimately, there is far too much at stake for us to simply hope we’re heading down the right path. Instead, we need to be charting our course to the future based on all of the relevant information, not just some of it.

To start, we recommend policies that include:

  • Moving forward with implementation of the Clean Power Plan, a strong and scientifically rigorous federal carbon standard for power plants.
  • Developing, supporting, and strengthening state and federal clean energy policies, including renewable electricity standards, energy efficiency standards, carbon pricing programs, and investment in the research, development, and deployment of clean energy technologies.
  • Defending and maintaining regulations for fugitive methane emissions, and mitigating the potential public health and safety risks associated with natural gas production and distribution.
  • Improving grid operation and resource planning such that the full value and contributions of renewable resources, energy efficiency, and demand management are recognized, facilitated, and supported.

We need to show that where we’re currently heading isn’t where we want to be.

We need to talk about natural gas.

Photo: Zorandim/Shutterstock.com. Charts: U.S. EIA, Generator Monthly.

April 2017 Was the Second Hottest April on Record: We Need NOAA More Than Ever

UCS Blog - The Equation (text only) -

Today, NOAA held its monthly climate call, where it releases the previous month’s global average temperature, and discusses future weather and climate outlooks for the US. According to the data released today, April 2017 was the second warmest April on record after only April 2016, with a temperature 0.90°C (1.62°F) above the 20th century April average. Data for the contiguous US was released earlier, and found April 2017 to be the 11th warmest on record, and 2017 to be the second warmest year to date (January to April data).

That means that, yes, we are still seeing warming that is basically unprecedented.

Photo: NOAA

Today’s data release was just one of the myriad ways NOAA’s data and research touch our lives. I can’t help but wonder: before people leave their houses in the morning and check the weather forecast—will it rain? will it be hot or cold?—do they wonder how those numbers come about? Do they realize the sheer amount of science that goes into saying what will happen in every small town across the country (and the world)?

Do people think about science at all when they go about their lives? And do they wonder how that science comes to be?

Probably not. But here is why they should.

Science is essential for climate and weather predictions

NOAA (short for the “National Oceanic and Atmospheric Administration”) is one of the lead agencies that helps provide that science. But NOAA’s mission and budget are increasingly under attack under the Trump administration. President Trump’s pick for the new NOAA administrator will soon be announced, and it’s critical that s/he take a strong stance to defend the mission and the budget of the agency.

The National Weather Service, administered by NOAA, is one of the most essential federal institutions for regular citizens’ everyday lives. It is there (and at the Climate Prediction Center) that the data collected by instruments managed by Federal agencies all over the globe, on air, land, and sea, turns into something as important as weather forecasts and seasonal climate outlooks. Data from satellites is routinely used by local stations for tornado warnings, and hurricane tracking is also provided courtesy of those satellites and other instruments, like tide gauges that show the water rising to a flooding threshold, which in turn triggers warnings from the NWS for the affected areas.

It takes very specific and detailed scientific and engineering training to build those instruments in the first place—tide gauges, satellites, thermometers, you name it. And then, science is needed to interpret and make sense of the raw data. And because most people would agree that better forecasts make for improved planning of one’s life—from daily activities to crop planting to storm preparedness—yes, you guessed it, we need better science.

Unfortunately, what we are seeing in this administration is not very promising when it comes to leveraging and supporting science. On many fronts—NASA, NOAA, EPA, DOI, DOE, to name a few—science is being dismissed or ignored, to the detriment of the environment and people like you and me. Proposed budgets include cuts to many scientific programs within agencies. One can’t help but wonder what the consequences (especially unforeseen ones) would be.

NOAA needs more, not less funding

Current funding is already strained to produce enough research to prepare for the increased seasonal variability that we are observing, and that is expected to increase with climate change. We are seeing more devastating floods and worsening wildfire seasons, and many of our coastal cities are seeing significantly more flooding at high tides and during storms, due to sea level rise.

The weather that makes up our climate is behaving so erratically that we need more, not fewer, resources to help us predict and prepare appropriately. Fortunately, Congress has held the line so far on keeping budgets for FY17 close to prior-year levels rather than accepting the drastic reductions proposed by the administration. We are working hard to help ensure that this trend continues when Congress appropriates the FY18 budget. NOAA needs more funding to continue its climate monitoring program and to improve seasonal forecasts and operational programs, which in turn are essential for planning budgets at state and local levels, and for preparedness measures that can save resources, lives, and property.

Wouldn’t it be great if we could tell how much snow is REALLY coming so the right amount of road treatments can be allocated? Or how much rain is going to fall in a very short period of time and how much that river is going up after that rain? I think we can all agree on that.

The Weather Research and Forecasting Innovation Act of 2017, which was signed into law in April 2017, is a breath of fresh air into NOAA’s forecasting lungs—but it is not enough. It focuses on research into sub-seasonal to seasonal prediction, and better forecasts of tornadoes, hurricanes, and other severe storms, as well as long-range prediction of weather patterns, from two weeks to two years ahead. One important aspect of the Act is its focus on communication of forecasts to inform decision-making by public safety officials.

The Act had bipartisan support and was applauded by the University Corporation for Atmospheric Research (UCAR), a well-respected research institution. It was also championed by Barry Myers, the CEO of Accuweather and a frontrunner for the position of NOAA administrator. It is definitely a good step, and a long time coming, but we need more. We need continued support for these types of initiatives, and for the broader mission of NOAA.

We need a vision, and the resources to make it happen. We need an administrator who will turn that vision into reality.

NOAA is a lot more than weather forecasts

NOAA plays a large role in the US economy. It supports more than one-third of the US GDP, affecting shipping, commerce, farming, transportation, and energy supply. The data coming from NOAA also helps us maintain public safety and public health, and enable national security.

In addition to the NWS, other programs within NOAA are essential to track climate change and weather, such as the National Environmental Satellite, Data, and Information Service (NESDIS), which supports weather forecasts and climate research through the generation of over 20 terabytes of data daily from satellites, buoys, radars, models, and many other sources. Other important programs are the Office of Oceanic and Atmospheric Research (OAR) and the Coastal Zone Management Program at the Office for Coastal Management (OCM), part of the National Ocean Service (NOS).

Those programs provide state-of-the-art data that directly or indirectly affect all the aforementioned segments of Americans’ daily lives.

The US needs talent and resources to continue its top-notch work

In a recent blog, Dr. Marshall Shepherd laid out the five things that the weather and climate communities need from a NOAA administrator: to offer strong support for research; to support the NWS; to fight back against the attack on climate science; to protect the satellite and Sea Grant programs; and to value external science expertise. I couldn’t agree more!

NOAA can be the cutting-edge science agency for a “weather ready nation” helping communities become more resilient as they prepare for climate change risks. All it needs is a great administrator, who will stand up for science and fight for the needed budget for the agency’s ever growing needs. Will the nominee be up for the job? And will Congress and the Trump administration continue to provide the budget the agency needs to do its job well?

Warhead Reentry: What Could North Korea Learn from its Recent Missile Test?

UCS Blog - All Things Nuclear (text only) -

As North Korea continues its missile development, a key question is what it may have learned from its recent missile test that is relevant to building a reentry vehicle (RV) for a long-range missile.

The RV is a crucial part of a ballistic missile. A long-range missile accelerates its warhead to very high speed—16,000 mph—and sends it arcing through space high above the atmosphere. To reach the ground it must reenter the atmosphere. Atmospheric drag slows the RV and most of the kinetic energy it loses goes into heating the air around the RV, which then leads to intense heating of the surface of the RV. The RV absorbs some of the heat, which is conducted inside to where the warhead is sitting.

So the RV needs to be built to (1) withstand the intense heating at its outer surface, and (2) insulate the warhead from the absorbed heat that is conducted through the interior of the RV.

The first of these depends on the maximum heating rate at the surface and the length of time that significant heating takes place. Number (2) depends on the total amount of heat absorbed by the RV and the amount of time the heat has to travel from the surface of the RV to the warhead, which is roughly the time between when intense heating begins and when the warhead detonates.

I calculated these quantities for the two cases of interest here: the highly lofted trajectory that the recent North Korean missile followed and a 10,000 km missile on a normal (MET) trajectory. The table shows the results.

The maximum heating rate (q) is only about 10% higher for the 10,000 km range missile than the lofted missile. However, the total heat absorbed (Q) is nearly twice as large for the long-range missile and the duration of heating (τ) is more than two and a half times as long.

This shows that North Korea could get significant data from the recent test—assuming the RV was carrying appropriate sensors and sent that information back during flight, and/or that North Korea was able to recover the RV from the sea. But it also shows that this test does not give all the data you would like to have to understand how effective the heatshield might be before putting a nuclear warhead inside the RV and launching it on a long-range missile.

Some details

The rate of heat transfer per area (q) is roughly proportional to ρV³, where ρ is the atmospheric density and V is the velocity of the RV. Since longer range missiles reenter at higher speeds, the heating rate increases rapidly with missile range. The total heat absorbed (Q) is the integral of q over time during reentry.

This calculation assumes the ballistic coefficient (β) of the RV is 48 kN/m² (1,000 lb/ft²). The heating values in the table roughly scale with β. A large value of β means less atmospheric drag so the RV travels through the atmosphere at higher speed. That increases the accuracy of the missile but also increases the heating. The United States worked for many years to develop RVs with special coatings that allowed them to have high β and therefore high accuracy, but could also withstand the heating under these conditions.
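As a back-of-the-envelope illustration of where quantities like q, Q, and τ come from (not the author’s actual calculation), the sketch below integrates the slowdown of an RV with β = 48 kN/m² falling through an exponential atmosphere and accumulates a heating rate proportional to ρV³. The entry speeds and angles for the two cases, the 5% threshold used to define the heating duration, and the heating constant are all assumptions, so only the comparison between the two cases is meaningful; it will not reproduce the exact ratios quoted above.

```python
# A rough, illustrative reentry-heating comparison (not the author's model).
# Straight-line reentry through an exponential atmosphere; q is computed in
# arbitrary units as rho * V^3, so only the ratio between the two cases and
# the durations are meaningful. Entry speeds/angles below are assumptions.

import math

RHO0, H_SCALE, G = 1.225, 7000.0, 9.81   # sea-level density, scale height, gravity
BETA = 48e3                               # ballistic coefficient, N/m^2 (48 kN/m^2)

def reenter(v_entry, gamma_deg, h0=100e3, dt=0.01):
    """Integrate V(t) down to the ground; return peak q, total Q, and the
    time during which q exceeds 5% of its peak (a crude 'heating duration')."""
    gamma = math.radians(gamma_deg)
    v, h, t = v_entry, h0, 0.0
    history = []                          # (time, q) pairs
    while h > 0.0 and v > 100.0:
        rho = RHO0 * math.exp(-h / H_SCALE)
        v -= rho * v**2 * G / (2.0 * BETA) * dt    # drag deceleration
        h -= v * math.sin(gamma) * dt
        t += dt
        history.append((t, rho * v**3))            # q in arbitrary units
    q_peak = max(q for _, q in history)
    Q_total = sum(q for _, q in history) * dt
    duration = sum(dt for _, q in history if q > 0.05 * q_peak)
    return q_peak, Q_total, duration

# Assumed entry conditions: lofted test vs. 10,000 km minimum-energy trajectory.
for label, v_e, gamma in [("lofted test", 3400.0, 75.0),
                          ("10,000 km MET", 7000.0, 23.0)]:
    q, Q, tau = reenter(v_e, gamma)
    print(f"{label:14s} peak q = {q:.2e}, total Q = {Q:.2e}, duration ~ {tau:.0f} s")
```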

The results in the table can be understood by looking at how RVs on these two trajectories slow down as they reenter. Figs. 1 and 2 plot the speed of the RV versus time; the x and y axes of the two figures have the same scale. The maximum deceleration (slope of the curve) is roughly the same in the two cases, leading to roughly the same value of q. But the 10,000 km range missile loses more total energy—leading to a larger value of Q—and does so over a longer time than the lofted trajectory.

Ad Hoc Fire Protection at Nuclear Plants Not Good Enough

UCS Blog - All Things Nuclear (text only) -

A fire at a nuclear reactor is serious business. There are many ways to trigger a nuclear accident leading to damage of the reactor core, which can result in the release of radiation. But according to a senior manager at the US Nuclear Regulatory Commission (NRC), for a typical nuclear reactor, roughly half the risk that the reactor core will be damaged is due to the risk of fire. In other words, the odds that a fire will cause an accident leading to core damage equals that from all other causes combined. And that risk estimate assumes the fire protection regulations are being met.

However, a dozen reactors are not in compliance with NRC fire regulations:

  • Prairie Island Units 1 and 2 in Minnesota
  • HB Robinson in South Carolina
  • Catawba Units 1 and 2 in South Carolina
  • McGuire Units 1 and 2 in North Carolina
  • Beaver Valley Units 1 and 2 in Pennsylvania
  • Davis-Besse in Ohio
  • Hatch Units 1 and 2 in Georgia

Instead, they are using “compensatory measures,” which are not defined or regulated by the NRC. While originally intended as interim measures while the reactor came into compliance with the regulations, some reactors have used these measures for decades rather than comply with the fire regulations.

The Union of Concerned Scientists and Beyond Nuclear petitioned the NRC on May 1, 2017, to amend its regulations to include requirements for compensatory measures used when fire protection regulations are violated.

Fire Risks

The dangers of fire at nuclear reactors were made obvious in March 1975 when a fire at the Browns Ferry nuclear plant disabled all the emergency core cooling systems on Unit 1 and most of those systems on Unit 2. Only heroic worker responses prevented one or both reactor cores from damage.

The NRC issued regulations in 1980 requiring electrical cables for a primary safety system to be separated from the cables for its backup, making it less likely that a single fire could disable multiple emergency systems.

Fig. 1 Fire burning insulation off cables installed in metal trays passing through a wall. (Source: Tennessee Valley Authority)

After discovering in the late 1990s that most operating reactors did not meet the 1980 regulations, the NRC issued alternative regulations in 2004. These regulations would permit electrical cables to be in close proximity as long as analysis showed the fire could be put out before it damaged both sets of cables. Owners had the option of complying with either the 1980 or the 2004 regulations. But the dozen reactors listed above are still not in compliance with either set of regulations.

The NRC issued the 1980 and 2004 fire protection regulations following formal rulemaking processes that allowed plant owners to contest proposed measures they felt were too onerous and the public to contest measures considered too lax. These final rules defined the appropriate level of protection against fire hazards.

Rules Needed for “Compensatory Measures”

UCS and Beyond Nuclear petitioned the NRC to initiate a rulemaking process that will define the compensatory measures that can be substituted for compliance with the fire protection regulations.

The rule we seek will reduce confusion about proper compensatory measures. The most common compensatory measure is “fire watches”—human fire detectors who monitor for fires and report any sightings to the control room operators who then call out the onsite fire brigades.

For example, the owner of the Waterford nuclear plant in Louisiana deployed “continuous fire watches.” The NRC later found that they had secretly and creatively redefined “continuous fire watch” to be someone wandering by every 15 to 20 minutes. The NRC was not pleased by this move, but could not sanction the owner because there are no requirements for fire protection compensatory measures. Our petition seeks to fill that void.

The rule we seek will also restore public participation in nuclear safety decisions. The public had opportunities to legally challenge elements of the 1980 and 2004 fire protection regulations it felt to be insufficient. But because fire protection compensatory measures are governed only by an informal, cozy relationship between the NRC and plant owners, the public has been locked out of the process. Our petition seeks to rectify that situation.

The NRC is currently reviewing our submittal to determine whether it satisfies the criteria to be accepted as a petition for rulemaking. When it does, the NRC will publish the proposed rule in the Federal Register for public comment. Stay tuned—we’ll post another commentary when the NRC opens the public comment period so you can register your vote (hopefully in favor of formal requirements for fire protection compensatory measures.)

BP Hosts Annual General Meeting Amid Questions on Climate Change

UCS Blog - The Equation (text only) -

Tomorrow, BP holds its Annual General Meeting (AGM) in London. BP shareholders are gathering at a time of mounting pressure on major fossil fuel companies to begin to plan for a world free from carbon pollution—as evidenced by last week’s vote by a majority of Occidental Petroleum shareholders in favor of a resolution urging the company to assess how the company’s business will be affected by climate change.

BP was one of eight companies that UCS assessed in the inaugural edition of The Climate Accountability Scorecard, released last October. BP responded to our findings and recommendations, but left important questions unanswered. Here are four questions that we hope BP’s decision makers will address at the AGM.

1) What is BP doing to stop the spread of climate disinformation—including by WSPA?

BP 2016 Score: Fair

In its own public communications, BP consistently acknowledges the scientific evidence of climate change and affirms the consequent need for swift and deep reductions in emissions from the burning of fossil fuels. BP left the climate-denying American Legislative Exchange Council (ALEC) in 2015 (without explicitly citing climate change as its reason for leaving).

Still, the company maintains leadership roles in trade associations and industry groups that spread disinformation on climate science and/or seek to block climate action.

For example, the Western States Petroleum Association (WSPA) made headlines in 2015 for spreading blatantly false statements about California’s proposed limits on carbon emissions from cars and trucks. The association employed deceptive ads on more than one occasion to block the “half the oil” provisions of a major clean-energy bill enacted by California lawmakers.

In response to a question at last year’s AGM about the misleading tactics of WSPA in California, CEO Bob Dudley said, “of course we did not support that particular campaign.” Yet according to the most recent data available, BP remains a member of WSPA and is represented on its board of directors.

Shareholders should be asking how BP communicated its disapproval of WSPA’s tactics in California to the association, and how WSPA responded. And how is BP using its leverage on the board of WSPA to end the association’s involvement in spreading climate disinformation and blocking climate action?

BP is also represented on the boards of the American Petroleum Institute (API) and the National Association of Manufacturers (NAM), both of which are named defendants in a lawsuit brought by youth seeking science-based action by the U.S. government to stabilize the climate system.

UCS’s 2015 report, “The Climate Deception Dossiers,” exposed deceptive tactics by the Western States Petroleum Association (WSPA).

2) Why did BP fund an attack on disclosure of climate-related risks and opportunities?

BP 2016 Score: Fair

BP—along with Chevron, ConocoPhillips, and Total SA—funded a new report criticizing the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD). The TCFD was set up by the Financial Stability Board (FSB), an international body that monitors and makes recommendations about the global financial system, in recognition of the potential systemic risks posed by climate change to the global economy and economic system. Through an open, collaborative process, the TCFD is recommending consistent, comparable, and timely disclosures of climate-related risks and opportunities in public financial filings.

A broad range of respondents in the TCFD’s public consultation supported its recommendations, and on Monday the We Mean Business coalition issued a statement expressing support for the TCFD recommendations and calling for G20 governments to endorse them. Members of We Mean Business include BSR (Business for Social Responsibility) and the World Business Council for Sustainable Development—both of which, in turn, count BP among their members.

Meanwhile, the US Chamber of Commerce will reportedly roll out the oil and gas company-sponsored report at an event this week. (We found no evidence that BP is a member of the US Chamber.)

In its own financial reporting, BP provides a detailed analysis of existing and proposed laws and regulations relating to climate change and their possible effects on the company, including potential financial impacts, and generally acknowledges physical risks to the company, including “adverse weather conditions,” but does not include discussion of climate change as a contributor to those risks.

So where does BP stand on climate-related disclosures? The company’s shareholders and the business community at large deserve to know, and tomorrow’s AGM is a good opportunity for CEO Bob Dudley to explain why BP’s funding isn’t aligned with its stated positions.

3) How is BP planning for a world free from carbon pollution?

BP 2016 Score: Poor

Both directly and through its membership in the Oil and Gas Climate Initiative, BP has expressed support for the Paris Climate Agreement and its goal of keeping warming well below a 2°C increase above pre-industrial levels.

Last month, the company signed a letter to President Trump supporting continued U.S. participation in the Paris Climate Agreement.

BP has adopted some modest measures to reduce greenhouse gas emissions from its internal operations. The company has set a cost assumption of $40 per tonne of CO2-equivalent for larger projects in industrialized countries, but it is not clear whether BP applies the price to all components of the supply chain.

The company has undertaken efforts to reduce emissions as part of the “Zero Routine Flaring by 2030” pledge, reports annually on low-carbon research and development, and offers a limited breakdown of greenhouse gas emissions from direct operations and purchased electricity, steam, and heat for a year.

Yet BP has no company-wide plan for reducing heat-trapping emissions in line with the temperature goals set by the Paris Climate Agreement. BP’s April 2017 Sustainability Report does little to address BP’s long-term planning for a low-carbon future. CEO Bob Dudley continues to insist that “we see oil and gas continuing to meet at least half of all demand for the next several decades.”

BP’s Energy Outlook webpage confirms that the company’s “Most Likely” demand forecasts, plans for capital expenditure, and strategic priorities plan on a greater-than-3°C global warming scenario. BP also fails to provide a corporate remuneration policy that incentivizes contributions toward a clean energy transition (read ShareAction’s thorough and thoughtful analysis of BP’s remuneration policy here).

We look forward to hearing how BP responds to shareholder questions about the misalignment of its business plans and executive incentives with its stated commitment to keeping global temperature increase well below 2°C.

4) When will BP advocate for fair and effective climate policies?

BP 2016 Score: Good

BP consistently calls for a government carbon policy framework, including a global price on carbon, and touts its membership in the Carbon Pricing Leadership Coalition.

The question here is simple: when will BP identify specific climate-related legislation or regulation that it supports, and advocate publicly and consistently for those policies?

We will be awaiting answers from BP’s leadership at tomorrow’s AGM.

Three Reasons Congress Should Support a Budget Increase for Organic Agriculture Research

UCS Blog - The Equation (text only) -

Recent headlines about the US Department of Agriculture’s leadership and scientific integrity have been unsettling, as have indications that the Trump administration intends to slash budgets for agriculture and climate research and science more generally. But today there’s a rare piece of good news: a bipartisan trio in Congress has introduced legislation that would benefit just about everyone—farmers and eaters, scientists and food system stakeholders, rural and urban Americans. Not only that, but the new bill promises to achieve these outcomes while maintaining a shoestring budget.

Organic dairy producers need sound science to be able to make informed decisions about forage production for their herds. At this on-farm demonstration at the Chuck Johnson farm in Philadelphia, Tennessee, Dr. Gina Pighetti and her research team from the University of Tennessee and the University of Kentucky grow organic crimson clover (right) and wheat to develop best management practices that will help farmers make production decisions. Source: University of Tennessee.

Representatives Chellie Pingree (D-ME), Dan Newhouse (R-WA), and Jimmy Panetta (D-CA) are sponsoring the Organic Agriculture Research Act of 2017, which calls for an increase in mandatory funding for a small but crucial USDA research program, the Organic Research Extension Initiative (OREI). Congress allocated this program a mere $20 million annually in both the 2008 and 2014 Farm Bills, but that small investment stretched across the country with grants awarded in more than half of all states. The new bill proposes to increase that investment to $50 million annually in future years.

While a $30 million increase to a $20 million program may seem like a lot, it is worth noting that these numbers are small relative to other programs. For example, the USDA recently announced that its flagship research program, the Agriculture and Food Research Initiative (AFRI), will receive $425 million this year (another piece of good news, by the way). And many R&D programs at other agencies have much higher price tags (e.g., the NIH will receive $34 billion this year). But the return on investment in agricultural research is very high, so this increase could do a lot of good.

Students at UC Davis, under the leadership of Charles Brummer, Professor of Plant Sciences, examine their “jalapeño popper” crop, a cross between a bell pepper and a jalapeño pepper. This public plant breeding pipeline supports organic farming systems by designing new vegetable and bean cultivars with the particular needs of the organic farming community in mind. Source: UC Davis.

While there are many reasons we are excited about a possible budget boost for the Organic Research Extension Initiative (OREI), I’ll highlight just three:

1)  We need more agroecologically-inspired research. More than 450 scientists from all 50 states have signed our expert statement calling for more public support for agroecological research, which is urgently needed to address current and future farming challenges that affect human health, the environment, and urban and rural communities. This call is built upon agroecology’s successful track record of finding ways to work with nature rather than against it, producing nutritious food while also boosting soil health, protecting our drinking water, and more. Unfortunately, the diminishing overall support for public agricultural research is particularly problematic for agroecology, because this research tends to reduce farmers’ reliance on purchased inputs, which means that gaps in funding are unlikely to be filled by the private sector. So, programs that direct public funding more toward agroecological research and practice are particularly needed, and OREI is one of these.

2)  When it comes to agroecology, this program is a rock star. The OREI funds some of the most effective federal agricultural research, especially around ecologically-driven practices that can protect our natural resources and maintain farmer profits. One highlight of the program is that it stresses multi-disciplinary research; according to the USDA, “priority concerns include biological, physical, and social sciences, including economics,” an approach that can help ensure that research leads to farming practices that are both practical and scalable. Importantly, this program also targets projects that will “assist farmers and ranchers with whole farm planning by delivering practical information,” making sure that research will directly and immediately benefit those who need it most. But it’s not just the program description that leads us to believe this is a strong investment. In fact, our own research on competitive USDA grants found that OREI is among the most important programs for advancing agroecology. And this in-depth analysis of USDA’s organic research programs by the Organic Farming Research Foundation further highlighted the vital importance of OREI.

3) Research from programs like OREI can benefit all farmers, while focusing on practices required for a critical and growing sector of US agriculture. The OREI program is designed to support organic farms first and foremost, funding research conducted on certified organic land or land in transition to organic certification. However, the research from OREI can benefit a much wider group of farmers as well, as such results are relevant to farmers of many scales and farming styles, organic or not. Of course, directing funds to support organic farmers makes lots of sense, since this sector of agriculture is rapidly growing and maintaining high premiums that benefit farmers. But it’s important to recognize that the benefits of the research extend far beyond the organic farming community.

For all of the reasons listed above, this bill marks an important step in the right direction. It is essential that the next farm bill increases support for science-based programs that will ensure the long-term viability of farms while regenerating natural resources and protecting our environment. Expanding the OREI is a smart way forward.

 

One of Many Risks of the Regulatory Accountability Act: Flawed Risk Assessment Guidelines

UCS Blog - The Equation (text only) -

Tomorrow, the Senate will begin marking up Senator Rob Portman’s version of the Regulatory Accountability Act (RAA), which my colleague Yogin wrote a primer about last week. This bill is an attempt to freeze the regulatory process, or otherwise tie up important science-based rules in years of judicial review, by imposing excessive burdens on every federal agency.

Among the bill’s most egregious affronts to the expertise at federal agencies is the provision ordering the White House Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) to establish guidelines for “risk assessments that are relevant to rulemaking,” including criteria for how best to select studies and models, evaluate and weigh evidence, and conduct peer reviews. This requirement on its own is reason enough to reject this bill, let alone the long list of other glaring issues that together would fundamentally alter the rulemaking process.

The RAA is a backdoor attempt at giving OIRA another chance to try and prescribe standardized guidelines for risk assessment that would apply to all agencies, even though each agency conducts different types of risk assessments based on statutory requirements.

OIRA should not dole out science advice

The way in which agencies conduct their risk assessments should be left to the agencies and scientific advisory committees, whether it is to determine the risks of a pesticide to human health, the risks of a plant pest to native plant species, the risks of a chemical to factory workers, or the risks of an endangered species determination to an ecosystem. Agencies conduct risk assessments that are specific to the matter at hand; therefore, OIRA guidance prescribing a one-size-fits-all risk assessment methodology would not be helpful for agencies and could even tie up scientifically rigorous risk assessments in court if the guidelines are judicially reviewable.

OIRA already tried writing guidance a decade ago, and it was a total flop. In January 2006, OMB released its proposed Risk Assessment Bulletin which would have covered any scientific or technical document assessing human health or environmental risks. It’s worth noting that OIRA’s main responsibilities are to ensure that agency rules are not overlapping in any way before they are issued and to evaluate agency-conducted cost-benefit analyses of proposed rules. Therefore OIRA’s staff is typically made up of economists and lawyers, not individuals with scientific expertise appropriate for determining how agency scientists should conduct risk assessments.

OMB received comments from agencies and the public and asked the National Academy of Sciences’ National Research Council (NRC) to conduct an independent review of the document. That NRC study gave the OMB a failing grade, calling the guidance a “fundamentally flawed” document which, if implemented, would have a high potential for negative impacts on the practice of risk assessment in the federal government. Among the reasons for their conclusions was that the bulletin oversimplified the degree of uncertainty that agencies must factor into all of their evaluations of risk. As a result, the document that OIRA issued a year later, under Portman’s OMB, was heavily watered down. In September 2007, OIRA and the White House Office of Science and Technology Policy (OSTP) released a Memorandum on Updated Principles for Risk Analysis to “reinforce generally-accepted principles for risk analysis upon which a wide consensus now exists.”

Luckily, in this case, the OMB called upon the National Academies for an objective review of the policy, which resulted in final guidelines that were far less extreme. As the RAA is written, it does not require that same check on OIRA’s work, which means that we could end up with highly flawed guidelines with little recourse. And the Trump administration’s nominee for OIRA director is Neomi Rao, a law professor whose work at the George Mason University Law School’s Center for the Study of the Administrative State emphasizes the importance of the role of the executive branch, while describing agency policymaking authority as “excessive.” I think it’s fair to say that under her leadership, OIRA will not necessarily scale back its encroachment into what should be expert-driven policy matters.

Big business is behind the RAA push

An analysis of OpenSecrets lobbying data revealed that trade associations, PACs and individuals linked to companies that have lobbied in support of the RAA also contributed $3.3 million to Senator Rob Portman’s 2016 campaign. One of the most vocal supporters of the bill is the U.S. Chamber of Commerce, whose support for the bill rests on the assumption that we now have a “federal regulatory bureaucracy that is opaque, unaccountable, and at times overreaching in its exercise of authority.” Yet this characterization actually sounds a lot to me like OIRA itself, which tends to be fairly anti-regulatory and non-transparent, and has a history of holding up science-based rules for years without justification (like the silica rule). Senator Portman’s RAA would give OIRA even more power over agency rulemaking by tasking the agency with writing guidelines on how agencies should conduct risk assessments and conveniently not requiring corporations to be held to the same standards.

When OIRA tried to write guidelines for risk assessments in 2006, the Chamber of Commerce advocated for OIRA’s risk assessment guidelines to be judicially reviewable so they could be “adequately enforced,” claiming that agencies use “unreliable information to perform the assessments,” which can mean that business and industry are forced to spend millions of dollars to remedy those issues. It is no wonder, then, that the Chamber would be so supportive of the RAA, which would mandate OIRA guideline development for risk assessments, possibly subject to judicial review. OIRA issuing guidelines is one thing, but making those guidelines subject to judicial review ramps up the egregiousness of this bill. All sorts of troubling scenarios could be imagined.

Take, for example, the EPA’s human health risk assessment for the pesticide chlorpyrifos, a chemical that has been linked to developmental issues in children; the assessment is just one study that will inform the agency’s registration review of the chemical. The EPA sought advice from the FIFRA Scientific Advisory Panel on using a physiologically-based pharmacokinetic and pharmacodynamic (PBPK/PD) model to better determine the chemical’s effects on a person based on age or genetics and to predict how different members of a population would be affected by exposure. The agency found sufficient evidence that neurodevelopmental effects may occur at exposure levels well below those previously measured.

If OIRA were to produce risk assessment guidelines that were judicially reviewable, the maker of chlorpyrifos, Dow Chemical Company, could sue the agency on the grounds that it did not use an appropriate model or consider the best available studies, or that its peer review was insufficient. This would quickly become a way for industry to inject uncertainty into the agency’s process and tie up regulatory decisions about its products in court for years, delaying important public health protections. A failure to ban a pesticide like chlorpyrifos based on inane risk assessment criteria would allow more incidents of completely preventable acute and chronic exposure, like the poisoning of 50 farmworkers in California from chlorpyrifos in early May.

“Risk assessment is not a monolithic method”

A one-size fits all approach to government risk assessments is a bad idea, plain and simple. As the NRC wrote in its 2007 report:

Risk assessment is not a monolithic process or a single method. Different technical issues arise in assessing the probability of exposure to a given dose of a chemical, of a malfunction of a nuclear power plant or air-traffic control system, or of the collapse of an ecosystem or dam.

Prescriptive guidance from OIRA would quash the diversity and flexibility that different agencies are able to exercise depending on the issue, and would discourage the development of new models and technologies that best capture risks. David Michaels, head of OSHA during the Obama Administration, wrote in his book Doubt Is Their Product that regulatory reform, and in this case the RAA, offers industry a “means of challenging the supporting science ‘upstream.’” Its passage would allow industry to exert more influence in the process by potentially opening up agency science to judicial review. Ultimately, the RAA is a form of regulatory obstruction that would make it more difficult for agencies to issue evidence-based rules by blocking the use of science in some of the earliest stages of the regulatory process.

The bill will be marked up in the Senate Homeland Security and Governmental Affairs Committee tomorrow, and then will likely move on to the floor for a full Senate vote in the coming months. Help us fight to stop this destructive piece of legislation by tweeting at your senators and telling them to vote no on the RAA today.

Reduce Risk, Increase Benefits: More Energy Progress for Massachusetts?

UCS Blog - The Equation (text only) -

A new analysis shows how strengthening a key Massachusetts energy policy can create jobs, cut pollution, and manage risks. Here are 5 questions (and answers) about what’s at stake and what the study tells us.

The study, prepared for the Northeast Clean Energy Council (NECEC) in partnership with Mass Energy Consumers Alliance, was carried out by two leading Massachusetts-based energy consulting firms, Synapse Energy Economics and Sustainable Energy Advantage (SEA). (UCS was part of an advisory working group providing input on assumptions and analytical approaches.)

An Analysis of the Massachusetts Renewable Portfolio Standard looks at what kind of benefits could come from strengthening that key policy. And the results look pretty attractive.

Why do we want more renewables?

First, back to basics: Why do we want renewables?  Turns out there are a lot of problems that renewables are a great answer to, from financial risks associated with potentially volatile fuel pricing (think natural gas), to pollution and associated negative public health impacts, to not enough jobs.

That was where we were coming from when we did a study last year about how we could cut our risk of overreliance on natural gas, make progress on climate change, and bring about other benefits. That study showed that a combination of policies to drive renewables could do all that, and at really reasonable costs.

How do we get renewable energy?

So if renewables are a good thing, how do we make them happen?

One of the most important policies for driving renewable energy in the US over the last two decades has been the renewable portfolio standard (RPS; also known as the renewable electricity standard). Under RPSs, utilities have targets for how much renewable energy they need to get for their customers by certain dates, and then let the market figure out the actual technology mix (wind, solar, etc.).

And Massachusetts has a particular leadership role for this particular policy. The Bay State was the first to put in place a state-wide RPS, and now 29 states have RPSs. They work, and they can do even more: a recent analysis by two premier national energy labs found good benefits from stronger RPSs: less pollution, potential savings, more jobs.

States can complement RPSs with policies aimed at particular technologies or approaches. Massachusetts has done that in a big new energy law that incorporated some of the policies we modeled in our study.

More clean energy leadership to come? (Credit: Tony Hisgett)

The 2016 Act to Promote Energy Diversity requires the state’s utilities to enter into cost-effective long-term contracts for renewable energy totaling 15-20% of the state’s electricity demand. It also requires utilities to go after offshore wind, to kick-start a major new source of clean energy, for another 10-15%.

Is too much of a good thing a bad thing?

In our 2016 study, before the legislation happened, we modeled versions of those policies coupled with a strengthening of the RPS. Why is that increase important? Because much of that renewable energy (not including large hydro, which is allowed to compete for those contracts) would count toward meeting the Massachusetts RPS.

Alas, while the RPS increase was supported by the state senate, it didn’t make it into the final bill. Without that piece, we’re on track to end up with more renewable energy credits (RECs) than the policy calls for (each megawatt-hour of renewable energy is worth one REC, and that’s what utilities use to show that they’ve complied with the RPS).

So are too many RECs a bad thing? No—except that if supply outpaces demand, REC prices fall (sometimes precipitously). And we need REC prices to be high enough to not only keep existing renewable energy projects online, but also drive new renewables. RPSs work, and part of keeping them working is keeping them just out in front of the market.

So, why this study?

That’s what makes this new study so important. To use the RPS to best effect for Massachusetts, we need to understand what level of RPS will be enough to keep the market for renewables strong across the board, to complement the long-term contracts for land-based renewables and offshore wind under the Energy Diversity Act.

The study looked at a base case and compared it with a range of possible approaches to keeping REC prices driving renewables by increasing RPS demand in Massachusetts (and in Connecticut, as the next biggest electricity market in New England). Specifically, Synapse/SEA modeled the Massachusetts RPS increasing 2% or 3% per year (instead of the current 1%), combined in some cases with a continuation of the Connecticut RPS’s 1.5%-per-year growth past its current 2020 end date.
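To make the difference between those escalation rates concrete, here is a minimal sketch of how fixed percentage-point increases compound by 2030. The 2017 starting value of 12% is an assumption for illustration only and is not taken from the Synapse/SEA study.

```python
# Illustrative only: project RPS targets under different annual increases.
# The starting year and percentage are assumptions, not figures from the study.

def rps_targets(start_year, start_pct, annual_increase_pct, end_year):
    """Return {year: RPS target in %} for a fixed percentage-point increase per year."""
    return {
        year: start_pct + annual_increase_pct * (year - start_year)
        for year in range(start_year, end_year + 1)
    }

for rate in (1, 2, 3):  # current 1%/yr vs. the modeled 2%/yr and 3%/yr cases
    target_2030 = rps_targets(2017, 12.0, rate, 2030)[2030]
    print(f"{rate}%/yr escalation -> {target_2030:.0f}% of electricity sales by 2030")
```

Under these assumptions, the 2030 target rises from 25% at the current escalation rate to 38% or 51% in the modeled cases; that added REC demand is what keeps prices strong enough to drive new projects.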

They also looked at what would happen under those scenarios if natural gas prices were to increase, and what it might mean to move more quickly to electric vehicles.

Can we drive more renewables, and what do we get from them?

So what does all this mean?

Renewable energy demand – What would it mean for the REC supply-demand picture—specifically, would there be enough demand because of the RPS to drive the additional renewables we know we need?

Here’s what the analysis found:

Synapse/Sustainable Energy Advantage

As the graph shows, the RPS base case wouldn’t be expected to drive additional renewables beyond that required under the offshore wind and other long-term contracting provisions of the Energy Diversity Act. The higher RPS targets, on the other hand, could do the trick in terms of keeping REC prices able to drive more renewables.

Global-warming pollution reduction – How would that extra growth in renewable energy match our needs, in terms of the requirements under the state’s landmark Global Warming Solutions Act (GWSA), for example?

Good news there, too:

Synapse/Sustainable Energy Advantage

As the graph shows, if the increase in the RPS is paired with more vehicle electrification, it gets us most of the way to where we’ll probably need to be in 2030 based on the GWSA.

Electricity price and bill impacts – What about the finances? While getting the RPS in balance means that REC prices will go up, those increases are partially offset by decreases in wholesale electricity prices because of the added renewables. For the average Massachusetts household, the analysts project that the overall effect would be an electricity bill increase of $0.15 to $2.17 per month.

More renewables can also mean less natural gas, and a corresponding drop in risks from natural gas overreliance, which would be particularly important if gas prices were to rise:

Between 2018 and 2030, increasing the diversity of New England’s electricity mix by adding more renewables and reducing reliance on natural gas could save New England up to $2.1 billion in wholesale energy costs, in the face of a higher natural gas price.

Job creation – What would these policies do for employment? One (other) great thing about renewable energy is that it means jobs. In this case, even when taking into account reduced jobs in the fossil fuel sector, it could mean something like 37,000 extra jobs (job-years) between 2018 and 2030—on top of jobs created by the requirements under the 2016 Energy Diversity Act.

Seeking harmony

The overall conclusion of the study is that balance is better:

Two of Massachusetts’ key renewable energy policies—the RPS and long-term contracting authorizations—require harmonization in order for the Commonwealth to meet its long-term clean energy and climate goals.

The numbers suggest that getting that “harmonization” right would bring a load of benefits to Massachusetts and the region, and provide extra oomph for a state on the move toward a truly clean energy future.

See here for the study press release.

ConocoPhillips Shareholders to Consider Climate-Related Lobbying and Executive Perks

UCS Blog - The Equation (text only) -

Today ConocoPhillips holds a virtual annual shareholders’ meeting, where the company will face two significant climate-related resolutions. These resolutions intersect with some of the key findings and recommendations of UCS’s 2016 report The Climate Accountability Scorecard. ConocoPhillips responded to the report shortly after its release, and UCS has been engaging with company officials over the company’s climate-related positions and actions since then. We’ll be following the shareholders’ meeting with keen interest.

1) Lobbying disclosure

This proposal, filed by Walden Asset Management, calls for ConocoPhillips to report on its direct and indirect lobbying expenditures and grassroots lobbying communications at the federal, state, and local levels. It received the support of one-quarter of ConocoPhillips shareholders last year.

The resolution highlights ConocoPhillips’s representation on the Board of the US Chamber of Commerce (US Chamber) and the lack of transparency about the company’s payments to the US Chamber—including the portion of those payments used for lobbying. Last year alone, the US Chamber spent $104 million on lobbying.

While the US Chamber claims to represent the interests of the business community, few companies publicly agree with the group’s controversial positions on climate change. Last month, a range of civil society organizations urged Disney, Gap, and Pepsi to withdraw from the US Chamber because of the inconsistency between their positions on climate change and the US Chamber’s lobbying on the issue.

Today the US Chamber is also reportedly hosting an event to highlight a new oil and gas industry-sponsored report attacking the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD), on which UCS submitted comments.

Chaired by former New York City Mayor and businessman Michael Bloomberg, the TCFD has conducted an open, collaborative process through which it is recommending consistent, comparable, and timely disclosures of climate-related risks and opportunities in public financial filings.

Implementation of these common-sense, mainstream recommendations by companies across all sectors of the economy—including transparent discussion of the business implications of a 2° Celsius scenario—would begin to fill gaps in existing disclosures and provide necessary data to investors and other stakeholders.

Indeed, some companies are already following such guidelines, and the broad range of respondents to the TCFD’s public consultation process were generally supportive of the recommendations. However, the IHS Markit report (funded by ConocoPhillips along with BP, Chevron, and Total SA) claims that adoption of the TCFD recommendations could obscure material information, create a false sense of certainty about the financial implications of climate-related risks, and distort markets.

This effort by the fossil fuel industry and the US Chamber to resist transparency is alarming, particularly in light of the oil and gas companies’ limited disclosure of physical and other climate-related risks to investors and in light of evidence that climate change poses financial risks to the fossil fuel industry. (And this pushback against corporate transparency is particularly alarming under a Trump administration that has close ties to the fossil fuel industry and has shown no inclination to hold these companies accountable).

ConocoPhillips’s affiliation with the US Chamber contributed to its “Poor” score in the area of Renouncing disinformation on climate science and policy in UCS’s Climate Accountability Scorecard.

ConocoPhillips is also represented on the Boards of the American Petroleum Institute (API) and the National Association of Manufacturers (NAM), two other trade associations that UCS has found to spread disinformation on climate science and/or block climate action. Both API and NAM are named defendants in Juliana v. United States, a lawsuit through which 21 young people supported by Our Children’s Trust are seeking science-based action by the U.S. government to stabilize the climate system.

UCS recommends that ConocoPhillips use its role as chair of API and its leverage as a leader within NAM and the US Chamber to demand an end to the groups’ disinformation on climate science and policy, and speak publicly about these efforts.

The company has shown some discretion in managing its public policy advocacy: ConocoPhillips confirmed in 2013 that it was no longer a member of the American Legislative Exchange Council (ALEC), it provides good disclosure of its political spending, and it has extensive policies and oversight related to political activities in general.

2) Executive compensation link to 2 degrees transition


Another resolution, filed by the Unitarian Universalist Association (on whose Socially Responsible Investing Committee I serve), calls for a report to shareholders on the alignment of ConocoPhillips’s executive compensation incentives with a low-carbon future. Proponents are seeking information, for example, on the ways the company’s incentive compensation programs for senior executives link the amount of incentive pay to the volume of fossil fuel production or exploration and/or encourage the development of a low-carbon transition strategy.

In March, ConocoPhillips CEO Ryan Lance expressed support for the U.S. staying in the Paris Climate Agreement. However, in UCS’s Climate Accountability Scorecard, ConocoPhillips ranked “poor” in the area of Planning for a world free from carbon pollution.

On the positive side, the company provides details about efforts to improve energy efficiency, reduce natural-gas flaring, and reduce the intensity of emissions from oil sands. It uses carbon scenarios, including a low-carbon scenario, to evaluate its current portfolio and investment options. And ConocoPhillips has set limited, short-term emissions reduction targets—but not in the service of the Paris Climate Agreement goal of keeping warming well below a 2°C increase above pre-industrial levels.

Building on discussions at today’s annual shareholders’ meeting, UCS looks forward to further dialogue with ConocoPhillips over its response to our Scorecard findings and recommendations, toward improvements in the company’s climate-related positions and actions.


Five Ways to Move Beyond the March: A Guide for Scientists Seeking Strong, Inclusive Science

UCS Blog - The Equation (text only) -

The March for Science took place April 22 in locations all over the world — an exciting time for scientist-advocates and a proud moment for the global scientific community.

As we reflect on the March, we must also reflect on the fact that the organization of the March for Science 2017 has been a microcosm of the structural, systemic challenges that scientists continue to face in addressing equity, access, and inclusion in the sciences.

Others have written eloquently regarding the steep learning curve that the March for Science Washington, DC organizers faced in ensuring an inclusive and equitable March. The organizers’ initial missteps unleashed a backlash on social media, lambasting their failure to design a movement for all scientists and exhorting them to consider more deeply the ways in which science interacts with the varying experiences of language, race, economic status, ableness, gender, religion, ethnic identity, and national origin.

The March has taken steps to correct these initial missteps, choosing to engage directly with the issue and consult with historically excluded scientists to better understand and examine the ways in which science interacts with the ongoing political reality of bias in society. It must be noted, however, that improvements like their new Diversity and Inclusion Principles, though an excellent initial step, still mask the unheralded efforts of multiple scientists of color to correct the narrative.

At the core of the controversy, and perhaps underlying its intellectual origins, is the popular fiction among scientists that Science can (or should) be apolitical.

Science is never apolitical.

It is, inherently, a system of gaining knowledge that has been devised by, refined by, practiced by, misused by, and even (at times) weaponized by human beings — and as human beings, we are inherently political.

Therefore science is not a completely neutral machine, functioning of its own volition and design; but rather a system with which we tinker and adjust; which we tune to one frequency or the other; and by dint of which we may or may not determine (or belatedly rationalize) the course of human action.

And so when we understand that science is not apolitical, we are freed to examine the biases, exclusions, and blind spots it may create — and then correct for them. In doing so, we can improve ourselves, broaden the inclusivity of our work (and potentially improve its usefulness and/or applicability), and advance the quest of scientific inquiry: to find the unwavering truths of the universe.

The March for Science organizers have come a long way in recognizing the importance of diversity, equity, and inclusion in science, but what comes next? How can scientists living in this current political moment engage in individual and collective action (hint: it’s not just about calling your representatives)? What can we do?

  1. Study the history and culture of science. As scientists, we are natural explorers and inherently curious. We ought to direct some of that curiosity toward ourselves; toward better understanding where we come from, who we are, and why we think the way we do. Historians of science and those engaged in social study of science have demonstrated how science is a human enterprise, influenced by culture and politics of specific times and places. These studies have shown how blind spots — in language, in culture, in worldview, in political orientation—can change our results, skew our data, or put a foot on the scales of measurement. At times, these biases have caused great harm, and at others have been fairly benign—but these analyses together all point out how science is more robust for recognizing sociocultural impacts on its practice.
  2. Understand our own political reality, and seek to understand the realities of others. Take some time — even ten minutes a week — to ask yourself if your actions reflect your beliefs. What beliefs do you hold dear, both as a scientist and as a person? How do they influence the way you think about, study, and conduct science? What do you assume to be true about the world? How does that impact the way in which you frame your scientific questions? How does it influence the methods, study sites, or populations you choose? How does the political reality which you inhabit—and its associated privileges and problems—direct your attention, shape your questions, or draw you to one discipline or the other? What presumptions do you make about people, about systems, or about the planet itself? What do you do, think, or feel when your assumptions are challenged? How willing are you to be wrong?
  3. Open the discourse. Inclusive science won’t happen by accident—it will happen because we work to eliminate the sources of bias in our systems and structures that list the ship toward one side or the other. And the only way we can learn about these sources of bias is to (1) acknowledge their existence, then (2) begin to look for them. Talk to other scientists—at conferences, on Twitter, on Facebook, on reddit, on Snapchat, through email chains, through list-servs—any way you can. Listen for the differences in your perspectives and approaches. Ponder on the political reality from which they might originate. Ask questions, and genuinely want to hear (and accept) the answers. Then go back and reconsider the questions regarding your political reality and how you could now approach your science based on what you have learned of others. As a clear example, western science has consistently overlooked the already-learned lessons of indigenous science and disregarded the voiced experiences of indigenous researchers. Greater recognition of—and collaboration with—indigenous scientists has the potential to greatly speed and improve advances in our work. Opening the discourse is a first step toward ameliorating this deficit in our learning.
  4. Collaborate, collaborate, collaborate. Reach out to scientists who do not look like you, do not speak your dialect, do not come from your country, do not share your values or religion, do not frame questions in the same way, and do not hold the same theories precious. Share equally in the experience of scientific discovery. Choose a journal that will assign multiple-first-authorships. Publish open-access if you can, and share directly if you can’t.
  5. Choose to include. Take responsibility at all stages—in the planning for science, the choosing of methods, the hiring of staff, the implementation—for creating strong, inclusive scientific teams and systems. Be aware of how your own political reality affects your scientific design, planning, or implementation. Check your unrecognized presumptions or biases. Challenge yourself to ask your question through a different lens or through different eyes. Choose to participate in the improvement and refinement of our shared scientific machine.

Ignoring politics doesn’t insulate us from it—if scientists want to be champions for knowledge, then we have to defend our practice from the human tendencies that threaten to unravel it—exclusion, tribalism, competition, and bias. Science can’t be apolitical, but it can be a better path to knowledge—so let’s make it happen.

 

Alexandra E. Sutton Lawrence is an Associate in Research at the Duke Initiative for Science & Society, where she focuses on analyzing innovation & policy in the energy sector. She is also a doctoral candidate in the Nicholas School of the Environment, a member of the Society for Conservation Biology’s Equity, Inclusion and Diversity Committee, and a former member of the global governing board for the International Network of Next Generation Ecologists (INNGE).

 

 

Dr. Rae Wynn-Grant is a conservation biologist with a focus on large carnivore ecology in human-modified landscapes, with a concurrent interest in communicating science to diverse audiences. Dr. Wynn-Grant is the deputy chair of the Equity, Inclusion, and Diversity committee for the Society for Conservation Biology.

 

 

 

Cynthia Malone is a conservation scientist and social justice organizer, whose intersectional, trans-disciplinary research ranges from primate ecology to human wildlife conflict across the tropics, including Indonesia and Cameroon. She is a cofounder and current co-chair of the Society of Conservation Biology’s Equity, Inclusion, and Diversity Committee.

 

 

Dr. Eleanor Sterling has interdisciplinary training in biology and anthropology and has over 30 years of field research and community outreach experience with direct application to biodiversity conservation in Africa, Asia, Latin America, and Oceania. Dr. Sterling is active in the Society for Conservation Biology (SCB), having served for 12 years on the SCB Board of Governors; she currently co-chairs the SCB’s Equity, Inclusion, and Diversity Committee, which she co-founded. She also co-founded the Women in the Natural Sciences chapter of the Association for Women in Science in New York City.

 

Martha Groom is a Professor in the School of Interdisciplinary Arts and Sciences at the University of Washington Bothell and the College of the Environment at the University of Washington.  Her work focuses on the intersections of biodiversity conservation and sustainable development, and on effective teaching practice. A member of the SCB Equity, Inclusion and Diversity Committee, she is also a leader of the Doris Duke Conservation Scholars Program at the University of Washington, a summer intensive program for undergraduates aimed at building truly inclusive conservation practice.

 

Dr. Mary Blair is a conservation biologist and primatologist leading integrative research to inform conservation efforts, including spatial priority-setting and wildlife trade management. She is the President of the New York Women in Natural Sciences, a chapter of the Association for Women in Science, and a member of the Society for Conservation Biology’s Equity, Inclusion, and Diversity Committee.

 

 

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Chevron, ExxonMobil Face Growing Investor Concerns About Climate Risk

UCS Blog - The Equation (text only) -

In preparation for their annual meetings on May 31, both Chevron and ExxonMobil opposed every climate-related resolution put forth by their shareholders. In a previous post, I wrote that Chevron continues to downplay climate risks while attempting to convince shareholders that the company’s political activities—which include support for groups that spread climate disinformation—are in shareholders’ long-term interests.

Now the proponents of a shareholder resolution calling for Chevron to publish an annual assessment of long-term impacts of climate change, including 2°C scenarios, have withdrawn it from consideration at the annual meeting.

In a carefully calibrated statement, investors Wespath and Hermes noted that the report “Managing Climate Change Risks: A Perspective for Investors” lacks a substantive discussion of Chevron’s strategies, but accepted the report as a first step and decided to give the company more time to explain how climate change is factored into its strategic planning.

Similar resolutions are gaining momentum with shareholders of utility and fossil fuel companies this spring, receiving more than 40% support at AES Corporation, Dominion Resources Inc., Duke Energy Corporation, and Marathon Petroleum Corporation. Last Friday, a majority of Occidental Petroleum Corporation shareholders voted in favor of a resolution (also filed by Wespath) calling on the company, with Board oversight, to “produce an assessment of long-term portfolio impacts of plausible scenarios that address climate change.”

ExxonMobil shareholders will vote on a comparable proposal in two weeks. In 2016, a resolution urging the company to report on how its business will be affected by worldwide climate policies received the highest vote ever (38%) from company shareholders in favor of a climate change proposal.

The 2°C scenario analysis proposal, co-filed by the Church Commissioners for England and New York State Comptroller Thomas P. DiNapoli as Trustee of the New York State Common Retirement Fund, is on the agenda again this year, and a coalition of institutional investors with more than $10 trillion in combined assets under management is pushing for its adoption. (Look for a forthcoming blog on ExxonMobil’s 2017 annual shareholders’ meeting).

Chevron has bought some time from shareholders, but the company would be wise to improve its disclosures in response to growing investor concerns about the potential business, strategic, and financial implications of climate change. Instead, the company (along with BP, ConocoPhillips, and Total SA) funded a new report criticizing the recommendations of the Task Force on Climate-Related Financial Disclosures (TCFD—see below for additional details).

The US Chamber of Commerce will roll out the oil and gas company-sponsored report at an event this week. While the US Chamber claims to represent the interests of the business community, few companies publicly agree with the group’s controversial positions on climate change.

Meanwhile, carbon asset risk is still on the agenda for Chevron’s shareholders this month: the proposal on transition to a low-carbon economy filed by As You Sow will go forward to a vote. As UCS closely monitors Chevron’s and ExxonMobil’s communications and engagement with concerned shareholders over their climate-related positions and actions, our experts and supporters will be stepping up the pressure on both companies in the lead-up to their annual meetings at the end of May.

North Korea’s Missile in New Test Would Have 4,500 km Range

UCS Blog - All Things Nuclear (text only) -

North Korea launched a missile in a test early in the morning of May 14, North Korean time. If the information that has been reported about the test is correct, the missile has a considerably longer range than North Korea’s current missiles.

Reports from Japan say that the missile fell into the Sea of Japan about 700 km (430 miles) from its launch site, after flying for about 30 minutes.

A missile with a range of 1,000 km (620 miles), such as the extended-range Scud, or Scud-ER, would only have a flight time of about 12 minutes if flown on a slightly lofted trajectory that traveled 700 km.

A 30-minute flight time would instead require a missile that was highly lofted, reaching an apogee of about 2,000 km (1,240 miles) while splashing down at a range of 700 km. If that same missile was flown on a standard trajectory, it would have a maximum range of about 4,500 km (2,800 miles).

New press reports are in fact giving a 2,000 km apogee for the test.
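To see roughly how a 2,000 km apogee and a 30-minute flight imply a range of several thousand kilometers, here is a back-of-the-envelope sketch using vacuum, flat-Earth ballistics. This is only an illustrative approximation, not the trajectory modeling behind the estimates in this post; it ignores Earth’s curvature and rotation, atmospheric drag, the boost phase, and the 700 km of downrange travel, which is why it lands somewhat below the 4,500 km figure.

```python
import math

G = 9.81  # m/s^2, surface gravity

def burnout_speed_from_apogee(apogee_m):
    """Burnout speed needed to coast up to a given apogee on a near-vertical shot."""
    return math.sqrt(2 * G * apogee_m)

def lofted_flight_time(apogee_m):
    """Approximate up-and-down flight time for that near-vertical shot."""
    return 2 * burnout_speed_from_apogee(apogee_m) / G

def flat_earth_max_range(apogee_m):
    """Maximum range at a 45-degree launch with the same burnout speed: R = v^2/g = 2 x apogee."""
    return burnout_speed_from_apogee(apogee_m) ** 2 / G

apogee = 2000e3  # reported apogee, about 2,000 km
print(round(burnout_speed_from_apogee(apogee)))    # ~6,260 m/s burnout speed
print(round(lofted_flight_time(apogee) / 60))      # ~21 minutes aloft
print(round(flat_earth_max_range(apogee) / 1000))  # ~4,000 km range
```

Accounting for Earth’s curvature and the falloff of gravity with altitude at these speeds pushes both the flight time and the maximum range up, toward the roughly 30 minutes observed and the roughly 4,500 km estimated above.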

Fig. 1  The black curve is the lofted trajectory flown on the test. The red curve is the same missile flown on a normal (MET) trajectory.

This range is considerably longer than the estimated range of the Musudan missile, which showed a range of about 3,000 km in a test last year. Guam is 3,400 km from North Korea. Reaching the US West Coast would require a missile with a range of more than 8,000 km. Hawaii is roughly 7,000 km from North Korea.

This missile may have been the new mobile missile seen in North Korea’s April 15 parade (Fig. 2).

Fig. 2 (Source: KCNA)

Shake, Rattle, and Rainout: Federal Support for Disaster Research

UCS Blog - The Equation (text only) -

Hurricanes, wildfires, and earthquakes are simply natural events—until humans get in their way. The resulting disasters are particularly devastating in urban areas, due to high concentrations of people and property. Losses from disasters have risen steadily over the past five decades, thanks to increased populations and urban development in high-hazard areas, particularly the coasts. There is also significant evidence that climate change is making weather-related events more frequent and more severe. As a result, it is more critical than ever that natural hazards research be incorporated into emergency planning decisions.

NOAA map denotes a range of billion dollar weather and climate disasters for 2016.

Improving emergency planning for the public’s benefit

A handful of far-sighted urban planning and management researchers, with particular support from the National Science Foundation, began studying these events during the 1970s. I participated in two of these research studies. Both afforded me clear opportunities to make a difference in people’s lives, a major reason I chose my field.

In 2000, a group of researchers from the University of New Orleans and Tulane University looked into the effects of natural hazards on two communities: Torrance, CA (earthquakes) and Chalmette, LA (hurricanes). This research focused on the oil refineries in both communities. We looked at emergency-management protocols, potential toxic effects due to refinery damage, and population impacts.

Hurricane Katrina photo of the oil spill in Chalmette, showing oil tanks and streets covered with an oil slick. Photo: US Environmental Protection Agency (http://www.epa.gov/katrina/images/oilspill_650.jpg)

Although California has a far better-developed emergency management system at all levels of government, Chalmette was less vulnerable than Torrance, due to the advance warning available for hurricanes. We also found that, though even well-informed homeowners tend to be less prepared than expected, renters are more vulnerable to disaster effects due to inadequate knowledge, dependence on landlords to secure their buildings, and generally lower socioeconomic status. Our findings had major implications for community-awareness campaigns, suggesting that more than disaster “fairs”, public flyers, and media attention are needed. We concluded with a series of recommendations for emergency managers and planners to improve their communities’ prospects.

This conjoint-hazard research also stimulated in-depth studies of the various aspects of what is now called “natech”. For example, a pair of researchers subsequently found that natural hazards were the principal cause of more than 16,000 releases of hazardous materials between 1990 and 2008—releases that could have been prevented with better hazard-mitigation planning and preparation. The implications for regulation of businesses that use hazardous substances are obvious. So are the ramifications for public outreach and disaster response.

The second NSF-funded study, conducted at Florida Atlantic University, began in the aftermath of Hurricane Katrina. Before starting, we scoured the literature for earlier research on housing recovery, only to discover that most of it dealt with either developing countries or one or two earthquake events in California.

We focused on housing recovery along the eight-state “hurricane coast” from North Carolina south and west to Texas. A case study of New Orleans quickly revealed the extent to which local circumstances, population characteristics, and state and federal policies and capacity impaired people’s ability to restore their homes and rebuild their lives. We assembled data on the socioeconomic, housing, and property-insurance characteristics of the first- and second-tier coastal counties, as well as information about state and local disaster-recovery policies and planning.

The research team then developed a vulnerability index that provides a numerical snapshot for each county, as well as a series of indicators that contributed to the overall rating. These indicators can be used to evaluate specific areas in need of improvement, such as building regulations, flood-protection measures, and reconstruction policies—for example, restrictions on temporary housing—as well as the extent to which each area contributes to overall vulnerability.
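As a rough illustration of the mechanics behind such an index, here is a minimal, hypothetical sketch: normalize each county-level indicator to a 0–1 scale, then combine them with weights. The indicator names, values, and weights below are invented for illustration and are not those used in the study.

```python
# Hypothetical sketch of a county-level vulnerability index:
# normalize each indicator to 0-1, then take a weighted average.

def normalize(values):
    """Rescale a {county: raw value} mapping to the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in values.items()}

def vulnerability_index(indicators, weights):
    """indicators: {indicator: {county: value}}; weights: {indicator: weight}, summing to 1."""
    scaled = {name: normalize(vals) for name, vals in indicators.items()}
    counties = next(iter(indicators.values())).keys()
    return {
        county: round(sum(weights[name] * scaled[name][county] for name in indicators), 2)
        for county in counties
    }

indicators = {
    "pct_mobile_homes":    {"County A": 12.0, "County B": 28.0, "County C": 20.0},
    "pct_uninsured_homes": {"County A": 30.0, "County B": 45.0, "County C": 40.0},
    "poverty_rate":        {"County A": 10.0, "County B": 22.0, "County C": 14.0},
}
weights = {"pct_mobile_homes": 0.4, "pct_uninsured_homes": 0.4, "poverty_rate": 0.2}
print(vulnerability_index(indicators, weights))
# {'County A': 0.0, 'County B': 1.0, 'County C': 0.53}
```

A real index also has to handle choices this sketch glosses over, such as how to weight indicators, how to treat indicators where a higher value means lower vulnerability, and how to validate the result against observed recovery outcomes.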

Science informs public policies

Although imperfect, indexes do provide policy-makers and stakeholders with valuable insights. Moreover, our analysis of post-disaster housing policies revealed the inadequacies in federal provision of temporary housing, the most critical need once community safety has been restored. The controversies surrounding FEMA’s travel-trailers—high cost, toxic materials, and haphazard placement—made national news. Now there is increasing recognition that small, pre-fabricated houses are a better approach, presuming that local jurisdictions allow them to be built regardless of pre-disaster construction regulations. More planners are engaged in looking at these regulations with disaster recovery in mind.

I’m proud of the research I’ve contributed to, but I’m even more gratified with the impacts of that research. Many of our recommendations have been directed at government actors, and it is through those actors that real differences are made in people’s day-to-day lives—and in their resiliency in the face of disaster. In an era of accelerating environmental change, helping communities endure will be ever more dependent on cutting-edge research of this kind. I’m grateful to have had the opportunity to participate in the endeavor.

 

Joyce Levine, PhD, AICP, received her PhD from the University of New Orleans. As an urban planner with thirty years of experience, she became interested in pre- and post-disaster planning by preparing her dissertation under hazard-mitigation guru Raymond J. Burby. She participated in two NSF-funded projects that focused on hazard-prone states — California and Louisiana in the first, and the southern “hurricane coast” in the second. She is the author of an extensive study of the housing problems in New Orleans reported by government and the media during the first six months after Katrina. Although she has retired from academia, she continues to follow disaster research in the U.S.

Science Network Voices gives Equation readers access to the depth of expertise and broad perspective on current issues that our Science Network members bring to UCS. The views expressed in Science Network posts are those of the author alone.

Graphic: NOAA

New Arctic Climate Change Report: Stark Findings Confront Secretary of State Tillerson Ahead of G7

UCS Blog - The Equation (text only) -

On May 11, US Secretary of State Rex Tillerson will cap two years of US chairmanship of the Arctic Council, presenting the progress made over that time and looking at likely future directions.

The forthcoming declaration by the Nordic ministers puts climate change front and center in the lead up to this week’s Arctic Ministerial meeting. The world is paying attention and will be looking for how the issue of climate change is addressed in the Arctic Council ministerial statement, including any signals indicating how Secretary Tillerson might characterize future US actions under the Paris Climate Agreement.

Video: SWIPA 2017: Snow, Water, Ice and Permafrost in the Arctic (AMAP, on Vimeo).

Stark findings

The Arctic Monitoring and Assessment Programme report, Snow, Water, Ice and Permafrost in the Arctic (SWIPA 2017), will be presented at the Arctic Ministerial meeting this week. It includes two stark findings. First, the least bad scenario for sea level rise has gotten a lot worse—what scientists thought was the best possible chance (i.e. the lower end of the confidence range) for a slow and manageable sea level rise under a fully implemented Paris Climate Agreement just got faster and higher. Second, the global costs stemming from the changes in the Arctic region over this century run into the trillions.

One reason we are in suspense is that there is one additional seat at the table—the proverbial seat occupied by the elephant in the room (i.e. evidence from the just-released science report requested by the Arctic Council).

It is likely that a binding agreement for continued scientific cooperation will be signed by the eight Arctic nations. Will the security implications of the SWIPA 2017 report be a cause for recalibration of the mix of investments in climate adaptation and mitigation (i.e. tackling the root causes of accelerating changes in the Arctic)?

The forthcoming Fairbanks Declaration from this tenth Arctic Ministerial may well reverberate around the world, with implications for the G7 leaders’ summit in Sicily at the end of May.

Arctic warning: Time to update adaptation plans for sea level rise

According to SWIPA 2017, Arctic land ice contributed around a third of global sea level rise between 2004 and 2010. Overall, two-thirds of global sea level rise is attributed to the transfer of water previously stored on land (as ice, underground, or in other reservoirs) to the ocean, and one-third to warming of the ocean.

Global Sea Level Rise Contributions 2004-2010

About one-third of global sea level rise is attributed to warming of the ocean and two-thirds to the transfer of water previously stored on land (as ice, underground, or in other reservoirs) to the ocean. Source: AMAP SWIPA 2017

The SWIPA 2017 report compares the “greenhouse gas reduction scenario” (known as RCP 4.5, which also serves as a proxy for an emissions scenario consistent with the long term goals of the Paris Climate Agreement) with the high emissions scenario (known as RCP 8.5 and used as a proxy for business as usual without a Paris Agreement).

It may be time to update adaptation plans to fully take into account more realistic projections of global sea level rise—SWIPA 2017 “estimates are almost double the minimum estimates made by the IPCC in 2013” for global sea level rise from all sources.

The difference between a fully implemented Paris Climate Agreement scenario and business as usual could not be more stark. The report declares that “the rise in global sea level by 2100 would be at least 52 cm (20 inches) for a greenhouse gas reduction scenario and 74 cm (29 inches) for a business-as-usual scenario.” This is the best estimate of the likely “locked-in” range that minimal, least-cost coastal adaptation must plan for, depending on the choices we make to reduce heat-trapping emissions and short-lived climate forcers.

Arctic slush fund: The high costs of displaced communities, melting, flooding, and burning in the Arctic

The Arctic matters to all of us: what happens in the Arctic does not stay in the Arctic. Case in point is the recent economic analysis presented in SWIPA 2017.

The cumulative global costs of changes underway in the Arctic would likely reach $7–$90 trillion (in US dollars) over 2010–2100. The costs include a wide range of climate change consequences, from Arctic infrastructure damage to communities exposed to sea level rise. For comparison, US gross domestic product in 2016 was $18.6 trillion in current dollars.

Implications for the G7 summit and Paris Climate Agreement

The Arctic Ministerial meeting May 11 is a chance for high level officials from the Arctic Council to meet and discuss progress in a setting historically noteworthy for peaceful cooperation to achieve shared goals.

There is a high degree of overlap between the members and observer states of the Arctic Council and the countries attending the Group of 7 (G7) summit in Sicily a few weeks later. There is also a high degree of overlap between the world’s highest-emitting nations and the members of the Arctic Council.

The lessons learned and issues of climate change that are grappled with during the Arctic Ministerial may very well carry through to the G7 forum. After the summit we expect to hear more definitively about US actions regarding contributions to the Paris Agreement going forward.

For the moment, eyes are focused on Secretary of State Tillerson and his remarks in Fairbanks, Alaska, and on the Fairbanks Declaration, expected to be signed on May 11.

 

 


5 Reasons Why the Regulatory Accountability Act is Bad for Science

UCS Blog - The Equation (text only) -

Last week, Senator Rob Portman introduced his version of the Regulatory Accountability Act (RAA), a bill that would significantly disrupt our science-based rulemaking process. A version of this inherently flawed, impractical proposal has been floating around Washington for nearly seven years now, and the latest, S. 951, is just as troubling as previous iterations.

The impact of the RAA will be felt by everyone who cares about strong protections and safeguards established by the federal government. Think about food safety, environmental safeguards, clean air, clean water, the toys that your kids play with, the car you drive, workplace safety standards, federal guidance on campus sexual assault, financial safeguards, protections from harmful chemicals in everyday products, and more. You name it, the Portman RAA has an impact on it.

The Portman RAA is at best a solution in search of a problem. It imposes significant new burdens on every single federal agency charged with using science to protect consumers, public health, worker safety, the environment, and more, at a time when Congress and the president are cutting agency resources. It also requires agencies to finalize the most “cost effective” rule, which sounds nice but in practice is an impossible legal standard to meet and would most likely result in endless litigation. This requirement is emblematic of the overall thrust of the bill: a backdoor attempt to put the interests of regulated industries ahead of the public interest.

Basically, because there isn’t public support for repealing the Clean Air Act, the Clean Water Act, the Consumer Product Safety Act, and other popular laws that use evidence to protect the public interest (including civil rights and disabilities laws, worker protection laws, transportation safety laws, and more), the Portman RAA weakens the ability of agencies to implement these laws by rewriting the entire process by which safeguards for Americans are enacted. In doing so, the Portman RAA would impact everyone’s public health and safety, especially low-income communities and communities of color, which often face the greatest burden of health, environmental, and other safety risks.

For this blog, I have chosen to focus on what the Portman RAA means for the scientific process that is the foundation for federal rulemaking. For information on all of the other troubling provisions in the legislation, legal scholars at the Center for Progressive Reform have a neat summary here.

Here are 5 destructive provisions in the Portman RAA as they relate to science and science-based rulemaking. Bear with me as we take this journey into administrative law Wonkville.

1. The RAA ignores intellectual property, academic freedom, and personal privacy concerns.

S. 951 includes harmful language similar to the infamous HONEST Act (previously known as the Secret Science Reform Act) and applies it to every single agency. While the Portman RAA language (page 7 starting at line 19 and page 25 starting at line 14) includes some exemptions that address the criticisms UCS has made of the HONEST Act, the bill would still require agencies to publicly make available “all studies, models, scientific literature, and other information” that they use to propose or finalize a rule.

The exemptions fall considerably short because the language has zero protections for intellectual property of scientists and researchers who are doing groundbreaking work to keep America great. For most scientists, especially those in academia and at major research institutions, much of this work, such as specific computer codes or modeling innovations, is intellectual property and is crucial for advancement in scientific understanding as well as career advancement.

In effect, this provision of the Portman RAA would prevent agencies from using cutting-edge research because scientists will be reluctant to give up intellectual property rights and sacrifice academic freedom. In addition, many researchers don’t or can’t share their underlying raw data, at least until they have made full use of it in multiple publications.

Given that the research of scientists and the expertise built up by labs are their scientific currency, S. 951’s intellectual property, academic freedom, and privacy language would lead to one of two outcomes:

  • One, it would stifle innovation, especially when it comes to public health and safety research, as many early career scientists may not want to publicly share their code or computer models and undermine their careers. Scientists could risk all their ideas and work being pirated through the rulemaking docket if a federal agency wanted to use their information as part of the basis for proposing and/or finalizing a regulation.
  • Two, agencies wouldn’t be able to rely on the best available science in their decision-making process because those who have the best information may not want to make their intellectual property public. And of course, agencies are required to propose and finalize regulations based on the best available science. This is even reaffirmed by the Portman RAA (more on that later). Thus, you have a catch-22.

Like the HONEST Act, this language fundamentally misunderstands the scientific process. There is no reason for anyone to need access to computer models, codes, and more in order to understand the science. Industry understands this very well because of patent law and the trade secrets exemptions (industry data would be exempted from the disclosure requirements, while scientists’ intellectual property and academic research would not), but there is no equivalent protection for scientists, whose basic goal is to advance understanding of the world and publish their work.

And while the exemptions attempt to ensure protections of private medical data, they do not go far enough. For example, agencies that rely on long-term public health studies to propose and finalize science-based regulations could still be forced to disclose underlying private health data related to a study participant’s location and more, all of which may lead to someone’s privacy being put at risk.

2. The RAA puts science on trial.

The Portman RAA provides an opportunity for industry to put the best available science that informs high-impact and major rules on trial. In a provision (page 16, lines 13-17) that reminds me of Senator Lankford’s radical BEST Act, S. 951 would give industry an opportunity to initiate an adversarial hearing that puts science and other “complex factual issues that are genuinely disputed” on trial.

But what does it mean for science and other facts to be genuinely disputed? The RAA is silent on that point. Hypothetically, if an industry or any individual produces their own study or even an opinion without scientific validity that conflicts with the accepted science on the dangers of a certain chemical or product (say atrazine, e-cigarettes, chlorpyrifos pesticide, or lead), federal agencies charged with protecting the public using best available science would be forced to slow down an already exhaustive process. The thing is, you can always find at least one bogus study that disagrees with the accepted facts. If this provision had been around when the federal government was attempting to regulate tobacco, the industry would have been able to use it to create even more roadblocks by introducing bogus studies to dispute the facts and put a halt to the public health regulations.

This is just another way to drag out (and make less accessible to the public) an already exhaustive rulemaking process in which everyone can already present their views through the notice-and-comment period. This provision plays up the “degree of uncertainty” that naturally exists in science, while ignoring a more sensible “weight of evidence” approach, which is exactly what opponents of science-based rulemaking want. This adversarial hearing process does nothing to streamline the regulatory process, but it does make it harder for federal agencies to finalize science-based public health, safety, and environmental protections. The Scopes “Monkey Trial” taught us that putting science on trial doesn’t work. It was a bad idea nearly 100 years ago, and it’s a bad idea today.

3. The RAA adds red tape to the science-based rulemaking process.

The Portman RAA, ironically, includes duplicative language that requires proposed and final rules to be based on the “best reasonably available” science (page 8, lines 10-14 and page 25, lines 14-18). The thing is, this already happens. Many underlying authorizing statutes, such as the Clean Air Act, have this requirement, and to the extent that this bill is supposed to streamline the regulatory process, this appears to do the opposite. If anything, this is litigation bait for industry, meaning that the legally obscure language could be used to sue an agency and prevent science-based rulemakings from being implemented.

The thing is, anyone can already challenge the scientific basis of regulations since they are already required to be grounded in facts. This just rests upon a faulty assumption that agencies aren’t doing their jobs. The bottom line? Through this and other provisions, S. 951 adds redundancy and procedure when the supporters of the bill are claiming to get rid of it.

4. The RAA has imprecise language that could force burdensome requirements on agency science.

The Portman RAA uses vague language to define agency “guidance” (page 2, lines 14-16) that could be interpreted to encompass agency science documents, such as risk assessments. For example, if an agency conducts a study on the safety of a chemical, finds an associated health risk, and publishes that document, would the study be subject to the burdensome RAA requirements on guidance (i.e., undergo a cost-benefit analysis)? The language is ambiguous enough that this remains an open question.

Furthermore, by adding requirements for guidance documents, such as cost-benefit analysis, the bill would make it harder for regulators to be nimble and flexible in explaining policy decisions that don’t have the binding effect of law, or in reacting to emerging threats. For example, the Centers for Disease Control and Prevention (CDC) has frequently used guidance documents to quickly communicate to the public and healthcare providers about the risks associated with the Zika virus, an emerging threat that required a swift response from the federal government. Just imagine how much time it would take the CDC to respond effectively to this type of threat in the future if the agency were forced to conduct a cost-benefit analysis on this type of guidance.

Overall, many agencies use guidance as a means of explaining how they interpret statutory mandates. Because guidance documents don’t have the effect of law, they can be easily challenged and modified. The new hurdles would simply prolong the guidance process and make it more difficult for agencies to explain interpretations of their legal mandates.

5. The RAA increases the potential for political interference in agency science.

The Portman RAA would give the White House Office of Information and Regulatory Affairs (OIRA) the power to establish one set of guidelines for risk assessments across all of the federal science agencies (page 33, lines 16-18). The thing is, this one-size-fits-all idea is unworkable. Individual agencies set guidelines for the risk assessments they conduct because different issues require different kinds of analysis. OIRA is staffed largely by economists, not scientific experts who can appropriately assess public health and environmental threats. Under this bill, OIRA would also determine the criteria for what kinds of scientific assessments should be used in rulemaking. This office should not have the responsibility of issuing guidelines that dictate agency science. This is a clear way to insert politics into a science-based decision. My colleague Genna Reed will be expanding on this point later this week because of how troubling this provision is.

For a proposal that is aimed at streamlining the regulatory process, the question must be asked: streamlining for whom? If anything, the Portman RAA grinds the issuance of science-based protections to a halt and adds additional red tape to a process that is already moving at a glacial pace.

The bottom line is that this latest version of the RAA, albeit different from previously introduced versions in the Senate and somewhat distinct from the House-passed H.R. 5, leads to the same outcome in reality: a paralysis by analysis at federal agencies working to protect the public from health and environmental threats and a potential halt to the issuance of new science-based standards to ensure access to safe food, clean air and clean drinking water, and other basic consumer protections.

Photo: James Gathany, CDC

Solar Jobs, Coal Jobs, and the Value of Jobs in General

UCS Blog - The Equation (text only) -

Science isn’t done by guesswork or gut instinct. It requires expertise not only to conduct but to evaluate; in-depth research in a field outside of my own is often beyond my ability to critique. I don’t have the knowledge to review a paper on molecular biology, although I might notice a really blindingly obvious flaw.

I have more knowledge of economics than I do of molecular biology. Even so, it’s not my primary field of expertise, so when I saw a recent post by American Enterprise Institute scholar Mark J. Perry, I was a little confused. His “Bottom Line” includes this: “The goal of America’s energy sector isn’t to create as many jobs as possible… we want the fewest number of energy workers.”

Is there something I’m missing? Is AEI actually saying that jobs are bad?

Don’t we want jobs?

Now, the basic economic reasoning behind Dr. Perry’s argument is that labor carries a cost, and producers of any sort of good seek to keep their costs down. This is a story as old as civilization. As agriculture improved and we needed fewer hands to work on the farms, people moved to the towns and cities and produced new goods and services. In the Industrial Revolution, machines reduced the number of workers needed to produce a given quantity of goods. Luddite rioters smashed a few machines in protest, but more were built, and society adapted.

In the coal sector, automation cost jobs well before the shale gas boom made its impact felt. It’s important to recognize that even if the economy as a whole benefits by replacing people with machines in repetitive tasks, in the near term the individuals who were doing those tasks—coal miners, but also people in many other fields, such as factory workers, cashiers, truck drivers, even accountants and paralegals—will be adversely affected by losing their jobs. The displaced workers and their communities will face economic, social, and health impacts. We need to plan for these shocks better than we have in the past.

Human labor is not simply a fungible, tradeable commodity like a ton of steel or a bushel of wheat; it is far more complex. Human beings have feelings, needs, and rights. The field of “labor economics” examines this topic in more detail.

Coal and solar jobs

Dr. Perry argues that solar’s greater reliance on human labor is a flaw, not an asset. And in doing so, he makes a very basic mathematical error.

The solar industry employed, depending on the source, 260,000 to 374,000 workers in 2016. Solar power produced 56.22 million megawatt-hours in 2016, according to the U.S. Energy Information Administration (Dr. Perry mistakenly uses a much lower figure). Meanwhile, coal employed about 160,000 workers and produced 1,240 million megawatt-hours. Therefore, solar needed about 35-50 times as many workers per unit of electricity produced.
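As a rough check on that 35-50x figure, here is a minimal sketch using the worker counts and generation totals quoted above (the inputs are the numbers cited in this post, not independent data):

    # Workers per megawatt-hour in 2016, using the figures quoted in this post.
    solar_workers_low, solar_workers_high = 260_000, 374_000
    solar_generation_mwh = 56.22e6     # 2016 solar generation (MWh), per EIA
    coal_workers = 160_000
    coal_generation_mwh = 1240e6       # 2016 coal generation (MWh)

    solar_low = solar_workers_low / solar_generation_mwh      # ~0.0046 workers per MWh
    solar_high = solar_workers_high / solar_generation_mwh    # ~0.0067 workers per MWh
    coal = coal_workers / coal_generation_mwh                 # ~0.00013 workers per MWh

    print(solar_low / coal, solar_high / coal)                # roughly 36 and 52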

But here’s the major error: the solar workers in 2016 were installing systems that will generate power for decades, or were researching new solar panel chemistries that might pay off decades in the future. The labor of coal workers in 2016 was largely fueling and operating existing power plants for only that year. On an amortized basis of labor per kilowatt-hour, the two technologies would be a great deal closer; by a rough estimate, the ratio would be about two to one.
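One deliberately crude way to see why amortization narrows the gap is to spread solar's workers-per-megawatt-hour figure over the years those systems will keep generating. The 25-year lifetime below is an illustrative assumption, not a figure from this post, and the calculation ignores ongoing solar operations labor, so treat it only as a sketch of where the rough two-to-one estimate comes from.

    # Amortize 2016 solar labor over an assumed system lifetime (illustrative assumption only).
    raw_ratio_low, raw_ratio_high = 36, 52    # workers-per-MWh ratios from the sketch above
    assumed_lifetime_years = 25               # assumption for illustration, not from the post
    print(raw_ratio_low / assumed_lifetime_years)    # ~1.4
    print(raw_ratio_high / assumed_lifetime_years)   # ~2.1 -- in the ballpark of "about two to one"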

There are also differences in costs per worker. Coal has higher labor costs per employee, when you factor in executive bonuses, hazard pay, and the pension and health benefits that unions secured for workers in decades past. These benefits are necessary because of the health risks faced by coal workers. Solar doesn’t pay badly, but coal mining was historically a very good-paying if risky job, as my colleague Jeremy Richardson points out. Factor that in, and the gap in labor costs per unit of electricity shrinks even further.

So is that the end of the story? No. Labor isn’t the only input to production. Industrial output can be modeled with something called a Cobb-Douglas Production Function. This equation states that production requires multiple inputs, like capital, labor, and materials, and that these can be substituted for one another to a greater or lesser degree.
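For readers who have not seen it, here is a minimal sketch of one common Cobb-Douglas form; the exponent values are placeholders chosen for illustration, not estimates for the energy sector.

    # One common Cobb-Douglas form: output = A * K^alpha * L^beta * M^gamma,
    # where K is capital, L is labor, M is materials, and A captures technology/productivity.
    def cobb_douglas(K, L, M, A=1.0, alpha=0.3, beta=0.5, gamma=0.2):
        """Output as a function of capital, labor, and materials (illustrative exponents)."""
        return A * (K ** alpha) * (L ** beta) * (M ** gamma)

    # The same output can be produced with less capital and more labor, or vice versa:
    print(cobb_douglas(K=100, L=50, M=80))   # ~67.6
    print(cobb_douglas(K=80, L=57, M=80))    # ~67.5 -- roughly the same output, different input mix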

Since the unsubsidized costs of new solar power have now fallen below those of new coal power, if solar power uses more labor, then it must be making more efficient use of capital and materials (in dollar terms) than coal power. Solar gives more people jobs, uses less other stuff in the process, creates less pollution, and comes out ahead.

The value of solar jobs

Is the labor dependence of solar power a bad thing? Not for the men and women who are actually working in the field. Not for you and me and the other consumers, and our utilities who are buying solar power at record-low price levels. Not for a country seeking to create opportunities for its citizens.

In basic economics, prices tell the story. From that point of view, solar’s labor requirements are clearly not a disadvantage. When we consider that labor and jobs are actual people’s livelihoods, and not just numbers on a page, it becomes clear that the labor requirements are actually beneficial.
